Retrieving digital images from DNA storage
Technology companies routinely build sprawling data centers to store all the baby pictures, financial transactions, funny cat videos and email messages that users hoard. But a new technique developed by University of Washington and Microsoft researchers uses DNA molecules to shrink the space needed to store digital data that today would fill a Walmart supercentre down to the size of a sugar cube, according to a 7 April press statement.
In an experiment outlined in a paper presented in April at the ACM International Conference on Architectural Support for Programming Languages and Operating Systems, the team successfully encoded digital data from four image files into the nucleotide sequences of synthetic DNA snippets.
More significantly, the researchers were also able to reverse that process—retrieving the correct sequences from a larger pool of DNA and reconstructing the images without losing a single byte of information.
The team from the Molecular Information Systems Lab housed in the UW Electrical Engineering Building is working with Microsoft Research to develop this DNA-based storage system.
How does it work?
DNA molecules can store information many millions of times more densely than existing technologies for digital storage—flash drives, hard drives, magnetic and optical media. Those systems also degrade after a few years or decades, while DNA can reliably preserve information for centuries. However, DNA is best suited for archival applications rather than for files that need to be accessed immediately.
First, the researchers developed a novel approach to convert the long strings of ones and zeroes in digital data into the four basic building blocks of DNA sequences—adenine, guanine, cytosine and thymine, better known as AGCT. The digital data is chopped into pieces and stored by synthesizing a massive number of tiny DNA molecules, which can be dehydrated or otherwise preserved for long-term storage.
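As a rough illustration of that encoding step, a byte stream can be mapped onto bases two bits at a time. (The team's actual scheme is more elaborate, avoiding error-prone runs of the same base; this direct 2-bits-per-base mapping is only a sketch.)

```python
# Simplified sketch: map each 2-bit pair to one nucleotide and back.
BASE_FOR_BITS = {"00": "A", "01": "G", "10": "C", "11": "T"}
BITS_FOR_BASE = {v: k for k, v in BASE_FOR_BITS.items()}

def encode(data: bytes) -> str:
    """Turn bytes into a nucleotide string, 4 bases per byte."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BASE_FOR_BITS[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Reverse the mapping: 4 bases back into one byte."""
    bits = "".join(BITS_FOR_BASE[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"Hi")
print(strand)             # GACAGCCG
print(decode(strand))     # b'Hi'
```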
To access the stored data later, the researchers also encode the equivalent of zip codes and street addresses into the DNA sequences. Polymerase chain reaction (PCR) techniques, commonly used in molecular biology, help them more easily identify the zip codes they are looking for. Using DNA sequencing techniques, the researchers can then “read” the data and convert it back to a video, image or document file by using the street addresses to reorder the data.
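The "street address" idea can be sketched in a few lines: each chunk of data carries a sequence index, so chunks retrieved in arbitrary order can still be reassembled correctly. (The function names and chunk size below are illustrative; real systems encode the addresses as nucleotides alongside the payload.)

```python
import random

def chunk_with_addresses(data: bytes, size: int = 4):
    """Split data into (index, payload) pairs — the 'street addresses'."""
    count = (len(data) + size - 1) // size
    return [(i, data[i * size:(i + 1) * size]) for i in range(count)]

def reassemble(chunks) -> bytes:
    """Sort by address, then concatenate the payloads."""
    return b"".join(payload for _, payload in sorted(chunks))

chunks = chunk_with_addresses(b"hello world!")
random.shuffle(chunks)                 # retrieval order is arbitrary
print(reassemble(chunks))              # b'hello world!'
```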
Currently, the largest barrier to viable DNA storage is the cost and efficiency with which DNA can be synthesized (or manufactured) and sequenced (or read) on a large scale. But researchers say there’s no technical barrier to achieving those gains if the right incentives are in place.
The digital universe—all the data contained in our computer files, historic archives, movies, photo collections and the exploding volume of digital information collected by businesses and devices worldwide—is expected to hit 44 trillion gigabytes by 2020.
Painting a Rembrandt after his death
Dutch Baroque painter Rembrandt van Rijn died about 350 years ago. However, a team of data scientists and art historians used digital analysis, machine learning, facial recognition and 3D printing to create an entirely new painting in his style.
The project, christened The Next Rembrandt, took about two years to complete. It was a collaboration between ING, Microsoft, Delft University of Technology, The Mauritshuis and Museum Het Rembrandthuis.
To begin with, the researchers worked on an extensive database—150GB of digitally rendered graphics—of Rembrandt’s paintings and analyzed them pixel by pixel. To get this data, they analyzed a broad range of materials, such as high-resolution 3D scans and digital files, which were upscaled (converting a signal from lower to higher resolution) by deep learning algorithms to maximize resolution and quality.
Because Rembrandt painted more portraits than any other subject, the researchers narrowed down their exploration to these paintings. After studying the demographics, the data led them to a conclusive subject—a portrait of a Caucasian male with facial hair, between the ages of thirty and forty, wearing black clothes with a white collar and a hat, facing to the right.
As ‘The Master of Light and Shadow’, Rembrandt relied on his innovative use of lighting to shape the features in his paintings. To master this style, the researchers designed a software system that could understand Rembrandt based on his use of geometry, composition and painting material. A facial recognition algorithm identified and classified the most typical geometric patterns used by Rembrandt to paint human features. It then used the learned principles to replicate the style and generate new facial features for the painting.
Once the researchers generated the individual features, they had to assemble them into a fully formed face and bust, according to Rembrandt’s use of proportions. An algorithm measured the distances between the facial features in Rembrandt’s paintings and calculated them based on percentages. Next, the features were transformed, rotated and scaled, then accurately placed within the frame of the face. Finally, they rendered the light based on gathered data in order to cast authentic shadows on each feature.
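The transform-and-place step above can be pictured with a toy example: scale, rotate, and translate a set of feature points into position within the frame of the face. (All coordinates and parameters here are invented for illustration and have nothing to do with the project's actual data.)

```python
import math

def place_feature(points, scale, angle_deg, offset):
    """Scale, rotate, then translate a list of 2D feature points."""
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    return [(scale * (x * cos_a - y * sin_a) + offset[0],
             scale * (x * sin_a + y * cos_a) + offset[1])
            for x, y in points]

# A two-point stand-in for a generated feature (e.g. an eye),
# placed at a made-up proportional position within the face frame.
eye = [(0.0, 0.0), (1.0, 0.0)]
placed = place_feature(eye, scale=0.18, angle_deg=0, offset=(0.35, 0.42))
print(placed)
```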
To give the work physical texture, the researchers created a height map using two different algorithms that found texture patterns of canvas surfaces and layers of paint. That information was transformed into height data, allowing them to mimic the brush strokes used by Rembrandt. They then used an elevated printing technique on a 3D printer that output multiple layers of paint-based UV (ultraviolet) ink. The final height map determined how much ink was released onto the canvas during each layer of the printing process. In the end, they printed 13 layers of ink, one on top of the other, to create a painting texture true to Rembrandt’s style.
System to track people, objects with Wi-Fi
MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has developed a system called Chronos that can accurately detect the position of a person or object inside a room to within tens of centimetres, using Wi-Fi signals only. Chronos works without the aid of any secondary sensors, relying on a technique called time-of-flight calculation, which measures the time it takes a signal to travel from the Wi-Fi access point to the user’s device, according to a 3 April press statement.
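The core arithmetic behind time-of-flight localization is simple: radio signals travel at the speed of light, so distance is travel time multiplied by that speed. The numbers below are back-of-the-envelope illustrations, not figures from the Chronos paper.

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def distance_from_tof(tof_seconds: float) -> float:
    """Distance covered by a radio signal in the given travel time."""
    return tof_seconds * SPEED_OF_LIGHT

# A one-way flight time of ~10 nanoseconds corresponds to about 3 m —
# which is why the timing must be measured extremely precisely.
print(round(distance_from_tof(10e-9), 2))  # 3.0
```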
According to MIT, this new system is 20 times more accurate than the current Wi-Fi-based tracking systems. Researchers say that Chronos was 94% successful in detecting which room a person is currently in, and 97% successful in determining if a shop’s customer was inside or outside the store.
‘Smart’ paint spray
A Dartmouth researcher and his colleagues have invented a “smart” paint spray that can robotically reproduce photographs as large-scale murals. The computerized technique, which basically spray paints a photo, can be used in digital fabrication, digital and visual arts, artistic stylization and other applications. The findings appear in the journal Computers & Graphics. The “smart” spray can system is a novel twist on computer-aided painting, which originated in the early 1960s.
In a bid to help non-artists create accurate reproductions of photographs as large-scale murals using spray painting, the researchers developed a computer-aided system that uses an ordinary paint spray can, tracks the can’s position relative to the wall or canvas and recognizes what image it “wants” to paint.
As the person waves the pre-programmed spray can around the canvas, the system automatically operates the spray on/off button to reproduce the specific image as a spray painting. The prototype is fast and lightweight: it includes two webcams and QR-coded cubes for tracking, and a small actuation device for the spray can, attached via a 3D-printed mount. Paint commands are transmitted via radio to a servo motor that operates the spray nozzle.
Running on a nearby computer, the real-time algorithm determines the optimal amount of paint of the current colour to spray at the spray can’s tracked location. The end result is that the painting reveals itself as the user waves the can around, without the user necessarily needing to know the image beforehand.
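A toy version of the decision that real-time algorithm makes: given the target image and what has been painted so far at the can's tracked position, how much of the current colour should be sprayed? The function name and the simple difference rule below are illustrative assumptions, not the paper's actual optimization.

```python
def spray_amount(target_value: float, painted_value: float,
                 max_rate: float = 1.0) -> float:
    """Spray in proportion to how far the canvas is from the target,
    clamped so the can never 'unpaints' (no negative amounts)."""
    return max(0.0, min(max_rate, target_value - painted_value))

# Canvas still needs paint at this spot:
print(round(spray_amount(0.8, 0.3), 2))  # 0.5
# Canvas is already darker than the target — spray nothing:
print(round(spray_amount(0.2, 0.6), 2))  # 0.0
```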