How Things Work: DNA Computers

Breaking and reforming DNA base pairs is the basis of DNA computing. (credit: Courtesy of Wikimedia Commons)

Today, all computers share the same general structure. They have a processing core, where instructions are executed; volatile memory, where data is stored temporarily; and non-volatile memory, where data can be stored over longer periods. A typical modern processor executes around 50,000 million instructions per second (MIPS), backed by a few hundred gigabytes of non-volatile storage. That is a far cry from the first personal computers of 1977, which ran at a mere one MIPS and could store only five megabytes of data.

Moore’s Law states that the number of transistors that can be placed on an integrated circuit doubles roughly every two years. This trend, however, faces fundamental limits: transistors will eventually shrink to the scale of molecules or even atoms, at which point their behavior becomes unpredictable. As a result, scientists have begun looking for alternative methods of computing. DNA computing is one such method, and it is catching on in certain scientific circles.

The structure of the DNA molecule was proposed by Nobel laureates James Watson and Francis Crick. It is a double-stranded helix whose two strands are linked at the center by base pairs held together by hydrogen bonds. There are four bases, A, T, C, and G, and they pair only as A-T and C-G. These pairs repeat millions of times along a single double helix, and because of the strict pairing rules, every base on one strand has a complementary base on the second strand. Adjacent base pairs are separated by only 0.33 nanometers, so a two-dimensional layout of DNA could hold over 1 million gigabits of data per square inch, more than 2,000 times the density of modern data storage systems. This is one of the reasons Leonard Adleman, a professor of computer science and molecular biology at the University of Southern California, proposed in a 1994 issue of the journal Science a new method of computing that used the DNA molecule. His proposal, and his demonstration of the method by solving a seven-point Hamiltonian path problem (a special case of the Traveling Salesman Problem), brought DNA computing to the forefront of unconventional computing.
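The A-T and C-G pairing rule is simple enough to sketch in code. The following Python snippet is an illustrative model only (not part of Adleman's work): it computes the complementary strand and checks whether two strands can bind. Note that real strands bind antiparallel, so biologists normally speak of the reverse complement; direction is ignored here for simplicity.

```python
# Watson-Crick pairing rules: A binds to T, and C binds to G.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand):
    """Return the complementary strand, read in the same direction."""
    return "".join(PAIR[base] for base in strand)

def anneals(strand_a, strand_b):
    """True if the two strands bind, i.e. are base-for-base complementary."""
    return len(strand_a) == len(strand_b) and complement(strand_a) == strand_b

print(complement("ATCG"))       # -> TAGC
print(anneals("ATCG", "TAGC"))  # -> True
```

Every DNA-computing scheme ultimately reduces to this matching rule applied across vast numbers of strands at once.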

There are different ways a DNA computer can be built, but on a basic level they all function by pairing bases on two strands and using certain enzymes to cut or splice the DNA molecules at specific locations. A DNA computer can be thought of as having input data, hardware, and software molecules; when mixed together, these react in specific ways to produce output molecules, or solutions, to a given problem. Today, DNA computers exist in test tubes or as DNA-based logic switches, analogous to the digital logic gates in modern computers, and some are even fully programmable. The main difference between conventional and DNA computing is how instructions are executed. A conventional computer processes data sequentially, so the time required grows with the size of the problem; a DNA computer processes data in parallel, dramatically speeding up computation.

Several drawbacks must be overcome before DNA computers can be mass-produced. They require a fair amount of human assistance, which increases the overall time needed to solve a problem: a human must combine the mixture of input, software, and hardware molecules, and a human must also interpret the output molecules. Furthermore, although these computers are capable of parallel processing, they require amounts of DNA proportional to the complexity of the problem, which could mean extremely large and impractical quantities of DNA for equations with hundreds of variables. And today's DNA computers, though programmable, perform only rudimentary functions; significant advances must be made before they can execute more complex ones.

In 2004, Ehud Shapiro of the Weizmann Institute of Science in Israel published in Nature the results of a study in which his team created a DNA computer designed to detect disease in a patient. The work was performed in test tubes in a laboratory and has not been tested on a real patient, but preliminary results pointed to the team's DNA computer as a reliable approach to the early detection of diseases and cancers.

As DNA is abundant, there is no dearth of cheap raw materials for manufacturing these computers. They also use far less energy than traditional computers, according to National Geographic. While DNA computers may never run operating systems, games, or spreadsheet applications, they show great potential as heavy data-crunching supercomputers, and their use would change the understanding of computers as we know them.