The Synthetic Biology Equation: Engineering + Bioscience = The Future of Biotech

(Perhaps that title is a bit audacious; I don’t claim to be able to predict the future of anything. But it’s entirely possible that synth bio will play a big role in biotech in the future. Let’s explore that more below. . . .)

Good morning, everyone! I was traveling last week, which prevented my putting up this post on Saturday as usual, and I decided to postpone it till today.

One of the classes I took last semester was Biotechnology and Society, and I decided to write my final paper on synthetic biology after the teacher mentioned the first production of a self-replicating “man-made” cell by a group of scientists in California.

Before I dig into that a bit more, though, let me define synthetic biology (or synth bio for short): it is the full-scale application of engineering techniques to biological systems. How is it different from regular genetic engineering/GMO production, then? The answer lies in the scale of the engineering: genetic engineering works at the level of the gene, adding or altering one or a few genes (plus the regulatory elements that control their expression) within an organism. Synth bio, though, works at the level of an entire chromosome or even a whole genome, whether by wholesale editing or by rewriting it from the ground up. Essentially, synth bio is genetic engineering on steroids.

Stephane Leduc, author of La Biologie Synthetique


A little history: Synthetic biology was first conceived, if not put into practice, way back in 1912, when Stephane Leduc, a French scientist, published La Biologie Synthetique. In this book, Leduc observed that the consistent and controlled reproduction of natural processes seen in other sciences, like chemistry, was lacking in the biology of his time. Synthetic biology couldn't take off, though, without the development of molecular biology in the mid-1900s, starting with Watson and Crick's 1953 discovery of the structure of DNA (a topic for another time). Then the development of fast, easy sequencing sparked our current age of genomics, the study of whole genomes, and synthetic biology had all the tools it needed to become a practiced discipline.

This brings us up to recent developments. Just last year, a research group at the J. Craig Venter Institute, headed by Venter himself, created JCVI-syn3.0, a self-replicating bacterium with a synthetic genome smaller than that of any known free-living organism. The genome contains only what Venter's team determined was the minimum necessary for life, a feat they accomplished by "mixing and matching" genes of the small bacterium Mycoplasma mycoides to find which ones a cell could live without. (The same team made the first cell with a fully synthetic genome, JCVI-syn1.0, back in 2010.) In the future, Venter and his team see similar synthetic bacteria being used not only to learn about life, but to engineer it for specific purposes, like biofuel production.
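To get a feel for the "mix and match" idea, here's a toy sketch of a knockout search: delete genes one at a time and keep each deletion only if the (pretend) cell still lives. The gene names and the viability rule are made up for illustration; this is not JCVI's actual method, which involved years of lab work.

```python
# Toy minimal-genome search. Knock out one gene at a time; keep the
# deletion only if the cell remains "viable." All names and the
# viability rule are invented for illustration.

ESSENTIAL = {"dnaA", "rpoB", "ftsZ"}  # pretend these genes are required for life
GENOME = ESSENTIAL | {"gene%d" % i for i in range(10)}  # plus dispensable genes

def viable(genes):
    """Our stand-in for a lab viability test: all essential genes present."""
    return ESSENTIAL <= genes

def minimize(genome):
    """Greedily delete every gene whose loss the 'cell' survives."""
    genes = set(genome)
    for g in sorted(genome):
        trial = genes - {g}
        if viable(trial):  # knockout survived, so keep the deletion
            genes = trial
    return genes

print(sorted(minimize(GENOME)))  # only the essential genes remain
```

The real project was vastly harder, of course, because nobody knows the viability rule in advance; that's exactly what the knockouts were probing.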

A colony of JCVI-syn3.0


The question is: how synthetic is JCVI-syn3.0? Technically, it's not really a man-made bacterium. Only the genome was man-made, and even that was adapted from the genome of M. mycoides. The "shell" the genome was inserted into was simply a living bacterium (a close relative of M. mycoides) with its own genome removed. This is a big step for synthetic biology, but the field has a long way to go before it produces anything truly dictionary-definition synthetic.

What do you think? Have you heard of synthetic biology? Did you hear about the production of JCVI-syn3.0? Tell me in the comments!


Bacteria and Bioplastics

Hello, everyone! It’s the second Saturday of the month, which means I am here with a science post. I’ve been taking a class about biotechnology (which actually ended this past week), so I’ve been finding various biotech things to give presentations about and so forth. Today’s topic, bacteria-produced bioplastics, was one of those biotech things.

What is bioplastic? I'm glad you asked! Bioplastics are biodegradable plastics being investigated as replacements for petroleum-based plastics, since they could reduce the costs and environmental impacts of plastic use. A major type of bioplastic, which I'm going to focus on today, is the polyhydroxyalkanoates (PHAs for short). These are polyesters (a type of organic molecule) naturally produced by bacteria as reserves of carbon and energy. The bacteria can break them back down when they need the carbon or energy, which makes them truly biodegradable. (Fun fact: I read about a PhD student who got a certain bacterium to produce 80% of its weight in PHAs by using ice cream as a nutrient medium.)

Alcaligenes eutrophus, a PHA-producing bacterium.

PHAs have many and varied potential applications. They have been proposed as packaging for foods like cheese, as biodegradable containers for things like drugs and fertilizers, as a material for disposable items like razors, cups, and shampoo bottles, and, in the medical field, as a material for things like sutures and bone replacements. Their properties are similar to those of currently used plastics like polypropylene, which could make the transition smoother if they were to go into use.


The difficulty, up until recently, has not been getting the bacteria to make PHAs, but getting the PHAs out of the bacteria. Last month, however, it was reported that a Spanish research team has developed and patented a method for genetically engineering a predatory bacterium, Bdellovibrio bacteriovorus, to break down the PHA producers, but not the PHAs. A number of companies are already interested in using this method commercially; it could be used for extracting valuable enzymes and other proteins as well as for bioplastic production. This method is much safer and less expensive than previous methods that used things like chemical detergents to extract PHAs. I think it’s a big step forward in making PHAs practical.

Bdellovibrio bacteriovorus, the predatory bacterium


Here are my sources if you want to learn more:

What do you think of this technology? Would you use a bioplastic? Have you ever heard of this before? Share in the comments!

Mosquitoes, Malaria, and Molecular Biology: How DNA can Help Kill a Disease

Happy second Saturday, everyone! It’s time for a science post. For today’s post, I visited the Science News website and browsed around for something interesting to talk about. There were a lot of options, but I settled on this one. All credit goes to the original authors.

I'm sure you've heard of malaria. It is rampant in the developing world, particularly sub-Saharan Africa. It's caused by microorganisms of the genus Plasmodium, which spend part of their life cycle in mosquitoes of the genus Anopheles. That makes the disease much harder to eradicate: wiping out any disease or parasite with what's called an "animal vector" means controlling every animal species that can carry it. Imagine trying to kill every one of the 30-40 malaria-transmitting Anopheles mosquito species in Africa, and you have some idea of why malaria is so hard to get rid of.

So essentially, to end the disease, end the mosquito. But how?

Using everyone’s favorite molecule, of course: DNA!

Scientists at Imperial College London have developed a "gene drive," an engineered stretch of DNA that disrupts a gene by inserting itself into it, and then copies itself onto the matching chromosome so that it is passed on to far more than the usual half of a carrier's offspring. This one sterilizes females of one Anopheles species, which could curb the mosquito's reproduction, and thus the number of mosquitoes available to carry Plasmodium.



Plasmodium, the malaria parasite. (Image from the CDC)

This is actually the second gene drive developed to target Anopheles. The other one (findings published in 2015 by researchers in California) aimed to prevent Anopheles from carrying Plasmodium at all. So far, both of them work in the lab, though neither has yet been released into the wild Anopheles population. But both have potential to help stop malaria.
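To see why a gene drive is so much more powerful than an ordinary engineered gene, here's a toy back-of-the-envelope model (my own illustration, not the researchers' math). A normal allele in a heterozygote reaches 50% of offspring; a "homing" drive copies itself onto the matching chromosome, so heterozygotes transmit it at a much higher rate. The 95% transmission figure is illustrative only.

```python
# Toy model of gene-drive spread under random mating. The 95% transmission
# rate is an illustrative assumption, not a figure from the study, and the
# model ignores the fertility cost the real drive imposes.

def next_freq(p, transmission):
    """Drive-allele frequency in the next generation."""
    q = 1 - p
    # drive/drive parents (freq p^2) always transmit the drive;
    # drive/+ parents (freq 2pq) transmit it at rate `transmission`
    return p * p + 2 * p * q * transmission

def generations_to_reach(target, transmission, p0=0.01):
    """Generations for the drive to reach `target` frequency (capped at 1000)."""
    p, gens = p0, 0
    while p < target and gens < 1000:
        p = next_freq(p, transmission)
        gens += 1
    return gens

print(generations_to_reach(0.99, 0.5))   # normal allele: never spreads (hits the cap)
print(generations_to_reach(0.99, 0.95))  # gene drive: takes over in ~10 generations
```

Starting from just 1% of the population, the drive allele nearly doubles in frequency each generation until it takes over, while an ordinary allele with no fitness advantage just sits at 1% forever. That super-Mendelian inheritance is the whole trick.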

There you have it: another manifestation of the current genetics revolution. (I must admit I am partial to blogging about said revolution, being a major DNA geek.) What do you think of these gene drives that might help eliminate malaria? Do you like DNA as much as I do? Share in the comments! Also, if you’d like to learn more about today’s topic, be sure to check out the links in the post!

Chernobyl and the Depths of Human Stupidity


As some of you may know, I am currently taking a class called “Myths and Misconceptions about Nuclear Science.” It’s a great class. I’ve been learning a lot: about nuclear physics and radiation and nuclear power and how stupid people can really be.

So today, on that last point, I am going to talk a bit about Chernobyl.

Bit messy, huh? (Image not mine)

Chernobyl was the worst nuclear power disaster in history. You can kind of get a feel for that from the above photo. It involved a nuclear fission reactor in Soviet Ukraine, one of four reactors on the site. (The other three kept working just fine for years after the accident.) The accident spread radiation all over Europe, killed 31 people either directly in the accident or from acute radiation sickness afterward, and to this day, no one can live in a roughly 1,000-square-mile "exclusion zone" around the reactor.

So what happened? What was wrong with the reactor that caused this horrible accident?

Initially, nothing.

Obviously, all kinds of things were wrong with it later on, or we wouldn’t have photographs like the one above to show you all the devastation. But initially, the reactor was working just fine, like it was supposed to.

So, we have the same question again. What happened?

Well, on April 25th, 1986, the operators of the reactor started a test to see if they could make the reactor safer.

A diagram of an RBMK reactor, the same type as Chernobyl.

They wanted to find out if they could keep the electricity-generating turbines (in the upper right of the diagram above) spinning during shutdown, so they could keep the reactor core cool without having to use a backup generator or the like. (It's very important to keep the core cool, even when the reactor isn't running. You've heard of nuclear meltdowns? If the fuel gets too hot, it will melt through the floor of the reactor vessel, and sometimes through the building.) At Chernobyl, as in most reactors, the coolant was ordinary water.

Let me take a moment to explain some other components of a nuclear reactor. The most obvious necessity is fuel; usually, as at Chernobyl, this is uranium, a mixture of two isotopes, uranium-235 and uranium-238, of which U-235 splits far more readily when struck by slow neutrons. In order to slow the neutrons down so you can keep a chain reaction going, you need a "moderator," in this case graphite. As we've already discussed, you need coolant to keep the core from overheating. Last but not least, you need a control system, usually rods made of neutron-absorbing material that can be moved in and out of the core (see diagram). This keeps the reaction in check.
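The control system's job can be sketched with a toy calculation (illustrative numbers only, nothing like real reactor physics): each neutron "generation," the neutron count gets multiplied by a factor k, and the rods exist to keep k pinned at or below 1.

```python
# Toy chain-reaction arithmetic. Each generation, the neutron count is
# multiplied by k, the multiplication factor; control rods absorb neutrons
# and nudge k down. Illustrative only -- real reactor kinetics involve
# delayed neutrons and much more.

def population_after(k, generations, n0=1000.0):
    """Neutron population after `generations` steps of multiplication by k."""
    return n0 * k ** generations

print(population_after(1.00, 100))  # k = 1: steady at 1000 (a "critical" reactor)
print(population_after(1.01, 100))  # k barely above 1: runaway growth
print(population_after(0.99, 100))  # rods in, k below 1: reaction dies out
```

The point of the toy: even a 1% imbalance compounds fast over a hundred generations, and real neutron generations are fractions of a second apart, which is why losing control of k is so dangerous.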

Back at Chernobyl, the first thing the operators did for the test was to turn off the emergency core-cooling system, a violation of reactor operation guidelines. They then lowered the power of the reactor, but instead of holding it at the recommended level, they let it drop too far and started pulling out control rods to try to bring the power back up. They turned on two extra water pumps for the test, then realized there was too much water in the reactor and reduced the flow, causing the reaction to speed up. And when, ignoring safety-system warnings, they pulled out too many control rods, the reaction rate skyrocketed (by a factor of 10,000 in 5 seconds). The water in the reactor flashed to steam, the reactor exploded twice, and the graphite caught fire and burned for days, ultimately spreading radiation across Europe.
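To put "a factor of 10,000 in 5 seconds" in perspective, a quick back-of-the-envelope calculation, assuming simple exponential growth P(t) = P0 · exp(t/T):

```python
import math

# If power rose 10,000-fold in 5 seconds under exponential growth,
# what was the reactor period T (the e-folding time of the power)?
# Assumes P(t) = P0 * exp(t / T), a simplification of real kinetics.

growth_factor = 10_000
elapsed = 5.0  # seconds

period = elapsed / math.log(growth_factor)
doubling = period * math.log(2)

print(f"reactor period: {period:.2f} s")         # about 0.54 s
print(f"power doubling time: {doubling:.2f} s")  # about 0.38 s
```

A power level doubling every third of a second is utterly uncontrollable; normally-operating reactors have periods of many seconds to minutes, which is what gives operators and safety systems time to react.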

Clearly, this accident was caused largely by human stupidity (don't turn off the safety systems in a nuclear reactor!), but the reactor's design played a role as well. The RBMK was a cheaper style of reactor, so the containment building (which in U.S. reactors is often feet-thick concrete that can withstand missile blasts) was not adequate to contain the initial steam explosion and the hydrogen explosion that followed. And because the reactor used graphite as the moderator and water only as the coolant, instead of water serving as both moderator and coolant as in most other reactors, removing water increased the reaction rate rather than decreasing it. In addition, without graphite, there would have been less chance of a fire and thus less radiation spread.

A reactor building in Seabrook, New Hampshire. Note the huge concrete containment dome.

In sum, Chernobyl, the worst nuclear power accident ever, was caused by a combination of human error and design flaws, but mostly by human error, since humans made the faulty containment and graphite-moderated design that contributed to the severity of the accident. Further, the reactor was working fine before the operators turned off and ignored various safety systems. Does this make nuclear power unsafe? The answer is complicated. Nothing is perfectly safe; humans can make grave errors with any power system. And in fact, chemical accidents have caused more deaths even than this worst nuclear accident. I think what we can learn from Chernobyl is that we need to be smart about safety: use the best designs, don’t skimp on safety systems, and never, ever turn them off.

If you’re interested in learning more about nuclear science and technology, you can check out Nuclear Choices: A Citizen’s Guide to Nuclear Technology by Richard Wolfson. Although it’s a bit dated (it was published when the USSR was still a country), it is easily readable for non-physicists and deals with many aspects of nuclear technology in an unbiased way. It is my textbook for my nuclear science class and my main source for this blog post.

What do you think of Chernobyl? Have you heard much about it before? Are you surprised at how much of a role human error played? How about design flaws? Do you have any questions? What do you think of nuclear power? Tell me in the comments!