(This is a post I started writing in mid-June. I think my ambitions for it may have been too high, as it's been dragging along ever since. It's not as well organized as I would like, but if I don't post it now it'll never see the light of day.)
The full-day trip to the Athabasca oil sands was the highlight of the Canadian Science Writers' Association meeting, at least for me. The tour was sponsored by Connacher Oil and Gas Ltd., a relatively small player in the oil sands business. They flew a dozen of us up to Fort McMurray (about 400 miles from Calgary) in the morning, and took us first to the Oil Sands Discovery Centre and then to their Algar drilling site.
Biologists are very concerned about the issues surrounding development of Canada's oil sands petroleum reserves. The biggest issue is global climate change, but economics and local ecology are big concerns too.
I'll start with a very simple explanation of how fossil fuels come into existence, using a slide I prepared when I was teaching intro biology. The text at the top of the slide (in italics) is a question posed by a student in the class, asking about the relationship between 'fossils' and 'fossil fuels'. Plants use solar energy to transform carbon dioxide and water into biological molecules (starch, cellulose, lipids, etc.). When plants (or animals) die, the organic matter they contain is usually eaten by other organisms, but sometimes it's inaccessible to or too much for these 'detritus feeders' and instead is buried, where it breaks down very slowly. The oxygen content of the molecules is usually lost. Commonly the hydrogen is lost too (mostly as methane), leaving behind coal, but sometimes the carbon and hydrogen remain together in the hydrocarbon molecules we call oil.
The figure refers to a paper by J. S. Dukes titled Burning buried sunshine: human consumption of ancient solar energy. I found this an excellent resource for the class. It explains that we're now burning oil a million times faster than the rate at which it was produced. Put another way, for hundreds of millions of years, a tiny fraction of the carbon dioxide that plants temporarily sequestered was buried rather than being immediately recycled. Now we're dumping all that CO2 back into the atmosphere at once (relative to the very long time scale over which it was buried). So CO2 that was sequestered very slowly from ancient atmospheres is being returned very quickly as we burn fossil fuel, and this CO2 traps solar energy and disrupts climates all around the world.
In principle, there's only so much buried sunshine, and when it's all gone we'll be able to stop worrying about the causes of CO2 rise and just deal with the consequences. But the consequences look pretty bad, so prevention would be better. Unfortunately, not only is Canada a big producer of CO2, its oil industry is a big enabler of CO2 production by other countries. By extracting oil from the massive oil sands deposits, Canadian industry is making more of this buried sunshine available for burning. Worse, the extraction process itself uses a lot of fossil fuel - separating the oil from the sand requires heat in the form of steam, usually generated by burning natural gas.
A separate issue is harm to local environments - extracting oil from oil sands is reported to be particularly destructive, both by ripping away the boreal forest that covers the sands and because of toxic products released both at the mine sites and downstream. Pipelines are another problem - they can disrupt local environments both by their presence and by leaks, and they can block wildlife movement. Oil from the oil sands is delivered to refineries in Illinois and Oklahoma by the Keystone Pipeline, and there's a big debate about running a new pipeline (Keystone XL) all the way to Port Arthur and Houston on the Gulf of Mexico. A final issue is simple economics - if the world is running out of oil, wouldn't Canada be wiser to save its reserves to use or sell later, rather than selling them now to the highest bidder?
So, what did we see on the tour, and what did we learn?
The first thing we saw was the town of Fort McMurray - a little frontier town that's grown very big, very fast because of the influx of oil sands money and workers (see graph). Housing prices are even worse than Vancouver's.
Next we were taken to the Oil Sands Discovery Centre. I'm pretty sure this slick tourist attraction exists to put a positive spin on the enterprise, but it's very well done and I learned a lot. We first watched a cheesy film about the history of attempts to make money by extracting oil from the surface-exposed oil sands. The extraction itself is very easy; the film was followed by a simple demo using a beaker of oil sand and a kettle of hot water. Just pour hot water onto the sand and stir - the oil (thick black bitumen*) rises to the top and the sand settles to the bottom. Initial commercialization attempts were stymied by economic problems, both the cost of the extraction and the poor market for bitumen, especially given the widespread availability of cleaner oil from other sources. But the market for fossil fuels has grown, and the colossal scale of production has decreased the cost of extracting oil from the sands.
Most of the Discovery Centre exhibits and equipment displays were about the mining techniques used to extract bitumen from oil sands that are close to the surface. Until recently this surface mining was the only way to recover bitumen, and it's a spectacularly nasty process. The accessible oil sands sit under 40-60 meters of 'overburden' - a pit-mining term for the beautiful boreal forest/muskeg and underlying clay and sand. All of this has to be removed to get at the oil-soaked sand, which is scooped up, mixed with hot water, and piped to nearby extraction plants where ~90% of the oil is removed. The sand then goes back to the place it came from (more or less), and eventually the surface is supposed to be restored to some semblance of its original state.
Not surprisingly, this makes a big mess. Here's a Google Earth view of the main mining site north of Fort McMurray; you can get an idea of the enormity of the devastation from the tiny size of Fort McMurray itself, at the lower right (a town of 80,000 residents). Not only is the surface ecosystem destroyed, but the extraction process uses a lot of water from local rivers, and returns contaminated water first into giant tailing 'ponds' (lake-sized) and then into the river system.
Only about 10% of the oil sands are close enough to the surface to be mined. The rest is about 500 m deep, but new drilling technology makes these deposits accessible through a much less destructive process called steam-assisted gravity drainage (SAGD, pronounced 'sag-D'). (The drawings below are taken from here and here.)
With SAGD, the bitumen is extracted from the oil sands in situ; the sand stays deep in the ground while the oil is liquefied by steam heat and pumped to the surface. This requires drilling two parallel channels about 5 m apart (each about a foot across), first down to the base of the oil sands deposit and then horizontally through the sands. The ability to drill horizontally with high positional accuracy is the technical advance that makes SAGD possible. Steam is pumped into both channels until the bitumen around and above them begins to liquefy (about 3 months). The lower channel (green in the figure) is then switched to extraction mode, pumping the liquid mixture of hot water and bitumen out to the extraction facility on the surface, where the bitumen is recovered and the water is cleaned up and reheated for reuse as steam. This extraction phase can continue for many years (20?), and, if I recall correctly, can remove more than 80% of the oil without disturbing the sands or much of the surface.
Connacher has two SAGD plants, both about 80 km south of Fort McMurray. They're much smaller than the big surface mining operation above, and much cleaner. Each consists of a big array of pairs of wells, as shown in the drawing below. Below that is another Google Earth image; Connacher's Pod One facility is to the left of the road. Notice how small the footprint is - this image is zoomed in tenfold relative to the surface-mining image above. We toured not Pod One but the even newer facility called Algar - it's been in place for only a couple of years and so doesn't yet show up in the satellite photos on Google Earth.
Overall we were very impressed by Connacher's SAGD operation. There are no mine pits or tailings ponds - the only pond we saw was a tiny one that captures surface runoff (rain and snowmelt) from the extraction site so it can be tested for contamination before being released.
Water use is also low. More than 95% of the water used for steam is recovered and reused. The water doesn't come from the surface (lakes, rivers) but from a deep aquifer of brackish water that's not otherwise usable, and this aquifer is so big that the water withdrawal for SAGD is insignificant.
One measure of efficiency of oil sands harvesting is the steam-to-oil ratio - the volume of steam injected per volume of bitumen recovered - which sets the ratio of energy input (largely steam to liquefy the oil) to energy recovered as oil. This matters a lot for the economics of SAGD and also for the global warming consequences, but it's hard to find the numbers. Well no, it's easy to find numbers, but they're usually not commensurate (not in the same units). Barrels of water used per barrel of oil? Cubic feet of natural gas burned per barrel of oil? Tons of CO2 produced per cubic meter of bitumen? One of the Connacher engineers told me that the energy input (for steam) was about 1/8 of the energy output (as bitumen), but this seems low.
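To make the incommensurate numbers commensurate, here's a back-of-the-envelope conversion from a steam-to-oil ratio to an energy ratio. Everything in it is an assumption on my part (typical textbook values, not Connacher's figures): steam at roughly 2 MPa, a barrel of bitumen worth about the 6.1 GJ of a standard barrel of oil, and boiler losses ignored.

```python
# Back-of-the-envelope SAGD energy balance (illustrative values, not
# Connacher's). Converts a steam-to-oil ratio (SOR, barrels of steam as
# cold-water equivalent per barrel of bitumen) into energy in / energy out.

WATER_KG_PER_BARREL = 159.0   # a barrel is ~159 L, so ~159 kg of water
STEAM_MJ_PER_KG = 2.7         # heat 15 C water to saturated steam at ~2 MPa
BITUMEN_GJ_PER_BARREL = 6.1   # roughly the energy content of a barrel of oil

def energy_ratio(sor):
    """Energy spent making steam per unit of energy recovered as bitumen."""
    steam_gj = sor * WATER_KG_PER_BARREL * STEAM_MJ_PER_KG / 1000.0
    return steam_gj / BITUMEN_GJ_PER_BARREL

for sor in (2.0, 3.0, 4.0):
    print(f"SOR {sor:.0f}: energy in/out ~ {energy_ratio(sor):.2f}")
# SOR 2 -> ~0.14 (about 1/7); SOR 3 -> ~0.21 (about 1/5). The engineer's
# 1/8 would correspond to an SOR below 2, which would be unusually good.
```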
Can the steam-to-oil ratio be improved? Luckily this is an economic issue as well as an ecological one, so the SAGD operators have made some advances. One is to pump the steam in at a relatively low pressure. This allows more of its heat to be transferred to the bitumen, but requires secondary pumps to push (pull?) the oil-water mix back to the surface. Another advance is to mix the steam with a solvent that helps the bitumen liquefy at a lower temperature, so less steam is needed. Connacher is testing this in some of its wells, but I don't have any information about the energy inputs associated with the solvent (where does it come from?), how efficiently the solvent is recovered with the bitumen, or even what the solvent is. I suspect this is all proprietary information. One company reportedly is using butane, which is also a hydrocarbon - I think such a solvent would be neutral with respect to global warming, being either burned directly (in uses unrelated to the oil sands) or recovered with the bitumen and eventually burned. And any solvent that isn't recovered would be a hydrocarbon that isn't even burned. So appropriate use of solvents in SAGD would be good for the atmosphere. Solvents such as butane are less ecologically nasty than the bitumen itself, so any solvent that stays in the ground is unlikely to cause problems.
The disruption to the surface ecosystem is much less than that caused by surface mining. Not only is the area disturbed much smaller (just the main extraction facility, pipelines and access roads), but the disturbances needed are much shallower and thus easier to restore. Connacher operates under quite strict ecological regulations imposed by the Alberta government, and they appear to take these very seriously. They monitor wildlife using the same kinds of remote cameras we saw at Banff, although they're strictly forbidden from interacting with any of the wildlife they see.
I don't know how much of the good impression Connacher made on us was created by high-quality spin. Our group of science writers felt that we were pretty good at detecting spin, but we could be mistaken. One ecology colleague of mine, whose paranoia has been honed by years of public service, suggested that the whole Connacher operation might have been created by the giant oil sands companies as a public relations showpiece (a 'Potemkin village'?). And the cleanliness and efficiency of their operations does nothing to mitigate the global warming consequences of burning buried sunshine.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
* Wikipedia says that bitumen is the correct name for the tar-like crude oil in these sands. It's a very complex mixture of polycyclic aromatic hydrocarbons - hydrocarbon chains that have been extensively cyclized and crosslinked into viscous tangles.
Simulations run successfully, but drowning in contamination angst
After some futzing around I rediscovered how to run my computer program that simulates the evolution of DNA uptake sequences in genomes. So now I've done 4 runs, using uptake-bias matrices derived either from our previous genome analysis or from the postdoc's new uptake data. There are 4 runs because I started each matrix with either a random 20 kb DNA sequence or a random 20 kb DNA sequence pre-seeded with 100 uptake sequences.
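(For anyone curious what 'pre-seeded' means in practice: here's a Python sketch of how such input sequences could be generated. It's a stand-in, not our actual Perl code; the 9-bp core AAGTGCGGT is the standard H. influenzae uptake-sequence core consensus, and the other numbers just echo the runs described above.)

```python
import random

random.seed(1)  # reproducible toy example

USS_CORE = "AAGTGCGGT"   # 9-bp uptake-sequence core consensus
GENOME_LENGTH = 20_000   # 20 kb, as in the runs described above
N_SEEDS = 100            # number of pre-seeded uptake sequences

def random_sequence(n):
    return [random.choice("ACGT") for _ in range(n)]

def seeded_sequence(n, n_seeds):
    """Random sequence with uptake-sequence cores pasted in at random
    positions (collisions just overwrite, which is fine for a toy input)."""
    seq = random_sequence(n)
    for _ in range(n_seeds):
        start = random.randrange(n - len(USS_CORE))
        seq[start:start + len(USS_CORE)] = list(USS_CORE)
    return "".join(seq)

unseeded = "".join(random_sequence(GENOME_LENGTH))
seeded = seeded_sequence(GENOME_LENGTH, N_SEEDS)
print(seeded.count(USS_CORE))  # roughly 100, minus any overwritten seeds
```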
Now I've just sent the input and evolved sequences to the postdoc - he will analyze them with his new uptake-sequence-prediction model and with the old-genome-derived model. We hope this will help us understand why our previous view of uptake specificity was wrong.
He and I have spent months (and months and months) working on the manuscript that describes his DNA uptake results. Lately I've been griping that he's too much of a perfectionist, always trying to make sure the data and analysis are absolutely perfect rather than just getting the damn manuscript written and submitted. But I've now fallen into the same trap, spending days trying to understand exactly how contamination of one DNA pool with another might be compromising his analysis. (The Excel screenshot below is just for illustration - there's lots more data where that came from.) And it's not the first time I've been the one to insist on perfection - last month I spent weeks making sure we really understood the impact of sequencing error.
But we also have a reason to celebrate, as his paper on recombination tracts just appeared in PLoS Pathogens: Mell, Shumilina, Hall and Redfield, 2011. Transformation of natural genetic variation into Haemophilus influenzae genomes. Open access, of course.
Phosphate levels and DNA controls for GFAJ-1
OK, it looks like my arsenic-bacteria experiments have advanced from "Why won't they grow" troubleshooting to real science. My test of growth in medium with no added phosphorus (P- medium) succeeded nicely. All of the cultures except the most dense grew to the same low density of about 4 x 10^6 cells/ml. This tells me that their growth was indeed limited by the small amount of phosphorus contaminating the medium. It also tells me that my P- medium contains less phosphate than the medium used by Wolfe-Simon et al., as the cells grew to about 2 x 10^7 in their P- medium. This means that I should add a small amount of phosphate (~ 3 µM) to my P- medium to replicate the growth conditions they used. I'll set up cultures with this medium tonight to check that growth is as expected.
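The arithmetic behind that figure is just a proportionality, assuming final cell density scales linearly with available phosphate and taking ~3 µM as the residual phosphate in the Wolfe-Simon et al. medium (a sketch with round numbers, not a measurement):

```python
# Toy proportionality check (assumes final cell density scales linearly
# with available phosphate, plausible when P is the sole limiting nutrient).

their_density = 2e7   # cells/ml in Wolfe-Simon et al.'s P- medium
my_density    = 4e6   # cells/ml in my P- medium
their_contaminating_P = 3.0  # uM, roughly, in their medium (assumed)

# Implied contaminating phosphate in my medium:
my_contaminating_P = their_contaminating_P * my_density / their_density
print(f"my medium: ~{my_contaminating_P:.1f} uM contaminating P")  # ~0.6 uM

# Phosphate to add so the total roughly matches theirs:
print(f"add ~{their_contaminating_P - my_contaminating_P:.1f} uM")  # ~2.4 uM
```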
I've also been thinking about controls for the DNA purification steps. This weekend I'm going to make up my arsenic solution (100 ml of 1.0 M Na2HAsO4). A colleague has given me her procedure for safely weighing out the powder in the fume hood (that's the only seriously risky step).
To check that my DNA-purification procedure does get rid of free arsenate from the medium, I'm going to do the following: I'll start with about 1 mg of old H. influenzae chromosomal DNA that we no longer need for other experiments. I'll mix it with some AML60 medium containing 40 mM arsenate, and then put it through the purification steps I plan to use on the GFAJ-1 DNA - two rounds of ethanol precipitation with the DNA collected by spooling, and a final spin-column purification.
I'll save a sample of the DNA after each step, and send them all to my collaborators for mass-spectrometry analysis of their arsenic content. Ideally the DNA will not contain any detectable arsenic. If it does, I can try more extensive purification, or just accept this level as the baseline, depending on how sensitive the mass-spec turns out to be.
Once I have the growth results from my medium with 3 µM phosphate, I'll also use my new sodium arsenate stock to make some 3 µM P medium that also has 40 mM arsenate, and then test whether the arsenate inhibits growth at all, or whether it enhances it as reported by Wolfe-Simon et al. Then I can grow a big batch of cells in 3 µM P 40 mM As, purify DNA, and send the DNA to my collaborators for analysis.
And the Research Associate has already amplified the 16S-rRNA gene from the GFAJ-1 DNA I made and sent it off for sequencing (she's a whiz!). So next week we'll be certain that I'm working with the right strain. Or not.
Simulating evolution of uptake sequences with our new uptake-bias matrix
One of the things the post-doc and I want to do with his new DNA uptake data is test how it behaves with our old perl simulation of the evolution of genomic uptake sequences.
This analysis is prompted by the disparity (dissonance? discrepancy? disagreement?) between the uptake bias he's measured with his degenerate DNA fragments and the sequences overrepresented in the genome. The top part of the figure is a diagram of one of the DNA sequences that H. influenzae cells prefer to take up. The middle part is a 'sequence logo' based on the related sequences found in the H. influenzae genome, and the bottom part is a logo based on the uptake biases measured by the post-doc.
Because the two logos were drawn from very different datasets, we can't directly compare their overall 'importance' (the technical term is 'information content', indicated by the height of each column of letters); I've instead just shrunk the height of the genomic logo image so its overall importance appears similar to that of the uptake logo.
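(For readers who want 'importance' made concrete: the height of a logo column is its information content, 2 + Σ p·log2 p bits for DNA, ignoring the small-sample correction. A quick illustration:)

```python
import math

def column_information(freqs):
    """Information content (bits) of one logo column for DNA:
    2 bits maximum, minus the Shannon entropy of the base frequencies."""
    entropy = -sum(p * math.log2(p) for p in freqs if p > 0)
    return 2.0 - entropy

print(column_information([1.0, 0, 0, 0]))            # perfectly conserved: 2.0 bits
print(column_information([0.7, 0.1, 0.1, 0.1]))      # strong bias: ~0.64 bits
print(column_information([0.25, 0.25, 0.25, 0.25]))  # no bias: 0.0 bits
```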
The two logos still look very different, even though their consensus sequences are both identical to that shown on the double helix above them. The genome logo has a block of nine 'core' bases on the left, all of roughly equal importance (indicated by their height), and two 'T-tracts' on the right. But the core bases in the DNA-uptake logo have very different importances, with four (GCGG) being much more important than the others. The T-tracts in the uptake logo also appear much less important than those in the genomic logo.
We think the sequences in the genome accumulated (over many millions of years) due to the sequence bias of the cells' DNA-uptake machinery, so we don't understand why the two patterns are so different. Maybe other cellular processes contribute additional sequence biases, or maybe the difference is just an artefact of the way the genome sequences were identified. One way to (maybe) clarify the issues is to simulate the accumulation process in a computer program. We already have such a program (described in this research paper), and have used it with the data matrix that specifies the genomic logo. So in principle all we need to do is run this program with the new uptake-based matrix.
In practice, not so easy. The model is quite complicated even though the processes it simulates are treated very simply, and I've forgotten all the details about how it works. Luckily it's quite well documented, and the paper describing it is very clearly written (I'm patting myself on the back for this). One thing I do remember is that the program ran very slowly when dealing with the big genomic matrix (29 positions) rather than the short fake matrix we used for most runs. I can help it out by specifying a fast mutation rate, a short genome and short DNA fragments, and by seeding the genome with some partial matches to the uptake sequence consensus.
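For readers who don't want to dig into the Perl, here's a drastically simplified Python sketch of the core idea - not our actual model: a genome mutates at random, fragments are 'taken up' with a bias set by a position-weight matrix, and the preferred fragments recombine back into the genome, so matrix-favoured motifs should slowly accumulate. All the parameters are toy values.

```python
import random

random.seed(42)
BASES = "ACGT"

# A toy 4-position uptake-bias matrix: weight of each base at each position.
# (Stand-in for the real 29-position matrices; the favoured motif is ACGG.)
MATRIX = {
    "A": [5, 1, 1, 1],
    "C": [1, 5, 1, 1],
    "G": [1, 1, 5, 5],
    "T": [1, 1, 1, 1],
}
MOTIF_LEN = 4

def fragment_score(frag):
    """Best matrix score over all windows in the fragment (multiplicative)."""
    best = 0.0
    for i in range(len(frag) - MOTIF_LEN + 1):
        s = 1.0
        for j in range(MOTIF_LEN):
            s *= MATRIX[frag[i + j]][j]
        best = max(best, s)
    return best

def evolve(genome_len=2000, frag_len=50, cycles=3000, mut_rate=0.01):
    genome = [random.choice(BASES) for _ in range(genome_len)]
    for _ in range(cycles):
        # Mutation: each cycle, change a few random positions.
        for _ in range(int(mut_rate * genome_len)):
            genome[random.randrange(genome_len)] = random.choice(BASES)
        # Biased uptake: of two random fragments, 'take up' the one the
        # matrix prefers and recombine it back in at a random location
        # (a crude stand-in for the real uptake and recombination steps).
        picks = []
        for _ in range(2):
            start = random.randrange(genome_len - frag_len)
            picks.append(genome[start:start + frag_len])
        winner = max(picks, key=fragment_score)
        target = random.randrange(genome_len - frag_len)
        genome[target:target + frag_len] = winner
    return "".join(genome)

evolved = evolve()
print(evolved.count("ACGG"))  # the matrix-favoured motif should be enriched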
Migrating isn't the best word for what's happening to this blog...
It's certainly not moving with the herd, or flock. Emigrating maybe (leaving one population and joining another)? The blog's home is moving from Blogger to Field of Science, a much more select and congenial environment.
For regular readers, nothing much should change, though I hope the pages will get better-looking.
For new readers I should explain what happens here. I run a small research lab at the University of British Columbia; our lab home page is here. As the blog header says (yes, it's too brusque and unfriendly; it'll change too), here I mainly write about the research that I and the members of my lab are doing, day to day. This gives readers a slightly-sanitized window into the real research experience (best described as "Most scientists spend most of their time trying to figure out why their experiments won't work."). I try to provide a bit of background to the experiments, but unless you're a regular reader you probably should just view these posts as brief glimpses of a scientist's thinking style.
Sometimes I also write about other ideas, or critique published research papers from other labs. One such critique, of the NASA-sponsored research paper claiming that bacteria can put arsenic into their DNA in place of phosphorus, led to one of my current projects - testing that claim. So far I've spent most of my time trying to figure out why the cells won't grow.
Now back to regular posting...
Growth of GFAJ-1 under phosphate limitation
Before I can properly test whether GFAJ-1 cells put arsenic into their DNA backbone when they're starved for phosphorus, I need to carefully characterize how phosphorus starvation affects their growth in the absence of arsenic.
Yesterday I inoculated medium that had no added phosphate with cells that had been growing in medium with 1.5 mM phosphate. I'll call the no-added-phosphate medium 'P- medium', even though it likely contains a low concentration of contaminating phosphate, and I'll call the cells grown in P+ medium 'P-replete' cells. The cells doubled 3-4 times before growth stalled, reaching a density of about 2 x 10^8 cells per ml. I'll call these cells 'P-depleted' cells; I froze about 15 vials of them to use for starting future phosphate-limited cultures.
Some of this growth was no doubt due to the contaminating phosphate in the P- medium, but I suspect that most of it was possible because the cells contained quite a lot of non-essential phosphate, mainly in the form of ribosomal RNA. Today I started another experiment to tease apart the effects of P in the medium and in the P-replete cells.
I inoculated P- medium with P-replete or P-depleted cells, at initial densities of 10^4, 10^5, 10^6 and 10^7 cells/ml.
As diagrammed in the graph below, I expect the low-density cultures of P-depleted cells (dashed lines) to all grow to the same low final density, all limited by the low level of P contaminating the P- medium. I expect the high-density cultures of P-replete cells (solid lines) to all increase in density by the same factor, all limited by the relatively large amounts of P present in each P-replete cell.
Although the graph doesn't show this, I expect the lowest density cultures of P-replete cells to reach only the same low density as the P-depleted cells. And the highest-density culture of P-depleted cells might grow to a higher density than the others, depending on how low the P contamination is and how depleted of P the cells are.
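Here's a toy version of those expectations, with made-up round numbers (a yield of 4 x 10^6 cells/ml from contaminating P, and ~4 doublings' worth of internal P), just to make the logic explicit:

```python
# Sketch of the predicted final densities (illustrative parameter values).

P_YIELD = 4e6        # cells/ml supportable by contaminating P in P- medium
REPLETE_FOLD = 16    # ~4 doublings supported by internal P of P-replete cells

def expected_final_density(initial, replete):
    """Final density is set by whichever P source supports more growth:
    medium P supports a fixed density; internal P supports a fixed
    fold-increase, but only for P-replete cells."""
    internal = initial * (REPLETE_FOLD if replete else 1)
    return max(P_YIELD, internal)

for initial in (1e4, 1e5, 1e6, 1e7):
    print(f"start {initial:.0e}: depleted -> "
          f"{expected_final_density(initial, False):.1e}, "
          f"replete -> {expected_final_density(initial, True):.1e}")
# Low-density depleted cultures all stall at 4e6; high-density replete
# cultures all gain the same 16-fold; the lowest-density replete culture
# reaches only the same 4e6 as the depleted ones.
```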
Next steps
Now that I have good growth conditions for GFAJ-1, I need to plan the rest of the work in more detail.
Frozen cell stocks: I'll want to start my phosphate-limited cultures with cells that have been pre-grown under phosphate-limited conditions, so that their subsequent growth will be limited by the phosphate in the medium, rather than by the phosphorus they have accumulated in RNA and other molecules. I have about 50 ml of dense culture in P+ medium, so I'll wash these cells, resuspend them in a large volume of P- medium, and incubate them just until growth stalls**. To make monitoring easy I'll start the culture at the final OD reached by my original P- culture; at this density I expect the cells to run out of phosphate within a few doublings.
(** I need to be careful that apparent cell density isn't being influenced by accumulation of poly-hydroxybutyrate, as it was in the Wolfe-Simon paper. So I'll also plate the cells to check that numbers are no longer increasing.)
Once I have a phosphate-limited culture I'll collect the cells (by filtration or centrifugation), resuspend them at high density, and freeze 1 ml aliquots of them in 15% glycerol. I'll check the cell density by plating before and after freezing, to make sure that freezing doesn't kill the cells.
DNA prep: I should have enough cells to also make my first DNA prep - if not I'll inoculate one of my freezer-stock tubes into a new overnight culture. I'll put this DNA through the same multiple purification steps I plan to use for DNA from arsenic-grown cells, checking the DNA concentration at each step, and running a gel to confirm the quality of the DNA and the absence of contaminating RNA. This will let me estimate the loss in each purification step, so I'll know how many cells I'll need for the arsenic-grown DNA prep.
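To show why the per-step losses matter, here's the kind of yield arithmetic I have in mind. Every number is an assumption: ~5 fg of DNA per cell (one copy of a ~4-5 Mb genome), guessed recovery fractions, and a made-up 10 µg target, since I don't yet know how much DNA the collaborators will need.

```python
# How many cells for the final DNA prep? (All numbers are assumptions.)

DNA_PER_CELL_G = 5e-15          # ~5 fg: one copy of a ~4-5 Mb genome
STEP_RECOVERY = [0.7, 0.7, 0.8] # guessed: 2x ethanol ppt + spin column
DNA_NEEDED_G = 10e-6            # suppose the collaborators need ~10 ug

overall = 1.0
for r in STEP_RECOVERY:
    overall *= r                # ~0.39 overall recovery

cells_needed = DNA_NEEDED_G / (DNA_PER_CELL_G * overall)
print(f"overall recovery ~{overall:.2f}; need ~{cells_needed:.1e} cells")
# ~5e9 cells: e.g. ~25 ml of culture at 2e8 cells/ml
```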
Elemental analysis: I'll send aliquots of the media and some of the DNA to my collaborators, so they can check levels of phosphorus and arsenic. We can consider this to be our control DNA - if the DNA from arsenic-grown cells has no more arsenic than this DNA, we can conclude that arsenic-grown cells do not put arsenic into their DNA. It will be good to test this DNA now, as it sets the limit of detection we need for the arsenic-grown DNA.
However, when I purify the DNA from the arsenic-grown cells I'll also do a second control, by briefly incubating phosphate-grown cells in the arsenic medium before DNA purification. This will control for any carryover of arsenic in the DNA purification. If my purification is adequate I expect this DNA to have no more arsenic than the first control.
Strain identification: I'll give some of the DNA to the RA. She has the primers for the 16S rDNA amplification and will get it sequenced to confirm that these bacteria are GFAJ-1.
Hmm, did I do this experiment 20 years ago?
Today at lab meeting we discussed our plans for creating specific point mutations in competence genes. This is something that one of the reviewers of our CIHR proposal wanted to see, and we think we can do it before the next grant deadline (Sept. 15).
The RA has a clever way to make any desired point mutation in our cloned competence genes, and we can easily introduce such mutations into the H. influenzae chromosome by natural transformation. The big problem is getting a high enough transformation frequency that we can identify the desired mutants by PCR. Because the mutations don't have an associated antibiotic resistance we won't be able to select for them, and because we won't usually know the effect of the mutation in advance we may not be able to screen for an expected phenotype.
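The frequency matters because it sets how many colonies we'd have to screen by PCR. The standard calculation (with my example numbers):

```python
import math

def colonies_to_screen(freq, confidence=0.99):
    """Colonies to screen to find at least one mutant with the given
    confidence, if each colony is a mutant with probability `freq`."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - freq))

print(colonies_to_screen(0.05))    # ~90 colonies at a 5% frequency: easy
print(colonies_to_screen(0.0005))  # ~9200 at a 0.05% frequency: hopeless
```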
We know that transformation frequency depends on the length of the fragment, the kind of heterology (single nt, insertion/deletion, etc), and the presence of uptake sequences.
We want to do some preliminary experiments to check the effect of fragment length. In the past I had gotten transformation frequencies of better than 5% with a 9 kb restriction fragment containing a cloned novobiocin-resistance point mutation, but more recent experiments with shorter fragments had given frequencies that were 100-fold to 1000-fold lower. So we were planning to do a new test, first cutting this novR fragment to different lengths with restriction enzymes and then measuring the effect on transformation frequency.
But while looking up the restriction map of this plasmid I discovered that I and my first technician had done a version of this experiment already, back when we were first working with this plasmid (RR#196). I think we weren't testing how fragment size affects transformation, but just finding out whether we should cut the insert free of the vector.
The plasmid has a 9.25 kb fragment of chromosomal DNA in a 2.3 kb pSU vector; the novR mutation is somewhere in the 2.4 kb gyrB gene (the postdoc knows where but he's not here right now).
The technician's transformation frequency results:
- uncut plasmid: 0.024
- plasmid linearized, with the vector still attached to the insert: 0.061 and 0.086
- insert cut free of the vector: 0.093
- plasmid cut once, about 2 kb from one end of gyrB: 0.051
- plasmid cut once, 70 bp from one end of gyrB: 0.036
(One time, I got a transformation frequency of 0.22 with this DNA!) Transformations that used the same novR marker carried in chromosomal DNA gave a transformation frequency of 0.011.
These are lovely high transformation frequencies - if we can get similar frequencies with our engineered mutants we'll have no trouble identifying them by PCR. Another limiting factor is how much DNA we can use. I don't have a good estimate of the DNA concentration the technician used in this experiment, but in a later experiment (RR#860) I used 100 ng of insert, from plasmid grown in either H. influenzae or E. coli, and got transformation frequencies of 0.045 and 0.07 respectively. So if the RA can generate 100 ng of mutant fragments (perhaps by long PCR) then we'll be all set.
But generating long fragments is a pain, so we still need to test the efficiencies of transformation with shorter novR fragments to find out how long her fragments need to be. This means that we need to grow up a prep of the plasmid. I've had problems with yield of this plasmid in the past (it's a low-copy vector), but the post-doc is confident he can get lots, so I'm streaking the strain out for him now. If yields suck we can always reclone the insert in a high-copy vector.
We are not alone!
I've just been exchanging emails with someone else who is trying to grow GFAJ-1 "to look at the DNA". They're also dealing with a media-related problem, though not the same problem as mine.
Growth properties of GFAJ-1's relatives?
To help me figure out what nutrient might be missing from the AML60 medium I'm trying to grow GFAJ-1 in, I'm reading about its Halomonas relatives.
Here's a tree showing the relationship between GFAJ-1 and its closest known relatives (source).
And here's a link to a paper describing VERY THOROUGH phenotypic characterization of all the formally described species of Halomonas: Mata et al., 2002. A detailed phenotypic characterization of the type strains of Halomonas species. Syst. Appl. Microbiol. 25:360-375. I was initially assuming that GFAJ-1 would be quite picky about growth conditions, but if it's a typical Halomonas species it's probably quite robust, as they all tolerate wide ranges of salt concentration, pH and temperature.
They're growing...
Two nice preliminary results:
First, GFAJ-1 cells grow well in liquid AML60 medium supplemented with 10 mM glutamate.
Second, they don't grow much at all if I omit the phosphate from the AML60 medium.
Together this means that I now have conditions for investigating whether the cells can incorporate arsenic into their DNA when phosphate is limiting.
But first I need to do some careful growth curves. This will be relatively easy because I've realized that Halomonas are nutritionally quite versatile, and I can get good colonies on agar plates overnight if I supplement the AML60 medium with tryptone or yeast extract.
(still no tungsten...)
Email to my GFAJ-1 collaborators
Hi Leonid, Josh and Marshall,
If you've been checking my blog you'll know that I'm still fussing around with growth conditions for GFAJ-1. Before doing any more work I want to check with you to make sure we agree that it's worth proceeding. (I'm off at a Gordon Conference this week. Various batches of cells are in the incubator, so I may have more growth results when I get back.)
Basically, I haven't observed cell growth in the liquid version of the AML60 medium described by Wolfe-Simon et al.; instead the cells just very slowly die. But the cells grow very well when this medium is solidified with agar. They don't grow if it's solidified with agarose, so I think the agar is providing a missing nutrient. The cells grow very well in liquid AML60 if I supplement it with Casamino Acids (hydrolysed casein) or with an amino acid mixture we use in our competence induction medium. Neither of these supplements can be used in my growth experiments because they contain too much phosphate, so I'm going to test addition of individual amino acids - initially just glutamate and aspartate since they're what's in our competence mix.
An additional complication is that my version of AML60 medium is still not exactly what Wolfe-Simon et al. described: First, the trace element mix I'm using doesn't contain tungsten (Oremland's lab adds 45 nM sodium tungstate to their minimal media, but there's no evidence that GFAJ-1 needs this); I've ordered some, and I'll add it when I get back. Second, I realized that their version of the basic AML60 salts lacked potassium, so I'm adding 10 mM KCl to my medium.
I'm confident that I can find a suitable growth condition for the ±P/±As growth experiments. But this probably won't be exactly the same condition reported by Wolfe-Simon et al., so we need to decide whether it's appropriate to use my growth condition to test their observation.
Here's what I would do:
- Test growth on medium with and without each of my candidate supplements (single amino acids), with and without added phosphate. (At present all my media have 1.5 mM phosphate added.) I want to see that cells grow to high density with at least one of the supplements, but only when phosphate is provided. (If they grow well without any supplement, that's even better.)
- Choose the supplement that gives the fastest growth or the highest growth yield. Do careful growth curves with and without the supplement, with and without 1.5 mM phosphate. Perhaps test intermediate phosphate levels too (3 µM and 30 µM?), since I don't know how much contaminating phosphate my medium might contain. I would monitor growth by plating dilutions on supplemented AML60 agar, and perhaps by flow cytometry (the flow-cytometer person might not be willing to count cells in 40 mM arsenic), as microscopic counting turns out to be a pain in the butt. The goal is to get publishable data (1) showing that the cells won't grow in the specified medium, and (2) establishing suitable phosphate-limited growth conditions for the arsenic test. If the cells don't grow at all in supplemented medium with no added phosphate I'll use medium with 3 µM phosphate added.
- Prepare DNA from the phosphate-replete and phosphate-limited cultures. Use PCR and sequencing to check that the strain I'm using really is GFAJ-1. Send you samples of the DNA and the culture media, so you can confirm that your mass-spec assays will work. You have yet to tell me how much DNA you will need; to get enough DNA I might need to grow a scaled-up version of the phosphate-limited culture.
- Again grow cells in the supplemented AML60 medium, phosphate-replete and phosphate-limited, this time with and without 40 mM sodium arsenate added to the media.
- Grow a scaled-up culture of the cells in 40 mM arsenate plus limited phosphate, and prepare DNA. Send you samples of the media and the DNA.
Do you think that this would be seen as a valid test of the reported result? If not I'll probably stop now, as I have lots of other work to do.
Rosie
Do bacteria communicate using nanotubes?
Until Friday I'm at the Microbial Population Genetics Gordon Conference, a black hole from which no tweets or blog posts must emerge. Well, that's an exaggeration - we just can't tweet or post about the conference. But someone at last night's excellent poster session drew my attention to a remarkable paper published in February, and now that I've read it I'm going to write about it here.
The paper is titled Intercellular nanotubes mediate bacterial communication. It's by Gyanendra Dubey and Sigal Ben-Yehuda (Hebrew University) and appeared in Cell 144:590-600. (It's behind a paywall here, but you can find a pdf of it here at OpenWetWare.) The authors found that Bacillus subtilis cells form tubular connections through which small molecules, proteins and plasmids can pass from one cell to another. Although the authors' ideas about bacterial communication and cooperation appear to owe more to Sesame Street than to evolutionary theory, this is a very surprising and important finding.
The paper first reports that, when a mixture of B. subtilis cells with and without GFP were mixed on an agar-solidified medium, GFP- cells lying next to GFP+ cells gradually acquired GFP. The same transfer happened if some cells instead were preloaded with a different fluorophore, one known to not pass through cell membranes. If the cells contained different chromosomally encoded antibiotic resistance proteins, some cells became transiently resistant to both antibiotics. And if some cells contained an antibiotic-resistance gene on a plasmid, some of the plasmid-free cells acquired the plasmid. Plasmid acquisition was not blocked by DNase I and did not occur when a sub-inhibitory concentration of the detergent SDS was added to the mixed cells or when free plasmid DNA was added to plasmid-free cells.
The obvious next step was to look at the cells with electron microscopy. This showed tubular connections between cells, which the authors unfortunately chose to call 'nanotubes'. (Unfortunately because nanotube is a very well established term for tubular filaments of fullerene carbon.)
How the authors prepared the cells for EM is probably important. Normally cells are first suspended in liquid and then placed on an EM grid, but this might disrupt any cell connections. So instead the authors grew cells on agar medium for 3 hr, and then placed EM grids on top of the cells. They let the cells grow for 3 more hr and then lifted the grids and the attached cells from the agar. It would have been good to have repeated this unconventional procedure using cells that were already dead when placed on the plate, or otherwise incapacitated so they couldn't actively form connections. This would show that the apparent connections aren't just coming from the agar.
What's bizarre about this result is that nobody has ever observed these connections before, but the authors don't discuss why the connections wouldn't have been discovered by earlier B. subtilis researchers. It's true that researchers may not have used exactly these conditions. However the connections must have formed quite quickly in their experiments, as evidence of transfer was seen within 15 minutes, so I would think that anyone looking at cell behaviour under a microscope would have noticed that cells became stuck together.
It could be that microbiologists have overlooked this because bacteria aren't usually co-cultivated on agar surfaces. But growth on surfaces is the norm for bacteria in natural environments. If connecting tubes and molecular transfer were this ubiquitous, phages and plasmids would be much more uniformly distributed both within and between species. Evolutionary processes would also be very different; mutant phenotypes would blend when different strains made contact.
Overall I'm dubious. The data looks OK, but the phenomenon doesn't make biological or evolutionary sense.
***A few more details I picked up on rereading the paper:
- The frequency of plasmid transfer after >4 hr of co-cultivation was only 10^-7. That's very low, given the apparently high transfer rate of other molecules.
- Transfer was blocked by levels of SDS that didn't affect cell growth - this suggests that the tubes do not have the same envelope as the cell bodies.
On other research fronts...
The postdoc and I continue to work on his DNA uptake specificity manuscript, temporarily distracted by (1) the need to prepare a poster for the Gordon Conference on Microbial Population Biology, which I'll be attending next week (no tweets or live-blogging, by conference policy); and (2) the belated arrival of the sequencing output for the 88 recombinant clones he submitted to the Genome Sciences Centre 8 months ago.
I've also repeated the controls for my phage recombination experiments. This time they worked well, and I realized that 'infectious centers' are not a good way to estimate recombination.
The Research Associate has me doing competence assays on the collection of knockout mutants she's generating. I can do these much more efficiently than anyone else in the lab - I've done 12 in the past week.
Two mistakes discovered
My GFAJ-1 cells grow on the designated medium (AML60) if it is solidified by agar, but not if it is solidified with agarose. In response to a tweet I sent out about possible nutrient differences between agar and agarose, Mark Martin suggested potassium. This led me to the discovery of two errors, one by me and one by Wolfe-Simon et al.
My error was that my stock phosphate solution was sodium phosphate, not potassium phosphate as specified by Wolfe-Simon et al. Because the specified no-phosphate AML60 base recipe does not include a source of potassium, my liquid medium had no potassium. Agar does contain potassium as a contaminant; the analysis of Bacto agar lists 0.121% potassium. My back-of-the-envelope (literally) calculation converts this to about 0.2 mM potassium in medium solidified with 1.5% agar. That's not a lot, but probably enough, and tenfold more than is present in agarose.
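(For transparency, here's that arithmetic as a few lines of Python. It's just a sketch using my assumptions: 1.5% w/v agar and the 0.121% potassium figure. Counting the contaminant as elemental potassium gives ~0.5 mM; counting it as KCl gives ~0.2 mM, so the figure above is the right order of magnitude either way.)

    # Back-of-the-envelope potassium in medium solidified with 1.5% Bacto agar.
    # Assumptions are mine, apart from the 0.121% potassium figure.
    agar_g_per_L = 15.0                  # 1.5% w/v agar
    K_g_per_L = agar_g_per_L * 0.00121   # 0.121% potassium -> ~0.018 g/L
    mw_K, mw_KCl = 39.1, 74.55           # g/mol
    print(K_g_per_L / mw_K * 1000)       # ~0.46 mM if counted as elemental K
    print(K_g_per_L / mw_KCl * 1000)     # ~0.24 mM if counted as KCl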
Wolfe-Simon et al.'s error makes their growth results even harder to interpret. The originally published recipe for AML60 medium ('artificial Mono Lake') includes 1.5 mM potassium phosphate, which provides both potassium and phosphate. (The lake water is about 24 mM potassium.) This medium was modified by Wolfe-Simon et al., who removed the phosphate and replaced it with arsenate. Although they used potassium phosphate for their +P version, they used sodium arsenate, not potassium arsenate, for their +As versions. Thus their arsenate-grown cells were starved for both phosphate and potassium.
Anyway, I've now added 2.5 mM potassium chloride to my AML60 medium.
Life and death of GFAJ-1
Because the GFAJ-1 cells have been growing well on plates (visible colonies in less than 48 hr), I've started using colony counts to follow their growth (or lack of it). Two little experiments highlight the problem: the cells thrive on agar plates but die in the equivalent liquid medium.
First, I tested the growth of cells in several tiny colonies that had grown up in 48 hr on agar plates with the complete ML60 medium. I sucked each colony up into a micropipette tip and resuspended it in 100 µl of ML60. Then I spotted 1 µl, 10 µl and the remaining 89 µl onto a new agar plate and incubated it for 48 hr. The 1 µl spots each produced more than a thousand new colonies, which means that the original colonies each had more than 10^5 cells. This means that, on agar plates, the cells are doubling in less than 3 hours, much faster than their fastest doubling rate seen in Fig. 1 B of the Wolfe-Simon et al. paper (~12 hr).
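(As a sanity check, here's the doubling-time arithmetic - a minimal sketch assuming each colony started from a single cell and grew for the full 48 hr:)

    import math
    cells_per_colony = 1000 * 100             # >1000 colonies from 1/100 of a colony
    doublings = math.log2(cells_per_colony)   # ~16.6 doublings from one cell
    print(48 / doublings)                     # ~2.9 hr per doubling, vs the ~12 hr
                                              # in Wolfe-Simon et al. Fig. 1B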
Second, I tested the growth of cells in liquid ML60 medium. I had a stock of cells that had been grown in a layer of liquid ML60 overlying the surface of a ML60 agar plate. I serially diluted these cells in liquid ML60 (10^-1, 10^-2 ... 10^-6), and spotted 10 µl of each dilution onto agar plates. After 48 hr the 10^-5 dilution had produced about 50 colonies, indicating that the original stock had about 5 x 10^8 viable cells per ml. While the agar plates were incubating I also incubated the dilutions I'd made, and after 48 hr I plated them again. Now the 10^-5 dilution spots produced only ~1 colony, and the 10^-4 spots produced ~15. This means that 95% of the cells had died in the same medium that, when solidified with agar, they grew very well on!
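(And the plate-count arithmetic, for the record - assuming 10 µl spots and that every viable cell makes a colony. With these rough colony counts the death fraction comes out closer to 97%, i.e. at least the 95% quoted above:)

    # Viable count from a spotted dilution: colonies / (volume in ml x dilution)
    def cfu_per_ml(colonies, dilution, spot_ul=10):
        return colonies / (spot_ul / 1000 * dilution)

    before = cfu_per_ml(50, 1e-5)   # ~5e8 cfu/ml in the original stock
    after = cfu_per_ml(15, 1e-4)    # ~1.5e7 cfu/ml after 48 hr in liquid
    print(1 - after / before)       # ~0.97: roughly 95%+ of the cells died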
Today I'm going to do several more tests:
First, I'm going to make a completely new batch of the ML60 medium base, and test whether cells grow better in it.
I've already tested whether cells grow better sealed in screw-cap glass tubes (used by Wolfe-Simon et al.) rather than in loosely capped glass tubes - they don't. Using the sealed tubes lets me test whether they grow better when gently agitated. I can't fit a roller wheel into my little 28°C incubator, but I can fit a rocker platform. First, though, I needed to check whether the rocker motor produced too much heat, which would raise the temperature of the incubator above 28°C. The temperature seemed OK last night, so the cells in sealed tubes have been rocking overnight.
Finally, I'm going to test whether GFAJ-1 cells use agar as a carbon source, by plating them on agar-solidified ML60 with no glucose, and by adding agar rather than glucose to the liquid medium. Maybe I'll also test whether they grow on ML60 solidified with agarose (much purer than agar). I don't think we have any gellan (an agar substitute), but I know that a lab upstairs does, so I might test that too.
The big question is, should we still try to test whether GFAJ-1 put arsenic in their DNA, if I can't grow them under exactly the conditions that Wolfe-Simon et al. used?
Howard Gest says that 'astrobiology' is an oxymoron
Howard Gest, Emeritus Professor at Indiana, has posted an article about astrobiology on his university's document server.
The title says it all:
On the Origin, Evolution, and Demise of an Oxymoron: “astrobiology.”
A Select Time Line, from Elephants on the Moon to Phantom Microbes on Mars, onto Earth’s Bacteria in the Guise of Extraterrestrial Life and the Arsenic Monster of Mono Lake.
Touching on: astrobiology follies, bacteria, chicken pie, exobiology, extremophiles, fossil microbes, Mars, media mayhem, meteorites, moon dust, NASA, phantom microbes, War of the Worlds, and sundry other topics.

I especially like his dig at those physicists who assume that all the major problems in biology can be solved by the application of a little physics-derived common sense:
Why is it that biologists never advance hypotheses on problems of physics relating to quarks, gluons, black holes etc., whereas many physical scientists (physicists, astronomers, geologists etc.) have attempted to explain major complex unsolved problems of biology?
More phage recombination work
I finally found some time to do more phage recombination work. I still need to repeat the experiment showing that DNase I pretreatment of lysates reduces phage recombination but not phage titer, and that protease pretreatment reduces phage titer but not phage recombination. But this experiment instead tested whether phage recombination is affected by mutations that eliminate DNA uptake or reduce chromosomal transformation.
Each test involves infecting cells with a mixture of two temperature-sensitive phage mutants (ts1 and ts3), and plating the progeny phage at the restrictive temperature of 41°C. I tested two uptake-null mutants, pilA and rec2. rec2 mutants are known to be defective for phage recombination; pilA has never been tested, but I expected the same phenotype, based on the hypothesis that phage recombination depends on uptake of phage DNA by the competence machinery. I also tested knockouts of three cytoplasmic-protein genes in the competence regulon: dprA, comM, and radC.
As positive controls I tested phage recombination in competent wildtype cells, and phage production using wildtype phage. As negative controls I tested phage recombination in log-phase wildtype cells (no recombination expected), and infection of wildtype cells with each phage mutant singly (no plaques expected at the restrictive temperature).
The positive-control infection gave a recombinant frequency of 5 x 10^-3, as expected, and the single-infection controls worked well - essentially no plaques at 41°C, giving revertant frequencies of <3 x 10^-6 (ts1) and ~3 x 10^-5 (ts3). But the negative phage-recombination control (log-phase cells) and the known recombination-negative mutant (rec2) both gave far more plaques at 41°C than expected (frequencies of 2 x 10^-4 and 9 x 10^-5 respectively).
Three of the previously untested mutants had recombination frequencies similar to the negative controls (pilA: 1.6 x 10^-4; dprA: 1.3 x 10^-4; comM: 2.7 x 10^-4), and the radC mutant, which has a normal frequency of chromosomal transformation, had a near-normal frequency of phage recombination (1.2 x 10^-3). But these values aren't very useful because of the high background, so I need to do the experiment again.
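(For readers keeping score, this is how I compute these numbers - a sketch under my own conventions, not necessarily the exact bench bookkeeping: frequency = plaques at 41°C divided by total plaques at the permissive temperature, with no factor-of-two correction for unscored reciprocal recombinants. The negative-control background sets the floor below which values can't be interpreted.)

    # Recombinant (or revertant) frequency from plaque counts.
    # Convention (mine): plaques at the restrictive 41 C divided by total
    # plaques at the permissive temperature.
    def frequency(plaques_41C, plaques_permissive):
        return plaques_41C / plaques_permissive

    # Hypothetical counts, for illustration only: 500 plaques at 41 C from a
    # lysate titering 1e5 at the permissive temperature gives 5e-3, the
    # positive-control value. Anything near the ~1-2 x 10^-4 background of the
    # negative controls (log-phase cells, rec2) is uninterpretable.
    print(frequency(500, 100000))   # 5e-3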
GFAJ-1 (no real progress to report)
I'm now using the medium exactly as specified, but the cells still aren't growing consistently. They also form variable numbers and sizes of Tween 20-resistant clumps.
More detailed plans
- Get the GFAJ-1 cells growing in liquid ML60 medium (with vitamins and trace elements) at least as well as they grew for Wolfe-Simon et al. That means at least two doublings a day.
- Do preliminary growth curves to find out what phosphate concentrations limit growth.
- Grow a big batch of cells under phosphate-limiting conditions. Collect them and freeze them when they are still growing exponentially (so no complications due to accumulation of poly-hydroxybutyrate granules).
- Do meticulous growth curves with different concentrations of added phosphate with and without 40 mM arsenate. Also try media with no added phosphate, in case my phosphate-contamination levels are like those in the paper.
- Grow enough cells at the lowest phosphate level, with and without arsenate, that I can purify enough DNA for analysis.
- Send the DNA and samples of the various media to Leonid Kruglyak and Josh Rabinowitz, who will use mass spectrometry to measure phosphorus and arsenic levels. I'm waiting for them to tell me the sample sizes they'll need.
I'll also ask the RA to order the universal 16S rDNA primers so we can use PCR and DNA sequencing to confirm that the cells I'm growing are indeed GFAJ-1.