for GFAJ-1, that is. (I have plenty of excuses for myself.)
My favourite old-timer microbiologist had lots of PABA and was happy to give me some. And, after some fussing with the pH to get the folic acid to dissolve, I now have my sterile 500x multi-vitamin stock solution for the specified ML60 culture medium. And I also have the 1000x trace element solution, thanks to a grad student in another colleague's lab. So now I can make medium exactly as specified for GFAJ-1 cells to grow in.
I also have fresh single colonies of cells to resuspend and inoculate into this medium. I'll use the Petroff-Hausser counter and acridine orange staining to count the number of cells the new cultures start with, and recount them every day.
It's not the water, nor the tubes, nor the parafilm...
So far my GFAJ-1 cells grow great on agar plates. I've tested several brands of agar (Bacto, MBP, and some cheap old stuff (Mikrobiologia?) that we only use for E. coli). They also grew great on agar plates that had been overlaid with 2 ml of the liquid medium, to greater than 10^8 cells/ml!
But they won't grow at all (and I suspect they die) in the liquid media I made, regardless of whether the medium was made with tap water or distilled water. It doesn't matter whether the culture tubes were washed before being used, nor whether the medium is in a plastic petri dish sealed with parafilm instead of a culture tube.
Adding the trace elements mix to the original batches of medium didn't let the cells grow, but the cells did survive better (maybe even grow a bit?) in a new batch of liquid medium I made up with trace elements. Tonight I'm testing whether they'll grow better in a plastic tube with a tightly sealed screw cap; the control is an identical tube with a loose-fitting cover.
The vitamins we ordered have arrived so I was going to test the full ML60 medium used by Wolfe-Simon et al. But I just discovered that I'd forgotten to put PABA (4-aminobenzoic acid) on the list. Maybe I can 'borrow' some from another lab, but if not I'll have to wait till next week.
Maybe it's the water? Or the tubes?
I thought of two (three!) more easily testable explanations for why the GFAJ-1 cells grow on agar plates but not in the liquid media.
First, maybe there's something in the water. When making up my first batch of the culture medium for GFAJ-1, I didn't go to the trouble of making up the special trace-element mix that Wolfe-Simon et al. added to their culture medium. This seemed reasonable because I'm using cheap 'Reagent Grade' chemicals (as did Wolfe-Simon et al.), and these are likely to contain traces of all the needed elements. Just in case they didn't, I made up the medium with tap water rather than the usual distilled water; Vancouver's tap water is very pure, but it still probably contains traces of everything cells might need. The stock salt solution (4X) was made with distilled water, but the final liquid medium is 75% tap water. But the agar plates are mainly distilled water (only about 10% tap water), because the 2% agar stock I used was made with distilled water. So maybe my GFAJ-1 cells aren't growing in liquid medium because tap water contains something that inhibits growth.
Second, I used brand new glass culture tubes for my liquid cultures, because I didn't want growth to be affected by traces of past cultures or of phosphate detergent that might have adhered to the surfaces of our usual tubes. I didn't wash the tubes before I used them, because I didn't want to risk leaving detergent on them. But maybe the new tubes are coated with something that inhibits growth.
Third, Wolfe-Simon et al. used screw-cap glass tubes for their cultures, but my tubes have loosely fitting caps designed to admit air. Maybe the air is inhibiting growth - that would be consistent with the good growth in parafilm-sealed petri dishes.
So this morning:
- I resuspended a single colony of cells from an agar plate in 1 ml of medium, and inoculated 0.2 ml of the cells into each of five tubes containing distilled-water medium with different amounts of phosphate.
- I did the same, this time into new tubes that I'd washed thoroughly with deionized water (no detergent) before sterilizing them by autoclaving.
- I asked the RA to check the prices of screw-cap tubes. They're expensive and available only in large quantities, so we won't order them unless the other tests don't solve the growth problem.
Why would GFAJ-1 grow much better on agar than in liquid?
My GFAJ-1 cells are growing very well on agar plates with the medium I made. After three days I resuspended the cells in one colony and did a rough estimate of their numbers. I calculated that there were about 3 x 10^5 cells in the colony, which means the cells must have been dividing at least once every four hours.
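Here's the arithmetic behind that estimate, as a quick Python sketch; it assumes the colony began as a single cell:

```python
import math

# Back-of-envelope doubling time for one colony, using the numbers above.
cells_in_colony = 3e5          # rough count from the resuspended colony
hours_of_growth = 3 * 24       # three days on the plate
doublings = math.log2(cells_in_colony)   # assumes the colony began as 1 cell
print(f"{doublings:.1f} doublings in {hours_of_growth} h "
      f"= one every {hours_of_growth / doublings:.1f} h")
# -> 18.2 doublings, one every ~4 h
```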
This is a lot faster than Wolfe-Simon et al. reported for GFAJ-1 cells in their liquid medium (about two doublings per day was the fastest), and it's a hell of a lot faster than my GFAJ-1 cells are growing in the liquid version of the same medium. I think the cells in liquid medium are still alive, but the numbers from my crude counts haven't really changed at all in the past two days, and have hardly increased from when I inoculated them five days ago. It's not for lack of phosphate; if anything, the cells in medium with little or no added phosphate are doing better than the cells in medium with the full 1.5 mM phosphate that's also in the agar medium.
Do the cells just prefer growing on an agar surface to growing in a liquid? Does the agar contain some trace nutrient they need? Or chelate away some inhibitory trace component of the liquid medium? Is it something about being in a petri dish? Or being sealed? (The petri dishes were wrapped with parafilm so they wouldn't dry out during long incubations, but the tubes of liquid can breathe through their loose caps.) I've made a fresh batch of agar plates, which should let me test these ideas.
- Compare growth on plate from previous batch to growth on new batch.
- Spot 10 µl of each liquid culture onto plates (I did this three days ago too, so I can compare colony counts).
- Compare growth with the two vitamins to growth without pantothenic acid, with and without thiamine.
- Compare growth with and without parafilm wrapping.
- Test whether cells will grow in liquid medium if it's overlaid on agar medium.
- Test whether cells will grow in liquid medium if it's in a petri dish.
And I've found a Petroff-Hausser counting chamber, so maybe I can improve my microscope counts. I've also had advice from several colleagues who count cells by flow cytometry or microscopically - I'm still digesting this.
CSWA field trips: 2. Banff
On Monday attendees at the Canadian Science Writers' Association meeting went on a field trip to Banff. We didn't see the city at all - we went straight from its new recreation centre out into the field with park naturalists, learning about four different aspects of the park. First was fire, then traditional human use of the land. The last two were the most interesting - different aspects of how roads and railways affect the movement of park animals, first land mammals and then fish.
How do park animals cross the road? The park is traversed east-west by the Trans-Canada Highway, and most interactions between animals and humans used to occur along the road. Humans have traditionally been thrilled to see animals by the roadside, but the animals haven't fared so well, with many being killed by cars and trucks. Part of the highway (~45 km) has recently been widened ('twinned', with separated eastbound and westbound lanes), and one goal of the improvement project was to reduce harm to the animals. One component is 2.4 m high fencing all along the improved highway, with a 1.5 m 'apron' under the ground to block burrowing. Complementing the fencing are tunnels and overpasses that allow animals to safely cross. As VIP visitors (lucky us) we were taken up onto a 50 m wide wildlife overpass that allows large animals (bears, lynx, elk etc.) to cross the highway while feeling like they're still in the park. Park staff limit visits to the overpasses to only a few times each year, so that human scent won't alarm the animals - we were joined by visiting staff from the national park system of China, looking for ways to manage their wildlife. The sides of the overpass are 'bermed', with earth banked up so you can't see or hear the traffic, and there are infrared-triggered cameras that capture snapshots of passers-by.
Now that the bears can safely cross the road, another hazard has moved to high priority - being hit by a train. Bears like to walk along railroad tracks, but they don't realize that they should get out of the way of approaching trains, and the trains can't stop in time. This problem hasn't been solved yet.
What about the fish? Fish? Why would fish need to cross the road? The main river through the park runs parallel to the highway, and many side streams join it through culverts (pipes under the highway). Fish need to be able to move back and forth between the streams and the river, but the culverts often block this. When the stream's flow rate is high, the water moves even faster through the narrow culvert (the aquatics expert said this was the Venturi effect, but that's not what Wikipedia says...). The fish may not be able to swim upstream against this current, especially because a culvert doesn't provide them with anywhere to rest. So culverts are being replaced with much wider culverts or mini-bridges, and their bottoms are being filled with the same diverse materials found in natural stream beds.
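(A physics aside: the speed-up in a narrow culvert follows from conservation of mass, the continuity equation below; 'Venturi effect' properly names the pressure drop that accompanies such a constriction, so the expert and Wikipedia may just be describing two sides of the same phenomenon.)

$$A_{\text{stream}}\,v_{\text{stream}} = A_{\text{culvert}}\,v_{\text{culvert}} \quad\Longrightarrow\quad v_{\text{culvert}} = v_{\text{stream}}\,\frac{A_{\text{stream}}}{A_{\text{culvert}}}$$

So a culvert with half the stream's cross-section carries water at twice the stream's speed.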
A fast-flowing culvert will wash away the soil and gravel at its outflow point, creating a mini-waterfall. But unlike salmon, most fish can't jump, so they can't get into the culvert from its downstream side. Parks staff are redesigning the culvert outflows to eliminate this problem.
Next: the field trip to the Athabaska oil sands...
Vitamins are for wusses (#arseniclife)
The GFAJ-1 cells I streaked on a plate of my less-than-specified culture medium are growing, and much faster than I had dared hope. After less than 48 hr I can already see tiny (very tiny) colonies. I can't estimate growth rate because I don't know how many cells are in the colonies.
I was worried that they might not grow at all, because this medium doesn't include the trace-element mix that Wolfe-Simon et al. added to their medium. Instead I simply made up the medium with lovely Vancouver tap water. It also doesn't include most of the vitamins they added - I only had stocks of thiamine and pantothenic acid, so they were the only vitamins I added.
I'm still considering solutions to the cell-counting problem. Flow cytometry of acridine-orange-stained cells won't work very well because some of the cells are in small clumps of 2-10 cells. Collecting known volumes of cells onto black filter membranes would probably work fine but would be a pain and use a lot of the filters. I think the simplest solution is to make a 2X acridine orange stock that contains a known density of 2 µm polystyrene beads, mix this with an equal volume of culture, and then count both the beads and the cells in random microscope fields-of-view. I'm going to test this now.
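Here's the calibration arithmetic as a small Python sketch; the bead density and the example counts are made up for illustration:

```python
# A minimal sketch of the bead-calibration arithmetic; the bead density
# and the example counts are illustrative, not measured values.
BEADS_PER_ML_STOCK = 1e7    # known density of 2 um beads in the 2X stain stock

def cells_per_ml(cells_counted, beads_counted, beads_per_ml=BEADS_PER_ML_STOCK):
    # Stain stock and culture are mixed 1:1, so beads and cells are diluted
    # identically; the count ratio in any set of fields therefore equals the
    # ratio of the original densities.
    return (cells_counted / beads_counted) * beads_per_ml

# e.g. 340 cells and 120 beads totalled over ten fields of view:
print(f"{cells_per_ml(340, 120):.2e} cells/ml")   # -> 2.83e+07
```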
Epi-fluorescence OK
Nobody in the next lab knew how to use the epi-fluorescence illumination (the boss is away) but we figured it out. Basically: turn on the power supply, get the cells in focus while waiting for the little controller screen to say the lamp is warmed up, then slide the filter holder back and forth to find the filter that lets you see glowing cells.
The only trick was finding how to increase the size of the illuminated area to fill the screen (it was a small off-center patch of light). I know enough about microscopes to know that it's never a good idea to randomly twiddle knobs to see if things get better, so I had to do quite a bit of RTFMing and Google-searching to figure out that one of the mysterious rods on the top of the microscope controlled the opening of a hidden something called the 'luminous field diaphragm', and that the even more mysterious knurled rods beneath it could be used to center the illumination.
That let me discover that a hemocytometer does not work well with epi-fluorescence; I think the glass actually glows. I'm now quizzing the old-timer microbiologists in search of something called a Petroff-Hausser counting chamber, designed for bacteria. If that fails I'll just mix a known concentration of 1 µm polystyrene beads with the cells so I can calibrate the volume of whatever area I'm counting.
Error and contamination in our uptake data
Here's a figure showing our basic uptake data; it shows how changes at individual bases affected DNA uptake from our giant pool of degenerate uptake sequence fragments. At some positions (19-23, 30, 31) changes had no effect; these positions contributed nothing to uptake specificity. At other positions there were small (1, 11-14, 18, 24, 29) or moderate (2-5, 10, 15-17, 25-28) effects. And at four positions (6-9) there were severe effects.
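For readers who want to follow along, here is one way effects like these can be scored from the two sequence pools, as a minimal Python sketch; this is my paraphrase of the approach, not the actual analysis code:

```python
# A sketch of per-position effect scoring from the input and recovered pools.
def mismatch_freqs(reads, consensus):
    """Fraction of reads carrying a non-consensus base at each position."""
    counts = [0] * len(consensus)
    for read in reads:
        for i, base in enumerate(consensus):
            if read[i] != base:
                counts[i] += 1
    return [c / len(reads) for c in counts]

def uptake_effect(input_reads, recovered_reads, consensus):
    """Ratio near 1 = position irrelevant; near 0 = mismatches exclude uptake."""
    f_in = mismatch_freqs(input_reads, consensus)
    f_out = mismatch_freqs(recovered_reads, consensus)
    return [fo / fi if fi else float("nan") for fi, fo in zip(f_in, f_out)]
```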
The evidence of what was not taken up isn't compromised by concerns about sequencing error or about contamination of the recovered (taken up) DNA with DNA that remained outside the cells or on their surfaces.
We also want to analyze the fragments that were taken up, as these can reveal the effects of interactions between different positions, but we need to be sure that this is genuine uptake. Sequencing error and contamination will have only small impacts on the positions with modest and moderate uptake effects, but they become very important when we want to consider what was taken up at the positions with severe uptake effects. Thus we want to know how much of the apparent residual uptake at positions 6, 7, 8 and 9 is genuine and how much is only apparent, due to either sequencing errors or contamination.
It's easiest to analyze the contributions of error and contamination when we consider the subsets of fragments that have only one difference from the perfect consensus uptake sequence (we call these 'one-off' fragments). The second version of the figure (again, waiting for Blogger to fix this new problem) shows the one-off uptake data. When only position 6, 7, 8 or 9 had a non-consensus base, uptake was reduced to 0.4%, 0.1%, 0.1% and 0.4% respectively. In principle, this apparent uptake could be entirely genuine, entirely due to sequencing errors, or entirely due to contamination, and considering each of these extreme possibilities lets us set limits to their contributions.
Consider position 7: The set of 10,000,000 sequence reads from the recovered pool contains about 225,000 perfect consensus sequences but only 215 sequences mismatched at only position 7. At one extreme, these 215 could have all arisen by errors in sequencing the perfect-consensus fragments; this would imply an error rate of about 0.1%. At another extreme, these 215 could all be due to contamination; this would imply that about 3.6% of the DNA in the recovered pool actually came directly from the input pool. At the third extreme, these 215 could all have been taken up by the competent cells.
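The two non-uptake extremes are simple arithmetic; in this Python sketch the counts come from the posts, and the contamination bound assumes the input and recovered pools were sequenced to similar depths:

```python
# The two non-uptake extremes for position 7, as plain arithmetic.
perfect_reads_recovered = 225_000   # perfect-consensus reads, recovered pool
one_off_7_recovered = 215           # mismatched only at position 7, recovered
one_off_7_input = 5_940             # same class in the input pool

# Extreme 1: all 215 are miscalls of perfect-consensus fragments.
print(f"implied error rate: {one_off_7_recovered / perfect_reads_recovered:.2%}")
# -> ~0.10%

# Extreme 2: all 215 were never taken up, but carried over from the input pool.
print(f"implied contamination: {one_off_7_recovered / one_off_7_input:.1%}")
# -> ~3.6%
```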
What other evidence can constrain these extremes? The above 'upper limit' error rate of 0.1% is already quite small, at the low end of estimates of typical Illumina error rates. Our preliminary analysis of the 10 control positions in the sequenced fragments indicates a much higher (not lower) rate, but this analysis is confounded by frameshift errors that we hope to disentangle today. But I don't expect the control positions to give us a rate lower than 0.1%, so we won't be able to formally exclude the possibility that all 215 one-off-at-7 sequences arose by sequencing errors. Late-breaking data: Our collaborator, who did the sequencing, has just provided the error-frequency data from the control lane: 0.44%.
We can constrain the upper-limit analysis of contamination rates using data from uptake experiments using radiolabeled fragments. When cells were given a DNA fragment carrying a random sequence rather than a USS, uptake was reduced more than 100-fold. So contamination is expected to be less than 1%, but we don't have any direct way to test for it in the experiment that generated our sequence data.
We do have direct evidence of how well fragments mismatched only at position 7 are taken up (here). But this estimate is about 5%, a lot higher than the 0.1% upper limit set by the one-off sequence data.
One other analysis of our data is important here. The postdoc made logos of all of the sequences in the uptake dataset that were mismatched at any particular position, to compare to the logo for the complete uptake dataset (some are shown below). For most positions we still see the basic preferences. But in the ~75,000 sequences with position 7 mismatched, there is no information in other positions. He originally interpreted this as meaning that fragments mismatched at position 7 were taken up in a different way, but it's also entirely consistent with most of the sequences arising from contamination. We've just reexamined this raw data (without the logo analysis), and there are weak signals, suggesting that some of the sequences are not contamination.
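For context, the 'information' a logo draws is just two bits minus the Shannon entropy at each position; a minimal sketch, assuming equal background base frequencies and skipping the small-sample correction that real logo software applies:

```python
import math
from collections import Counter

def information_per_position(seqs):
    """Per-position information content in bits; 2 bits is the 4-base maximum."""
    info = []
    for i in range(len(seqs[0])):
        counts = Counter(s[i] for s in seqs)
        total = sum(counts.values())
        entropy = -sum(c / total * math.log2(c / total) for c in counts.values())
        info.append(2.0 - entropy)
    return info

# 'No information at the other positions' means their entropy is near 2 bits,
# i.e. all four bases are about equally frequent there.
```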
What does this error and contamination analysis tell us? Basically, for the four positions where uptake is severely affected by sequence changes, I don't think we can use the sequences in the uptake dataset to make inferences about uptake of mismatched fragments.
But we haven't yet analyzed the predicted effects of sequencing errors on the full set of sequences, only the one-offs, so maybe this will give more useful constraints. There were almost 10,000,000 sequences in the recovered dataset with the consensus base at position 7, and about 75,000 with one of the three non-consensus bases. For these 75,000 to have all arisen by sequencing errors, the error rate would have to be 0.75%. If the error rate was indeed 0.44%, then uptake (or contamination) could be responsible for only about 40% of the not-consensus-at-7 sequences. However this is a very simplistic analysis - we need to lay out all the sources and sinks to do the error analysis properly. (Later: No, this is the complete analysis - it's not overly simplistic at all.) Sequences that result from errors in sequencing of fragments with the consensus C at position 7 are expected to show the same logo as that of the full recovered dataset. The only ways to get sequences that don't show this logo are (1) by contamination or (2) by uptake if the specificity for other positions is eliminated when position 7 is not a C.
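The arithmetic behind that ~40% figure, using the numbers above:

```python
# How much of the non-consensus-at-7 signal could sequencing error explain?
consensus_at_7 = 10_000_000    # recovered reads with the consensus C at position 7
non_consensus_at_7 = 75_000    # recovered reads with A, G or T at position 7
error_rate = 0.0044            # per-base rate from the control lane

expected_errors = consensus_at_7 * error_rate       # ~44,000 reads
unexplained = non_consensus_at_7 - expected_errors  # ~31,000 reads
print(f"share left for uptake or contamination: {unexplained / non_consensus_at_7:.0%}")
# -> ~41%
```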
Counting the GFAJ-1 cells
I looked under the microscope at some of the GFAJ-1 cells on the plate I received. The Wolfe-Simon paper hadn't mentioned that they are motile, but it was reassuring to see them vigorously swimming around.
But they're little! Not as tiny as Haemophilus influenzae, but quite a bit smaller than E. coli. Their size made counting them with the hemocytometer a challenge - the hemocytometer is so thick that it messes up the microscope's optics, especially at high magnification, and my eyeballs are chock full of tiny floaters that aren't a problem in normal vision but look just like GFAJ-1 cells when I'm using a microscope.
Wolfe-Simon used the fluorescent dye acridine orange to stain the cells for counting. We do have acridine orange in our extensive collection of laboratory stains (scored from an old lab that was shutting down: we've never used most of them). And our microscope has fluorescence illumination, thanks to an investment by the lab next door - I'll have to ask them how to use it.
Starting to work with GFAJ-1!
The GFAJ-1 bacteria I requested should arrive today, so I'd better get some medium made. My plan is to initially grow them in medium supplemented with different levels of phosphate but no arsenate. This should let me see the extent to which phosphate availability limits growth in the absence of arsenate. Once I've carefully characterized their growth I'll test the effect of adding arsenate to medium that's phosphate-limited, and purify DNA from various cultures. The levels of P and As in the DNA and the various culture media will be determined by mass spectrometry, by Leonid Kruglyak and Josh Rabinowitz.
The medium-salts base contains 1 M NaCl, with ammonium sulfate, magnesium sulfate and sodium carbonate and bicarbonate; it's pH 9.8. I'm going to make this up as a 5X stock, then dilute it into water and add the glucose, phosphate (at 0 µM, 3 µM, 30 µM, 300 µM and 1.5 mM) and some but not all of the vitamins in the mix that Wolfe-Simon et al. used. (The paper never tested whether any of the vitamins were needed, and I don't have any thiotic acid (= thioctic acid = alpha lipoic acid).) I'll test diluting the medium salts into both distilled water and into tap water, because I don't think I need to fuss with the trace element mix the authors added to their medium (they didn't use analytical-purity reagents and neither will I).
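For the record, the mixing arithmetic for the phosphate series looks like this; the 100 ml batch size and the 100 mM phosphate stock are illustrative assumptions, not the actual recipe (only the 5X salts base and the phosphate levels come from the post):

```python
# Mixing arithmetic for one batch of medium at each phosphate level.
PHOSPHATE_STOCK_MM = 100.0   # assumed phosphate stock concentration, mM

def batch(final_ml, phosphate_uM):
    salts_ml = final_ml / 5.0                                # 5X salts base
    phos_ml = final_ml * (phosphate_uM / 1000.0) / PHOSPHATE_STOCK_MM
    water_ml = final_ml - salts_ml - phos_ml                 # ignores glucose/vitamin volumes
    return salts_ml, phos_ml, water_ml

for uM in (0, 3, 30, 300, 1500):   # the phosphate series from the post
    s, p, w = batch(100.0, uM)
    print(f"{uM:>4} uM P: {s:.0f} ml salts + {p:.2f} ml P stock + {w:.1f} ml water")
```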
I think the cells will arrive streaked on an agar plate of phosphate-containing medium. I'll scrape up some cells, dilute them in a bit of the no-phosphate medium, and check the cell density under the microscope. Then I'll dilute them into the various media, aiming for a cell concentration that's detectable but not high. I'll incubate them in our glass culture tubes (using new tubes) in a 28 °C incubator, and check the cell counts every day.
Time to go turn down the 33 °C incubator, and find the chemicals and the new tubes.
Oops, we forgot about sequencing errors
The postdoc derailed our consideration of contamination in his 'uptake' pool of DNA fragments by raising the issue of errors in the Illumina sequencing. We had discussed this issue long ago, before we had any data, and then forgot about it in the rush to analyze the results. How embarrassing!
The expected level of sequencing errors is somewhere between 0.1 and 1%. We have two sets of estimates from our data, but they're very discordant.
One set of estimates comes from the frequency of sequences in the uptake pool that differ from the 225158 perfect consensus sequences at only one of the 31 degenerate positions. At the positions that are most important for uptake, positions 7 and 8, there are only 215 and 156 such fragments. If we make the extreme assumption that they all arose by sequencing errors of perfect-consensus fragments, the error rate must be less than 0.1%. (If we allow some contamination and/or some uptake of the mismatched fragments, the error rate would be even lower.)
The other set of estimates comes from the control non-degenerate bases that precede (4) and follow (5) the degenerate sequence. We know what the base should be at these positions, so we can just count the differences. These are shockingly high; for different positions they range from 0.5% to 9.1%. Because of a weird pattern in the identity of the error bases, we suspect that these values have been confounded by misalignment problems, arising because the oligo synthesis or the sequencing erroneously skipped one or more positions. We'll try to sort this out this morning by looking directly at the non-degenerate positions in a few of these reads. If the differences at the 10 control positions are really due to base-identification errors we should see them in almost half of the reads.
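One way to tell miscalled bases from skipped or inserted positions is to ask whether shifting the whole read by one base restores the expected control bases; a rough Python sketch, where 'expected' stands in for the known construct sequence:

```python
# Classify one read's control-position mismatches as base-call errors
# or a probable frameshift (hypothetical helper, not our actual code).
def classify_read(read, expected, control_positions):
    if all(read[p] == expected[p] for p in control_positions):
        return "ok"
    # If a +/-1 shift restores all control bases, the mismatches probably
    # reflect a skipped or inserted position, not miscalled bases.
    for shift in (-1, 1):
        if all(0 <= p + shift < len(read) and read[p + shift] == expected[p]
               for p in control_positions):
            return "frameshift"
    return "base-call errors"
```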
Controlling for contamination in the uptake sequence set
The postdoc gave me the actual numbers for the fragments with single mismatches at position 7: The input set contained 5940 of these, and the recovered (uptake) set had only 215. If we hypothesize that all of these 215 arise from contamination, then 3.6% of the fragments in the recovered pool come directly from the input pool. Because we know the exact sequence distribution of the input pool fragments (we sequenced 10^7 of them), we can correct the distributions in various subsets of the recovered pool for this possible contamination.
The plan is to do the main analyses with and without the correction. We don't actually know how much contamination there is, but 3.6% is the upper limit. Any results that don't change when the correction is applied are robust.
The analysis I'm most concerned about is the test for interactions between bases at different positions in the uptake sequence. The measure of interactions between positions that don't have big effects on uptake is likely to be robust, as these samples are large and removing 3.6% is unlikely to make much difference. For positions with very strong effects (6, 7, 8 and 9), the contamination correction will definitely reduce the ability to detect any interactions (because the sample size will get much smaller)...
What we see when we ignore possible contamination: When all the sequences with a mismatch at a weak position (e.g. 5) are aligned, we see an increase in the importances of some other positions, and we think this means that the effects of the positions are interdependent. But when all the sequences with a mismatch at a very strong position (e.g. 8) are aligned, we see that the importances of the other positions all shrink dramatically. This could mean that when base 8 is incorrect the DNA is taken up by some sequence-independent process, or that the fragments with incorrect base 8 contain out-of-alignment uptake sequences that our analysis overlooked (we know this occurs). But it could also mean that the fragments with incorrect base 8 were not taken up at all, but entered the recovered pool as contamination. So we need to correct for the maximum possible contamination (3.6%) and see how the importances change.
How should the corrections be done? We have position-weight matrices for the recovered and input pools, and for each subset of this data (e.g. for all fragments with mismatches at position 5, or 8, or 14). We think that, to correct a recovered-pool matrix for contamination, we just need to subtract from it 3.6% of the corresponding value in the corresponding input-pool matrix. This is easy, but when the postdoc tried it he sometimes got negative numbers (whenever 3.6% of an input value was larger than the recovered value). He thinks this means we need to use a more complicated calculation, but I wonder if it just means that, at this position of the matrix, the corrected value is indistinguishable from zero. We both think that it might be wise to consult a mathematician at this point.
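The simple subtraction, with my clamp-at-zero interpretation, is nearly a one-liner; whether the clamp (rather than a fancier calculation) is legitimate is exactly the open question:

```python
import numpy as np

# Subtract the maximum possible contamination from a recovered-pool matrix.
# Flooring negative entries at zero encodes my guess that those cells are
# indistinguishable from zero; strictly, dividing by (1 - f) renormalizes
# for the removed contamination fraction.
def correct_for_contamination(recovered_pwm, input_pwm, f=0.036):
    corrected = recovered_pwm - f * input_pwm
    return np.clip(corrected, 0.0, None) / (1.0 - f)
```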
Abstracts for our uptake paper
The postdoc's new manuscript is dangerously close to being smothered in data analysis. Having so much high-quality Illumina sequence data allows him to do many elegant and perhaps important analyses, but we don't need to tell the reader about all of them. Instead I've suggested that we're now at the point where we should have a clear list of the conclusions we want the reader to take away, and use this list to identify the analyses that need to be presented in the paper to justify those conclusions.
To identify the conclusions, the postdoc and I are now independently writing our versions of the Abstract. Here's mine (the target journal is PNAS, so the audience goes beyond microbiologists):
RR ABSTRACT: Most naturally competent bacteria take up any DNA, but some exhibit a strong preference for fragments containing a specific 'uptake sequence', with a corresponding abundance of the preferred sequence in their genomes. Although these sequences play a major role in the debate about the function of DNA uptake, the actual uptake specificities have not been carefully characterized. Using Illumina sequencing of degenerate DNA fragments recovered from competent Haemophilus influenzae cells, we have produced a very detailed analysis of this species' uptake specificity.
- The uptake consensus sequence matched the known genomic consensus, with the 9 bp core AAGTGCGGT and two flanking AT-rich segments.
- Positions with no genomic signal showed little or no effects on uptake.
- Only four of the core positions (GCGG) made very strong contributions to uptake.
- Other positions of the consensus made relatively weak contributions to uptake.
- Compensatory interactions between bases at some of these positions made substantial contributions to uptake, suggesting that these bases may contact each other during uptake.
And here's his:
POSTDOC ABSTRACT: Many bacterial species are naturally competent, and competent cells are able to take up intact DNA molecules from the surroundings and bring them to the cytosol. Natural competence has a profound effect on genome evolution, due to recombination of taken up fragments with competent cell chromosomes. To bring DNA across cell membranes, the DNA uptake machinery must overcome the physical constraint imposed by stiff highly charged DNA molecules. Haemophilus influenzae competent cells have strong uptake specificity, preferentially taking up DNA fragments through their outer membrane that contain short “uptake signal sequences” (USS); ~2200 sites in the H. influenzae genome conform to a “genomic USS motif”. This genomic USS motif consists of a 9 bp “core” AAGTGCGGT with a strong consensus, flanked by two helically phased AT-tracts with a weaker consensus. We used massively parallel sequencing to dissect the genomic USS motif and find out how the structure of this unusually abundant sequence motif contributes to uptake efficiency. Competent cells were incubated with a complex pool of fragments containing a degenerate version of the consensus genomic USS, and the fragments cells took up were purified from the periplasm. Comparison of sequences from the recovered pool to sequences from the input pool revealed novel aspects of uptake specificity not predicted from genome sequence analysis and subdivides the USS into parts with distinct properties. Four bases in the “inner” core USS (GCGG) are (nearly?) essential for uptake. “Outer” core bases and the AT-tracts make weak individual contributions to uptake, but instead cooperatively contribute to uptake. These results provide a specific mechanistic hypothesis about the interaction of the USS with the DNA uptake machinery, as well as having implications for the evolution of uptake specificity and the accumulation of uptake sequences in genomes by molecular drive.
And here's our consensus:
CONSENSUS ABSTRACT: Most naturally competent bacteria will take up any DNA, but some exhibit a strong preference for fragments containing a specific 'uptake sequence', with a corresponding abundance of the preferred sequence in their genomes. These sequences play a major role in the debate about the function of DNA uptake, but although the genomic motifs are often assumed to directly reflect the uptake specificities, the actual uptake specificity has not been carefully characterized for any species. Using Illumina sequencing of degenerate DNA fragments recovered from competent Haemophilus influenzae cells, we have produced a very detailed analysis of this species' uptake specificity. This work identified an uptake consensus sequence that did indeed match the known genomic consensus, with the 9 bp core AAGTGCGGT and two flanking AT-rich segments. However, positions with moderately strong genomic consensuses had unexpectedly weak effects on uptake, and only four of the core positions (GCGG) made very strong contributions to uptake. Compensatory interactions between bases at some of the minor positions made substantial contributions to uptake, suggesting that these bases may contact each other or the same component of the uptake machinery during uptake. These findings suggest that interaction of the central four bases with a DNA-binding protein may be the main factor in uptake specificity, with minor DNA-DNA and/or DNA-protein interactions also contributing. Cumulative effects of these interactions over evolutionary time may explain the discrepancy between the genomic and uptake motifs. Experimental work on these interactions is likely to clarify how the DNA uptake machinery overcomes the physical constraints imposed by stiff highly charged DNA molecules.
Thinking about possible contamination in the postdoc's data
The postdoc and I are still/again working on his paper about DNA uptake specificity, and we're getting deep into the Results. I had thought I understood the specificity matrices and sequence logos, but tonight I realized that I didn't even understand the simple stuff (the ratios of different categories of sequences in the reads from the input pool and the recovered pool).
While struggling with that, I realized that we had been assuming that all the DNA fragments in the recovered pool had indeed been taken up by the competent cells and then reisolated. We hadn't properly controlled for the possibility that the recovered pool was contaminated with DNA fragments that had never been taken up, but that had the same sequence distribution as the input pool.
I started thinking about contamination because of what appeared to be an odd result. The postdoc's results show that four positions in the 31 bp DNA uptake sequence (USS) are particularly important for uptake; fragments with a non-consensus base at even one of these positions were rarely taken up by competent cells. But he has sequences of 10^7 fragments from the recovered DNA pool, and this large dataset does include quite a few fragments with such mismatches.
Surprisingly, when a set of fragments with one of these mismatches is examined (e.g. fragments that do have a mismatched base (A, G or T rather than C) only at position 7, the most important for uptake), the bases at all of the other positions of the USS appear to have made no contribution to uptake. This observation might be explained by some weird type of interaction, but it might also be a sign that the recovered DNA preparation is contaminated with DNA fragments that were never taken up.
In fact, we can use this observation to set limits on contamination, and to correct the dataset for possible effects of contamination, by considering the implications of two extreme hypotheses about contamination. The first is the hypothesis that there is no contamination - that every fragment in the recovered pool is there because it was taken up by a competent cell, including all the fragments with a mismatched base at position 7. The second is the hypothesis that fragments with A, G or T at position 7 are never taken up, and that all of the fragments with a mismatched base at position 7 are there because of contamination, not uptake. Under this second hypothesis, the frequency of position-7 mismatches in the recovered pool sets an upper limit on the contamination of the recovered pool, and we can then apply appropriate correction factors to all the analyses.
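A back-of-envelope version of that bound, as a Python sketch (the frequencies below are made-up placeholders, not our data):

```python
# Minimal sketch of the two-extreme-hypotheses contamination bound
# described above; the frequencies are made-up placeholders.

def max_contamination_fraction(f_mismatch_recovered, f_mismatch_input):
    """Upper limit on the contaminating fraction of the recovered pool.
    Under the extreme hypothesis that fragments mismatched at a critical
    position (e.g. position 7) are never taken up, every such fragment in
    the recovered pool is contamination. If a fraction c of the recovered
    pool is contamination with the input pool's sequence distribution, then
        f_mismatch_recovered = c * f_mismatch_input
    so c = f_mismatch_recovered / f_mismatch_input."""
    return f_mismatch_recovered / f_mismatch_input

# Hypothetical example: position-7 mismatches occur in 20% of input
# fragments but only 1% of recovered fragments.
c_max = max_contamination_fraction(0.01, 0.20)
print(f"contamination is at most {c_max:.0%} of the recovered pool")  # 5%
```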
It's a bit embarrassing to realize that we've been neglecting this obvious issue with our data. But I'm really glad that we're catching it now and not leaving it for the referees to catch after we submit our manuscript.
CSWA field trips: 1. Calgary's new sewage treatment plant!
(This post will be followed by two others, on the field trips to Banff and to the Alberta oil sands.)
A big part of the Canadian Science Writers Association meeting was its field trips. The first one I went on was to Calgary's new Pine Creek water treatment plant. (Yeah, I'm a microbiologist so I love this stuff.)
Calgary is very proud of this plant, but they haven't yet gotten around to making it easy to find. Signage is almost non-existent and there's no website (just pages with planning docs), so our old yellow school bus drove back and forth for about an hour searching for it (many PhDs with many iPhones were pretty useless).
Calgary gets its water from its two big rivers, the Bow and the Elbow, and it puts all its treated wastewater into the Bow. (The Historic Park where we'd had our banquet the night before included a short paddlewheel ride on the Elbow-fed Glenmore Reservoir. Coming from Vancouver, I was first surprised that boats were allowed on the reservoir, and then shocked at the muddy (completely opaque) state of the water. I guess they have a really good filtration system.)
The city's other water treatment plants leave a lot of nutrients (e.g. nitrogen, phosphorus) in the treated water; these can stimulate unwanted algal growth in the river downstream. The goal of the new Pine Creek plant is to put water into the Bow that contains no more nutrients than the river water. It does this by some clever microbiology that I didn't understand. My lack of understanding isn't the fault of our guide, an engineer who gave up his Sunday afternoon to lead us around.
But the information we got was just enough to be tantalizing - bacteria in one fraction of the sewage ferment nutrients (?) to produce volatile fatty acids, which are passed over to another fraction where the bacteria do something else (maybe just multiply?), using the fatty acids as an energy source. Then this fraction, with its bacteria (?), takes on the job of removing the phosphate and nitrogen from the water(?). At the end of the line the remaining bacteria and other organisms (stalked ciliates, rotifers...) are killed with UV, so that the water being released into the Bow River has few nutrients and no more than 200 coliform bacteria per 100 ml. I think that the bulk of the nutrient-gobbling bacteria were removed from the water by a filtration step that uses plushy white mats, and then sent to the tree farm that's associated with the treatment plant, where their nutrients provide fertilizer.
I think I need to do another tour, where I ask all the questions I didn't ask in Calgary. But I don't think any of Vancouver's treatment plants are up to their standard.
Guest post about #arseniclife
I have a followup article about the arsenic bacteria paper on Scientific American blogs. It focuses on the science, and is accompanied by a post from Marie-Claire Shanahan about the implications for science communication and education.
CIHR results (it's deja vu all over again)
Score: 4.36 out of 5
Ranking: 12 out of 51
Chances of getting funded: slim.
One of the reviewers had specific suggestions for ways to further strengthen the proposed work, recommending that we include plans to (and preliminary work on) making directed point mutations in key genes. This is something we can certainly do, though the lack of a good counterselectable marker will still be a problem.
CSWA: Panel followup on Energy, Environment and the Economy
Summary: How do North America and the world’s energy systems need to transform to address problems such as climate change, energy security and energy poverty? A panel of prominent experts shares their insights.
Wilson: North Americans fail to come to grips with changes in China and India. They may become the superpowers. Think about water. We use water (pollute it) to get energy; we increasingly have to use energy to get usable water from polluted water.
Eaton: She's worn lots of hats, on all sides of energy and environment. Key is educating people...
Need to include all the costs in the price of energy ('externalities'). And high energy prices drive development of supply. But how to put costs onto the externalities? How do we price the water used by oil sands production?
How to reduce consumption? Hofmeister: a big energy waste is allowing anyone to live anywhere while still guaranteeing grid access. (I thought he was going to say pay transportation costs, not guarantee grid costs.) The big problem is that energy is cheap and North Americans are rich.
How to get new technologies to market (in Canada)? For solar, the nature of the current grid is the problem (no way to store, even though in principle could store as hydrogen). Canada is especially bad at technology transfer (risk-averse).
Most of Canada's present oil production is exported to the US. What if we kept our surplus oil? Would the US invade us? Doucet: We do it because they pay lots of $, more than it costs, and this is a way for our economy to become more wealthy. We don't need the oil, so let's sell it! (That can't be true - he's claiming we have all we'll ever need!) There's no cumulative oversight of impacts, so no sustainable long-term plan. Doucet says we need to consider cumulative and environmental issues...
Canadian government won't listen to scientists. How to change this? Hofmeister: Need a crisis (natural or engineered). Canada has enough energy that natural crisis is unlikely in the near future, but US is heading for a crisis (outages) within 10 years. Wilson: Canada used to be viewed as 'the conscience of the North' - no more. Eaton is optimistic that environmental good sense is spreading, and generating a groundswell of change.
Canadian crisis? What would it be? Not climate change. Water, says Doucet. Wilson - expects demonstrations for water -it shouldn't be commodified. Hofmeister - don't expect government to do the right thing, except under leadership of corporate and NGO worlds. Issues these can agree on (common sense?) can push government to do the right thing. But watch out for China.
- John Hofmeister, Citizens for Affordable Energy
- Susan Eaton, geologist/journalist/consultant (last-minute pinch hitter)
- Joseph Doucet, University of Alberta
- Lee Wilson, University of Saskatchewan
CSWA: John Hofmeister (Citizens for Affordable Energy)
An inflammatory speech by the former CEO of Shell Oil. He says we Canadians should be very afraid because of how the US is changing. The 'Holier than thou* (HTT)' crowd vs the 'We like the way things are (WLTWTA)' crowd. The military-industrial complex is a big problem... Obama despises hydrocarbons, but the US economy runs on oil, and produces only a small fraction of what it uses. And our lives run on personal mobility. Supplies of oil to the US from other countries will dwindle as demand from China and India increases.
I'm having a hard time figuring out where he's going... He just said that the US and Canada are blessed with more energy than we will ever need. That's true only if we include unlimited nuclear energy, but not otherwise.
He seems to be saying that it's all the government's fault that 'the system is broken'. The 'grass-roots' citizens need a clear view of the energy disaster they face and, I guess, should demand higher energy production from all sources. Go back to the past and ??? Something about the Federal Reserve Bank? We need an independent regulatory agency for energy, with a long term so it's independent of short-term political forces, and with the authority to:
- Set energy supply
- Make technology decisions
- Manage environmental issues
- Ensure infrastructure is built.
*The term 'holier than thou' is a great pejorative, putting people in the wrong for being right.
CSWA: Of Mice and Microbes: Advances in Fighting and Managing Infectious Diseases
Summary: Is it just the flu or is it the next global pandemic? Hear from infectious disease researchers on the frontline in the war against pathogens.
Christopher (Chip) Doig, Head of the Department of Community Health Sciences, University of Calgary.
Andrew Potter, Director, VIDO-InterVac, University of Saskatchewan
Paul Kubes, Director of the Snyder Institute of Infection, Immunity and Inflammation, University of Calgary
Penny Hawe, Professor, Population Health Intervention Research Centre, University of Calgary.
Moderator: Kathryn Warden, University of Saskatchewan
Andrew Potter: Measles (254 cases) in Quebec over the past few months, an outbreak imported from France and spread because vaccination rates are so low due to fearmongering. 1/3 or more of all deaths on the planet are due to infectious disease. We take influenza for granted ('just the flu'), but it costs the US $37 billion per year. The solution to pandemic influenza will come from work on seasonal influenza. Emerging infectious diseases come from animals (and vice versa). Much is also transmitted from animals to humans, directly or via food and water.
Paul Kubes: (...about the Snyder Institute...) MRSA (methicillin-resistant Staph aureus). The immune system causes the inflammation that's part of the harm done by infections. Upcoming: "Nice videos, but where's the data?" Intravital microscopy. New topics. Neutrophils recognize the surfaces of mitochondria with the same receptors (?) that recognize bacterial surfaces.
Chip Doig: Severe infections - mortality >30% even in the very best hospitals. Septic shock is what kills. Usually from pneumonia from Streptococcus pneumoniae. Most is still sensitive to penicillin, so why can't we cure the infection with penicillin? Because death is due to inflammation that's a response to infection, and the antibiotic doesn't prevent inflammation.
Penny Hawe: She's a psychologist! Importance of social science in epidemiology and public health. Failures in science communication are responsible for failures in public health. Story themes that attract media attention: freakish/weird events, moral tales, heroic rescues, grandma was right. Failure to deal with the subtexts. (Read The Panic Virus. Anecdote about Harvey Fineberg (v. imp. scientist, Dean at Harvard) being creamed by the anti-vaccine lobby on TV.) Need to recognize subtexts ('framing'?) and change strategies appropriately. Julie Leask, discourse analysis: profit alliances, conspiracies by powerful groups to hide the truth... The result is that 'experts' are judged by their apparent value systems (evidence of compassion especially, not power) rather than by the factual evidence they present.
CSWA: Minding the Brain: Advances in Protecting and Repairing the Central Nervous System
Summary: The human brain and spinal cord control our muscles, eyesight, breathing and memory. An interdisciplinary panel of scientists explores how we might repair the damaged central nervous system, and protect the aging brain against dementia, Alzheimer’s disease and stroke.
Moderator: Karen Thomas, Alberta Innovates
Marc Poulin, University of Calgary
Robert Sutherland, University of Lethbridge
Samuel Weiss, University of Calgary
(More 'real scientists', all male)
Poulin: missed this brief talk
Sutherland: Hippocampus - decline of memory function with aging.
Weiss: Myelination, multiple sclerosis and women: In female mice, nurturing as an infant correlates with better myelination (more oligodendrocytes) as an adult. There's an epigenetic effect, as their offspring have better oligodendrocytes too. And (in mice) prolactin stimulates oligodendrocyte production, perhaps explaining why pregnancy promotes remission in women with MS.
CSWA panel: Perceptions of the Oilsands: 'Avatar' vs reality
Programme entry: Oil sands development is frequently portrayed as one of the most destructive industrial projects on the planet. But how does this portrayal compare with what’s happening on the ground in Alberta? And what does the future look like for the oil sands? Join this expert panel for a timely “reality check.”
Sponsor: Connacher Oil and Gas Limited. Their technology = Steam-assisted gravity drainage (SAGD): at Algar site and Great Divide Pod One site.
Matt Palmer: clip from movie - we really need oil and petrochemicals for almost everything, and they are not (that much) worse than other choices. Platitudes and truisms, but nothing specific about the oil sands.
Rhona DelFrari: Cenovus is a new company (2009), almost all in oil sands. Now about half mining, half 'in situ' = beneath the surface, leaving the surface intact. By 2016 will be 80% in situ (= SAGD). Steam generated at plant and piped to wellpad site, 150,000 barrels/day (~300,000/day in future). Cost? Water use: 0.15 barrel fresh, most of the rest alkaline water from aquifers. Initially her emphasis is on reducing local environmental impacts, but I care more about global greenhouse gas impact. They know this, and their long-term goal is to reduce greenhouse gas emissions.
Preston McEachern (Section Head - Oil Sands, Alberta Environment): Discovering that rational arguments aren't very effective. Map comparison of Los Angeles and Oil Sands. Another whiz-bang graph of emissions from different petrochemical sources that appears to show that oil sands aren't worse than the other options. But without a careful analysis of the axes and the colours, I can't evaluate it. Water use: problem isn't the amount but the contamination. Recycling improves that ratio, now about 1.5 barrels of water used per barrel of bitumen extracted. Seepage from tailings ponds not significant. The natural river cuts right through exposed oil sands, and most of the oil that gets into the river does so naturally. His goal: get the good news out.
Andrew Nikiforuk: 'Canada's biggest science experiment'. Look at peer reviewed science on emissions - 17-23% greater than conventional oil. "Energy return on energy invested" Conventional oil: invest 1 barrel to find, get 300 barrels back. SAGD: 1.5 gets 5? Sometimes negative = not sustainable. Canada is in climate-change denial - doing nothing to mitigate the carbon cost, and not putting any income away for the future. Peter Lougheed's recommendations are good.
- Moderator, Michal Moore, ISEEE and School of Public Policy
- Rhona DelFrari, Cenovus Energy (Manager, Media Relations)
- Preston McEachern, Alberta Environment
- Andrew Nikiforuk, author of Tar Sands: Dirty Oil and the Future of a Continent
- Matt Palmer, Director of the documentary film Pay Dirt: the Unconventional Conventional
CSWA lunch talk: Joe Schwarcz, McGill
He's a charismatic chemist. Everything we eat is contaminated with other stuff. Lots of conflicting dietary advice out there, changing all the time.
Numbers matter. "There are no safe substances, only safe ways to use substances." "Only the dose makes the poison."
Consumer's Guide to Food Additives, by Ruth Winter: full of major errors and misinformation. Web site of Joseph Mercola: full of major errors and misinformation. Seductive arguments, simplistic solutions to imaginary problems.
His web sites: www.oss.mcgill.ca, www.chemicallyspeaking.com.
Ros Reid - Communicating science through pictures: visual literacy for science writers (workshop)
(Live-blogging the Canadian Science Writers Association meeting)
First we're introducing ourselves, and I realize that, although I spend most of my time communicating, I have no training in this. After her brief presentation she's going to give us a tough visualization problem to work on in small groups.
(Too bad that the visuals for her presentation are not themselves examples of good visualization. Instead they're numbered lists of text points. And now bullet points! Another case of "Do as I say, not as I do.")
When interviewing scientists she first asks them to send her the PowerPoint slides from a typical presentation (a slide deck).
- Information graphics, but also beautiful eye-grabbing decoration.
- Simple graphics with hand-drawn warmth.
- 'Feel' is important: energy=movement vs static
- Interacting with the scientist using sketching
- Successful online graphics have long lifetimes
Our task: Create a visual narrative about 'D-wave Systems' (an adiabatic quantum computing company).
- show directly
- by analogy
- clarify
- describe (±words)
- organize
- show passage of time
- medium?
- audience?
- main idea?
- hierarchy of info?
- best tools?
- explanatory vs exploratory (better)
Working safely with arsenic (what I'd need to know)
I'm still trying to get laboratory safety information about using media containing arsenate. I know how to work safely with radioisotopes, but arsenic is a whole new thing. It doesn't decay, and you can't detect it with a Geiger counter or a scintillation counter.
The atomic weight of arsenic is about 75, so 75 g/l is a 1 M solution and 75 mg/l is 1 mM. I think I'd make a 1 M stock solution of sodium arsenate and, after somehow sterilizing it, use it to make culture media with up to 40 mM arsenate.
I found some US Environmental Protection Agency limits online, but these are for industrial-scale work (e.g. how much arsenic can go to landfill). Sewage sludge can apparently contain 73 mg arsenic per kg; that's about 1 mM! The US EPA limit for drinking water is 10 ppb (10 ng/ml), which is about 0.13 µM if I've done the arithmetic correctly. That seems like a lot, but, according to this very detailed document, normal background arsenic concentrations in soil range from 1-40 mg/kg, with a mean of about 5 mg/kg (that's about 70 µM). Our total intake in food and beverages per day ranges between 20 and 300 µg. Smokers get about 10 µg/day just from their cigarettes.
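Since I keep second-guessing this arithmetic, here's a small Python sketch of the conversions (with my own simplification of treating 1 kg of soil or sludge as roughly 1 litre):

```python
# Back-of-envelope arsenic unit conversions (simplification: 1 kg of
# soil or sludge is treated as roughly 1 litre).
AS_G_PER_MOL = 75.0  # atomic weight of arsenic, approximately

def mg_per_L_to_uM(mg_per_L):
    """mg/L divided by g/mol gives mmol/L; times 1000 gives uM."""
    return mg_per_L / AS_G_PER_MOL * 1000

print(mg_per_L_to_uM(73))    # sludge limit, 73 mg/kg: ~973 uM, i.e. ~1 mM
print(mg_per_L_to_uM(5))     # mean soil background, 5 mg/kg: ~67 uM
print(mg_per_L_to_uM(0.01))  # drinking-water limit, 10 ppb: ~0.13 uM

# And the stock dilution: volume of 1 M arsenate stock needed to make
# 10 ml of 40 mM medium, from C1*V1 = C2*V2.
print(0.040 * 10 / 1.0)      # 0.4 ml of stock per 10 ml of medium
```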
I got the attention of the university's chemical safety person by sending an email to their supervisor. Both of us are going to be away for the next week or so, and we plan to talk when we're back. To get things started I left a voicemail asking about the specific contamination limits - what level of arsenic requires cleanup-decontamination work, and what can be treated as 'normal background'? The preliminary response (by email) was
"...while we cannot talk about background level with respect to arsenate, any concentration above 0.01 mg/M3 of arsenate, will have a potential adverse health effect."Wait, what? We can't talk about background level? Why not? And, assuming 'M3' is cubic meter, cubic meter of what? Air? Water? What about contamination on a surface? A cubic meter is 10^6 ml, so 0.01 mg/M3 would be 0.01 ng/ml, 1000-fold lower than the EPA limit for drinking water. (The molecular weight of arsenate is about twice that of arsenic, but I'll neglect the difference for now.) OK, this EPA document says that 0.01 mg/m^3 is the maximum average level of airborne arsenic that a worker can be exposed to (averaged over an 8 hr workday).
My voicemail also asked how one could detect arsenic contamination. Is there a sensitive chemical test? If I were to miss cleaning up a drop of a spill and the dried-up arsenic spread around on people's shoes, how would we know? The email response said:
"If you have a concern that the area is contaminated, wipe samples can be sent to testing laboratory for analysis."But in the absence of any specification of what level of surface arsenic is considered 'contamination', how would I know whether to be concerned, or how to interpret the results? And how much does this analysis cost, what's the sensitivity, and what's the turnaround time? Maybe I'll be able to pry this information out of the safety person when we talk, but I suspect the real reason I'm getting such terse responses to my questions is that they don't know the answers.
This whole arsenic-safety business sucks. First, my experience so far suggests that the Safety Office is not going to be much help - I'm mostly going to have to dig out the information for myself and figure out how it applies to my experiment. Second, maybe there really aren't any guidelines for contamination in laboratory work. That would be consistent with the safety advice I got from my chemical engineering colleague, which was to be very careful to avoid personal exposure when handling the solid chemical (when making up solutions) and otherwise to just use normal care. I don't think they've been doing contamination checks.
Looks like the postdoc's hypothesis is correct
Here are the results from my quick-and-dirty test of how phage recombination happens in competent H. influenzae cells. They show that pretreating lysates with DNase I reduces recombination, as expected if recombination happens between replicating phage and free DNA brought in from the lysate by the competence machinery.
(I'm an idiot. Why didn't I do an infection with both lysates treated with DNase I? Treating only one lysate is only expected to eliminate half of the potential recombination under this hypothesis, but treating them both would eliminate it all! Next time...)
(I forgot to label the Y-axis - it's the recombination frequency.) A is the recombinant frequency seen when cells were infected with untreated lysates of phage mutants ts1 and ts3. In infections B and D, one or the other lysate was pretreated with DNase I, and in infections C and E, one or the other lysate was pretreated with Proteinase K. The blue bars are the fractions of infected cells that produced wildtype (not ts) recombinant phage, and the red bars are the fraction of the phage output that were wildtype.
The data are not very accurate because I titered the cells and lysates by spotting dilutions on lawns and didn't try to count the phage in spots with more than 50 plaques. I'll repeat the experiment with more careful titers.
I also titered the input ts lysates after their pretreatments. The proteinase-treated lysates had 3-5-fold fewer plaque-forming units than their DNase-treated counterparts, suggesting that proteinase digestion did reduce infectivity but not recombination. I should have also retitered the untreated lysates but I forgot to; the titers of the DNase-treated lysates were about 1/3 lower than the previously determined titers of these lysates.
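For the repeat, here's the titer arithmetic I'll be doing, as a minimal Python sketch (the plaque counts and dilutions below are hypothetical, not these data):

```python
# Minimal sketch of the titer and recombinant-frequency arithmetic; the
# plaque counts and dilutions below are hypothetical.

def titer(plaques, dilution_factor, spot_volume_ml):
    """Plaque-forming units per ml of undiluted lysate."""
    return plaques / spot_volume_ml * dilution_factor

def recombinant_frequency(wildtype_titer, total_titer):
    """Fraction of output phage that are wildtype (ts+) recombinants:
    recombinants are scored at 41C (restrictive for both ts parents),
    total phage at 33C (permissive)."""
    return wildtype_titer / total_titer

# Hypothetical counts: 42 plaques from a 10 ul spot of a 10^2-fold
# dilution at 41C; 35 plaques from a 10 ul spot of a 10^6-fold dilution
# at 33C.
wt = titer(42, 1e2, 0.01)     # 4.2e5 pfu/ml
total = titer(35, 1e6, 0.01)  # 3.5e9 pfu/ml
print(f"recombinant frequency = {recombinant_frequency(wt, total):.1e}")
```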
Phage recombination progress
The other day I tested for phage recombination, using my new mutant phage lysates and incubators. As expected, I saw lots of recombination in competent cells (I didn't test non-competent cells). Today I tested the postdoc's hypothesis: that recombination occurs between the DNA of infecting phage and strands of phage DNA that were free in the lysate and were brought into the cytoplasm by the competence machinery.
I used two lysates of phages carrying different temperature-sensitive mutations. An aliquot of each lysate was incubated with DNase I, which should destroy the free DNA and prevent recombination if this hypothesis is correct. A second aliquot of each lysate was incubated with Proteinase K, a broad-specificity protease that should degrade enough of the phage protein that the phage are unable to infect cells. This treatment should not harm the DNA, so if the hypothesis is correct it should reduce the infectivity of the phage but not reduce recombination.
I infected competent cells with the untreated lysate of each phage in combination with each of the treated and untreated lysates of the other phage, and assayed both the infectious centers and the final lysates for recombinants by plating at both 33°C and 41°C. Tomorrow I should have the answer.
(I also treated the last four of our competence mutants with the competence-inducing ritual, transformed them with novobiocin-resistant DNA, and froze aliquots for later recombination and uptake assays.)
Panel at the Canadian Science Writers meeting
I've been invited to the annual meeting of the Canadian Science Writers Association, starting next Friday in Calgary. I'll be participating in the Opening Panel, titled Better Adjust Your Set (Friday, June 10, 12:30 – 2 p.m., MacEwan Student Centre Ballroom (3rd Floor) University of Calgary).
Panelists:
- Moderator: Penny Park
- Rosalind Reid
- Christie Nicholson
- Jasmine Antonick
- Jay Ingram
- Rosie Redfield
"The Friday afternoon opening panel is really meant to be as much of a dialogue as possible with conference delegates. So we're looking for about five to seven minutes of opening remarks from each panelist, then brief comments from the panelists on what the other panelists had to say, then right into a moderated (and hopefully lively!) discussion with the delegates."
We're having a conference call this afternoon to sort out what we're going to say in our opening remarks. We're asked to address the following, so here are my preliminary ideas:
-- some of the trends (positive and negative) we see in science communications;
- Positive: increasing openness of scientific research (informally and formally)
- Negative: public suspicion of science stirred up by media, politicians and cranks
- Access: Papers are all online, often open-access. Blogs and tweets = accessible to the public.
- That market is small and getting smaller. The best science writing now is free.
- look outside of conventional media. Look where the money is? Where can you get paid to write about science without being corrupted by your paymasters?
- Improve their prospects of making a living? I don't have any sensible ideas.
Just in case I do decide to test the #arseniclife claims...
Just in case I do decide to do the experiments I outlined the other day, I've started the process of getting ready.
I sent an email requesting the GFAJ-1 strain from Oremland's group, and received a Materials Transfer Agreement form from them. I filled it in, found and filled in the other form that our local bureaucracy requires, and sent them both on to the UBC people who sign such forms. Provided they don't see any complicating factors (fingers crossed that the lawyers don't get involved), it should be signed in a day or so and then I'll have a 2-4 week wait to receive the cells.
I also contacted our Chemical Safety office about the rules and regulations and general safe practices for working with and disposing of arsenic. First I was just given weird information about disposal (is arsenate really volatile? If so, why not disinfect with iodine?), and when I asked for more information all I got was a form-letter list of generic advice. I've pasted the correspondence below.
Luckily a colleague might be investigating the effects of arsenic on bioremediation, and maybe she can help me with the safety issues.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
To: Chemical Safety contact person
Subject: Advice on experiments using culture media containing arsenic
Hi,
I'm considering doing some experiments where I will culture bacteria in a medium containing 40 mM arsenate. Are there specific safety procedures and disposal regulations I should be aware of?
The cultures will be at room temperature, not shaken, and in fairly small volumes (initially 10 ml in screw-cap glass tubes). The bacteria are not pathogenic.
Thanks very much,
Rosie Redfield
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Hello Rosie,
The challenge with this experiment is that you end up with mixed microbiological (risk group 1 from what you indicated) and chemical waste. In this case my recommendation will be first to kill the bacteria so the mixture will no longer be biohazard, since you cannot autoclave it and bleach is incompatible with the arsenate, I will recommend treating it with ethyl isopropyl alcohol 70-85%. After this treatment the mixture could be considered sodium arsenate, ethyl isopropyl alcohol mixture and can be disposed of as chemical waste. You will have to go online through our chemical waste inventory system, to request approval for disposal, indicating the chemical name and percent of these chemicals in the mixture, and follow the chemical waste disposal procedure.
Let me know if you have any question or if you need additional information.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Hi (name redacted),
I definitely need more information, about handling materials containing arsenate as well as about disposal.
Can I not autoclave any solutions containing arsenate? Where can I find information about what I can and can't do? What are the concentration or volume issues?
Thanks,
Rosie
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Hello Rosie,
- Information about handling this material specifically you will find in the material safety data sheet for the specific product you are using
- General information for safe handling of chemicals can be found in the Laboratory Chemical Safety Manual: http://www.riskmanagement.ubc.ca/health-safety/chemical-safety/chemical-safety-manual
- Autoclaving the mixture can result in generation of toxic vapors
- As for the disposal, the specifics for your mixture are indicated in the e-mail below. In general all hazardous waste procedures are available on-line: http://www.riskmanagement.ubc.ca/sites/riskmanagement.ubc.ca/files/uploads/Documents/manual8709.pdf
- As for concentrations and volumes to be used, our recommendation is as low as practically possible. But once you define your experimental protocol, you will need to re-assess the required safety precautions (personal protective equipment etc.), based on the concentration and volume you are going to use
- All the above info and more (other than the details provided below) is available through the UBC Laboratory Chemical Safety Training that is now offered online.