 Some comments about my Evo-WIBO talk plan from a reader:
...what I'm really curious about is the sense I get that you feel a phenotype must be some sort of evolutionary goal (i.e., why would we have an a priori expectation that enzymes would evolve to accomplish homologous recombination?) Gender doesn't seem to pop onto the Natural History landscape full blown and ready to be appreciated. So why should HR? I really like the notion that HR might proceed from a DNA replication and repair background. 
I didn't mean to imply that phenotype is a goal, either generally or with respect to homologous recombination (HR).  But most other microbiologists and molecular biologists have been assuming (but not rigorously evaluating) that HR exists because of selection for its sometimes-beneficial consequences. 
And is it not possible that natural competence is currently an orphan process that exists for food uptake but was once a piece of a primordial sex process that developed further in other lineages but was cast aside in bacteria? (photosynthesis may have been cast aside in oomycetes in favor of parasitism).
That seems backwards to me, because selection for the food benefit is so straightforward and selection for sex so problematic.
To me, HR has to be more beneficial than horizontal gene transfer for a lineage to find it worth the trouble. When organisms are extremely simple the selective disadvantage of maintaining DNA that isn't carrying its weight should lead to its elimination. The notion of an allele implies the existence of a gene - but a gene not in the sense of a capable ORF but in the sense of two or more ORFs in a population that perform the same function in a manner that the environment will influence and that selection can act on. If said variant ORFs come to be in the same cell, then HR can go to work on them.
Homologous recombination isn't really any 'trouble', to the extent that it happens as an accidental (i.e. unselected) consequence of enzymes selected for their effects on DNA replication and repair, acting on DNA fragments that arrive by accidental transfer via genetic parasites or by DNA uptake for food.  And in bacteria there's very little evidence that it ever occurs any other way.
The value of taking a different tack on a problem is prescient. And physics offers a host of tools and a philosophical background that could really help. To me the challenge of the 15 minute presentation is to illustrate how having data that describe the physical process of DNA uptake should allow mathematical model development for the process which then allows development of testable hypotheses. There are "big organism" examples of this approach bearing fruit.
I don't think that the phenomenon of natural competence needs mathematical models at all (nor do any of the other phenomena that sometimes lead to recombination in bacteria). My point for the talk is that many hypotheses can be directly evaluated by more thorough investigation of the phenomena in question.

Defending 'functional design' analysis at Evo-WIBO

In a couple of weeks I'll be giving a short talk at the regional Evo-WIBO meeting.  My title is What's an evolutionary biologist doing in a physics lab?  I think I'm going to combine a description of my specific scientific question (the physical properties of DNA uptake by Haemophilus influenzae) with a rehash of the defense of 'functional design' that I made in a post last month.  So I might subtitle the talk A defense of functional design analysis.

I'll only have 15 minutes including question time, so I'll need to keep it simple.
  • The simple answer is, I'm measuring the physical properties of DNA uptake by the bacterium Haemophilus influenzae.  I'll show you how this is done at the end of my talk, with a nice explanatory animation.
  • Why is this of evolutionary interest? Because it's one of the final pieces of the Do bacteria have sex? puzzle.
  • Why aren't I using more evolution-style approaches, behaving like a proper evolutionary biologist?  How will knowing physical forces answer evolutionary questions?  Shouldn't I be using the comparative method?  Since these are bacteria, why aren't I doing Rich Lenski-style lab evolution experiments?
  • A defense of 'functional design' analysis:
  • Understanding 'natural history' (the stamp collecting side of biology?) is fundamental to investigating evolutionary forces.  Before we try to explain how natural selection has acted on any phenotype or behaviour, we first need a solid understanding of what the phenotype or behaviour is.
  • First a big-organism example: The head-nodding lizards.  We can use the usual methods of natural history.  What does it do, when does it do it, what are the typical outcomes?
  • Next, a bacterial example: For bacteria, we need to use the methods of molecular biology.  Consider RecBCD (3 proteins that work together).  How was it discovered, and what was its function initially thought to be?  What was later learned about the phenomenon (not by evolutionary biologists)?  Molecular biologists often treat both 'functions' as equivalently important.  How should evolutionary biologists think about it (considering the relative strengths of the selective forces)?
  • Similar history of thinking about nearly all the genes that contribute to homologous recombination in bacteria.  The molecular biology isn't my work, but I spell out the implications for evolutionary biologists.
  • Main conclusion:  Many (and perhaps all) bacteria don't have 'sex'; that is, they don't have any genes that evolved to promote homologous recombination with alleles from other cells of the same or closely related species.  Two of the three processes that move DNA from one cell to another are caused by genetic parasites, and the genes responsible for the physical recombination all have important functions in DNA replication and repair.  This is true of E. coli.
  • I say 'perhaps all' because the function of one of the three processes that move DNA is still controversial.  That's natural competence.

How best to test binding of competent cells to DNA on beads?

Now that I have lots of biotinylated DNA and a well-tested procedure for binding DNA fragments to streptavidin-coated styrene beads, I'm ready to test whether competent bacterial cells (B. subtilis or H. influenzae) will bind to the DNA on the beads.

How to do this isn't straightforward.  One problem is that the beads are about the same size and density as the cells (B. subtilis cells a bit bigger, H. influenzae cells a bit smaller), so once mixed they can't be easily separated.  That means I have no way to wash unbound beads away from cells, or unbound cells away from beads.  Another problem is that B. subtilis cells are known to cut DNA fragments as part of the uptake process, and in principle this might terminate uptake.  Though maybe not, as the cutting is part of the process that initiates uptake across the inner membrane.  H. influenzae cells don't cut DNA.

I could just mix competent cells and DNA-coated beads, both at low densities, on a microscope slide and watch for them sticking to each other.  Alternatively, we have some streptavidin-coated paramagnetic beads I could use - this would allow me to pull out the beads and see if cells had stuck to them.  But these 50 nm beads are super-tiny, too small to see individually, so I'd have to plate the pulled-down material to see if there were cells there.  We might also have some micron-sized ones; I'll look around. 

OK, I found our 'starter kit' of 1 and 2 micron paramagnetic beads.  The only problem is, we were too cheap to pay the $150 for the starter version of the magic magnetic rack that holds microfuge tubes against magnets so the beads stick to the side and the liquid can be removed.  So I tested various magnets from around the lab, and all of them pulled the rusty-brown beads to the side of the tube in a couple of minutes.  But I haven't yet found a way to hold the tube steady against the magnet while I remove the liquid.

I'm away for a few days, but when I get back I may send an email out asking if anyone in the building has a Dynabeads rack I could borrow for a little while.

How much DNA is on the beads?

The NanoDrop tech support person said that the styrene beads wouldn't hurt the NanoDrop spec, and agreed that light scattering might be a problem.  It was, and that, combined with the detection threshold of the NanoDrop, meant that my measurements gave no evidence of DNA on my beads.  So today I used the PicoGreen assay to look for DNA on the beads.  It's much more sensitive, and not bothered much by light scattering due to the beads.

But first I should describe what my samples were and how I made them.  I incubated some 1.26 µm streptavidin-coated beads with a diluted solution of my biotin-labeled DNA, diluted because I didn't want different beads binding to the two ends of a fragment, and I didn't want steric interference by the DNA on the beads.  I incubated the beads with the DNA for 30 minutes, gently mixing at 37°C on our roller wheel.  Then I pelleted the beads, washed them twice with 1.0 ml of TE (each time rolling the beads plus TE for 10 minutes), and resuspended the washed beads in 100 µl TE (call these Beads1).  I also added another aliquot of beads to the DNA solution I'd already used with the first beads, and put these beads through the same incubation, washing and resuspension steps (call these Beads2).

Beads1 and Beads2 had very similar DNA concentrations, about 250 ng/ml.  This isn't very much DNA (but see below), but because the two values are the same I know that the low binding isn't because my biotin-labeling failed.  If the labeling had been the problem, Beads1 would have had little DNA because only a small fraction of the fragments carried biotin, and Beads2 would have had much less, because Beads1 would have bound up nearly all the labeled DNA in the tube.  (I could check this by incubating more beads with the same DNA sample.)  Instead the low binding may be because of the amount of streptavidin on the beads, or its reduced accessibility once bound DNA fragments get in the way of incoming ones.
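
Here's that inference as a toy Python sketch.  All the quantities are invented just to show the logic: if labeling had failed, the first bead aliquot would deplete the scarce labeled DNA, but if the beads are capacity-limited, sequential aliquots bind similar amounts.

```python
# Toy model of two bead aliquots incubated in turn with one DNA pool.
# TOTAL_NG and CAPACITY_NG are invented for illustration.
TOTAL_NG = 2000        # hypothetical DNA offered to the beads
CAPACITY_NG = 25       # hypothetical binding capacity of one bead aliquot

def sequential_binding(labeled_fraction):
    """DNA bound by Beads1 and then Beads2 from the same DNA solution."""
    labeled = labeled_fraction * TOTAL_NG
    beads1 = min(CAPACITY_NG, labeled)
    beads2 = min(CAPACITY_NG, labeled - beads1)  # only unbound label remains
    return beads1, beads2

print("labeling failed (1% labeled):  ", sequential_binding(0.01))  # 20 ng, then ~0
print("capacity-limited (90% labeled):", sequential_binding(0.90))  # 25 ng, then 25 ng
```

Because Beads1 and Beads2 came out nearly identical, the capacity-limited scenario fits and the failed-labeling one doesn't.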

So how much DNA is this per bead?  Here's a very back-of-the-envelope calculation:  The bead concentration in the resuspended Beads1 and Beads2 preps is about 0.1%, assuming that no beads were lost in the washing steps.  Let's consider 1 ml of Beads1 (or Beads2), just because it makes the arithmetic clearer.  With 0.1% beads, 1 ml of Beads1 solution is about 1 µl of packed beads (and yes, that's about how big the pellets appeared).  The beads are about 1.25 µm in diameter, and 1 µl is a cube that's 1000 µm on each side, so 1 µl of packed beads is a cube with about 800 beads per side, or about 5x10^8 beads.  At 250 ng/ml, the same ml of Beads1 contains about 250x10^9 kb of DNA (using Rosie's universal constant of 10^18 kb/gram of DNA).  The average fragment size of EcoRI-cut H. influenzae DNA is about 6 kb, so this is about 42x10^9 fragments.  I conclude that the average bead has about 85 DNA fragments bound to it.  That's pretty reasonable for my experiments, so I can go ahead and use these beads and this DNA to test cells binding to DNA on beads.
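
For anyone who wants to check the arithmetic, here it is as a few lines of Python, using the same numbers as above:

```python
# Back-of-the-envelope: DNA fragments per bead (numbers from the text above)
beads_per_side = 1000 / 1.25           # 1 µl is a 1000 µm cube; beads ~1.25 µm
n_beads = beads_per_side ** 3          # ~5.1e8 beads in 1 µl of packed pellet

KB_PER_GRAM = 1e18                     # 'Rosie's universal constant'
dna_kb = 250e-9 * KB_PER_GRAM          # 250 ng/ml of DNA, in kb per ml
fragments = dna_kb / 6                 # average EcoRI fragment is ~6 kb

print(f"~{fragments / n_beads:.0f} DNA fragments per bead")  # ~81
```

(The small difference from the ~85 above just reflects rounding 5.12x10^8 down to 5x10^8.)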

I also measured the DNA concentrations in the two washes from each aliquot of beads.  The first washes had about 20 ng/ml DNA, and the second washes had fluorescences not significantly higher than background, so I know that the signals from Beads1 and Beads2 were due to bound DNA.

One control I didn't do was to make a standard curve using known amounts of DNA mixed with 0.1% beads.  I should try this tomorrow.  I've also saved the samples I measured, and I'll also try reading them again tomorrow using the high-sensitivity setting of the plate scanner.  (Later - I was wrong; there is no high-sensitivity setting.) That's if I can figure out how to do this; the scanner software is very non-intuitive, and so far I've spent most of my time trying to find files I thought I'd saved.

At the bench today

Today I did two reactions that labeled the ends of digested chromosomal DNA with biotin (one of EcoRI-digested DNA and one of XhoI-digested DNA).  The next step is to clean up the DNA, to get rid of the Klenow polymerase, the EcoRI/XhoI, and the unincorporated nucleotides.  It's especially important to get rid of ALL of the unincorporated biotin-dUTP, because any carryover will bind to the streptavidin-coated beads and prevent the biotinylated DNA from binding to them.

Because I want to wash the DNA well to get rid of the biotin-dUTP, a column cleanup is best.  We have new cheap cleanup columns from a company called Epoch, to replace the relatively expensive Sigma GenElute columns we've been using.  (These in turn replaced very expensive columns from Qiagen.)  What makes the Epoch columns such good value is that instead of charging for little bottles of salty water like the other companies (their 'secret sauce' buffers), Epoch just provides the recipes so users can make their own buffers.

The RA had already tested the new columns with a PCR cleanup, but I needed to test them with large fragments of chromosomal DNA, because DNA fragments bigger than 10-20 kb tend to stick to this kind of column.  Bottom line: both brands of column release almost all the DNA fragments smaller than 20 kb. 

I also wanted to compare the overall recovery of DNA from the columns, especially when they were heavily loaded with DNA (their stated binding capacities were either 10 µg or 20 µg, depending on which document I read).  But my DNA wasn't as concentrated as I thought, so the most DNA I put on a column was probably about 14 µg.  Recoveries were good, >80% even with more than 10 µg on the column.

So now I have about 35 µg of biotin-tagged EcoRI-cut chromosomal DNA, and about 25 µg of biotin-tagged XhoI-cut chromosomal DNA.  The next step is to measure binding of this DNA to the streptavidin-coated beads.  I can do this accurately now that the RA has shown me how to use PicoGreen to measure very low DNA concentrations, and I've checked that beads don't interfere with these measurements.  I wasn't sure if I could put samples containing beads onto the NanoDrop spec, but I just read their explanation of how it works and I don't see any problem.  Maybe I'll email their Customer Service people just to be sure, as the NanoDrop we use belongs to the lab next door.

What's up with the manuscript about uptake sequence variation?

We're revising it, though not drastically.  One of the reviewers didn't have many concerns, but the other was full of philosophical objections, which we're meeting with calm reason and more analysis.

One bit of data we'll now include is the density of uptake sequences in the equilibrium genomes we discuss.  But when I went back to extract this data from the appropriate runs, I found that one run didn't have the data because it hadn't terminated when it was supposed to; there was a typo in the specified termination cycle (2000o0 rather than 200000), so it would have kept running forever if I hadn't stopped it.

And when I went to redo that run without the typo, I discovered that the set of 12 runs it belonged to had all had another error; instead of recombining 1000 fragments each cycle they had only recombined 100.  Fixing this won't change the conclusions at all; the runs will just all converge on a modestly higher score.  So I requeued all 12 runs, and then requeued them all again to terminate after 50,000 cycles rather than 200,000, because with ten times more recombination per cycle they may not need nearly as many cycles.  I was thinking that having more recombination would let them run faster, but I forgot that, with more recombination, each cycle will take longer.  Hmm, maybe I should even set them for only 20,000 cycles.  I'll see how far they've gotten tomorrow morning.
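
In hindsight, both mistakes are the kind that a small sanity check on the run parameters would catch before anything gets queued.  A minimal sketch of what I mean, in Python; the parameter names and expected values are invented for illustration, not taken from our actual simulation code:

```python
# Hypothetical pre-flight check for a simulation run's settings.
def check_params(params):
    """Fail fast on malformed settings before queuing a long run."""
    # A stray letter in a number ('2000o0') survives as a string
    # but is caught the moment we force it to be an integer.
    cycles = int(params["termination_cycle"])
    fragments = int(params["fragments_per_cycle"])
    assert cycles > 0, "termination cycle must be positive"
    assert fragments == 1000, f"expected 1000 fragments/cycle, got {fragments}"
    return cycles, fragments

check_params({"termination_cycle": "200000", "fragments_per_cycle": "1000"})
# a value like "2000o0" would raise ValueError here instead of running forever
```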

Are the purR knockout mutants not really purR knockout mutants?

The meticulous RA thought it would be wise to use PCR to check the genotypes of the purR::kan knockout mutants I used for my time course last weekend.  (I had already checked that they were both resistant to kanamycin.)  So she designed and ordered some primers that would flank the insertion that was described in the notebook of the grad student who originally made the mutant, and did colony PCR on all four of the strains I had used.

Much to my surprise, all four strains produced bands of the size expected for purR+ cells (about 1.0 kb), and none of them produced bands of the size expected for the purR knockout (about 2.2 kb).  Either there's something wrong with the PCR analysis (and she's very meticulous so I doubt that), or the strains aren't what we've been thinking they are.

I had made these strains by transforming cells with DNA I had isolated from cells grown from the old frozen stock of purR cells made by the grad student (at least, that's what I thought I was doing), and selecting for kanamycin-resistant transformants.  Could I have used the wrong DNA?  Or grown up the wrong cells from the freezer?

We know that the original cells made by the grad student had the correct mutation, both because he had carefully checked them out and because a technician had later thawed a vial and done a microarray analysis of RNA.  This showed that the mutant dramatically overexpressed all the genes that were predicted to be repressed by PurR in wildtype cells.

So tonight I've streaked out more cells from the last freezer vial of the original purR knockout, and on Monday the RA will test them by PCR.  I also located the DNA I had used for that transformation, so she can test that by PCR too.  If these cells give the expected 2.2 kb band, we'll assume something went wrong with my transformation.  If they give the 1.0 kb band, we'll carefully check out the new PCR primers and probably run a quantitative PCR of a PurR-repressed gene on RNA from the original mutant and from one of the new mutants (with wildtype cells as control).  Or, because the RA has recombineering working well now, she might just remake the purR mutant with her new primers.

If the mutants I used for my time course turn out to not be purR-, I think we'd still be really interested to find out where their kanR cassette is, because we don't have any other mutants with this interesting phenotype.  That can be done by cloning out the kanR cassette and flanking sequences (the old-fashioned way or by inverse PCR) and then sequencing the DNA on one or both sides of the cassette.

What on earth is 'constructive neutral evolution'?

Ford Doolittle gave a talk here today in the Biodiversity seminar series, which is attended by all the evolutionary biologists.  It was titled 'Irremediable Complexity', and was promoting a concept originally published by Arlin Stoltzfus under the title 'On the possibility of constructive neutral evolution' (here, but probably behind a paywall).  I haven't read it, but it's been more influential than Ford suggested; it's been cited 103 times.

Arlin's title is not at all self-explanatory; here's what I now think the words are intended to mean:  'Evolution' means 'a change over time in how a function is accomplished'.  'Constructive' means 'the change is that the function is accomplished in a more complex way'.  And as a result of some helpful questions at the end, I now think that 'neutral' means 'the function itself is under stabilizing selection but not under adaptive/directional selection, and how it is accomplished (the change in complexity) is not under selection at all'.  Ford didn't define 'complexity' until the question period; he then suggested that one measure of a function's complexity might be the number of components required for it.

Here's the executive summary:
Once organisms have evolved to have many components, some components will inevitably interact with others in 'accidental' ways that have, at least initially, not been shaped by selection.  Once these accidental interactions exist, they will modify how selection acts on mutations that affect the function, sometimes making things worse but sometimes mitigating the effects of what would otherwise be deleterious mutations eliminated by selection*.  These mitigating effects will weaken stabilizing selection on the function, sometimes allowing the mutations to be preserved (especially if populations are small).  Preservation of the mutation effectively creates selection for maintenance of the formerly-unselected interaction.  The function has become more 'complex' (by Ford's definition), but there hasn't been any selection for the complexity.  If mutations with these kinds of effects recur repeatedly, the function will become increasingly complex without having been in any way improved.
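
For concreteness, here's a toy Wright-Fisher simulation of this ratchet.  The two-locus setup and every parameter are my own invention for illustration; this isn't Ford's or Arlin's model:

```python
# Toy model: a core component can mutate to depend on an 'accidental' helper
# interaction.  While the helper is present the mutation is neutral, so it
# can drift to fixation - after which the helper is essential, and the
# function is more complex without ever having been improved.
import random

N = 500        # small population, so drift is strong
U_DEP = 1e-3   # rate at which the core becomes helper-dependent
U_LOSS = 1e-4  # rate at which the accidental helper interaction is lost

pop = [(False, True)] * N   # (core_is_dependent, helper_present)

def fitness(dependent, helper):
    # the function fails only when a dependent core has lost its helper
    return 1.0 if (not dependent) or helper else 0.05

for gen in range(1, 100001):
    weights = [fitness(d, h) for d, h in pop]
    parents = random.choices(pop, weights=weights, k=N)   # selection + drift
    pop = [(d or (random.random() < U_DEP),               # mutation
            h and (random.random() > U_LOSS)) for d, h in parents]
    if all(d for d, _ in pop):
        print(f"dependence fixed by drift at generation {gen}")
        break
```
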
As evidence that this type of complexity-building is common and important, Ford cited several molecular examples where a process has become ridiculously ('stupidly') complex but doesn't work any better than simpler versions.  The RNA editing of trypanosomes is not well known but is a compelling example.  So are introns and the spliceosomal machinery that lets eukaryotes cope with them.  Simpler examples are the 'maturation proteins' that assist group I and group II self-splicing introns.  The ribosome itself may be a (not very stupid) example, where proteins have gradually taken over activities originally handled by the catalytic RNAs.

The issue didn't seem very important to the evolutionary biologists in the audience, I think because they don't constantly deal with the just-so-story functions that molecular biologists typically ascribe to any complicating feature of a process.  To many molecular biologists, every base pair in the genome, every intermolecular interaction, and every small RNA in the cell is the product of adaptive selection.  There are no accidental interactions.  Shit never just happens.

*On the other hand (not considered by Ford at all), mutations whose effects are made worse by the accidental interaction will be more efficiently eliminated by the stabilizing selection on the function.  I don't think this can be said to reduce the complexity of the function, because the interaction was accidental and thus not included in the complexity count.  I don't know whether it would create selection against the interaction.

**Psi Wavefunction has blogged about this concept in some detail, here and here.  I confess that I haven't read these very long posts through, but perhaps now I will.  (She asked an excellent question after the talk.)

*** Somewhere in his talk Ford described clade selection, using the example of how a propensity to speciate can cause a lineage to have many more species than other lineages.  He said that more species means more individuals, but that's certainly not true.

Contest to win an Ion Torrent DNA sequencer

The postdoc discovered that a new company called Ion Torrent is having a contest.  The prizes are two of their new Personal Genome Machines.

The object of the Contest is 'to submit the best ideas for development of new applications for DNA sequencing'.  We originally interpreted this as asking for the best research proposal using the machine (i.e.  what we would do with the machine) but now I think it's asking for more general brilliant ideas for applications, not necessarily a project to be carried out by the winner.

On reading over the eligibility restrictions, I just discovered that entries can only come from people who live in the USA (but for some reason not people living in Arizona).  That lets us out.

Successful time course of competence development

Yesterday I redid the failed time course (see 'Cells behaving badly', below).  The goal was still to replicate the earlier quick-and-dirty experiment that had suggested that knocking out the purine repressor prevented competence development in late-log cultures.  This time the cells grew better, and the results are clear.

I had four strains:  KW20 is wildtype, RR3005 has the purR knockout, RR699 has the sxy-1 hypercompetence mutation that we think should make competence induction less dependent on depletion of nucleotides, and RR1345 has both the purR and sxy-1 mutations.  The graph below shows that the wildtype and sxy-1 strains grew at similar rates, and the two strains with the purR mutation grew a bit slower, perhaps because they were wasting resources on synthesizing nucleotides.  (All four cultures stopped growing at about half the density they should reach with the best medium.)
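
For anyone wondering how 'grew a bit slower' gets quantified, growth-rate comparisons like this boil down to doubling times calculated from pairs of optical-density readings in log phase.  A quick sketch; the OD600 values below are invented, not readings from this experiment:

```python
# Doubling time from two optical-density readings taken during log growth.
import math

def doubling_time(od1, od2, minutes_apart):
    return minutes_apart * math.log(2) / math.log(od2 / od1)

print(f"wildtype-like culture: {doubling_time(0.05, 0.20, 60):.0f} min")  # ~30
print(f"slower purR-like one:  {doubling_time(0.05, 0.16, 60):.0f} min")  # ~36
```
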
The next graph shows the transformation frequencies of the four cultures at the same times.  The wildtype cells (blue diamonds) showed the usual pattern, with very low transformation frequencies when cells were growing exponentially (first time point) and 1000-fold higher transformation when the culture became dense.  The purR mutant (blue circles) also started out very low, but its transformation frequency remained low throughout growth, ending up about 200-fold lower than its wildtype parent's.

The sxy-1 mutant (green squares) also behaved normally.  Its log-phase transformation frequency was >1000-fold higher than the wildtype strain's, and it became about 50-fold more competent when the culture got dense.  (Its final competence and that of the wildtype strain were both a bit lower than I normally see - I suspect this is due to the lower growth in the poorer medium.)  The transformation frequencies of the purR sxy-1 double mutant (green triangles) were lower than the sxy-1 single mutant's, but only by about 3- to 9-fold.
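
For readers who don't live with these numbers, a transformation frequency is simply transformant cfu divided by total cfu.  A worked example in Python; the plate counts are invented, not data from this experiment:

```python
# Transformation frequency = antibiotic-resistant transformants / total cells.
def transformation_frequency(transformant_cfu_per_ml, total_cfu_per_ml):
    return transformant_cfu_per_ml / total_cfu_per_ml

wt_log   = transformation_frequency(2.5e2, 8.0e8)   # exponential culture, ~3e-7
wt_dense = transformation_frequency(3.2e5, 1.0e9)   # dense culture, ~3e-4
print(f"fold increase at high density: {wt_dense / wt_log:.0f}")  # ~1000-fold
```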

So this experiment confirms both observations from the quick-and-dirty one.  First, the purR mutation does prevent the competence development that normally occurs when cultures get dense.  Since this mutation's major effect is to keep the purine biosynthesis pathway maximally active even in exponential growth, this suggests that running short of purines (purine nucleotides?) is the signal that normally induces competence when cultures get dense.  The microarray analysis showed that wildtype cells at high density still have enough of the purine precursors hypoxanthine and inosine to keep PurR in repressing mode.

Second, the sxy-1 mutation makes cells much less sensitive to the competence-inhibiting effect of the purR mutation.  The mutation causes hypercompetence by weakening the secondary structure of sxy mRNA, so this new result supports our hypothesis that the function of the secondary structure is to sense depletion of nucleotide (purine) pools.  When the stem is weakened by mutation, it behaves as if nucleotides are depleted even when they're not, causing many cells to make enough Sxy protein to become competent even in log phase.  Some of the other sxy hypercompetent mutations have stronger effects (sxy-2 and maybe sxy-3), so I need to check if they are even less sensitive to the purR mutation.

I should also make a purR double mutant with the other kind of hypercompetence mutation.  We know that some point mutations in murE, a gene responsible for one of the steps in cell wall synthesis, cause even stronger hypercompetence than mutations in sxy.  But we have no idea how these mutations do this - we've ruled out most of the obvious explanations.  (I would have thought I'd posted previously about this set of mutants, but I can't find anything by searching for 'peptidoglycan' or 'murE' or 'cell wall', so maybe I haven't.  I'd better do a separate post about them.)

We already know that the mRNA secondary structure limits translation of the sxy mRNA into Sxy protein.  In my mind, the simplest way for the secondary structure to sense depletion of nucleotide pools is the following:  (1) Depleted pools slow the rate of mRNA elongation.  (2) Because the two parts of the main stem are separated by ~100 nucleotides (I forget the actual number), slower elongation delays the formation of the secondary structure.  (3) Because the ribosome binding site and start codon are in the region between these parts, this delay makes them more accessible and increases the initiation of translation.  (4) Once translation has started, the secondary structure can't form.
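
Here's a quick numerical sketch of the race implied by steps 1-3.  The rates are pure guesses for illustration; the only point is that slowing elongation widens the window during which the ribosome binding site is exposed:

```python
# Toy kinetics: RNA polymerase must transcribe the ~100 nt between the two
# arms of the stem before the hairpin can form; while it does, the exposed
# ribosome-binding site can capture a ribosome.  All rates are invented.
import math

GAP_NT = 100      # nucleotides separating the two halves of the stem
K_INIT = 0.05     # hypothetical translation-initiation attempts per second

for label, nt_per_s in [("normal pools  ", 50), ("depleted pools", 5)]:
    window = GAP_NT / nt_per_s                    # seconds the RBS stays exposed
    p_translate = 1 - math.exp(-K_INIT * window)  # chance a ribosome loads first
    print(f"{label}: {window:4.0f} s window, P(translation) = {p_translate:.2f}")
```

With these made-up numbers, slowing elongation tenfold raises the chance of translation initiation from about 0.10 to about 0.63, which is the flavor of the hypothesis.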

I would really like to complete the story by showing that the rate of transcription determines the efficiency of translation.


Cells behaving badly (is it the medium's fault?)

Yesterday I did a time course experiment, to see how the 'spontaneous' development of competence in the rich medium sBHI differed between wildtype cells and cells with either the sxy-1 hypercompetence mutation, the purR::kan knockout mutation, or both.  But I had to give up halfway through because the cells stopped growing.

The graph above shows the densities of the four cultures as a function of time, with the purple line showing what I had expected based on the many previous time courses I've done.  (I deliberately started with cultures at slightly different densities, to space the sampling out a bit.)  They were all growing in medium from the same bottle, and they all stopped growing at about the same time. 

Just before I had taken the first samples I had diluted all the cultures in medium from a new bottle, one that had been prepared on a different day, so I wondered if there had been something wrong with this batch, or if I might have forgotten to add one of the needed supplements (NAD) to it.  I quickly added more NAD to each flask (at time = ~150 minutes), but that didn't boost growth.

Then I tested several different batches of medium, including the remainder of the second bottle I had used, as well as two different batches of NAD.  Unfortunately the first bottle I had used was the last bottle of its batch, so I couldn't test it.  I inoculated each with the same amount of cells, and let them all grow overnight.  The second graph shows that there are substantial differences between different batches of medium, and that none of them gives the amount of cell growth I'd expect from previous time courses (labeled as 'years ago' because I haven't done one recently).

I don't think the problem is just how long the bottles of medium had been sitting on the shelf, as the components are typically quite stable.  Instead I'm wondering if we might be using medium from a different supplier.  In the past I'd noticed substantial differences in how well different brands of BHI supported cell growth, and had sworn to use only the best (Difco), but I know we were recently given some BHI from another supplier.  Tomorrow I'll repeat the time course, with the March 1 medium.