I've been running a lot of simulations to confirm the preliminary conclusion that mutation rate doesn't change the equilibrium uptake sequence score of our simulated genomes.
These runs all used a 20 kb random-sequence 'genome' and our simple high-bias uptake matrix. Each cycle recombined 100 fragments of 100 bp each - that's half of the genome. The bias decreased by a factor of 0.75 for each step of the cycle that didn't give enough recombination. I ran three replicates with mutation rate = 0.01, three with rate = 0.001, and four with rate = 0.0001, each for 10,000 cycles.
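For concreteness, here's a minimal Python sketch of one simulation cycle under the parameters above. The motif, the match-fraction acceptance rule, and the MAX_TRIES threshold are stand-in assumptions of mine, not the real high-bias uptake matrix; only the genome size, fragment count and length, and the 0.75 bias relaxation come from the runs described.

```python
import random

random.seed(42)

GENOME_LEN = 20_000        # 20 kb random-sequence genome
FRAG_LEN   = 100           # recombined fragment length
N_FRAGS    = 100           # fragments per cycle (half the genome)
RELAX      = 0.75          # bias multiplier after a step with too little recombination
MOTIF      = "AAGTGCGGT"   # hypothetical uptake motif (stand-in for the real matrix)
MAX_TRIES  = 1_000         # draws allowed per step before relaxing the bias (assumption)

def mutate(seq, mu):
    """Mutate each base independently with probability mu."""
    return "".join(random.choice("ACGT") if random.random() < mu else b for b in seq)

def uptake_prob(frag, bias):
    """Acceptance probability: best motif-match fraction raised to the bias
    exponent, so relaxing (lowering) the bias flattens the preference."""
    best = max(sum(a == b for a, b in zip(frag[i:i + len(MOTIF)], MOTIF))
               for i in range(len(frag) - len(MOTIF) + 1))
    return (best / len(MOTIF)) ** bias

def one_cycle(genome, mu, bias):
    """Mutate the genome, then recombine N_FRAGS fragments drawn in
    proportion to their uptake probability."""
    genome = mutate(genome, mu)
    accepted = tries = 0
    while accepted < N_FRAGS:
        tries += 1
        if tries > MAX_TRIES:          # not enough recombination this step:
            bias *= RELAX              # relax the bias and keep going
            tries = 0
        start = random.randrange(GENOME_LEN - FRAG_LEN)
        donor = mutate(genome[start:start + FRAG_LEN], mu)
        if random.random() < uptake_prob(donor, bias):
            genome = genome[:start] + donor + genome[start + FRAG_LEN:]
            accepted += 1
    return genome, bias
```

With the bias exponent starting high (say 8), acceptance is rare at first; the 0.75 relaxation guarantees each cycle eventually completes, because the exponent drifts toward 0 and every fragment becomes acceptable.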
Results: All three rates indeed give similar equilibrium scores. Each point in the graphs to the left shows the mean score over the interval since the previous point - that's why the scatter decreases as the points get farther apart. I've calculated the means and standard deviations for each rate, but the error bars are just as big as you would expect from the graphs.
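The interval-averaged plotting can be reproduced in a few lines; the log-spaced sample points below are just an illustrative choice of mine, not necessarily the spacing used in the graphs.

```python
from statistics import mean

def interval_means(scores, sample_points):
    """Mean score over each interval (previous point, point].
    scores[t] is the genome score at cycle t; sample_points is increasing."""
    out, prev = [], 0
    for p in sample_points:
        out.append(mean(scores[prev:p]))
        prev = p
    return out

# Later points average over longer intervals, which is why the
# scatter shrinks as the points get farther apart.
points = [1, 3, 10, 30, 100, 300, 1000, 3000, 10000]
```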
As expected, the time to equilibrium depends on the mutation rate, and the noise is highest for the lowest rate. The lowest rate runs hadn't really reached equilibrium after the 10,000 cycles, so I've taken the final sequences these runs produced and used them to initiate new 10,000-cycle runs, to give results for 20,000 cycles. These aren't finished yet, but they do define an equilibrium in the same range as for the higher-rate runs.
Now that these runs have established that mutation rate doesn't have a (big) effect on outcome, we can discuss the results the former post-doc obtained using a mutation rate of 0.001 and a genome size of 200,000 bp. This larger genome size dramatically decreases the noise in the runs, in the same way that the higher mutation rate does (compare µ = 0.001 and µ = 0.01 in the graph), giving us more confidence in the equilibrium scores we determine.