We're revising it, though not drastically. One of the reviewers didn't have many concerns, but the other was full of philosophical objections, which we're meeting with calm reason and more analysis.
One bit of data we'll now include is the density of uptake sequences in the equilibrium genomes we discuss. But when I went back to extract this data from the appropriate runs, I found that one run didn't have the data because it hadn't terminated when it was supposed to; there was a typo in the specified termination cycle (2000o0 rather than 200000), so it would have kept running forever if I hadn't stopped it.
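A typo like this is easy to catch if run settings are validated before the simulation starts. Here's a minimal sketch (not the actual simulation code; the setting names are hypothetical) of insisting every numeric setting really is a positive integer, so a malformed value like "2000o0" fails immediately instead of producing a run that never terminates:

```python
# Sketch: validate run settings up front so a malformed number
# like "2000o0" fails fast instead of silently misbehaving.
# Setting names here are hypothetical, not from the real model.

def parse_settings(raw):
    """Parse string-valued settings, insisting each is a positive integer."""
    parsed = {}
    for key, value in raw.items():
        try:
            n = int(value)  # int("2000o0") raises ValueError
        except ValueError:
            raise ValueError(f"setting {key!r} is not a number: {value!r}")
        if n <= 0:
            raise ValueError(f"setting {key!r} must be positive: {n}")
        parsed[key] = n
    return parsed

good = parse_settings({"termination_cycle": "200000",
                       "fragments_per_cycle": "1000"})
print(good["termination_cycle"])  # 200000

try:
    parse_settings({"termination_cycle": "2000o0"})
except ValueError as e:
    print(e)  # reports the bad setting instead of running forever
```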
And when I went to redo that run without the typo, I discovered that all 12 runs in its set shared another error: instead of recombining 1000 fragments each cycle, they had only recombined 100. Fixing this won't change the conclusions at all; the runs will just all converge on a modestly higher score. So I re-queued all 12 runs, and then re-queued them all again to terminate after 50,000 cycles rather than 200,000, because with ten times more recombination per cycle they may not need nearly as many cycles. I was thinking that having more recombination would let them run faster, but I forgot that, with more recombination, each cycle will take longer. Hmm, maybe I should even set them for only 20,000 cycles. I'll see how far they've gotten tomorrow morning.
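The cycles-versus-cost tradeoff can be put as a quick back-of-envelope calculation. This is just my rough model, not a measurement: it assumes wall time per cycle scales linearly with the number of fragments recombined.

```python
# Back-of-envelope (assumes per-cycle time is proportional to
# fragments recombined per cycle; not measured from the real runs).

def relative_cost(cycles, fragments_per_cycle):
    return cycles * fragments_per_cycle

old     = relative_cost(200_000, 100)    # the mis-set runs as queued
new     = relative_cost(50_000, 1_000)   # corrected runs, shortened
shorter = relative_cost(20_000, 1_000)   # the 20,000-cycle option

print(new / old)      # 2.5  -> still ~2.5x the total work
print(shorter / old)  # 1.0  -> same total work as the original runs
```

Under that assumption, 50,000 cycles at 1000 fragments is still about 2.5 times the work of the original runs, and only the 20,000-cycle option matches their total cost.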