Here's a better way of looking at how the mutation rate affects (doesn't affect) the outcome of the simulations. It was suggested by a mathematically sophisticated family member.
Previously I plotted the score as a function of the number of cycles run, for each of the three mutation rates, with 3 or 4 replicate runs plotted on the same graph. I've done several things differently in the graph on the left (which PowerPoint has inexplicably warped). The first two differences are trivial - I've plotted the means of the replicate runs, so there are only three lines, and I've put the x-axis on a log scale so the points are spread out evenly.
The third difference is the important one. I've changed the x-axis so that, instead of showing the cycle number, it shows the total number of mutations each genome position has been exposed to (cycle number × µ), expressed per 100 bp. The scale for the µ=0.01 runs didn't change, but the scale for the µ=0.001 runs decreased 10-fold and the scale for the µ=0.0001 runs decreased 100-fold.
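The rescaling is just a change of units, which can be sketched as below (a minimal illustration, assuming µ is the per-bp mutation rate per cycle; the function name is mine, not from the simulation code):

```python
import math

def mutations_per_100bp(cycle, mu):
    # Cumulative expected mutations each genome position has been
    # exposed to after `cycle` cycles, expressed per 100 bp
    # (mu = assumed per-bp mutation rate per cycle).
    return cycle * mu * 100

# The same rescaled x-value is reached 10x and 100x later (in cycles)
# at the 10-fold and 100-fold lower mutation rates:
assert math.isclose(mutations_per_100bp(200, 0.01),
                    mutations_per_100bp(2000, 0.001))
assert math.isclose(mutations_per_100bp(200, 0.01),
                    mutations_per_100bp(20000, 0.0001))
```

This is why plotting against mutation exposure, rather than cycle number, lines the three mutation rates up on a common scale.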
Now we see that the runs with the three different mutation rates have all given very similar (superimposable) results. I'll present this figure in the paper, as it will nicely justify our decision to do the bulk of the other runs with µ=0.001.
This presentation also shows that, even after 20,000 cycles, the runs with µ=0.0001 are only just reaching equilibrium. I'll try running them for longer, even if this means tying up our lab computer for a week or two.