I'm revising/improving my plans for the first talk I'm preparing: 'What I learned from #arseniclife: communication and quality control in science'.
I still plan to start by going over the Wolfe-Simon debacle, from NASA's first press release to the present state of affairs (10-15 minutes). But I've reorganized my walk-through of all the ways science is communicated (to scientists and to the public) and the ways quality is maintained. Rather than using a semi-historical framework overall, it's now a series of overlapping issues: Publication, Access, Sharing, Searching, Publicity, Pre-publication review, Post-publication review, and six more. Each will be shown as a one-slide mini-history. But the histories are not so much of changes (we used to do that, but now we do this) as of broadening options (we used to only do that, but now we can also do this and this and this). This should probably be given no more than about 15 minutes, so I'll need to trim down my list of issues.
Then I'll list the roles these issues played in #arseniclife, first slow (funding, collaborative research, manuscript, peer review, acceptance), then fast (press release, press conference, in-press publication, excited articles in the media, critical peer review on blogs, spread by Twitter, critical articles in the media, access first paywalled and then open), then slow again (journal articles..., formal publication stalled).
Then the positive and negative effects. Positive: Communication between scientists was very efficient, and experts very quickly reached a strong consensus that the conclusion was wrong. The media coverage provided a very public demonstration of how science is self-correcting. Negative: Many members of the public took this as a demonstration of how science gets things wrong, and many more completely missed the correction, seeing only the original story.
But this isn't a very good model for 'normal science'. Rather, it's a warning of what can go wrong if you reach too high. Most papers never attract comments on the journal sites (none of the ten year-old PLoS ONE papers I checked had any). Substantial discussion happens on blogs; the Research Blogging aggregator site linked to discussions of 31 articles on March 31 and 35 on March 23. The posts are excellent, but most of them have few or no comments, and I don't know how often they are even seen by the authors of the articles.
I'd like to consider the good and bad outcomes of online publication, though I may not have time. One of these is the set of problems with for-profit journals. Yesterday I let myself get distracted by the 'Frontiers in ...' enterprise, tabulating how many papers each 'journal' has published and how many people are on its Editorial Board. With the exception of the original Frontiers in Neuroscience, the ratio is at least 10 board members for each paper published, so this appears to be mainly a resource for resume-padding, not scientific communication. Most of the Bentham group journals are just as bad, averaging one article per year with an Editorial Board of about 100.
I want to end with considering how human nature limits our use of the new forms of communication to promote the advance of scientific knowledge, and how individuals and institutions can intervene. One way is by having publishers and granting agencies enforce community standards (data deposition, open-access publishing). Research blogging certainly won't hurt, but we need to find ways to bring it into the mainstream.
7 hours ago in The Phytophactor