Jack Welch got a little conspiracy-theory crazy with the jobs numbers. Thomas Lumley over at StatsChat makes a pretty good case debunking the theory. I think the real take-home message of Thomas’ post, and one worth celebrating/highlighting, is that the agencies that produce the jobs report do so based on a fixed and well-defined study design. Careful efforts by government statistics agencies make it hard to fudge or change the numbers.
Nate Silver, everyone’s favorite statistician made good, just gave an interview where he said he thinks many journal articles should be blog posts. I have been thinking about this same issue for a while now, and I’m not the only one. This is a really interesting post suggesting that although scientific journals once facilitated the dissemination of ideas, they now impede the flow of information and make it more expensive. Two recent examples really drove this message home for me.
This is part of the ongoing series of pro tips for graduate students; check out parts one and two for the original installments. Learn how to write papers in a very clear and simple style. Whenever you can, write in plain English, skip jargon as much as possible, and make the approach you are using clear and understandable. This can (sometimes) make it harder to get your papers into journals.
My colleagues and I just published a paper on validation of genomic results in BMC Bioinformatics. It is “highly accessed” and we are really happy with how it turned out. But it was brutal getting it published. Here is the line-up of places I sent the paper. Science: Submitted 10/6/10, rejected 10/18/10 without review. I know this seems like a long shot, but this paper on validation was published in Science not too long after.
Happy Father’s Day! A really interesting read on randomized controlled trials (RCTs) and public policy. The examples in the boxes are fantastic. This seems to be one of the cases where the public policy folks are borrowing ideas from Biostatistics, which has been involved in randomized controlled trials for a long time. It’s a cool example of adapting good ideas from one discipline to the specific challenges of another. Roger points to this link in the NY Times about the “Consumer Genome”, which is basically a collection of information about your purchases and consumer history.
Peter Thiel gives his take on science funding/peer review: My libertarian views are qualified because I do think things worked better in the 1950s and 60s, but it’s an interesting question as to what went wrong with DARPA. It’s not like it has been defunded, so why has DARPA been doing so much less for the economy than it did forty or fifty years ago? Parts of it have become politicized.
Héctor Corrada Bravo is an assistant professor in the Department of Computer Science and the Center for Bioinformatics and Computational Biology at the University of Maryland, College Park. He moved to College Park after finishing his Ph.D. in computer science at the University of Wisconsin and a postdoc in biostatistics at the Johns Hopkins Bloomberg School of Public Health. He has done outstanding work at the intersection of molecular biology, computer science, and statistics.
Jeff Leek and colleagues just published an article in PLoS ONE on the differences between anonymous (closed) and non-anonymous (open) peer review of research articles. They developed a “peer review game” as a model system to track authors’ and reviewers’ behavior over time under open and closed systems. Under the open system, it was possible for authors to see who was reviewing their work. They found that under the open system authors and reviewers tended to cooperate by reviewing each other’s work.
I am a huge fan of open access journals. I think open access is good both for moral reasons (science should be freely available) and for more selfish ones (I want people to be able to read my work). If given the choice, I would publish all of my work in journals that distribute results freely. But it turns out that for most open/free access systems, the publishing charges are paid by the scientists publishing in the journals.
All statisticians in academia are constantly confronted with the question of where to publish their papers. Sometimes it’s obvious: A theoretical paper might go to the Annals of Statistics, JASA Theory & Methods, or Biometrika. A more “methods-y” paper might go to JASA, JRSS-B, Biometrics, or maybe even Biostatistics (where all three of us are or have been associate editors). But where should the applied papers go? I think this is an increasingly large category of papers being produced by statisticians.