Tag: nature

04 Jan

Does NIH fund innovative work? Does Nature care about publishing accurate articles?

Editor's Note: In a recent post we disagreed with a Nature article claiming that NIH doesn't support innovation. Our colleague Steven Salzberg actually looked at the data and wrote the guest post below. 

Nature published an article last month with the provocative title "Research grants: Conform and be funded."  The authors looked at papers with over 1000 citations to find out whether scientists "who do the most influential scientific work get funded by the NIH."  Their dramatic conclusion, widely reported, was that only 40% of such influential scientists get funding.

Dramatic, but wrong.  I re-analyzed the authors' data and wrote a letter to Nature, which was published today along with the authors' response, which more or less ignored my points.  Unfortunately, Nature cut my already-short letter in half, so what readers see in the journal omits half my argument.  My entire letter is published here, thanks to my colleagues at Simply Statistics.  I titled it "NIH funds the overwhelming majority of highly influential original science results," because that's what the original study should have concluded from its very own data.  Here goes:

To the Editors:

In their recent commentary, "Conform and be funded," Joshua Nicholson and John Ioannidis claim that "too many US authors of the most innovative and influential papers in the life sciences do not receive NIH funding."  They support their thesis with an analysis of 200 papers sampled from 700 life science papers with over 1,000 citations.  Their main finding was that only 40% of "primary authors" on these papers are PIs on NIH grants, from which they argue that the peer review system "encourage[s] conformity if not mediocrity."

While this makes for an appealing headline, the authors' own data does not support their conclusion.  I downloaded the full text for a random sample of 125 of the 700 highly cited papers [data available upon request].  A majority of these papers were either reviews (63), which do not report original findings, or not in the life sciences (17) despite being included in the authors' database.  For the remaining 45 papers, I looked at each paper to see if the work was supported by NIH.  In a few cases where the paper did not include this information, I used the NIH grants database to determine if the corresponding author has current NIH support.  34 out of 45 (75%) of these highly-cited papers were supported by NIH.  The 11 papers not supported included papers published by other branches of the U.S. government, including the CDC and the U.S. Army, for which NIH support would not be appropriate.  Thus, using the authors' own data, one would have to conclude that NIH has supported a large majority of highly influential life sciences discoveries in the past twelve years.

The authors – and the editors at Nature, who contributed to the article – suffer from the same biases that Ioannidis himself has often criticized.  Their inclusion of inappropriate articles and especially the choice to require that both the first and last author be PIs on an NIH grant, even when the first author was a student, produced an artificially low number that misrepresents the degree to which NIH supports innovative original research.

It seems pretty clear that Nature wanted a headline about how NIH doesn't support innovation, and Ioannidis was happy to give it to them.  Now, I'd love it if NIH had the funds to support more scientists, and I'd also be in favor of funding at least some work retrospectively - based on recent major achievements, for example, rather than proposed future work.  But the evidence doesn't support the "Conform and be funded" headline, however much Nature might want it to be true.
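As a quick check on how much uncertainty comes with an estimate based on 45 papers, here is a minimal Python sketch (our illustration, not part of Salzberg's letter) that computes the proportion and an exact Clopper-Pearson binomial confidence interval for 34 NIH-supported papers out of 45:

    # Quick check (our illustration, not Salzberg's analysis): exact
    # Clopper-Pearson 95% CI for 34 NIH-supported papers out of 45.
    from scipy.stats import beta

    k, n, alpha = 34, 45, 0.05
    p_hat = k / n
    lower = beta.ppf(alpha / 2, k, n - k + 1)
    upper = beta.ppf(1 - alpha / 2, k + 1, n - k)
    print(f"Estimate: {p_hat:.2f}, 95% CI: ({lower:.2f}, {upper:.2f})")

Even the lower end of that interval sits well above the 40% figure reported in the original article.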

10 Oct

What's wrong with the h-index prediction paper.

Editor’s Note: I recently posted about a paper in Nature that purported to predict the h-index. The authors contacted me to get my criticisms and then responded to those criticisms. They have requested the opportunity to respond publicly, and I think that is a totally reasonable request. Until there is a better comment-generating mechanism at the journal level, this seems like as good a forum as any to discuss statistical papers. I will post an extended version of my criticisms here and give them the opportunity to respond publicly in the comments. 

The paper in question is clearly a clever idea and the kind that gets people fired up. Quantifying researchers' output is all the rage, and being able to predict this quantity in the future would obviously make a lot of evaluators happy. I think it was, in that sense, a really good idea to chase down these data, since it was clear that if they found anything at all it would be very widely covered in the scientific and popular press. 

My original post was inspired by my frustration with Nature, which has a history of publishing somewhat suspect statistical papers, such as this one. I posted the prediction contest after reading another paper that I consider flawed, both statistically and scientifically. I originally commented on the statistics in my post. The authors, being good sports, contacted me for my criticisms. I sent them the following criticisms, which I think are sufficiently major that a statistical referee or a statistical journal would likely have rejected the paper:
  1. Lack of reproducibility. The code/data are not made available either through Nature or on your website. This is a critical component of papers based on computation and has led to serious problems before. It is also easily addressable. 
  2. No training/test set. You mention cross-validation (and maybe the R^2 is the R^2 on the held-out scientists?), but if you use the cross-validation step both to optimize the model parameters and to estimate the error rate, you could see some major overfitting (see the sketch after this list). 
  3. The R^2 values are pretty low. An R^2 of 0.67 is obviously better than the h-index alone, but (a) there is a concern about overfitting, and (b) even without overfitting, an R^2 that low can translate into substantial uncertainty in individual predictions. 
  4. The prediction error is not reported in the paper (or in the online calculator). How far off could you be at 5 years, at 10? Would the results still be impressive with those errors reported?
  5. You use model selection and show only the optimal model (as described in the last paragraph of the supplementary materials), but no indication of the potential difficulties introduced by this model selection is given in the text. 
  6. You use a single regression model with no time variation in the coefficients and no potential non-linearity. When predicting several years into the future, the relationships will almost certainly change over time and may be non-linear. There is also likely to be large variability across individuals and career trajectories, and outliers may be important. 
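To make point 2 concrete, here is a minimal scikit-learn sketch on simulated data (not the authors' data or model) contrasting the naive approach, which reports the cross-validated score of the tuned model, with a nested cross-validation in which the outer folds are never used for tuning:

    # Sketch of point 2 on simulated data (not the authors' data or model).
    # Using the same folds to tune hyperparameters and to report performance
    # gives an optimistic R^2; nested CV keeps genuinely held-out folds.
    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import GridSearchCV, KFold, cross_val_score

    X, y = make_regression(n_samples=100, n_features=200, n_informative=10,
                           noise=20.0, random_state=0)

    search = GridSearchCV(Ridge(),
                          param_grid={"alpha": np.logspace(-3, 3, 25)},
                          cv=KFold(5, shuffle=True, random_state=1),
                          scoring="r2")

    # Naive: tune and report on the same cross-validation splits.
    search.fit(X, y)
    print("CV R^2 of the tuned model (naive):", round(search.best_score_, 2))

    # Nested: the outer folds are never seen during tuning.
    nested = cross_val_score(search, X, y,
                             cv=KFold(5, shuffle=True, random_state=2),
                             scoring="r2")
    print("Nested CV R^2 (honest estimate):", round(nested.mean(), 2))

The gap between the two numbers is the optimism that comes from letting the tuning step see the folds it is later scored on; with more aggressive model selection the gap can be much larger.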
They carefully responded to these criticisms and hopefully they will post their responses in the comments. My impression based on their responses is that the statistics were not as flawed as I originally thought, but that the data aren’t sufficient to form a useful prediction. 
However, I think the much bigger flaw is the basic scientific premise. The h-index has been identified as having major flaws and biases (including gender bias), and as being a generally poor summary of a scientist's contribution. See here, the list of criticisms here, and the discussion here for starters. The authors of the Nature paper propose a highly inaccurate predictor of this deeply flawed index. While that alone is sufficient to call the results in the paper into question, the authors also make bold claims about their prediction tool: 
Our formula is particularly useful for funding agencies, peer reviewers and hiring committees who have to deal with vast numbers of applications and can give each only a cursory examination. Statistical techniques have the advantage of returning results instantaneously and in an unbiased way.
Suggesting that this type of prediction should be used to make important decisions about hiring, promotion, and funding is deeply flawed science. Coupled with the online calculator the authors handily provide (which reports no measure of uncertainty), it makes it all too easy for people to miss the real value of scientific publications: the science contained in them. 
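To illustrate the uncertainty point with ordinary least squares on simulated data (this is not the authors' model): an R^2 of about 0.67 leaves a residual standard deviation of roughly sqrt(1 - 0.67) ≈ 0.57 of the outcome's standard deviation, so a 95% prediction interval for any single scientist spans more than two standard deviations of the outcome.

    # Simulated-data sketch (not the authors' model): even with R^2 ~ 0.67,
    # the 95% prediction interval for an individual prediction is wide.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.RandomState(0)
    n = 500
    x = rng.normal(size=n)
    # Noise level chosen so the population R^2 is about 0.67.
    y = x + rng.normal(scale=np.sqrt(1 / 0.67 - 1), size=n)

    fit = sm.OLS(y, sm.add_constant(x)).fit()
    print("R^2:", round(fit.rsquared, 2))

    # 95% prediction interval for a new observation at x = 1.
    new = sm.add_constant(np.array([1.0]), has_constant="add")
    frame = fit.get_prediction(new).summary_frame(alpha=0.05)
    lower = frame["obs_ci_lower"].iloc[0]
    upper = frame["obs_ci_upper"].iloc[0]
    print("Prediction interval:", round(lower, 2), "to", round(upper, 2))
    print("Interval width in sd(y) units:", round((upper - lower) / y.std(), 2))

A calculator that reported an interval like this next to every point prediction would at least make the uncertainty hard to miss.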
30 Apr

A disappointing response from @NatureMagazine about folks with statistical skills

Last week I linked to an ad for a Data Editor position at Nature Magazine. I was super excited that Nature was recognizing data as an important growth area. But the ad doesn’t mention anything about statistical analysis skills; it focuses exclusively on data management expertise. As I pointed out in the earlier post, managing data is only half the equation - figuring out what to do with the data is the other half. The second half requires knowledge of statistics.

The folks over at Nature responded to our post on Twitter:

 it’s unrealistic to think this editor (or anyone) could do what you suggest. Curation & accessibility are key. ^ng

I disagree with this statement for the following reasons:

1. Is it really unrealistic to think someone could have data management and statistical expertise? Pick your favorite data scientist and you would have someone with those skills. Most students coming out of computer science, computational biology, bioinformatics, or statistical genomics programs would have a blend of those two skills in some proportion. 

But maybe the problem is this:

Applicants must have a PhD in the biological sciences

It is possible that there are few PhDs in the biological sciences who know both statistics and data management (although that is probably changing). But most computational biologists have a pretty good knowledge of biology and a very good knowledge of data, both managing it and analyzing it. If you are hiring a data editor, this might be the target audience. I'd replace "PhD in the biological sciences" in the ad with "knowledge of biology, statistics, data analysis, and data visualization." There would be plenty of folks with those qualifications.

2. The response mentions curation, which is a critical issue. But good curation requires knowledge of two things: (i) the biological or scientific problem and (ii) how the data will be analyzed and used by researchers. As the Duke scandal made clear, a statistician with technological and biological knowledge who actually runs through a data analysis will identify many critical issues in data curation that would be missed by someone who doesn't analyze data. 

3. The response says that "curation and accessibility" are key. I agree that they are part of the key. It is critical that data can be properly accessed by researchers to perform new analyses, verify results in papers, and discover new results. But if the goal is to ensure the quality of the science being published in Nature (the role of an editor), curation and accessibility are not enough. The editor should be able to evaluate the statistical methods described in papers to identify potential flaws, or to rerun code and make sure that it performs the analyses described and that those analyses are sensible. A bad analysis that is reproducible will be discovered more quickly, but it is still a bad analysis. 

To be fair, I don’t think that Nature is the only organization that is missing the value of statistical skill in hiring data positions. It seems like many organizations are still just searching for folks who can handle/process the massive data sets being generated. But if they want to make accurate and informed decisions, statistical knowledge needs to be at the top of their list of qualifications.  

26 Apr

Nature is hiring a data editor...how will they make sense of the data?

It looks like the journal Nature is hiring a Chief Data Editor (link via Hilary M.). The primary purpose of this editor appears to be developing tools for collecting, curating, and distributing data, with the goal of improving reproducible research.

The main duties of the editor, as described by the ad, are: 

Nature Publishing Group is looking for a Chief Editor to develop a product aimed at making research data more available, discoverable and interpretable.

The ad also mentions having an eye for commercial potential; I wonder if this move was motivated by companies like figshare, which already provide a data-sharing service aimed at reproducible research. I haven't used figshare, but the early reports from friends who have are that it is great. 

The thing that bothered me about the ad is that there is a strong focus on data collection/storage/management but absolutely no mention of the second component of the data science problem: making sense of the data. To make sense of piles of data requires training in applied statistics (called by whatever name you like best). The ad doesn’t mention any such qualifications. 

Even if the goal of the position is just to build a competitor to figshare, it seems like a good idea for the person collecting the data to have some idea of what researchers are going to do with it. When dealing with data, those researchers will frequently be statisticians by one name or another. 

Bottom line: I’m stoked Nature is recognizing the importance of data in this very prominent way. But I wish they’d realize that a data revolution also requires a revolution in statistics. 

26 Jan

When should statistics papers be published in Science and Nature?

Like many statisticians, I was amped to see a statistics paper appear in Science. Given the impact that statistics has on the scientific community, it is a shame that more statistics papers don't appear in glossy journals like Science or Nature. As I pointed out in the previous post, if the paper that introduced the p-value were cited every time that statistic was used, it would have over 3 million citations!

But a couple of our readers* have pointed to a response to the MIC paper, written by Noah Simon and Rob Tibshirani. Simon and Tibshirani show that the MIC statistic is underpowered compared to another statistic designed for the same purpose, published in 2009 in the Annals of Applied Statistics. A nice summary of the discussion is provided by Florian over at his blog. 
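For readers who want to see what "underpowered" means in practice, here is a rough sketch of the kind of power simulation involved (our simplified illustration, not Simon and Tibshirani's code). The 2009 AoAS statistic is distance correlation, implemented below directly in numpy; because MIC requires an external implementation (e.g., the minepy package), the absolute Pearson correlation stands in as the second statistic here, so the numbers are purely illustrative:

    # Rough illustration (not Simon & Tibshirani's code): estimate the power
    # of two dependence statistics via permutation tests on simulated data
    # with a non-monotone signal whose population linear correlation is zero.
    import numpy as np

    def distance_correlation(x, y):
        """Sample distance correlation of two 1-D arrays (Szekely & Rizzo)."""
        x, y = np.asarray(x, float)[:, None], np.asarray(y, float)[:, None]
        a, b = np.abs(x - x.T), np.abs(y - y.T)
        A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()
        B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
        denom = np.sqrt((A * A).mean() * (B * B).mean())
        return np.sqrt((A * B).mean() / denom) if denom > 0 else 0.0

    def abs_pearson(x, y):
        return abs(np.corrcoef(x, y)[0, 1])

    def perm_pvalue(stat, x, y, n_perm, rng):
        obs = stat(x, y)
        exceed = sum(stat(x, rng.permutation(y)) >= obs for _ in range(n_perm))
        return (1 + exceed) / (n_perm + 1)

    rng = np.random.RandomState(0)
    n, n_sim, n_perm, alpha = 50, 100, 199, 0.05
    rejections = {"distance correlation": 0, "absolute Pearson": 0}
    for _ in range(n_sim):
        x = rng.normal(size=n)
        y = x ** 2 + rng.normal(size=n)   # quadratic signal plus noise
        for name, stat in [("distance correlation", distance_correlation),
                           ("absolute Pearson", abs_pearson)]:
            if perm_pvalue(stat, x, y, n_perm, rng) < alpha:
                rejections[name] += 1
    print({name: count / n_sim for name, count in rejections.items()})

In this toy setup the distance correlation should detect the quadratic signal far more often than the Pearson correlation does; Simon and Tibshirani's comparisons across many relationship types are the actual evidence behind the power claim.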

If the AoAS statistic came out first (by 2 years) and is more powerful (according to simulation), should the MIC statistic have appeared in Science? 

The whole discussion reminds me of a recent blog post suggesting that journals need to choose between being groundbreaking and being definitive. The post points out that the two goals are in many ways in opposition to each other. 

Again, I’d suggest that statistics papers get short shrift in the glossy journals and I would like to see more. And the MIC statistic is certainly groundbreaking, but it isn’t clear that it is definitive. 

As a comparison, a slightly different story played out with another recent high-impact statistical method, the false discovery rate (FDR). The original papers were published in statistics journals. Then, when it was clear that the idea was going to be big, a more general-audience-friendly summary was published in PNAS (not Science or Nature, but definitely glossy). This might be a better way for the glossy journals to identify what is going to be a major development in statistics, as opposed to an exciting - but potentially less definitive - method. 

* Florian M. and John S.