20
Nov

A grand experiment in science funding

Among all the young scientists I know, I think Ethan Perlstein is one of the most innovative in the way he has adapted to the internet era. His website is unlike any other academic website I've seen, he is all over social media, and his latest experiment in crowd-funding his research is something I'm definitely keeping an eye on.

The basic idea is that he has identified a project (studying meth in mouse brains, not yeast, I think; see the comment by Ethan below) and put it up on RocketHub, a crowd-funding platform, where he is looking for people to donate to his lab to fund the project. I would love it if this project succeeded, so if you have a few extra dollars lying around I'm sure he'd really appreciate it if you'd donate.

At the bigger-picture level, I love the idea of crowd-funding for science in principle. But it isn't clear that it is going to work in practice. Ethan has been tearing it up with this project, even ending up in the Economist, but he has still had trouble reaching his funding goal. In the grand scheme of things he is asking for a relatively small amount given how much he will do, so it isn't clear to me that this is a viable option for most scientists.

The other key problem, for me as a statistician, is that many of the projects I work on will not be as easily understandable/cool as studying meth in mouse brains. So, for example, I'm not sure I'd be able to generate the kind of support I'd need for my group to work on statistical analysis of RNA-seq data or batch effect removal methods.

Still, I love the idea, and it would be great if there were alternative sources of revenue for the incredibly important work that scientists like Ethan and others are doing.

28
Oct

Sunday Data/Statistics Link Roundup (10/28/12)

  1. An important article about anti-science sentiment in the U.S. (via David S.). The politicization of scientific issues such as global warming, evolution, and healthcare (think vaccination) makes the U.S. less competitive. I think the lack of statistical literacy and training in the U.S. is one source of the problem. People use/skew/mangle statistical analyses and experiments to support their views, and without a statistically well-trained public, it all looks “reasonable and scientific”. But when science seems to contradict itself, it loses credibility. Another reason to teach statistics to everyone in high school.
  2. Scientific American was loaded with good material this past week; here is another article on cancer screening. The article covers several of the issues that make it hard to convince people that screening isn’t always good. Confusion about the positive predictive value is a huge one in cancer screening right now. The author of the piece is someone worth following on Twitter: @hildabast.
  3. A bunch of data on the use of Github. Always cool to see new data sets that are worth playing with for student projects, etc. (via Hilary M.). 
  4. A really interesting post over at Stats Chat about why we study seemingly obvious things. Hint, the reason is that “obvious” things aren’t always true. 
  5. A story on “sentiment analysis” by NPR that suggests that most of the variation in a stock’s price during the day can be explained by the number of Facebook likes. Obviously, this is an interesting correlation. It would probably be more interesting for hedge funds/stock pickers if the correlation were with the change in the stock’s price the next day. (via Dan S.)
  6. Yihui Xie visited our department this week. We had a great time chatting with him about knitr/animation and all the cool work he is doing. Here are his slides from the talk he gave. Particularly check out his idea for a fast journal. You are seeing the future of publishing.  
  7. Bonus Link: R is a trendy open source technology for big data

24
Jun

Sunday data/statistics link roundup (6/24)

  1. We’ve got a new domain! You can still follow us on tumblr or here: http://simplystatistics.org/
  2. A cool article on MIT’s annual sports statistics conference (via @storeylab). I love how the guy they chose to highlight created what I would consider a pretty simple visualization with known tools - but it turns out it is potentially a really new way of evaluating the shooting range of basketball players. This is my favorite kind of creativity in statistics.
  3. This is an interesting article calling higher education a “credentials cartel”. I don’t know if I’d go quite that far; there are a lot of really good reasons for higher education institutions beyond credentialing, like research, putting smart students together in classes and dorms, broadening experiences, etc. But I still think there is room for a smart group of statisticians/computer scientists to solve the credentialing problem on a big scale and have a huge impact on the education industry.
  4. Check out John Cook’s conjecture on statistical methods that get used: “The probability of a method being used drops by at least a factor of 2 for every parameter that has to be determined by trial-and-error.” I’m with you. I wonder if there is a corollary related to how easy the documentation is to read? 
  5. If you haven’t read Roger’s post on Statistics and the Science Club, I consider it a must-read for anyone who is affiliated with a statistics/biostatistics department. We’ve had feedback by email and on Twitter from other folks who are moving toward a more science-oriented statistical culture. We’d love to hear from more folks with this same attitude/inclination/approach.
11
Feb

Peter Thiel on Peer Review/Science

Peter Thiel gives his take on science funding/peer review:

My libertarian views are qualified because I do think things worked better in the 1950s and 60s, but it’s an interesting question as to what went wrong with DARPA. It’s not like it has been defunded, so why has DARPA been doing so much less for the economy than it did forty or fifty years ago? Parts of it have become politicized. You can’t just write checks to the thirty smartest scientists in the United States. Instead there are bureaucratic processes, and I think the politicization of science—where a lot of scientists have to write grant applications, be subject to peer review, and have to get all these people to buy in—all this has been toxic, because the skills that make a great scientist and the skills that make a great politician are radically different. There are very few people who are both great scientists and great politicians. So a conservative account of what happened with science in the 20th century is that we had a decentralized, non-governmental approach all the way through the 1930s and early 1940s. At that point, the government could accelerate and push things tremendously, but only at the price of politicizing it over a series of decades. Today we have a hundred times more scientists than we did in 1920, but their productivity per capita is less than it used to be.

Thiel has a history of making controversial comments, and I don’t always agree with him, but I think that his point about the politicization of the grant process is interesting. 

26
Jan

When should statistics papers be published in Science and Nature?

Like many statisticians, I was amped to see a statistics paper appear in Science. Given the impact that statistics has on the scientific community, it is a shame that more statistics papers don’t appear in glossy journals like Science or Nature. As I pointed out in the previous post, if the paper that introduced the p-value were cited every time this statistic was used, it would have over 3 million citations!

But a couple of our readers* have pointed to a response to the MIC paper by Noah Simon and Rob Tibshirani. Simon and Tibshirani show that the MIC statistic is underpowered compared to another statistic for the same purpose, published two years earlier, in 2009, in the Annals of Applied Statistics. A nice summary of the discussion is provided by Florian over at his blog.

If the AoAS statistic came out first (by 2 years) and is more powerful (according to simulation), should the MIC statistic have appeared in Science? 
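
To make "more powerful (according to simulation)" concrete, here is a minimal sketch of the kind of head-to-head power comparison at issue. This is only an illustration under made-up settings, not Simon and Tibshirani's actual simulation: it assumes the `energy` R package (distance correlation, the 2009 AoAS statistic) and the `minerva` R package (MIC), and the signal and noise levels below are invented.

```r
## Hypothetical power comparison of distance correlation vs. MIC.
## Assumes install.packages(c("energy", "minerva")).
library(energy)   # dcor(): distance correlation (Szekely & Rizzo, AoAS 2009)
library(minerva)  # mine(): MIC (Reshef et al., Science 2011)

# Permutation p-value for any dependence statistic
perm_pvalue <- function(x, y, stat, B = 200) {
  obs <- stat(x, y)
  null <- replicate(B, stat(x, sample(y)))
  mean(null >= obs)
}

set.seed(1)
n <- 50; nsim <- 100
rejections <- c(dcor = 0, mic = 0)
for (i in seq_len(nsim)) {
  x <- runif(n)
  y <- sin(4 * pi * x) + rnorm(n, sd = 1)   # a noisy nonlinear signal
  rejections["dcor"] <- rejections["dcor"] + (perm_pvalue(x, y, dcor) < 0.05)
  rejections["mic"]  <- rejections["mic"] +
    (perm_pvalue(x, y, function(u, v) mine(u, v)$MIC) < 0.05)
}
rejections / nsim  # estimated power of each test at level 0.05
```

Permuting y gives both statistics the same null calibration, so any difference in rejection rates reflects the statistics themselves; which one wins depends on the signal shape, the noise, and the sample size, which is part of why the question above is not trivial.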

The whole discussion reminds me of a recent blog post suggesting that journals need to choose between groundbreaking and definitive. The post points out that groundbreaking and definitive are in many ways in opposition to each other.

Again, I’d suggest that statistics papers get short shrift in the glossy journals, and I would like to see more of them. The MIC statistic is certainly groundbreaking, but it isn’t clear that it is definitive.

As a comparison, a slightly different story played out with another recent high-impact statistical method, the false discovery rate (FDR). The original papers were published in statistics journals. Then when it was clear that the idea was going to be big, a more general-audience-friendly summary was published in PNAS (not Science or Nature but definitely glossy). This might be a better way for the glossy journals to know what is going to be a major development in statistics versus an exciting - but potentially less definitive - method. 
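
As an aside for readers who haven't used it, the Benjamini-Hochberg FDR procedure is available in base R via `p.adjust`. A minimal sketch on simulated p-values (the null/signal mixture below is invented purely for illustration):

```r
## Invented example: 1,000 tests, 100 of which carry real signal.
set.seed(1)
pvals <- c(runif(900),          # nulls: uniform p-values
           rbeta(100, 1, 25))   # signals: p-values piled up near 0
qvals <- p.adjust(pvals, method = "BH")  # Benjamini-Hochberg adjustment
sum(qvals < 0.05)  # discoveries at an estimated 5% false discovery rate
```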

* Florian M. and John S.