Simply Statistics 2016-05-04T20:07:45+00:00 http://simplystats.github.io High school student builds interactive R class for the intimidated with the JHU DSL 2016-04-27T00:00:00+00:00 http://simplystats.github.io/2016/04/27/r-intimidated <p>Annika Salzberg is currently an undergraduate at Haverford College majoring in biology. While in high school here in Baltimore she developed and taught an R class to her classmates at the <a href="http://www.parkschool.net/">Park School</a>. Her interest in R grew out of a project where she and her fellow students and teachers went to the Canadian sub-Arctic to collect data on permafrost depth and polar bears. When analyzing the data she learned R (with the help of a teacher) to be able to do the analyses, some of which she did on her laptop while out in the field.</p> <p>Later she worked on developing a course that she felt was friendly and approachable enough for her fellow high-schoolers to benefit from. With the help of Steven Salzberg and the folks here at the JHU DSL, she built a class she calls <a href="https://www.datacamp.com/courses/r-for-the-intimidated">R for the intimidated</a>, which just launched on <a href="https://www.datacamp.com/courses/r-for-the-intimidated">DataCamp</a> and which you can take for free!</p> <p>The class is a great introduction for people who are just getting started with R. It walks through R/RStudio, package installation, data visualization, data manipulation, and a final project. 
We are super excited about the content that Annika created working here at Hopkins and think you should go check it out!</p> An update on Georgia Tech's MOOC-based CS degree 2016-04-27T00:00:00+00:00 http://simplystats.github.io/2016/04/27/georgia-tech-mooc-program <p><a href="https://www.insidehighered.com/news/2016/04/27/georgia-tech-plans-next-steps-online-masters-degree-computer-science?utm_source=Inside+Higher+Ed&amp;utm_campaign=d373e33023-DNU20160427&amp;utm_medium=email&amp;utm_term=0_1fcbc04421-d373e33023-197601005#.VyCmdfkGRPU.mailto">This article</a> in Inside Higher Ed discusses next steps for Georgia Tech’s ground-breaking low-cost CS degree based on MOOCs run by Udacity. With Sebastian Thrun <a href="http://blog.udacity.com/2016/04/udacity-has-a-new-___.html">stepping down</a> as CEO at Udacity, it seems both Georgia Tech and Udacity might be moving into a new phase.</p> <p>One fact that surprised me about the Georgia Tech program concerned the demographics:</p> <blockquote> <p>Once the first applications for the online program arrived, Georgia Tech was surprised by how the demographics differed from the applications to the face-to-face program. The institute’s face-to-face cohorts tend to have more men than women and more international students than U.S. citizens or residents. Applications to the online program, however, came overwhelmingly from students based in the U.S. (80 percent). 
The gender gap was even larger, with nearly nine out of 10 applications coming from men.</p> </blockquote> Write papers like a modern scientist (use Overleaf or Google Docs + Paperpile) 2016-04-21T00:00:00+00:00 http://simplystats.github.io/2016/04/21/writing <p><em>Editor’s note - This is a chapter from my book <a href="https://leanpub.com/modernscientist">How to be a modern scientist</a> where I talk about some of the tools and techniques that scientists have available to them now that they didn’t before.</em></p> <h2 id="writing---what-should-i-do-and-why">Writing - what should I do and why?</h2> <p><strong>Write using collaborative software to avoid version control issues.</strong></p> <p>On almost all modern scientific papers you will have co-authors. The traditional way of handling this was to create a single working document and pass it around. Unfortunately this system always results in a long collection of versions of a manuscript, which are often hard to distinguish and definitely hard to synthesize.</p> <p>An alternative approach is to use formal version control systems like <a href="https://git-scm.com/book/en/v2/Getting-Started-About-Version-Control">Git</a> and <a href="https://github.com/">Github</a>. However, the overhead for using these systems can be pretty heavy for paper authoring. They also require all parties participating in the writing of the paper to be familiar with version control and the command line. Alternative paper authoring tools are now available that provide some of the advantages of version control without the major overhead involved in using base version control systems.</p> <p><img src="https://imgs.xkcd.com/comics/documents.png" alt="The usual result of file naming by a group (image via https://xkcd.com/1459/)" /></p> <p><strong>Make figures the focus of your writing</strong></p> <p>Once you have a set of results and are ready to start writing up the paper the first thing is <em>not to write</em>. 
The first thing you should do is create a set of 1-10 publication-quality plots with 3-4 as the central focus (see Chapter 10 <a href="http://leanpub.com/datastyle">here</a> for more information on how to make plots). Show these to someone you trust to make sure they “get” your story before proceeding. Your writing should then be focused around explaining the story of those plots to your audience. Many people, when reading papers, read the title, the abstract, and then usually jump to the figures. If your figures tell the whole story you will dramatically increase your audience. It also helps you to clarify what you are writing about.</p> <p><strong>Write clearly and simply even though it may make your papers harder to publish</strong>.</p> <p>Learn how to write papers in a very clear and simple style. Whenever you can, write in plain English and make the approach you are using understandable and clear. This can (sometimes) make it harder to get your papers into journals. Referees are trained to find things to criticize, and if you simplify your argument they may assume that what you have done is easy or just like what has been done before. This can be extremely frustrating during the peer review process. But the peer review process isn’t the end goal of publishing! The point of publishing is to communicate your results to your community and beyond so they can use them. Simple, clear language leads to much higher use/reading/citation/impact of your work.</p> <p><strong>Include links to code, data, and software in your writing</strong></p> <p>Not everyone recognizes the value of re-analysis, scientific software, or data and code sharing. But it is the fundamental cornerstone of the modern scientific process to make all of your materials easily accessible, re-usable and checkable. 
Include links to data, code, and software prominently in your abstract, introduction and methods and you will dramatically increase the use and impact of your work.</p> <p><strong>Give credit to others</strong></p> <p>In academics the main currency we use is credit for publication. In general assigning authorship and getting credit can be a very tricky component of the publication process. It is almost always better to err on the side of offering credit. A very useful test that my advisor <a href="http://www.genomine.org/">John Storey</a> taught me is this: if you would be embarrassed to explain the authorship credit to anyone who was on the paper or not on the paper, then you probably haven’t shared enough credit.</p> <h2 id="writing---what-tools-should-i-use">Writing - what tools should I use?</h2> <h3 id="wysiwyg-software-google-docs-and-paperpile">WYSIWYG software: Google Docs and Paperpile.</h3> <p>This system uses <a href="https://www.google.com/docs/about/">Google Docs</a> for writing and <a href="https://paperpile.com/app">Paperpile</a> for reference management. If you have a Google account you can easily create documents and share them with your collaborators either privately or publicly. Paperpile allows you to search for academic articles and insert references into the text using a system that will be familiar if you have previously used <a href="http://endnote.com/">Endnote</a> and <a href="https://products.office.com/en-us/word">Microsoft Word</a>.</p> <p>This has the advantage of being a what-you-see-is-what-you-get system - anyone with basic text processing skills should immediately be able to contribute. Google Docs also automatically saves versions of your work so that you can flip back to older versions if someone makes a mistake. 
You can also easily see which part of the document was written by which person and add comments.</p> <p><em>Getting started</em></p> <ol> <li>Set up accounts with <a href="https://accounts.google.com/SignUp">Google</a> and with <a href="https://paperpile.com/">Paperpile</a>. If you are an academic the Paperpile account will cost $2.99 a month, but there is a 30-day free trial.</li> <li>Go to <a href="https://docs.google.com/document/u/0/">Google Docs</a> and create a new document.</li> <li>Set up the <a href="https://paperpile.com/blog/free-google-docs-add-on/">Paperpile add-on for Google Docs</a></li> <li>In the upper right hand corner of the document, click on the <em>Share</em> link and share the document with your collaborators</li> <li>Start editing</li> <li>When you want to include a reference, place the cursor where you want the reference to go, then using the <em>Paperpile</em> menu, choose insert citation. This should give you a search box where you can search by PubMed ID or on the web for the reference you want.</li> <li>Once you have added some references use the <em>Citation style</em> option under <em>Paperpile</em> to pick the citation style for the journal you care about.</li> <li>Then use the <em>Format citations</em> option under <em>Paperpile</em> to create the bibliography at the end of the document</li> </ol> <p>The nice thing about using this system is that everyone can easily directly edit the document simultaneously - which reduces conflict and difficulty of use. A disadvantage is that getting the formatting just right for most journals is nearly impossible, so you will be sending in a version of your paper that is somewhat generic. For most journals this isn’t a problem, but a few journals are sticklers about this.</p> <h3 id="typesetting-software-overleaf-or-sharelatex">Typesetting software: Overleaf or ShareLaTeX</h3> <p>An alternative approach is to use typesetting software like LaTeX. 
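</p> <p>For readers who have never seen it, a LaTeX source file is plain text interleaved with markup commands, and the typeset PDF is produced by compiling that file. Here is a minimal sketch of a document (the title, authors, and text are placeholders, not from any real paper):</p> <pre><code>\documentclass{article}
\title{A hypothetical paper title}
\author{First Author \and Second Author}
\begin{document}
\maketitle

\begin{abstract}
One short paragraph: the problem, why it matters, the approach, the results.
\end{abstract}

\section{Introduction}
Plain text, with markup for emphasis (\emph{like this}) and
inline math (like $p = 0.05$).

\end{document}
</code></pre> <p>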
This requires a little bit more technical expertise since you need to understand the LaTeX typesetting language. But it allows for more precise control over what the document will look like. Using LaTeX on its own you will have many of the same issues with version control that you would have for a Word document. Fortunately there are now “Google Docs like” solutions for editing LaTeX code that are readily available. Two of the most popular are <a href="https://www.overleaf.com/">Overleaf</a> and <a href="https://www.sharelatex.com/">ShareLaTeX</a>.</p> <p>In either system you can create a document, share it with collaborators, and edit it in a similar manner to a Google Doc, with simultaneous editing. Under both systems you can save versions of your document easily as you move along so you can quickly return to older versions if mistakes are made.</p> <p>I have used both kinds of software, but now primarily use Overleaf because they have a killer feature. Once you have finished writing your paper you can directly submit it to some preprint servers like <a href="http://arxiv.org/">arXiv</a> or <a href="http://biorxiv.org/">bioRxiv</a> and even some journals like <a href="https://peerj.com">PeerJ</a> or <a href="http://f1000research.com/">f1000research</a>.</p> <p><em>Getting started</em></p> <ol> <li>Create an <a href="https://www.overleaf.com/signup">Overleaf account</a>. There is a free version of the software. Paying $8/month will give you easy saving to Dropbox.</li> <li>Click on <em>New Project</em> to create a new document and select from the available templates</li> <li>Open your document and start editing</li> <li>Share with colleagues by clicking on the <em>Share</em> button within the project. You can share either a read-only version or a read-and-edit version.</li> </ol> <p>Once you have finished writing your document you can click on the <em>Publish</em> button to automatically submit your paper to the available preprint servers and journals. 
Or you can download a PDF version of your document and submit it to any other journal.</p> <h2 id="writing---further-tips-and-issues">Writing - further tips and issues</h2> <h3 id="when-to-write-your-first-paper">When to write your first paper</h3> <p>As soon as possible! The purpose of graduate school is (in some order):</p> <ul> <li>Freedom</li> <li>Time to discover new knowledge</li> <li>Time to dive deep</li> <li>Opportunity for leadership</li> <li>Opportunity to make a name for yourself <ul> <li>R packages</li> <li>Papers</li> <li>Blogs</li> </ul> </li> <li>Get a job</li> </ul> <p>The first couple of years of graduate school are typically focused on (1) teaching you all the technical skills you need and (2) data dumping as much hard-won practical experience from more experienced people into your head as fast as possible.</p> <p>After that one of your main focuses should be on establishing your own program of research and reputation. Especially for Ph.D. students it cannot be emphasized enough that <em>no one will care about your grades in graduate school but everyone will care what you produced</em>. See, for example, Sherri’s excellent <a href="http://drsherrirose.com/academic-cvs-for-statistical-science-faculty-positions">guide on CVs for academic positions</a>.</p> <p>I firmly believe that <a href="http://simplystatistics.org/2013/01/23/statisticians-and-computer-scientists-if-there-is-no-code-there-is-no-paper/">R packages</a> and blog posts can be just as important as papers, but the primary signal to most traditional academic communities still remains published peer-reviewed papers. So you should get started on writing them as soon as you can (definitely before you feel comfortable enough to try to write one).</p> <p>Even if you aren’t going to be in academics, papers are a great way to show off that you can (a) identify a useful project, (b) finish a project, and (c) write well. 
So the first thing you should be asking when you start a project is “what paper are we working on?”</p> <h3 id="what-is-an-academic-paper">What is an academic paper?</h3> <p>A scientific paper can be distilled into four parts:</p> <ol> <li>A set of methodologies</li> <li>A description of data</li> <li>A set of results</li> <li>A set of claims</li> </ol> <p>When you (or anyone else) writes a paper the goal is to communicate items 1-3 clearly so that they justify the set of claims you are making. Before you can even write down 4 you have to do 1-3. So that is where you start when writing a paper.</p> <h3 id="how-do-you-start-a-paper">How do you start a paper?</h3> <p>The first thing you do is decide on a problem to work on. This can be a problem that your advisor thought of, a problem you are interested in, or a combination of both. Ideally your first project will have the following characteristics:</p> <ol> <li>Concrete</li> <li>Solves a scientific problem</li> <li>Gives you an opportunity to learn something new</li> <li>Something you feel ownership of</li> <li>Something you want to work on</li> </ol> <p>Points 4 and 5 can’t be emphasized enough. Others can try to help you come up with a problem, but if you don’t feel like it is <em>your</em> problem it will make writing the first paper a total slog. You want to find an option where you are just insanely curious to know the answer at the end, to the point where you <em>just have to figure it out</em> and kind of don’t care what the answer is. That doesn’t always happen, but when it does it makes the grind of writing papers go down a lot easier.</p> <p>Once you have a problem the next step is to actually do the research. 
I’ll leave this for another guide, but the basic idea is that you want to follow the usual <a href="https://leanpub.com/datastyle/">data analytic process</a>:</p> <ol> <li>Define the question</li> <li>Get/tidy the data</li> <li>Explore the data</li> <li>Build/borrow a model</li> <li>Perform the analysis</li> <li>Check/critique results</li> <li>Write things up</li> </ol> <p>The hardest part for the first paper is often knowing where to stop and start writing.</p> <h3 id="how-do-you-know-when-to-start-writing">How do you know when to start writing?</h3> <p>Sometimes this is an easy question to answer. If you started with a very concrete question at the beginning, then you are done once you have performed enough analysis to convince yourself that you have the answer to the question. If the answer is interesting/surprising then it is time to stop and write.</p> <p>If you started with a question that wasn’t so concrete then it gets a little trickier. The basic idea here is that you have convinced yourself you have a result that is worth reporting. Usually this takes the form of between 1 and 5 figures that show a coherent story that you could explain to someone in your field.</p> <p>In general one thing you should be working on in graduate school is your own internal timer that tells you, “ok we have done enough, time to write this up”. I found this one of the hardest things to learn, but if you are going to stay in academics it is a critical skill. There are rarely deadlines for paper writing (unless you are submitting to CS conferences) so it will eventually be up to you when to start writing. 
If you don’t have a good clock, this can really slow down your ability to get things published and promoted in academics.</p> <p>One good principle to keep in mind is “the perfect is the enemy of the very good.” Another one is that a published paper in a respectable journal beats a paper you just never submit because you want to get it into the “best” journal.</p> <h3 id="a-note-on-negative-results">A note on “negative results”</h3> <p>If the answer to your research problem isn’t interesting/surprising but you started with a concrete question it is also time to stop and write. But things often get trickier with this type of paper, since most journals filter for “interest” when reviewing papers, so a paper without a really “big” result will sometimes be harder to publish. <strong>This is ok!!</strong> Even though it may take longer to publish the paper, it is important to publish even results that aren’t surprising/novel. I would much rather that you come to an answer you are comfortable with and we go through a little pain trying to get it published than you keep pushing until you get an “interesting” result, which may or may not be justifiable.</p> <h3 id="how-do-you-start-writing">How do you start writing?</h3> <ol> <li>Once you have a set of results and are ready to start writing up the paper the first thing is <em>not to write</em>. The first thing you should do is create a set of 1-4 publication-quality plots (see Chapter 10 <a href="http://leanpub.com/datastyle">here</a>). 
Show these to someone you trust to make sure they “get” your story before proceeding.</li> <li>Start a project on <a href="https://www.overleaf.com/">Overleaf</a> or <a href="https://www.google.com/docs/about/">Google Docs</a>.</li> <li>Write up a story around the plots in the simplest language you feel you can get away with, while still reporting all of the technical details that you can.</li> <li>Go back and add references in only after you have finished the whole first draft.</li> <li>Add in additional technical detail in the supplementary material if you need it.</li> <li>Write up a reproducible version of your code that returns exactly the same numbers/figures in your paper with no input parameters needed.</li> </ol> <h3 id="what-are-the-sections-in-a-paper">What are the sections in a paper?</h3> <p>Keep in mind that most people will read the title of your paper only, a small fraction of those people will read the abstract, a small fraction of those people will read the introduction, and a small fraction of those people will read your whole paper. So make sure you get to the point quickly!</p> <p>The sections of a paper are always some variation on the following:</p> <ol> <li><strong>Title</strong>: Should be very short, no colons if possible, and state the main result. For example, “A new method for sequencing data that shows how to cure cancer”. Here you want to make sure people will read the paper without overselling your results - this is a delicate balance.</li> <li><strong>Abstract</strong>: In (ideally) 4-5 sentences explain (a) what problem you are solving, (b) why people should care, (c) how you solved the problem, (d) what are the results and (e) a link to any data/resources/software you generated.</li> <li><strong>Introduction</strong>: A more lengthy (1-3 pages) explanation of the problem you are solving, why people should care, and how you are solving it. Here you also review what other people have done in the area. 
The most critical thing is to never underestimate how little people know or care about what you are working on. It is your job to explain to them why they should.</li> <li><strong>Methods</strong>: You should state and explain your experimental procedures, how you collected results, your statistical model, and any strengths or weaknesses of your proposed approach.</li> <li><strong>Comparisons (for methods papers)</strong>: Compare your proposed approach to the state-of-the-art methods. Do this with simulations (where you know the right answer) and data you haven’t simulated (where you don’t know the right answer). If you can base your simulation on data, even better. Make sure you are <a href="http://simplystatistics.org/2013/03/06/the-importance-of-simulating-the-extremes/">simulating both the easy case (where your method should be great) and harder cases where your method might be terrible</a>.</li> <li><strong>Your analysis</strong>: Explain what you did, what data you collected, how you processed it and how you analyzed it.</li> <li><strong>Conclusions</strong>: Summarize what you did and explain why what you did is important one more time.</li> <li><strong>Supplementary Information</strong>: If there are a lot of technical computational, experimental or statistical details, you can include a supplement that has all of the details so folks can follow along. As far as possible, try to include the details in the main text, explained clearly.</li> </ol> <p>The length of the paper will depend a lot on which journal you are targeting. In general the shorter/more concise the better. But unless you are shooting for a really glossy journal you should try to include the details in the paper itself. 
This means most papers will be in the 4-15 page range, but with a huge variance.</p> <p><em>Note</em>: Part of this chapter appeared in the <a href="https://github.com/jtleek/firstpaper">Leek group guide to writing your first paper</a></p> As a data analyst the best data repositories are the ones with the least features 2016-04-20T00:00:00+00:00 http://simplystats.github.io/2016/04/20/data-repositories <p>Lately, for a range of projects I have been working on I have needed to obtain data from previous publications. There is a growing list of data repositories where data is made available. General-purpose data sharing sites include:</p> <ul> <li>The <a href="https://osf.io/">open science framework</a></li> <li>The <a href="https://dataverse.harvard.edu/">Harvard Dataverse</a></li> <li><a href="https://figshare.com/">Figshare</a></li> <li><a href="https://datadryad.org/">Datadryad</a></li> </ul> <p>There are also a host of field-specific data sharing sites. One thing that I find a little frustrating about these sites is that they add a lot of bells and whistles. For example I wanted to download a <a href="https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/6FMTT3">p-value data set</a> from Dataverse (just to pick on one, but most repositories have similar issues). I go to the page and click <code class="highlighter-rouge">Download</code> on the data set I want.</p> <p><img src="https://raw.githubusercontent.com/simplystats/simplystats.github.io/master/_images/2016-04-20/dataverse1.png" alt="Downloading a dataverse paper " /></p> <p>Then I have to accept the terms:</p> <p><img src="https://raw.githubusercontent.com/simplystats/simplystats.github.io/master/_images/2016-04-20/dataverse2.png" alt="Downloading a dataverse paper part 2 " /></p> <p>Then the data set is downloaded. But it comes from a button that doesn’t allow me to get the direct link. 
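</p> <p>What I want instead is for every file to have a stable direct link, because then loading the data into an analysis is a one-liner. As a minimal sketch in R (the URL here is hypothetical, standing in for the kind of direct link a repository might expose):</p> <pre><code># Read a csv file straight from a direct link - no specialized
# package or API needed (the URL below is hypothetical)
pvals &lt;- read.csv("https://example.org/data/pvalues.csv")
head(pvals)
</code></pre> <p>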
There is an <a href="https://github.com/ropensci/dvn">R package</a> that you can use to download dataverse data, but again not with direct links to the data sets.</p> <p>This is a similar system to many data repositories where there is a multi-step process to downloading data rather than direct links.</p> <p>But as a data analyst I often find that I want:</p> <ul> <li>To be able to find a data set with some minimal search terms</li> <li>Find the data set in .csv or tab-delimited format via a direct link</li> <li>Have the data set be available both as raw and processed versions</li> <li>Have the processed version be one or many <a href="https://www.jstatsoft.org/article/view/v059i10">tidy data sets</a></li> </ul> <p>As a data analyst I would rather have all of the data stored as direct links and ideally as csv files. Then you don’t need to figure out a specialized package, an API, or anything else. You just use <code class="highlighter-rouge">read.csv</code> directly using the URL in R and you are off to the races. It also makes it easier to point to which data set you are using. So I find the best data repositories are the ones with the least features.</p> Junior scientists - you don't have to publish in open access journals to be an open scientist. 2016-04-11T00:00:00+00:00 http://simplystats.github.io/2016/04/11/publishing <p><em>Editor’s note - This is a chapter from my book <a href="https://leanpub.com/modernscientist">How to be a modern scientist</a> where I talk about some of the tools and techniques that scientists have available to them now that they didn’t before.</em></p> <h2 id="publishing---what-should-i-do-and-why">Publishing - what should I do and why?</h2> <p>A modern scientific writing process goes as follows.</p> <ol> <li>You write a paper</li> <li>You post a preprint <ol type="a"> <li>Everyone can read and comment</li> </ol> </li> <li>You submit it to a journal</li> <li>It is peer reviewed privately</li> <li>The paper is accepted or rejected <ol type="a"> <li>If rejected go back to step 2 and start over</li> <li>If accepted it will be published</li> </ol> </li> </ol> <p>You can take advantage of modern writing and publishing tools to handle several steps in the process.</p> <p><strong>Post preprints of your work</strong></p> <p>Once you have finished writing your paper, you want to share it with others. Historically, this involved submitting the paper to a journal, waiting for reviews, revising the paper, resubmitting, and eventually publishing it. There is now very little reason to wait that long for your paper to appear in print. Generally you can post a paper to a preprint server and have it appear in 1-2 days. This is a dramatic improvement on the weeks or months it takes for papers to appear in peer reviewed journals even under optimal conditions. There are several advantages to posting preprints.</p> <ul> <li>Preprints establish precedence for your work, which reduces your risk of being scooped.</li> <li>Preprints allow you to collect feedback on your work and improve it quickly.</li> <li>Preprints can help you to get your work published in formal academic journals.</li> <li>Preprints can get you attention and press for your work.</li> <li>Preprints give junior scientists and other researchers gratification that helps them handle the stress and pressure of their first publications.</li> </ul> <p>The last point is underappreciated and was first pointed out to me by <a href="http://giladlab.uchicago.edu/">Yoav Gilad</a>. It takes a really long time to write a scientific paper. For a student publishing their first paper, the first feedback they get is often (a) delayed by several months and (b) negative and in the form of a referee report. This can have a major impact on the motivation of those students to keep working on projects. 
Preprints allow students to have an immediate product they can point to as an accomplishment, allow them to get positive feedback along with constructive or negative feedback, and can ease the pain of difficult referee reports or rejections.</p> <p><strong>Choose the journal that maximizes your visibility</strong></p> <p>You should try to publish your work in the best journals for your field. There are a couple of reasons for this. First, being a scientist is both a calling and a career. To advance your career, you need visibility among your scientific peers and among the scientists who will be judging you for grants and promotions. The best place to do this is by publishing in the top journals in your field. The important thing is to do your best to do good work and submit it to these journals, even if the results aren’t the most “sexy”. Don’t adapt your workflow to the journal, but don’t ignore the career implications either. Do this even if the journals are closed access. There are ways to make your work accessible and you will both raise your profile and disseminate your results to the broadest audience.</p> <p><strong>Share your work on social media</strong></p> <p>Academic journals are good for disseminating your work to the appropriate scientific community. As a modern scientist you have other avenues and other communities - like the general public - that you would like to reach with your work. Once your paper has been published in a preprint or in a journal, be sure to share your work through appropriate social media channels. 
This will also help you develop facility in coming up with one line or one figure that best describes what you think you have published so you can share it on social media sites like Twitter.</p> <h3 id="preprints-and-criticism">Preprints and criticism</h3> <p>See the section on scientific blogging for how to respond to criticism of your preprints online.</p> <h2 id="publishing---what-tools-should-i-use">Publishing - what tools should I use?</h2> <h3 id="preprint-servers">Preprint servers</h3> <p>Here are a few preprint servers you can use.</p> <ol> <li><a href="http://arxiv.org/">arXiv</a> (free) - primarily takes math/physics/computer science papers. You can submit papers and they are reviewed and posted within a couple of days. It is important to note that once you submit a paper here, you cannot take it down. But you can submit revisions to the paper which are tracked over time. This outlet is followed by a large number of journalists and scientists.</li> <li><a href="http://biorxiv.org/">bioRxiv</a> (free) - primarily takes biology-focused papers. They are pretty strict about which categories you can submit to. You can submit papers and they are reviewed and posted within a couple of days. bioRxiv also allows different versions of manuscripts, but some folks have had trouble with their versioning system, which can be a bit tricky for keeping your paper coordinated with your publication. bioRxiv is pretty carefully followed by the biological and computational biology communities.</li> <li><a href="https://peerj.com/preprints/">PeerJ</a> (free) - takes a wide range of different types of papers. They will again review your preprint quickly and post it online. You can also post different versions of your manuscript with this system. 
This system is newer and so has fewer followers, so you will need to do your own publicity if you publish your paper here.</li> </ol> <h3 id="journal-preprint-policies">Journal preprint policies</h3> <p>This <a href="https://en.wikipedia.org/wiki/List_of_academic_journals_by_preprint_policy">list</a> provides information on which journals accept papers that were first posted as preprints. However, you shouldn’t rely on this list alone - always confirm the journal’s current policy before you post.</p> <h2 id="publishing---further-tips-and-issues">Publishing - further tips and issues</h2> <h3 id="open-vs-closed-access">Open vs. closed access</h3> <p>Once your paper has been posted to a preprint server you need to submit it for publication. There are a number of considerations you should keep in mind when submitting papers. One of these considerations is closed versus open access. Closed access journals do not require you to pay to submit or publish your paper. But then people who want to read your paper either need to pay or have a subscription to the journal in question.</p> <p>There has been a strong push for open access journals over the last couple of decades. There are some very good reasons justifying this type of publishing including (a) moral arguments based on using public funding for research, (b) ease of access to papers, and (c) benefits in terms of people being able to use your research. In general, most modern scientists want their work to be as widely accessible as possible. So modern scientists often opt for open access publishing.</p> <p>Open access publishing does have a couple of disadvantages. First, it is often expensive, with fees for publication ranging between <a href="http://simplystatistics.org/2011/11/03/free-access-publishing-is-awesome-but-expensive-how/">$1,000 and $4,000</a> depending on the journal. Second, while science is often a calling, it is also a career. Sometimes the best journals in your field may be closed access. 
In general, one of the most important components of an academic career is being able to publish in journals that are read by a lot of people in your field so your work will be recognized and impactful.</p> <p>However, modern systems make both closed and open access journals reasonable outlets.</p> <h3 id="closed-access--preprints">Closed access + preprints</h3> <p>If the top journals in your field are closed access and you are a junior scientist then you should try to submit your papers there. But to make sure your papers are still widely accessible you can use preprints. First, you can submit a preprint before you submit the paper to the journal. Second, you can update the preprint to keep it current with the published version of your paper. This system allows you to make sure that your paper is read widely within your field, but also allows everyone to freely read the same paper. On your website, you can then link to both the published and preprint version of your paper.</p> <h3 id="open-access">Open access</h3> <p>If the top journal in your field is open access you can submit directly to that journal. Even if the journal is open access it makes sense to submit the paper as a preprint during the review process. You can then keep the preprint up-to-date, but this system has the advantage that the formally published version of your paper is also available for everyone to read without constraints.</p> <h3 id="responding-to-referee-comments">Responding to referee comments</h3> <p>After your paper has been reviewed at an academic journal you will receive referee reports. If the paper has not been outright rejected, it is important to respond to the referee reports in a timely and direct manner. Referee reports are often maddening. 
There is little incentive for people to do a good job refereeing and the most qualified reviewers will likely be those with a <a href="http://simplystatistics.org/2015/02/09/the-trouble-with-evaluating-anything/">conflict of interest</a>.</p> <p>The first thing to keep in mind is that the power in the refereeing process lies entirely with the editors and referees. So when responding to referee reports, first eliminate the impulse to argue or respond with any kind of emotion. A step-by-step process for responding to referee reports is the following:</p> <ol> <li>Create a Google Doc. Put in all referee and editor comments in italics.</li> <li>Break the comments up into each discrete criticism or request.</li> <li>In bold, respond to each comment. Begin each response with “On page xx we did yy to address this comment.”</li> <li>Perform the analyses and experiments that you need to fill in the yy.</li> <li>Edit the document to reflect all of the experiments that you have performed.</li> </ol> <p>By actively responding to each comment you will ensure you are responsive to the referees and give your paper the best chance of success. If a comment is incorrect or nonsensical, think about how you can edit the paper to remove this confusion.</p> <h3 id="finishing">Finishing</h3> <p>While I have advocated here for using preprints to disseminate your work, it is important to follow the process all the way through to completion. Responding to referee reports is drudgery and no one likes to do it. But in terms of career advancement, preprints are almost entirely valueless until they are formally accepted for publication. It is critical to see all papers all the way through to the end of the publication cycle.</p> <h3 id="you-arent-done">You aren’t done!</h3> <p>Publication of your paper is only the beginning of successfully disseminating your science.
Once you have published the paper, it is important to use your social media, blog, and other resources to disseminate your results to the broadest audience possible. You will also give talks, discuss the paper with colleagues, and respond to requests for data and code. The most successful papers have a long half-life and the responsibilities linger long after the paper is published. And the most successful scientists continue to stay on top of requests and respond to critiques long after their papers are published.</p> <p><em>Note:</em> Part of this chapter appeared in the Simply Statistics blog post: <a href="http://simplystatistics.org/2016/02/26/preprints-and-pppr/">“Preprints are great, but post publication peer review isn’t ready for prime time”</a></p> A Natural Curiosity of How Things Work, Even If You're Not Responsible For Them 2016-04-08T00:00:00+00:00 http://simplystats.github.io/2016/04/08/eecom <p>I just read Karl’s <a href="https://kbroman.wordpress.com/2016/04/08/i-am-a-data-scientist/">great post</a> on what it means to be a data scientist. I can’t really add much to it, but reading it got me thinking about the Apollo 12 mission, the second moon landing.</p> <p>This mission is actually famous because of its launch, where the Saturn V was struck by lightning and <a href="https://en.wikipedia.org/wiki/John_Aaron">John Aaron</a> (played wonderfully by Loren Dean in the movie <a href="http://www.imdb.com/title/tt0112384/">Apollo 13</a>), the flight controller in charge of environmental, electrical, and consumables (EECOM), had to make a decision about whether to abort the launch.</p> <p>In this great clip from the movie <em>Failure is Not An Option</em>, the real John Aaron describes what makes for a good EECOM flight controller.
The bottom line is that</p> <blockquote> <p>A good EECOM has a natural curiosity for how things work, even if you…are not responsible for them</p> </blockquote> <p>I think a good data scientist or statistician also has that property. The key part of that line is the “<em>even if you are not responsible for them</em>” part. I’ve found that a lot of being a statistician involves nosing around in places where you’re not supposed to be, seeing how data are collected, handled, managed, analyzed, and reported. Focusing on the development and implementation of methods is not enough.</p> <p>Here’s the clip, which describes the famous “SCE to AUX” call from John Aaron:</p> <iframe width="640" height="480" src="https://www.youtube.com/embed/eWQIryll8y8" frameborder="0" allowfullscreen=""></iframe> Not So Standard Deviations Episode 13 - It's Good that Someone is Thinking About Us 2016-04-07T00:00:00+00:00 http://simplystats.github.io/2016/04/07/nssd-episode-13 <p>In this episode, Hilary and I talk about the difficulties of separating data analysis from its context, and Feather, a new file format for storing tabular data.
Also, we respond to some listener questions and Hilary announces her new job.</p> <p>If you have questions you’d like us to answer, you can send them to nssdeviations @ gmail.com or tweet us at @NSSDeviations.</p> <p><a href="https://itunes.apple.com/us/podcast/not-so-standard-deviations/id1040614570">Subscribe to the podcast on iTunes</a>.</p> <p>Please <a href="https://itunes.apple.com/us/podcast/not-so-standard-deviations/id1040614570">leave us a review on iTunes</a>!</p> <p>Show notes:</p> <ul> <li> <p><a href="https://www.patreon.com/NSSDeviations">NSSD Patreon page</a></p> </li> <li> <p><a href="https://github.com/wesm/feather/">Feather git repository</a></p> </li> <li> <p><a href="https://arrow.apache.org">Apache Arrow</a></p> </li> <li> <p><a href="https://google.github.io/flatbuffers/">FlatBuffers</a></p> </li> <li> <p><a href="http://simplystatistics.org/2016/03/31/feather/">Roger’s blog post on feather</a></p> </li> <li> <p><a href="https://www.etsy.com/shop/NausicaaDistribution">NausicaaDistribution</a></p> </li> <li> <p><a href="http://www.rstats.nyc">New York R Conference</a></p> </li> <li> <p><a href="https://goo.gl/J2QAWK">Every Frame a Painting</a></p> </li> </ul> <p><a href="https://soundcloud.com/nssd-podcast/episode-13-its-good-that-someone-is-thinking-about-us">Download the audio for this episode.</a></p> <iframe width="100%" height="166" scrolling="no" frameborder="no" src="https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/tracks/257851619&amp;color=ff5500&amp;auto_play=false&amp;hide_related=false&amp;show_comments=true&amp;show_user=true&amp;show_reposts=false"></iframe> Companies are Countries, Academia is Europe 2016-04-05T00:00:00+00:00 http://simplystats.github.io/2016/04/05/corporations-academia <p>I’ve been thinking a lot recently about the practice of data analysis in different settings and how the environment in which you work can affect the view you have on how things should be done. 
I’ve been working in academia for over 12 years now. I don’t have any industry data science experience, but long ago I worked as a software engineer at <a href="http://www.northropgrumman.com/Pages/default.aspx">two</a> <a href="http://kencast.com">companies</a>. Obviously, my experience is biased on the academic side.</p> <p>I’ve seen an interesting divergence between what I see being written by data scientists in industry and my personal experience doing data science in academia. From the industry side, I see a lot of stuff about tooling/software and processes. This makes sense to me. Often, a company needs/wants to move quickly and doing so requires making decisions on a reasonable time scale. If decisions are made with data, then the process of collecting, organizing, analyzing, and communicating data needs to be well thought-out, systematized, rigorous, and streamlined. If every time someone at the company had a question the data science team developed some novel custom coded-from-scratch solution, decisions would be made at a glacial pace, which is probably not good for business. In order to handle this type of situation you need solid tools and flexible workflows. You also need agreement within the company about how things are done and the processes that are followed.</p> <p>Now, I don’t mean to imply that life at a company is easy, that there isn’t politics or bureaucracy to deal with. But I see companies as much like individual countries, with a clear (hierarchical) leadership structure and decision-making process (okay, maybe ideal companies). Much like in a country, it might take some time to come to a decision about a policy or problem (e.g. health insurance), with much negotiation and horse-trading, but once consensus is arrived at, often the policy can be implemented across the country at a reasonable timescale.
In a company, if a certain workflow or data process can be shown to be beneficial and perhaps improve profitability down the road, then a decision could be made to implement it. Ultimately, everyone within a company is in the same boat and is interested in seeing the company succeed.</p> <p>When I worked at a company as a software developer, I’d sometimes run into a problem that was confusing or difficult to code. So I’d walk down to the systems engineer’s office (the guy who wrote the specification) and talk to him about it. We’d hash things out for a while and then figure out a way to go forward. Often the technical writers who wrote the documentation would come and ask me what exactly a certain module did and I’d explain it to them. Communication was usually quick and efficient because it usually occurred person-to-person and because we were all on the same team.</p> <p>Academia is more like Europe, a somewhat loose federation of states that only communicates with each other because they have to. Each principal investigator is a country and s/he has to engage in constant (sometimes contentious) negotiations with other investigators (“countries”). As a data scientist, this can be tricky because unless I collect/generate my own data (which sometimes, <a href="http://www.ncbi.nlm.nih.gov/pubmed/18477784">I do</a>), I have to negotiate with another investigator to obtain the data. Even if I were collaborating with that investigator from the very beginning of a study, I typically have very little direct control over the data collection process because those people don’t work for me. The result is that the data often come to me in some format over which I had little input, and I just have to deal with it. Sometimes this is a nice CSV file, but often it is not.</p> <p>In good situations, I can talk with the investigator collecting the data and we can hash out a plan to put the data into a <a href="https://www.jstatsoft.org/article/view/v059i10">certain format</a>.
But even if we can agree on that, often the expertise will not be available on their end to get the data into that format, so I’ll end up having to do it myself anyway. In not-so-good situations, I can make all the arguments I want for an organized data collection and analysis workflow, but if the investigator doesn’t want to do it, can’t afford it, or doesn’t see any incentive, then it’s not going to happen. Ever.</p> <p>However, even in the good situations, every investigator works in their own personal way. I mean, that’s why people go into academia, because you can “be your own boss” and work on problems that interest you. Most people develop a process for running their group/lab that most suits their personality. If you’re a data scientist, you need to figure out a way to mesh with each and every investigator you collaborate with. In addition, you need to adapt yourself to whatever data process each investigator has developed for their group. So if you’re working with a genomics person, you might need to learn about BAM files. For a neuroimaging collaborator, you’ll need to know about SPM. If one person doesn’t like tidy data, then that’s too bad. You need to deal with it (or don’t work with them). As a result, it’s difficult to develop a useful “system” for data science because any system that works for one collaborator is unlikely to work for another collaborator. In effect, each collaboration often results in a custom coded-from-scratch solution.</p> <p>This contrast between companies and academia got me thinking about the <a href="https://en.wikipedia.org/wiki/Theory_of_the_firm">Theory of the Firm</a>. This is an economic theory that tries to explain why firms/companies develop at all, as opposed to individuals or small groups negotiating over an open market. My understanding is that it all comes down to how well you can write and enforce a contract between two parties. 
For example, if I need to manufacture iPhones, I can go to a contract manufacturer, give them the designs and the precise specifications/tolerances, and they can just produce millions of them. However, if I need to <em>design</em> the iPhone, it’s a bit harder for me to go to another company and just say “Design an awesome new phone!” That kind of contract is difficult to write down, much less enforce. That other company will be operating off of different incentives from me and will likely not produce what I want. It’s probably better if I do the design work in-house. Ultimately, once the transaction costs of having two different companies work together become too high, it makes more sense for a company to do the work in-house.</p> <p>I think collaborating on data analysis is a high transaction cost activity. Companies have an advantage in this realm to the extent that they can hire lots of data scientists to work in-house. Academics who are well-funded and have large labs can often hire a data analyst to work for them. This is good because it makes a well-trained person available at low transaction cost, but this setup is the exception. PIs with smaller labs barely have enough funding to do their experiments and so either have to analyze the data themselves (for which they may not be appropriately trained) or collaborate with someone willing to do it. Large academic centers often have research cores that provide data analysis services, but this doesn’t change the fact that data analysis that occurs “outside the company” dramatically increases the transaction costs of doing the research. Because data analysis is a highly iterative process, each time you have to go back and forth with an outside entity, the costs go up.</p> <p>I think it’s possible to see a time when data analysis can effectively be made external. I mean, Apple used to manufacture all its products, but has shifted to contract manufacturing to great success.
But I think we will have to develop a much better understanding of the data analysis process before we see the transaction costs start to go down.</p> New Feather Format for Data Frames 2016-03-31T00:00:00+00:00 http://simplystats.github.io/2016/03/31/feather <p>This past Tuesday, Hadley Wickham and Wes McKinney <a href="http://blog.cloudera.com/blog/2016/03/feather-a-fast-on-disk-format-for-data-frames-for-r-and-python-powered-by-apache-arrow/">announced</a> a new binary file format specifically for storing data frames.</p> <blockquote> <p>One thing that struck us was that, while R’s data frames and Python’s pandas data frames utilize different internal memory representations, the semantics of their user data types are mostly the same. In both R and pandas, data frames contain lists of named, equal-length columns, which can be numeric, boolean, date-and-time, categorical (factors), or string. Additionally, these columns must support missing (null) values.</p> </blockquote> <p>Their work builds on the Apache Arrow project, which specifies a format for tabular data. There is currently a Python and R implementation for reading/writing these files but other implementations could easily be built as the file format looks pretty straightforward. The git repository is <a href="https://github.com/wesm/feather/">here</a>.</p> <p>Initial thoughts:</p> <ul> <li> <p>The possibility of passing data between languages is, I think, the main point here. The potential for passing data through a pipeline without worrying about the specifics of different languages could make for much more powerful analyses where different tools are used for whatever they tend to do best. Essentially, as long as data can be made tidy going in and coming out, there should not be a communication issue between languages.</p> </li> <li> <p>R users might be wondering what the big deal is–we already have a binary serialization format (XDR).
But R’s serialization format is meant to cover all possible R objects. Feather’s focus on data frames allows for the removal of many of the annoying (but seldom used) complexities of R objects and for the optimization of a very commonly used data format.</p> </li> <li> <p>In my testing, there’s a noticeable speed difference between reading a feather file and reading an (uncompressed) R workspace file (feather seems about 2x faster). I didn’t time writing files, but the difference didn’t seem as noticeable there. That said, it’s not clear to me that performance on files is the main point here.</p> </li> <li> <p>Given the underlying framework and representation, there seem to be some interesting possibilities for low-memory environments.</p> </li> </ul> <p>I’ve only had a chance to quickly look at the code but I’m excited to see what comes next.</p> How to create an AI startup - convince some humans to be your training set 2016-03-30T00:00:00+00:00 http://simplystats.github.io/2016/03/30/humans-as-training-set <p>The latest trend in data science is <a href="https://en.wikipedia.org/wiki/Artificial_intelligence">artificial intelligence</a>. It has been all over the news for tackling a bunch of interesting questions.
For example:</p> <ul> <li><a href="https://deepmind.com/alpha-go.html">AlphaGo</a> <a href="http://www.techrepublic.com/article/how-googles-deepmind-beat-the-game-of-go-which-is-even-more-complex-than-chess/">beat</a> one of the top Go players in the world in what has been called a major advance for the field.</li> <li>Microsoft created a chatbot <a href="http://techcrunch.com/2016/03/23/microsofts-new-ai-powered-bot-tay-answers-your-tweets-and-chats-on-groupme-and-kik/">Tay</a> that ultimately <a href="http://www.bbc.com/news/technology-35902104">went very very wrong</a>.</li> <li>Google and a number of others are working on <a href="https://www.google.com/selfdrivingcar/">self-driving cars</a>.</li> <li>Facebook is creating an artificial intelligence-based <a href="http://www.engadget.com/2015/08/26/facebook-messenger-m-assistant/">virtual assistant called M</a>.</li> </ul> <p>Almost all of these applications are based (at some level) on using variations on <a href="http://neuralnetworksanddeeplearning.com/">neural networks and deep learning</a>. These models are used like any other statistical or machine learning model. They involve a prediction function that is based on a set of parameters. Using a training data set, you estimate the parameters. Then when you get a new set of data, you push it through the prediction function using those estimated parameters and make your predictions.</p> <p>So why does deep learning do so well on problems like voice recognition, image recognition, and other complicated tasks? The main reason is that these models involve hundreds of thousands or millions of parameters that allow the model to capture even very subtle structure in large-scale data sets.
This type of model can be fit now because (a) we have huge training sets (think all the pictures on Facebook or all voice recordings of people using Siri) and (b) we have fast computers that allow us to estimate the parameters.</p> <p>Almost all of the high-profile examples of “artificial intelligence” we are hearing about involve this type of process. This means that the machine is “learning” from examples of how humans behave. The algorithm itself is a way to estimate subtle structure from collections of human behavior.</p> <p>The result is that the typical trajectory for an AI business is:</p> <ol> <li>Get a large collection of humans to perform some repetitive but possibly complicated behavior (play thousands of games of Go, or answer requests from people on Facebook messenger, or label pictures and videos, or drive cars).</li> <li>Record all of the actions the humans perform to create a training set.</li> <li>Feed these data into a statistical model with a huge number of parameters - made possible by having a huge training set collected from the humans in steps 1 and 2.</li> <li>Apply the algorithm to perform the repetitive task and cut the humans out of the process.</li> </ol> <p>The question is: how do you get the humans to perform the task for you? One option is to collect data from humans who are using your product (think Facebook image tagging). The other, more recent approach is to farm the task out to a large number of contractors (think <a href="http://www.theguardian.com/commentisfree/2015/jul/26/will-we-get-by-gig-economy">gig economy</a> jobs like driving for Uber, or responding to queries on Facebook).</p> <p>The interesting thing about the latter case is that in the short term it produces a market for gigs for humans. But in the long term, by performing those tasks, the humans are putting themselves out of a job.
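The four-step trajectory described above can be sketched as a toy supervised-learning loop. This is a minimal illustration with made-up data, not any company's actual system; a one-nearest-neighbor rule stands in here for the million-parameter neural networks discussed in the post:

```python
# Toy illustration of the "humans as training set" loop: humans label
# examples (steps 1-2), a model is fit to their labels (step 3), and the
# model then answers on its own (step 4). A 1-nearest-neighbor rule
# stands in for a deep neural network; the data are made up.

def fit_nearest_neighbor(training_set):
    """'Training' here is just memorizing the human-labeled examples."""
    def predict(x):
        # Step 4: answer a new request with no human in the loop, by
        # copying the label of the closest human-labeled example.
        nearest = min(training_set, key=lambda example: abs(example[0] - x))
        return nearest[1]
    return predict

# Steps 1-2: record repetitive human behavior as (input, action) pairs.
human_labels = [(0.1, "reject"), (0.4, "reject"), (2.0, "accept"), (2.5, "accept")]

# Step 3: fit the model to the recorded behavior.
model = fit_nearest_neighbor(human_labels)

print(model(0.2))  # behaves like the humans did near 0.2 -> reject
print(model(2.2))  # -> accept
```

Once the model's predictions are good enough, step 4 removes the humans who generated the training set from the process entirely.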
This played out in a relatively public way just recently with a service called <a href="http://www.fastcompany.com/3058060/this-is-what-it-feels-like-when-a-robot-takes-your-job">GoButler</a> that used its employees to train a model and then replaced them with that model.</p> <p>It will be interesting to see how many areas of employment this type of approach takes over. It is also interesting to think about how much each task you perform for a company like that is worth to the training set. It will also be interesting to see whether gig workers at these companies have a legal claim that their labor helped “create the value” at the companies that replaced them.</p> Not So Standard Deviations Episode 12 - The New Bayesian vs. Frequentist 2016-03-26T00:00:00+00:00 http://simplystats.github.io/2016/03/26/nssd-episode-12 <p>In this episode, Hilary and I discuss the new direction for the journal Biostatistics, the recent fracas over ggplot2 and base graphics in R, and whether collecting more data is always better than collecting less (fewer?) data.
Also, Hilary and Roger respond to some listener questions and more free advertising.</p> <p>If you have questions you’d like us to answer, you can send them to nssdeviations @ gmail.com or tweet us at @NSSDeviations.</p> <p><a href="https://itunes.apple.com/us/podcast/not-so-standard-deviations/id1040614570">Subscribe to the podcast on iTunes</a>.</p> <p>Please <a href="https://itunes.apple.com/us/podcast/not-so-standard-deviations/id1040614570">leave us a review on iTunes</a>!</p> <p>Show notes:</p> <ul> <li> <p><a href="http://goo.gl/am6I3r">Jeff Leek on why he doesn’t use ggplot2</a></p> </li> <li> <p>David Robinson on <a href="http://varianceexplained.org/r/why-I-use-ggplot2/">why he uses ggplot2</a></p> </li> <li> <p><a href="http://goo.gl/6iEB2I">Nathan Yau’s post comparing ggplot2 and base graphics</a></p> </li> <li> <p><a href="https://goo.gl/YuhFgB">Biostatistics Medium post</a></p> </li> <li> <p><a href="http://goo.gl/tXNdCA">Photoviz</a></p> </li> <li> <p><a href="https://twitter.com/PigeonAir">PigeonAir</a></p> </li> <li> <p><a href="https://goo.gl/jqlg0G">I just want to plot()</a></p> </li> <li> <p><a href="https://goo.gl/vvCfkl">Hilary and Rush Limbaugh</a></p> </li> <li> <p><a href="http://imgur.com/a/K4RWn">Deep learning training set</a></p> </li> <li> <p><a href="http://patreon.com/NSSDeviations">NSSD Patreon Page</a></p> </li> </ul> <p><a href="https://soundcloud.com/nssd-podcast/episode-12-the-new-bayesian-vs-frequentist">Download the audio for this episode.</a></p> <iframe width="100%" height="166" scrolling="no" frameborder="no" src="https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/tracks/255099493&amp;color=ff5500&amp;auto_play=false&amp;hide_related=false&amp;show_comments=true&amp;show_user=true&amp;show_reposts=false"></iframe> The future of biostatistics 2016-03-24T00:00:00+00:00 http://simplystats.github.io/2016/03/24/the-future-of-biostatistics <p>Starting in January my colleague <a 
href="https://twitter.com/drizopoulos">Dimitris Rizopoulos</a> and I took over as co-editors of the journal Biostatistics. We are pretty fired up to try some new things with the journal and to make sure that the most important advances in statistical methodology and application have a good home.</p> <p>We started a blog for the journal and our first post is here: <a href="https://medium.com/@biostatistics/the-future-of-biostatistics-5aa8246e14b4#.uk1gat5sr">The future of Biostatistics</a>. Thanks to <a href="https://twitter.com/kwbroman/status/695306823365169154">Karl Broman and his family</a> we also have the twitter handle <a href="https://twitter.com/biostatistics">@biostatistics</a>. Follow us there to hear about all the new stuff we are rolling out.</p> The Evolution of a Data Scientist 2016-03-21T00:00:00+00:00 http://simplystats.github.io/2016/03/21/dataScientistEvo-jaffe <p><em>Editor’s note: This post is a guest post by <a href="http://aejaffe.com">Andrew Jaffe</a></em></p> <p>“How do you get to Carnegie Hall? Practice, practice, practice.” (“The Wit Parade” by E.E. Kenyon on March 13, 1955)</p> <p>“…an extraordinarily consistent answer in an incredible number of fields … you need to have practiced, to have apprenticed, for 10,000 hours before you get good.” (Malcolm Gladwell, in Outliers)</p> <p>I have been a data scientist for the last seven or eight years, probably before “data science” existed as a field. I work almost exclusively in the R statistical environment, which I first toyed with as a sophomore in college and which ramped up through graduate school. I write all of my code in Notepad++ and make all of my plots with base R graphics, over newer and probably easier approaches, like R Studio, ggplot2, and R Markdown. Every so often, someone will email asking for code used in papers for analysis or plots, and I dig through old folders to track it down.
Every time this happens, I come to two realizations: 1) I used to write fairly inefficient and not-so-great code as an early PhD student, and 2) I write a lot of R code.</p> <p>I think there are some pretty good ways of measuring success and growth as a data scientist – you can count software packages and their user-bases, projects and papers, citations, grants, and promotions. But I wanted to calculate one more metric to add to the list – how much R code have I written in the last 8 years? I have been using the Joint High Performance Computing Exchange (JHPCE) at Johns Hopkins University since I started graduate school, so pretty much all of my R code was in one place. I therefore decided to spend my Friday night drinking some Guinness and chronicling my journey using R and evolution as a data scientist.</p> <p>I found all of the .R files across my four main directories on the computing cluster (after copying over my local scripts), and then removed files that came with packages, that belonged to other users, and that resulted from poorly designed simulation and permutation analyses (perm1.R,…,perm100.R) before I learned how to use array jobs, and then extracted the creation date, last modified date, file size, and line count for each R script. Based on this analysis, I have written 3257 R scripts across 13.4 megabytes and 432,753 lines of code (including whitespace and comments) since February 22, 2009.</p> <p>I found that my R coding output has generally increased over time when tabulated by month (number of scripts: p=6.3e-7, size of files: p=3.5e-9, and number of lines: p=5.0e-9). These metrics of coding – number, size, and lines – also suggest that, on average, I wrote the most code during my PhD (p-value range: 1.7e-4 to 1.8e-7).
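The inventory step described above (find the .R files, then tally the size and line count of each) can be sketched as follows. This is a hypothetical stand-alone version written in Python for illustration; the actual analysis ran in R on the JHPCE cluster and also extracted creation and last-modified dates, which are filesystem-dependent:

```python
import os

def inventory_r_scripts(root):
    """Walk a directory tree and tally .R scripts: count, total bytes, total lines."""
    stats = {"scripts": 0, "bytes": 0, "lines": 0}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(".R"):
                continue  # skip non-R files
            path = os.path.join(dirpath, name)
            stats["scripts"] += 1
            stats["bytes"] += os.path.getsize(path)
            # Count every line, including whitespace and comments, as in the post.
            with open(path, errors="replace") as fh:
                stats["lines"] += sum(1 for _ in fh)
    return stats
```

Pointed at the directories holding your own scripts, this produces the three totals reported above: script count, total size, and lines of code.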
Interestingly, the changes in output over time were surprisingly consistent across the three phases of my academic career: PhD, postdoc, and faculty (see Figure 1) – you can see the initial dropoff in production during the first one or two months as I transitioned to a postdoc at the Lieber Institute for Brain Development after finishing my PhD. My output rate has dropped slightly as a faculty member as I started working with doctoral students who took over the analyses of some projects (month-by-output interaction p-value: 5.3e-4, 0.002, and 0.03, respectively, for number, size, and lines). The mean coding output – on average, how much code it takes for a single analysis – also increased over time and decreased slightly at LIBD, although to lesser extents (all p-values were between 0.01 and 0.05). I was actually surprised that coding output increased – rather than decreased – over time, as any gains in coding efficiency were probably canceled out by my oftentimes more modular analyses at LIBD.</p> <p><img src="https://raw.githubusercontent.com/simplystats/simplystats.github.io/master/_images/2016-03-21/sizeVsMonth_rCode.jpg" alt="Figure 1: Coding output over time. Vertical bars separate my PhD, postdoc, and faculty jobs" /></p> <p>I also looked at coding output by hour of the day to better characterize my working habits – the output per hour is shown stratified by two eras of about three years each (Figure 2). As expected, I never really work much in the morning – very little work gets done before 8AM – and little has changed since I was a second-year PhD student. As a faculty member, I have the highest output between 9AM-3PM. The trough between 4PM and 7PM likely corresponds to walking the dog we got three years ago, working out, and cooking (and eating) dinner.
The output then increases steadily from 8PM to 12AM, when I can work largely uninterrupted by meetings and people dropping by my office, with occasional days (or nights) working until 1AM.</p> <p><img src="https://raw.githubusercontent.com/simplystats/simplystats.github.io/master/_images/2016-03-21/sizeVsHour_rCode.jpg" alt="Figure 2: Coding output by hour of day. X-axis starts at 5AM to divide the day into a more temporal order." /></p> <p>Lastly, I examined R coding output by day of the week. As expected, the lowest output occurred over the weekend, especially on Saturdays. Interestingly, I tended to increase output later in the work week as a faculty member, and also work a little more on Sundays and Mondays, compared to a PhD student.</p> <p><img src="https://raw.githubusercontent.com/simplystats/simplystats.github.io/master/_images/2016-03-21/sizeVsDay_rCode.jpg" alt="Figure 3: Coding output by day of week." /></p> <p>Looking at the code itself, of the 432,753 lines, 84,343 were newlines (19.5%), 66,900 were lines that were exclusively comments (15.5%), and an additional 6,994 lines contained comments following R code (1.6%).
Some of my most-used syntax and symbols – counted as the number of lines containing at least one instance, after dropping commas and requiring whitespace between characters – were pretty much as expected:</p> <table> <tbody> <tr> <td>Code</td> <td>Count</td> <td>Code</td> <td>Count</td> </tr> <tr> <td>=</td> <td>175604</td> <td>==</td> <td>5542</td> </tr> <tr> <td>#</td> <td>48763</td> <td>&lt;</td> <td>5039</td> </tr> <tr> <td>&lt;-</td> <td>16492</td> <td>for(i</td> <td>5012</td> </tr> <tr> <td>{</td> <td>11879</td> <td>&amp;</td> <td>4803</td> </tr> <tr> <td>}</td> <td>11612</td> <td>the</td> <td>4734</td> </tr> <tr> <td>in</td> <td>10587</td> <td>function(x)</td> <td>4591</td> </tr> <tr> <td>##</td> <td>8508</td> <td>###</td> <td>4105</td> </tr> <tr> <td>~</td> <td>6948</td> <td>-</td> <td>4034</td> </tr> <tr> <td>&gt;</td> <td>5621</td> <td>%in%</td> <td>3896</td> </tr> </tbody> </table> <p>My code is available on GitHub: https://github.com/andrewejaffe/how-many-lines (after removing file paths and names, as many of the projects are currently unpublished and many files are placed in folders named by collaborator), so feel free to give it a try and see how much R code you’ve written over your career. While there are probably a lot more things to play around with and explore, this was about all the time I could commit, given other responsibilities (I’m not on sabbatical like <a href="http://jtleek.com">Jeff Leek</a>…). All in all, this was a pretty fun experience and it largely reflected, with data, how my R skills and experience have progressed over the years.</p> Not So Standard Deviations Episode 11 - Start and Stop 2016-03-14T00:00:00+00:00 http://simplystats.github.io/2016/03/14/nssd-episode-11 <p>We’ve started a Patreon page! Now you can support the podcast directly by going to <a href="http://patreon.com/NSSDeviations">our page</a> and making a pledge.
This will help Hilary and me build the podcast, add new features, and get some better equipment.</p> <p>Episode 11 is an all craft episode of <em>Not So Standard Deviations</em>, where Hilary and Roger discuss starting and ending a data analysis. What do you do at the very beginning of an analysis? Hilary and Roger talk about some of the things that seem to come up all the time. Also up for discussion is the American Statistical Association’s statement on <em>p</em> values, famous statisticians on Twitter, and evil data scientists on TV. Plus two new things for free advertising.</p> <p><a href="https://itunes.apple.com/us/podcast/not-so-standard-deviations/id1040614570">Subscribe to the podcast on iTunes</a>.</p> <p>Please <a href="https://itunes.apple.com/us/podcast/not-so-standard-deviations/id1040614570">leave us a review on iTunes</a>!</p> <p>Show notes:</p> <ul> <li> <p><a href="http://patreon.com/NSSDeviations">NSSD Patreon Page</a></p> </li> <li> <p><a href="https://twitter.com/deleeuw_jan">Jan de Leeuw</a></p> </li> <li> <p><a href="https://twitter.com/BatesDmbates">Douglas Bates</a></p> </li> <li> <p><a href="https://en.wikipedia.org/wiki/Sports_Night">Sports Night</a></p> </li> <li> <p><a href="http://goo.gl/JFz7ic">ASA’s statement on p values</a></p> </li> <li> <p><a href="http://goo.gl/O8kL60">Basic and Applied Psychology Editorial banning p values</a></p> </li> <li> <p><a href="http://www.seriouseats.com/vegan-experience">J. 
Kenji Alt’s Vegan Experience</a></p> </li> <li> <p><a href="http://fieldworkfail.com/">fieldworkfail</a></p> </li> </ul> <p><a href="https://soundcloud.com/nssd-podcast/episode-11-start-and-stop">Download the audio for this episode</a>.</p> <iframe width="100%" height="166" scrolling="no" frameborder="no" src="https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/tracks/251825714&amp;color=ff5500&amp;auto_play=false&amp;hide_related=false&amp;show_comments=true&amp;show_user=true&amp;show_reposts=false"></iframe> Not So Standard Deviations Episode 10 - It's All Counterexamples 2016-03-02T00:00:00+00:00 http://simplystats.github.io/2016/03/02/nssd-episode-10 <p>In the latest episode of Not So Standard Deviations Hilary and I talk about the motivation behind the <a href="https://github.com/hilaryparker/explainr">explainr</a> package and the general usefulness of automated reporting and interpretation of statistical tests. Also, Roger struggles to come up with a quick and easy way to explain why statistics is useful when it sometimes doesn’t produce any different results.</p> <p><a href="https://itunes.apple.com/us/podcast/not-so-standard-deviations/id1040614570">Subscribe to the podcast on iTunes</a>.</p> <p>Please <a href="https://itunes.apple.com/us/podcast/not-so-standard-deviations/id1040614570">leave us a review on iTunes</a>!</p> <p>Show notes:</p> <ul> <li> <p>The <a href="https://github.com/hilaryparker/explainr">explainr</a> package</p> </li> <li> <p><a href="https://google.github.io/CausalImpact/CausalImpact.html">Google’s CausalImpact package</a></p> </li> <li> <p><a href="http://www.wsj.com/articles/SB10001424053111903480904576512250915629460">Software is Eating the World</a></p> </li> <li> <p><a href="http://allendowney.blogspot.com/2015/12/many-rules-of-statistics-are-wrong.html">Many Rules of Statistics are Wrong</a></p> </li> </ul> <p><a href="https://soundcloud.com/nssd-podcast/episode-10-its-all-counterexamples">Download the audio for 
this episode</a>.</p> <iframe width="100%" height="166" scrolling="no" frameborder="no" src="https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/tracks/249517993&amp;color=ff5500&amp;auto_play=false&amp;hide_related=false&amp;show_comments=true&amp;show_user=true&amp;show_reposts=false"></iframe> Preprints are great, but post publication peer review isn't ready for prime time 2016-02-26T00:00:00+00:00 http://simplystats.github.io/2016/02/26/preprints-and-pppr <p>The current publication system works something like this:</p> <h3 id="coupled-review-and-publication">Coupled review and publication</h3> <ol> <li>You write a paper</li> <li>You submit it to a journal</li> <li>It is peer reviewed privately</li> <li>The paper is accepted or rejected: (a) if rejected, go back to step 2 and start over; (b) if accepted, it will be published</li> <li>If published then people can read it</li> </ol> <p>This system has several major disadvantages that bother scientists. It means all research appears on a lag (however long peer review takes). The lag can be substantial if the paper is sent to “top tier” journals, rejected, and then filters down to “lower tier” journals before ultimate publication. Another disadvantage is that most people have two options for publishing their papers: (a) closed access journals, where it doesn’t cost anything to publish but the articles end up behind paywalls, and (b) open access journals, where anyone can read them but it costs money to publish.
Especially for junior scientists or folks without resources, this creates a difficult choice because they <a href="http://simplystatistics.org/2011/11/03/free-access-publishing-is-awesome-but-expensive-how/">might not be able to afford open access fees</a>.</p> <p>For a number of years some fields like physics (with the <a href="http://arxiv.org/">arxiv</a>) and economics (with <a href="http://www.nber.org/papers.html">NBER</a>) have solved this problem by decoupling peer review and publication. In these fields the system works like this:</p> <h3 id="decoupled-review-and-publication">Decoupled review and publication</h3> <ol> <li>You write a paper</li> <li>You post a preprint, which everyone can read and comment on</li> <li>You submit it to a journal</li> <li>It is peer reviewed privately</li> <li>The paper is accepted or rejected: (a) if rejected, go back to step 3 and start over; (b) if accepted, it will be published</li> </ol> <p>Lately there has been a growing interest in this same system in molecular and computational biology. I think this is a really good thing, because it makes it easier to publish papers more quickly and doesn’t cost researchers anything to publish. That is why the papers my group publishes all show up on <a href="http://biorxiv.org/search/author1%3AJeffrey%2BLeek%2B">biorxiv</a> or <a href="http://arxiv.org/find/stat/1/au:+Leek_J/0/1/0/all/0/1">arxiv</a> first.</p> <p>While I think this decoupling is great, there seems to be a push for this decoupling and at the same time a move to post publication peer review.
I used to argue pretty strongly for <a href="http://simplystatistics.org/2012/10/04/should-we-stop-publishing-peer-reviewed-papers/">post-publication peer review</a> but Rafa <a href="http://simplystatistics.org/2012/10/08/why-we-should-continue-publishing-peer-reviewed-papers/">set me straight</a> and pointed out that at least with peer review every paper that gets submitted gets evaluated by <em>someone</em>, even if the paper is ultimately rejected.</p> <p>One of the risks of post publication peer review is that there is no incentive to peer review in the current system. In a paper a few years ago I actually showed that under an economic model for closed peer review the Nash equilibrium is for <a href="http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0026895">no one to peer review at all</a>. We showed in that same paper that under open peer review there is an increase in the amount of time spent reviewing, but the effect was relatively small. Moreover the dangers of open peer review are clear (junior people reviewing senior people and being punished for it) while the benefits (potentially being recognized for insightful reviews) are much hazier. Even the most vocal proponents of post publication peer review <a href="http://www.ncbi.nlm.nih.gov/myncbi/michael.eisen.1/comments/">don’t do it that often</a> when given the chance.</p> <p>The reason is that everyone in academia already has a lot of things they are asked to do. Many people review papers either out of a sense of obligation or because they want to be in the good graces of a particular journal. Without this system in place there is a strong chance that peer review rates will drop and only a few papers will get reviewed. This will ultimately decrease the accuracy of science. In our (admittedly contrived/simplified) <a href="http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0026895">experiment</a>, peer review accuracy went from 39% to 78% after solutions were reviewed.
You might argue that only “important” papers should be peer reviewed, but then you are back in the camp of glamour. Say what you want about glamour journals; they are for sure biased by the names of the people submitting papers there. But it is <em>possible</em> for someone to get a paper in no matter who they are. If we go to a system where there is no curation through a journal-like mechanism, then popularity/twitter followers/etc. will drive readers. I’m not sure that is better than where we are now.</p> <p>So while I think pre-prints are a great idea, I’m still waiting to see a system that beats pre-publication review for maintaining scientific quality (even though it may just be an <a href="http://simplystatistics.org/2015/02/09/the-trouble-with-evaluating-anything/">impossible problem</a>).</p> Spreadsheets: The Original Analytics Dashboard 2016-02-23T08:42:30+00:00 http://simplystats.github.io4677 <p>Soon after my discussion with Hilary Parker and Jenny Bryan about spreadsheets on <em><a href="https://soundcloud.com/nssd-podcast">Not So Standard Deviations</a></em>, Brooke Anderson forwarded me <a href="https://backchannel.com/a-spreadsheet-way-of-knowledge-8de60af7146e#.gj4f2bod4">this article</a> written by Steven Levy about the original granddaddy of spreadsheets, <a href="https://en.wikipedia.org/wiki/VisiCalc">VisiCalc</a>. Actually, the real article was written back in 1984, as so-called microcomputers were just getting their start. VisiCalc was originally written for the Apple II computer, and notable competitors at the time included <a href="https://en.wikipedia.org/wiki/Lotus_1-2-3">Lotus 1-2-3</a> and Microsoft <a href="https://en.wikipedia.org/wiki/Multiplan">Multiplan</a>, all since defunct.</p> <p>It’s interesting to see Levy’s perspective on spreadsheets back then and to compare it to the current thinking about data, data science, and reproducibility in science.
The problem back then was that “ledger sheets” (what we might now call spreadsheets), which contained numbers and calculations related to businesses, were tedious to make and keep up to date.</p> <blockquote> <p>Making spreadsheets, however necessary, was a dull chore best left to accountants, junior analysts, or secretaries. As for sophisticated “modeling” tasks – which, among other things, enable executives to project costs for their companies – these tasks could be done only on big mainframe computers by the data-processing people who worked for the companies Harvard MBAs managed.</p> </blockquote> <p>You can see one issue here: spreadsheets/ledgers were a “dull chore”, best left to junior people, while the “real” computation was done by the people in the “data processing” center on big mainframes. So what exactly does that leave for the business executive to do?</p> <p>Note that the way of doing things back then was effectively reproducible, because the presentation (ledger sheets printed on paper) and the computation (data processing on mainframes) were separated.</p> <p>The impact of the microcomputer-based spreadsheet program appears profound.</p> <blockquote> <p id="9424" class="graf--p graf-after--p"> Already, the spreadsheet has redefined the nature of some jobs; to be an accountant in the age of spreadsheet program is — well, almost sexy. And the spreadsheet has begun to be a forceful agent of decentralization, breaking down hierarchies in large companies and diminishing the power of data processing. </p> <p class="graf--p graf-after--p"> There has been much talk in recent years about an “entrepreneurial renaissance” and a new breed of risk-taker who creates businesses where none previously existed. Entrepreneurs and their venture-capitalist backers are emerging as new culture heroes, settlers of another American frontier.
Less well known is that most of these new entrepreneurs depend on their economic spreadsheets as much as movie cowboys depend on their horses. </p> </blockquote> <p class="graf--p graf-after--p"> Replace "accountant" with "statistician" and "spreadsheet" with "big data" and you are magically teleported into 2016. </p> <p class="graf--p graf-after--p"> The way I see it, in the early '80s, spreadsheets satisfied the never-ending desire that people have to interact with data. Now, with things like tablets and touch-screen phones, you can literally "touch" your data. But it took microcomputers to get to a certain point before interactive data analysis could really be done in a way that we recognize today. Spreadsheets tightened the loop between question and answer by cutting out the Data Processing department and replacing it with an Apple II (or an IBM PC, if you must) right on your desk. </p> <p class="graf--p graf-after--p"> Of course, combining presentation with computation comes at a cost to reproducibility and perhaps quality control. Seeing the description of how spreadsheets were originally used, it seems totally natural to me. It is not unlike today's analytic dashboards that give you a window into your business and allow you to "model" various scenarios by tweaking a few numbers or formulas. Over time, people took spreadsheets to all sorts of extremes, using them for purposes for which they were not originally designed, and problems naturally arose. </p> <p class="graf--p graf-after--p"> So now, we are trying to separate out the computation and presentation bits a little. Tools like knitr and shiny in R allow us to do this and to bring them together with a proper toolchain. The loss in interactivity is only slight because of the power of the toolchain and the speed of computers nowadays. Essentially, we've brought back the Data Processing department, but have staffed it with robots and high-speed multi-core computers.
</p> Non-tidy data 2016-02-17T15:47:23+00:00 http://simplystats.github.io4664 <p>During the discussion that followed the ggplot2 posts from David and me last week, we started talking about tidy data, and the man himself noted that matrices are often useful instead of <a href="http://vita.had.co.nz/papers/tidy-data.pdf">“tidy data”</a>, and I mentioned there might be other data that are usefully “non tidy”. Here I will be using tidy/non-tidy according to Hadley’s definition. So tidy data have:</p> <ul> <li>One variable per column</li> <li>One observation per row</li> <li>Each type of observational unit forms a table</li> </ul> <p>I push this approach in my <a href="https://github.com/jtleek/datasharing">guide to data sharing</a> and in a lot of my personal work. But note that non-tidy data can definitely be already processed, cleaned, organized and ready to use.</p> <blockquote class="twitter-tweet" data-width="550"> <p lang="en" dir="ltr"> <a href="https://twitter.com/hadleywickham">@hadleywickham</a> <a href="https://twitter.com/drob">@drob</a> <a href="https://twitter.com/mark_scheuerell">@mark_scheuerell</a> I'm saying that not all data are usefully tidy (and not just matrices) so I care more abt flexibility </p> <p> &mdash; Jeff Leek (@jtleek) <a href="https://twitter.com/jtleek/status/698247927706357760">February 12, 2016</a> </p> </blockquote> <p>This led to a very specific blog request:</p> <blockquote class="twitter-tweet" data-width="550"> <p lang="en" dir="ltr"> <a href="https://twitter.com/jtleek">@jtleek</a> <a href="https://twitter.com/drob">@drob</a> I want a blog post on non-tidy data! </p> <p> &mdash; Hadley Wickham (@hadleywickham) <a href="https://twitter.com/hadleywickham/status/698251883685646336">February 12, 2016</a> </p> </blockquote> <p>So I thought I’d talk about a couple of reasons why data are usefully non-tidy.
The basic reason is that I usually take a <a href="http://simplystatistics.org/2013/05/29/what-statistics-should-do-about-big-data-problem-forward-not-solution-backward/">problem first, not solution backward</a> approach to my scientific research. In other words, the goal is to solve a particular problem, and the format I choose is the one that makes it most direct/easy to solve that problem, rather than one that is theoretically optimal. To illustrate these points I’ll use an example from my area.</p> <p><strong>Example data</strong></p> <p>Often you want data in a matrix format. One good example is gene expression data or data from another high-dimensional experiment. David talks about one such example in <a href="http://varianceexplained.org/r/tidy-genomics/">his post here</a>. He makes the (valid) point that for students who aren’t going to do genomics professionally, it may be more useful to learn an abstract tool such as tidy data/dplyr. But for those working in genomics, this can make you do unnecessary work in the name of theory/abstraction.</p> <p>He analyzes the data in that post by first tidying it:</p> <div class="wp_syntax"> <table> <tr> <td class="code"> <pre class="r" style="font-family:monospace;">library(dplyr) library(tidyr) library(stringr) library(readr) library(broom) &nbsp; cleaned_data &lt;- original_data %&gt;% separate(NAME, c("name", "BP", "MF", "systematic_name", "number"), sep = "\\|\\|") %&gt;% mutate_each(funs(trimws), name:systematic_name) %&gt;% select(-number, -GID, -YORF, -GWEIGHT) %&gt;% gather(sample, expression, G0.05:U0.3) %&gt;% separate(sample, c("nutrient", "rate"), sep = 1, convert = TRUE)</pre> </td> </tr> </table> </div> <p>It isn’t 100% tidy, as data of different types are in the same data frame (gene expression and metadata/phenotype data belong in different tables), but it’s close enough for our purposes. Now suppose that you wanted to fit a model and test for association between the “rate” variable and gene expression for each gene.
You can do this with David’s tidy data set, dplyr, and the broom package like so:</p> <div class="wp_syntax"> <table> <tr> <td class="code"> <pre class="r" style="font-family:monospace;">rate_coeffs = cleaned_data %&gt;% group_by(name) %&gt;% do(fit = lm(expression ~ rate + nutrient, data = .)) %&gt;% tidy(fit) %&gt;% dplyr::filter(term=="rate")</pre> </td> </tr> </table> </div> <p>On my computer we get something like:</p> <div class="wp_syntax"> <table> <tr> <td class="code"> <pre class="r" style="font-family:monospace;">system.time( cleaned_data %&gt;% group_by(name) %&gt;% + do(fit = lm(expression ~ rate + nutrient, data = .)) %&gt;% + tidy(fit) %&gt;% + dplyr::filter(term=="rate")) |==========================================================|100% ~0 s remaining user system elapsed 12.431 0.258 12.364</pre> </td> </tr> </table> </div> <p>Let’s now try that analysis a little bit differently. As a first step, let’s store the data in two separate tables: a table of “phenotype information” and a matrix of “expression levels”. This is the more common format used for this type of data.
Here is the code to do that:</p> <div class="wp_syntax"> <table> <tr> <td class="code"> <pre class="r" style="font-family:monospace;">expr = original_data %&gt;% select(grep("[0-9]",names(original_data))) &nbsp; rownames(expr) = original_data %&gt;% separate(NAME, c("name", "BP", "MF", "systematic_name", "number"), sep = "\\|\\|") %&gt;% select(systematic_name) %&gt;% mutate_each(funs(trimws),systematic_name) %&gt;% as.matrix() &nbsp; vals = data.frame(vals=names(expr)) pdata = separate(vals,vals,c("nutrient", "rate"), sep = 1, convert = TRUE) &nbsp; expr = as.matrix(expr)</pre> </td> </tr> </table> </div> <p>If we leave the data in this format we can get the model fits and the p-values using some simple linear algebra:</p> <div class="wp_syntax"> <table> <tr> <td class="code"> <pre class="r" style="font-family:monospace;">expr = as.matrix(expr) &nbsp; mod = model.matrix(~ rate + as.factor(nutrient),data=pdata) rate_betas = expr %*% mod %*% solve(t(mod) %*% mod)</pre> </td> </tr> </table> </div> <p>This gives the same answer after re-ordering:</p> <div class="wp_syntax"> <table> <tr> <td class="code"> <pre class="r" style="font-family:monospace;">all(abs(rate_betas[,2] - rate_coeffs$estimate[ind]) &lt; 1e-5, na.rm=T) [1] TRUE</pre> </td> </tr> </table> </div> <p>But this approach is much faster.</p> <div class="wp_syntax"> <table> <tr> <td class="code"> <pre class="r" style="font-family:monospace;"> system.time(expr %*% mod %*% solve(t(mod) %*% mod)) user system elapsed 0.015 0.000 0.015</pre> </td> </tr> </table> </div> <p>This requires some knowledge of linear algebra and isn’t pretty. But it brings us to the first general point: <strong>you might not use tidy data because some computations are more efficient if the data is in a different format.
</strong></p> <p>Many examples, from graphical models to genomics to neuroimaging to the social sciences, rely on some kind of linear algebra based computation (matrix multiplication, singular value decompositions, eigen decompositions, etc.), and these are all optimized to work on matrices, not tidy data frames. There are ways to improve performance with tidy data for sure, but they would require an equal amount of custom code to take advantage of, say, C or vectorization properties in R.</p> <p>OK, now the linear regressions here are all treated independently, but it is very well known that you get much better performance in terms of the false positive/true positive tradeoff if you use an empirical Bayes approach for this calculation where <a href="https://bioconductor.org/packages/release/bioc/html/limma.html">you pool variances</a>.</p> <p>If the data are in this matrix format you can do it with R like so:</p> <div class="wp_syntax"> <table> <tr> <td class="code"> <pre class="r" style="font-family:monospace;">library(limma) fit_limma = lmFit(expr,mod) ebayes_limma = eBayes(fit_limma) topTable(ebayes_limma)</pre> </td> </tr> </table> </div> <p>This approach is again very fast, optimized for the calculations being performed, and performs much better than the one-by-one regression approach. But it requires the data in matrix or expression set format. Which brings us to the second general point: <strong>you might not use tidy data because many functions require a different, also very clean and useful, data format, and you don’t want to have to constantly be switching back and forth.</strong> Again, this requires you to be more specific to your application, but the potential payoffs can be really big, as in the case of limma.</p> <p>I’m showing an example here with expression sets and matrices, but in NLP the data are often input in the form of lists, in graphical analyses as matrices, in genomic analyses as GRanges lists, etc. etc. etc.
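</p> <p>As a toy illustration of the matrix-algebra shortcut used above, a single matrix product reproduces what row-by-row calls to lm() compute (the data here are simulated, not the yeast set):</p>

```r
# Check that Y %*% X %*% solve(t(X) %*% X) matches per-row lm() fits.
set.seed(1)
rate <- c(1, 2, 3, 1, 2, 3)                        # simulated predictor
Y    <- matrix(rnorm(3 * length(rate)), nrow = 3)  # 3 "genes" x 6 samples
X    <- model.matrix(~ rate)                       # design matrix
betas <- Y %*% X %*% solve(t(X) %*% X)             # all rows in one product
row1  <- coef(lm(Y[1, ] ~ rate))                   # the row-by-row way
isTRUE(all.equal(unname(betas[1, ]), unname(row1)))
```

<p>The matrix version does one pass of dense linear algebra instead of thousands of model-fitting calls, which is where the speedup above comes from.</p> <p>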
One option would be to rewrite all the infrastructure in your area of interest to accept tidy data formats, but that would go against the conventions of a community and would ultimately cost you a lot of work when most of that work has already been done for you.</p> <p>The final point, which I won’t discuss here, is that data are often usefully represented in a non-tidy way. Examples include the aforementioned <a href="http://kasperdanielhansen.github.io/genbioconductor/html/GenomicRanges_GRanges.html">GRanges list</a>, which consists of (potentially) ragged arrays of intervals and quantitative measurements about them. You could <strong>force</strong> these data to be tidy by the definition above, but again most of the infrastructure is built around a different format that is much more intuitive for that type of data. Similarly, data from other applications may be better suited to application-specific formats.</p> <p>In summary, tidy data is a useful conceptual idea and is often the right way to go for general, small data sets, but may not be appropriate for all problems. Here are some examples of data formats (biased toward my area, but there are others) that have been widely adopted, have a ton of useful software, but don’t meet the tidy data definition above.
I will define these as “processed data” as opposed to “tidy data”.</p> <ul> <li><a href="http://bioconductor.org/packages/3.3/bioc/vignettes/Biobase/inst/doc/ExpressionSetIntroduction.pdf">Expression sets</a> for expression data</li> <li><a href="http://kasperdanielhansen.github.io/genbioconductor/html/SummarizedExperiment.html">Summarized experiments</a> for a variety of genomic experiments</li> <li><a href="http://kasperdanielhansen.github.io/genbioconductor/html/GenomicRanges_GRanges.html">GRanges lists</a> for genomic intervals</li> <li><a href="https://cran.r-project.org/web/packages/tm/tm.pdf">Corpus</a> objects for corpora of texts</li> <li><a href="http://igraph.org/r/doc/">igraph objects</a> for graphs</li> </ul> <p>I’m sure there are a ton more I’m missing and would be happy to get some suggestions on Twitter too.</p> When it comes to science - it's the economy, stupid. 2016-02-16T14:57:14+00:00 http://simplystats.github.io4662 <p>I read a lot of articles about what is going wrong with science:</p> <ul> <li><a href="http://www.economist.com/news/briefing/21588057-scientists-think-science-self-correcting-alarming-degree-it-not-trouble">The reproducibility/replicability crisis</a></li> <li><a href="http://www.theatlantic.com/business/archive/2013/02/the-phd-bust-americas-awful-market-for-young-scientists-in-7-charts/273339/">Lack of jobs for PhDs</a></li> <li><a href="https://theresearchwhisperer.wordpress.com/2013/11/19/academic-scattering/">The pressure on the families (or potential families) of scientists</a></li> <li><a href="http://quillette.com/2016/02/15/the-unbearable-asymmetry-of-bullshit/?utm_content=buffer235f2&amp;utm_medium=social&amp;utm_source=twitter.com&amp;utm_campaign=buffer">Hype around specific papers and a more general abundance of BS</a></li> <li><a href="http://www.michaeleisen.org/blog/?p=1179">Consortia and their potential evils</a></li> <li><a href="http://www.vox.com/2015/12/7/9865086/peer-review-science-problems">Peer
review not working well</a></li> <li><a href="http://www.nejm.org/doi/full/10.1056/NEJMe1516564">Research parasites</a></li> <li><a href="http://gmwatch.org/news/latest-news/16691-public-science-is-broken-says-professor-who-helped-expose-water-pollution-crisis">Not enough room for applications/public good</a></li> <li><a href="http://www.statnews.com/2016/02/10/press-releases-stink/?s_campaign=stat:rss">Press releases that do evil</a></li> <li><a href="https://twitter.com/Richvn/status/697725899404349440">Scientists don’t release enough data</a></li> </ul> <p>These articles always point to the “incentives” in science and how they don’t align with how we’d like scientists to work. These discussions often frustrate me because they almost always boil down to asking scientists (especially and often junior scientists) to make some kind of change for public good without any guarantee that they are going to be ok. I’ve seen an acceleration/accumulation of people who are focusing on these issues, I think largely because it is now possible to make a very nice career by pointing out how other people are doing science wrong.</p> <p>The issue I have is that the people who propose unilateral moves seem to care less that science is both (a) a calling and (b) a career for most people. I do science because I love it. I do science because I want to discover new things about the world. It is a direct extension of the wonder and excitement I had about the world when I was a little kid. But science is also a career for me. It matters if I get my next grant, if I get my next paper. Why? Because I want to be able to support myself and my family.</p> <p>The issue with incentives is that talking about them costs nothing, but actually changing them is expensive. Right now our system, broadly defined, rewards (a) productivity - lots of papers, (b) cleverness - coming up with an idea first, and (c) measures of prestige - journal titles, job titles, etc. 
This is because there are tons of people going for a relatively small amount of grant money. More importantly, that money is decided on by processes that are both peer reviewed and political.</p> <p>Suppose that you wanted to change those incentives to something else. Here is a small list of things I would like:</p> <ul> <li>People can have stable careers and live in a variety of places without massive two-body problems</li> <li>Scientists shouldn’t have to move 2-3 times, every couple of years, right at the beginning of their career</li> <li>We should distribute our money among the <a href="http://simplystatistics.org/2015/12/01/thinking-like-a-statistician-fund-more-investigator-initiated-grants/">largest number of scientists possible</a></li> <li>Incentivizing long term thinking</li> <li>Incentivizing objective peer review</li> <li>Incentivizing openness and sharing</li> </ul> <div> The key problem isn't publishing, or code, or reproducibility, or even data analysis. </div> <div> </div> <div> <b>The key problem is that the fundamental model by which we fund science is completely broken. </b> </div> <div> </div> <div> The model now is that you have to come up with an idea every couple of years and then "sell" it to funders, your peers, etc. This is the source of the following problems: </div> <div> </div> <ul> <li>An incentive to publish only positive results so your ideas look good</li> <li>An incentive to be closed so people don’t discover flaws in your analysis</li> <li>An incentive to publish in specific “big name” journals, which skews the results (again mostly in the positive direction)</li> <li>Pressure to publish quickly, which leads to cutting corners</li> <li>Pressure to stay in a single area and make incremental changes so you know things will work</li> </ul> <div> If we really want to have any measurable impact on science we need to solve the funding model. 
The solution is actually pretty simple. We need to give out 20+ year grants to people who meet minimum qualifications. These grants would cover their own salary plus one or two people and the minimum necessary equipment. </div> <div> </div> <div> The criteria for getting or renewing these grants should not be things like Nature papers or number of citations. It has to be designed to incentivize the things that we want to (mine are listed above). So if I was going to define the criteria for meeting the standards people would have to be: </div> <div> </div> <ul> <li>Working on a scientific problem and trained as a scientist</li> <li>Publishing all results immediately online as preprints/free code</li> <li>Responding to queries about their data/code</li> <li>Agreeing to peer review a number of papers per year</li> </ul> <p>More importantly these grants should be given out for a very long term (20+ years) and not be tied to a specific institution. This would allow people to have flexible careers and to target bigger picture problems. We saw the benefits of people working on problems they weren’t originally funded to work on with <a href="http://www.wired.com/2016/02/zika-research-utmb/">research on the Zika virus.</a></p> <p>These grants need to be awarded using a rigorous peer review system just like the NIH, HHMI, and other organizations use to ensure we are identifying scientists with potential early in their careers and letting them flourish. But they’d be given out in a different matter. I’m very confident in a peer review to detect the difference between psuedo-science and real science, or complete hype and realistic improvement. But I’m much less confident in the ability of peer review to accurately distinguish “important” from “not important” research. 
So I think we should <a href="http://www.wsj.com/articles/SB10001424052702303532704579477530153771424">consider seriously the lottery</a> for these grants.</p> <p>Each year all eligible scientists who meet some minimum entry requirements submit proposals for what they’d like to do scientifically. Each year those proposals are reviewed to make sure they meet the very minimum bar (are they scientific? do they have relevant training at all?). Among all the (very large) class of people who pass that bar we hold a lottery. We take the number of research dollars and divide it up to give the maximum number of these grants possible. These grants might be pretty small - just enough to fund the person’s salary and maybe one or two students/postdocs. To make this works for labs that required equipment there would have to be cooperative arrangements between multiple independent indviduals to fund/sustain equipment they needed. Renewal of these grants would happen as long as you were posting your code/data online, you were meeting peer review requirements, and responding to inquires about your work.</p> <p>One thing we’d do to fund this model is eliminate/reduce large-scale projects and super well funded labs. Instead of having 30 postdocs in a well funded lab, you’d have some fraction of those people funded as independent investigators right from the get-go. If we wanted to run a massive large scale program that would be out of a very specific pot of money that would have to be saved up and spent, completely outside of the pot of money for investigator-initiated grants. 
That would reduce the hierarchy in the system, reduce pressure that leads to bad incentive, and give us the best chance to fund creative, long term thinking science.</p> <p>Regardless of whether you like my proposal or not, I hope that people will start focusing on how to change the incentives, even when that means doing something big or potentially costly.</p> <p> </p> <p> </p> Not So Standard Deviations Episode 9 - Spreadsheet Drama 2016-02-12T11:24:04+00:00 http://simplystats.github.io4654 <p>For this episode, special guest Jenny Bryan (@jennybryan) joins us from the University of British Columbia! Jenny, Hilary, and I talk about spreadsheets and why some people love them and some people despise them. We also discuss blogging as part of scientific discourse.</p> <p><a href="https://itunes.apple.com/us/podcast/not-so-standard-deviations/id1040614570">Subscribe to the podcast on iTunes</a>.</p> <p>Show notes:</p> <ul> <li><a href="http://stat545-ubc.github.io/">Jenny’s Stat 545</a></li> <li><a href="http://goo.gl/VvFyXz">Coding is not the new literacy</a></li> <li><a href="https://goo.gl/mC0Qz9">Goldman Sachs spreadsheet error</a></li> <li><a href="https://goo.gl/hNloVr">Jingmai O’Connor episode</a></li> <li><a href="http://goo.gl/IYDwn1">De-weaponizing reproducibility</a></li> <li><a href="https://goo.gl/n02EGP">Vintage Space</a></li> <li><a href="https://goo.gl/H3YgV6">Tabby Cats</a></li> </ul> <p><a href="https://soundcloud.com/nssd-podcast/episode-9-spreadsheet-drama">Download the audio for this episode</a>.</p> <iframe width="100%" height="166" scrolling="no" frameborder="no" src="https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/tracks/246296744&amp;color=ff5500&amp;auto_play=false&amp;hide_related=false&amp;show_comments=true&amp;show_user=true&amp;show_reposts=false"></iframe> Why I don't use ggplot2 2016-02-11T13:25:38+00:00 http://simplystats.github.io4645 <p>Some of my colleagues think of me as super data-sciencey compared to other 
academic statisticians. But one place I lose tons of street cred in the data science community is when I talk about ggplot2. For the 3 data type people on the planet who still don’t know what that is, <a href="https://cran.r-project.org/web/packages/ggplot2/index.html">ggplot2</a> is an R package/phenomenon for data visualization. It was created by Hadley Wickham, who is (in my opinion) perhaps the most important statistician/data scientist on the planet. It is one of the best maintained, most important, and really well done R packages. Hadley also supports R software like few other people on the planet.</p> <p>But I don’t use ggplot2 and I get nervous when other people do.</p> <p>I get no end of grief for this from <a href="https://soundcloud.com/nssd-podcast/episode-9-spreadsheet-drama">Hilary and Roger</a> and especially from <a href="https://twitter.com/drob/status/625682366913228800">drob</a>, among many others. So I thought I would explain why and defend myself from the internet hordes. To understand why I don’t use it, you have to understand the three cases where I use data visualization.</p> <ol> <li>When creating exploratory graphics - graphs that are fast, not to be shown to anyone else and help me to explore a data set</li> <li>When creating expository graphs - graphs that i want to put into a publication that have to be very carefully made.</li> <li>When grading student data analyses.</li> </ol> <p>Let’s consider each case.</p> <p><strong>Exploratory graphs</strong></p> <p>Exploratory graphs don’t have to be pretty. I’m going to be the only one who looks at 99% of them. But I have to be able to make them <em>quickly</em> and I have to be able to make a <em>broad range of plots</em> <em>with minimal code</em>. There are a large number of types of graphs, including things like heatmaps, that don’t neatly fit into ggplot2 code and therefore make it challenging to make those graphs. 
The flexibility of base R comes at a price, but it means you can make all sorts of things you need to without struggling against the system. Which is a huge advantage for data analysts. There are some graphs (<a href="http://rafalab.dfci.harvard.edu/images/frontb300.png">like this one</a>) that are pretty straightforward in base, but require quite a bit of work in ggplot2. In many cases qplot can be used sort of interchangably with plot, but then you really don’t get any of the advantages of the ggplot2 framework.</p> <p><strong>Expository graphs</strong></p> <p>When making graphs that are production ready or fit for publication, you can do this with any system. You can do it with ggplot2, with lattice, with base R graphics. But regardless of which system you use it will require about an equal amount of code to make a graph ready for publication. One perfect example of this is the <a href="http://motioninsocial.com/tufte/">comparison of different plotting systems</a> for creating Tufte-like graphs. 
To create this minimal barchart:</p> <p><img class="aligncenter" src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAkAAAAGwCAYAAABB4NqyAAAD8GlDQ1BJQ0MgUHJvZmlsZQAAOI2NVd1v21QUP4lvXKQWP6Cxjg4Vi69VU1u5GxqtxgZJk6XpQhq5zdgqpMl1bhpT1za2021Vn/YCbwz4A4CyBx6QeEIaDMT2su0BtElTQRXVJKQ9dNpAaJP2gqpwrq9Tu13GuJGvfznndz7v0TVAx1ea45hJGWDe8l01n5GPn5iWO1YhCc9BJ/RAp6Z7TrpcLgIuxoVH1sNfIcHeNwfa6/9zdVappwMknkJsVz19HvFpgJSpO64PIN5G+fAp30Hc8TziHS4miFhheJbjLMMzHB8POFPqKGKWi6TXtSriJcT9MzH5bAzzHIK1I08t6hq6zHpRdu2aYdJYuk9Q/881bzZa8Xrx6fLmJo/iu4/VXnfH1BB/rmu5ScQvI77m+BkmfxXxvcZcJY14L0DymZp7pML5yTcW61PvIN6JuGr4halQvmjNlCa4bXJ5zj6qhpxrujeKPYMXEd+q00KR5yNAlWZzrF+Ie+uNsdC/MO4tTOZafhbroyXuR3Df08bLiHsQf+ja6gTPWVimZl7l/oUrjl8OcxDWLbNU5D6JRL2gxkDu16fGuC054OMhclsyXTOOFEL+kmMGs4i5kfNuQ62EnBuam8tzP+Q+tSqhz9SuqpZlvR1EfBiOJTSgYMMM7jpYsAEyqJCHDL4dcFFTAwNMlFDUUpQYiadhDmXteeWAw3HEmA2s15k1RmnP4RHuhBybdBOF7MfnICmSQ2SYjIBM3iRvkcMki9IRcnDTthyLz2Ld2fTzPjTQK+Mdg8y5nkZfFO+se9LQr3/09xZr+5GcaSufeAfAww60mAPx+q8u/bAr8rFCLrx7s+vqEkw8qb+p26n11Aruq6m1iJH6PbWGv1VIY25mkNE8PkaQhxfLIF7DZXx80HD/A3l2jLclYs061xNpWCfoB6WHJTjbH0mV35Q/lRXlC+W8cndbl9t2SfhU+Fb4UfhO+F74GWThknBZ+Em4InwjXIyd1ePnY/Psg3pb1TJNu15TMKWMtFt6ScpKL0ivSMXIn9QtDUlj0h7U7N48t3i8eC0GnMC91dX2sTivgloDTgUVeEGHLTizbf5Da9JLhkhh29QOs1luMcScmBXTIIt7xRFxSBxnuJWfuAd1I7jntkyd/pgKaIwVr3MgmDo2q8x6IdB5QH162mcX7ajtnHGN2bov71OU1+U0fqqoXLD0wX5ZM005UHmySz3qLtDqILDvIL+iH6jB9y2x83ok898GOPQX3lk3Itl0A+BrD6D7tUjWh3fis58BXDigN9yF8M5PJH4B8Gr79/F/XRm8m241mw/wvur4BGDj42bzn+Vmc+NL9L8GcMn8F1kAcXgSteGGAABAAElEQVR4Ae3dBZgcRd6A8eLC4RDcLbg7h7sGC+5uwfXQwzncXQ734MH9cHcPENyPAIEgH/rlrbtaOsPs7GZ3trd75q3n2Z2ZnpaqX/VM/6equnuE34emYFJAAQUUUEABBZpI4C9NVFaLqoACCiiggAIKRAEDIHcEBRRQQAEFFGg6AQOgpqtyC6yAAgoooIACBkDuAwoooIACCijQdAIGQE1X5RZYAQUUUEABBQyA3AcUUEABBRRQoOkEDICarsotsAIKKKCAAgoYALkPKKCAAgoooEDTCRgANV2VW2AFFFBAAQUUMAByH1BAAQUUUECBphMwAGq6KrfACiiggAIKKGAA5D6ggAIKKKCAAk0nYADUdFVugRVQQAEFFFDAAMh9QAEFFFBAAQWaTsAAqOmq3AIroIACCiiggAGQ+4ACCiiggAIKNJ2AAVDTVbkFVkABBRRQQAEDIPcBBRRQQAEFFGg6AQOgpqtyC6yAAgoooIACBkDuAwoo
oIACCijQdAIGQE1X5RZYAQUUUEABBQyA3AcUUEABBRRQoOkEDICarsotsAIKKKCAAgoYALkPKKCAAgoooEDTCRgANV2VW2AFFFBAAQUUMAByH1BAAQUUUECBphMwAGq6KrfACiiggAIKKGAA5D6ggAIKKKCAAk0nYADUdFVugRVQQAEFFFDAAMh9QAEFFFBAAQWaTsAAqOmq3AIroIACCiigwIgSKKCAAgp0nUD//v07vPI+ffp0eFkXVECB2gK2ANX28V0FFFBAAQUUaEABA6AGrFSLpIACCiiggAK1BQyAavv4rgIKKKCAAgo0oIABUANWqkVSQAEFFFBAgdoCBkC1fXxXAQUUUEABBRpQwACoASvVIimggAIKKKBAbQEDoNo+vquAAgoooIACDShgANSAlWqRFFBAAQUUUKC2gAFQbR/fVUABBRRQQIEGFDAAasBKtUgKKKCAAgooUFvAAKi2j+8qoIACCiigQAMKGAA1YKVaJAUUUEABBRSoLWAAVNvHdxVQQAEFFFCgAQUMgBqwUi2SAgoooIACCtQWMACq7eO7CiiggAIKKNCAAgZADVipFkkBBRRQQAEFagsYANX28V0FFFBAAQUUaEABA6AGrFSLpIACCiiggAK1BQyAavv4rgIKKKCAAgo0oIABUANWaq0iffTRR+Hll1+uNYvvKaCAAgoo0PACBkANX8XDFvDMM88MRxxxxLATfaWAAgoooECTCRgANVGF/9///V+46667wrXXXhs+/PDDJiq5RVVAAQUUUGBYgR6HDE3DTvJVowpcfvnlYa211goPPPBA+O6778Kyyy7bUtQPPvggXHrppeGZZ54JU0wxRRhjjDHCLbfcEuedY445Qo8ePcJjjz0WbrrppjD++OOHccYZJy77/fffhxtvvDFMPvnk4cILL4zLjjzyyOHxxx8Pd9xxR/jkk0/CDDPM0LKd3377Ldx3333htttuC7/88kvczogjjhjXT1B21VVXhcGDB4epp546jDDCCC3LZZ88+uij4c477wxvvvlmmG666QLLk8jvPffcE3766acw5ZRTtizyyiuvhOuvvz68+uqrMX+jjjpqfK9a3il3tXKyQGvbbdmQTxSoIjBgwIAqU9s3aaaZZmrfjM6lgALDLWAL0HCTlXcBAo+ll1467LDDDuHcc88NBAApEfS888474eyzzw4TTzxxnEwAMsooo4S//vWv4aSTTgovvvhiWGCBBcJqq60Wg51vvvkmbLPNNmGrrbYKZ511Vujfv38cX7TTTjuFN954I2y55Zbh+OOPj4EQK/z999/DUkstFSabbLKw6KKLxr999tknDBw4MDz77LPhmGOOCUsssUS45JJLwqqrrpqyNszjrbfeGghoNtlkk3D//feHr776Kr6/+eabh7/85S+hb9++Ydttt435443zzz8/5m2zzTaLgducc84Zl28t79XKyXpa2y7vmRRQQAEFyicwwtCD0u/ly7Y5Hl4BWmSefvrpQHDy5ZdfxhabE044IWy//fYtq/r8889jC8lTTz0VaPU54IADwsEHHxw+++yzsNJKK4X9998/znvNNdeE999/PzAfQcgKK6wQBg0aFFtzmOHkk08Offr0Cb169QobbbRRbAFiPbQ8EagQaJFmm222sOeee4YtttgiLLnkkmG99daLLUu0RqXAiHVk02GHHRaef/75cNlllwWCGFpzaA264oorYksU89JSQ/DGr2cCO1p+UovQyiuvHFueWKYy77RAtVbOatvt2bNnNms+H04BAuaOJvavsqRmKWdZ6sN8KpAE/tt3kF752LACDH6mhWTnnXeOZZxooonCKaecErbbbruWrqYJJ5wwrLPOOrF16Oijj47BBa0/Tz75ZGwVIkghpUee00LEH11HKbGNK6+8MgYYdGcNGTIkvjXppJOGd999N/7RxTX66KOHZZZZJvz666+xy4wutNQ9RQvP2GOPnVbZ8rj11luH3r17x6CKVidaimjZomUn
pYUXXjg+TV19E0wwQXorLLbYYrGliQmVea9VzmrbbVmpTxRQQAEFSidgF1jpqmz4M8w4HMbsXHTRReG0006Lf4wHYmzC7bffPswKd9xxx9i6QjCy/vrrx/doRXn99dcDgQTdY/yNN954cazOMAv/78WGG24Yu7to2SHoSWn66aePwdXGG28c7r777sDwM1pmGF80ySSTxPFCaf08fvvtt2nRlkdaXZ544onYarTmmmvG/NPaw+DubCLoSgEULV8pkZ80PU1Lj7XKWW27aTkfFVBAAQXKJ2AAVL46G+4cn3rqqYGgJJtoJaGb67jjjstODgsttFCYZpppYoAy7bTTxvfmm2++wODlPfbYI3afMYD6jDPOCKONNloMdGjBSenrr78OV199dex6Ytrbb78dW3h4znJcg+jhhx8Oyy23XGzJYTpp9dVXD7vttlt44YUX4joZtPzpp5/+983M/9NPPz223Bx++OFhr732iuOSVlxxxRgU0TVHjy4BEgOwZ5111vh37733tqyBVp4U2DFvNu+1ylltuy0r9YkCCiigQOkEDIBKV2XDl2HGyhAAMS4mO9yLQIOWF8bBHHroocOslFagTTfdtGUaLSyM6znvvPPiuB4GFC+//PKxhYZWJQIbBg//+OOPsXWFYIaAi3E8dJf169cvBlQENAxwHnfcccNUU00V5plnnnDsscfG7ey+++5xPXPNNVdYfPHF4wDt2WefvSUP6Qnjkej2YlDyzz//HAdgMx7kH//4RxxvNNZYY4WLL744rLHGGvHsMIIi5k2tX1wK4MADD4yBXGXeWysn3YDVtpvy5KMCCiigQPkEHARdvjrr8hzTKkIwwJihbCLgYEwPp8G3lQg0OB2exOnunKr+3HPPxQHJiyyySAxAWBfBChdm5MwwEkES3V+tJVqi+Mt2caV5CcDSqfVpGo8EfrREjTnmmIFxTm2lauWstd221uf71QWaZXBws5Szei07VYHiCgx7hCtuPs1ZjgK0DFUGP2yelpD2BD/Mm4Ifnqfr9DCwmbFDUw8dAE3rDy09tARlA57sc5atTOSL9VUbx1M5GDstSzBHd157gh+WqVbOWttN2/FRAQUUUKA8Ap4FVp66Kn1OGfRM9xN/888/f+wK45pEBFwmBRRQQAEF8hQwAMpTu8m3tfbaawf+st1jTU5i8RVQQAEFuknALrBugm/mzWa7x5rZwbIroIACCnSfgAFQ99m7ZQUUUEABBRToJgEDoG6Cd7MKKKCAAgoo0H0CBkDdZ++WFVBAAQUUUKCbBAyAugnezSqggAIKKKBA9wkYAHWfvVtWQAEFFFBAgW4SMADqJvjObPbcc88N3Om8zKloZfjpp5/CnnvuGb766qsys5p3BRRQQIF2ChgAtROqu2fjNg8pffzxx2HQoEHpZWke8yjDDz/80CEPbn/xzjvvxPuZpRV0dF1peR8VUEABBYorYABU3LppydmJJ54YXnrppZbXXFF5zTXXbHldhid5lIH7gPXt27dDHKOPPnrgDvSTTDJJXL4z6+pQBlxIAQUUUCBXgR5DD6aH5LrFAmzslVdeiQe7V199NUwxxRRh1FFHjbmiVeWGG24IM844Y7j22mvDU089Feacc854Y9BstmktuPfee+OkAQMGhOuuuy7e04qbbab02GOPhZtuuineO2ucccZJk+ONPO+7775w2223tdy4k3tbcTuI1157LXZt8f5ss80W70nF3dx32mmneB+rkUYaKUw55ZRh4MCB4eWXX443EOXu6g899FC84efkk08eyM+VV14Z52e7dO1wYH/66afDdNNNF1hHtfThhx+Gq666Kt7slHt1cf8s0vPPPx/v+k6rEzd15KalBAuU7/bbb4/54TWJm44y7Y477ghffPFFmH766eN6apWBe4HVWrY91izPfNxr7M477wwTTDBBmGiiiaLpjTfeGHC58MILY11zEcbHH3885vGTTz4JM8wwQ9w+Nzu9//774zKjjTban9ZF
3T766KNx/W+++Wa0TPc4iyvw33ALsK92NM0000wdXTT35ZqlnLnDukEFOinQdC1A559/fjjrrLPCZpttFoMTAhwCIgKFQw89NGy55ZbhhBNOCHR/HHbYYeGMM874EzEH2T59+oR//vOfcdl///vfYb311muZ76STTgovvvhiWGCBBcJqq60WD768SYCw1FJLxSBi0UUXDfzts88+MaC555574hgUbhXB3dh33nnnuL5lllkmBkoLLbRQmHnmmePd1Nk2wQoHYIK1XXfdNd5XiwUIOghOevXqFcu0/fbbx+1xM0+CPe62XpmeffbZcMwxx4QlllgiEFCtuuqqcRbKxTRi5GeeeSbwmpuYHnvsseGtt94KV199ddh0003jvAQQq6++esz7NttsEwOMFVZYIb6uVQYWrrVsW9Zx40P/EbBxc9UxxhgjsD1syMdWW20V65vgjaCRYPKNN96I9Xz88cfHfLIOAt5ll1021kXlusYdd9xw6623xromyCJQcqxQkvdRAQUUKKdAUwVAHLQIFvbee+/Y6rPSSiuFOeaYI+yxxx6xZWSLLbaIQQODYXlOMEIwUZlWWWWVGFQsvfTSYeuttw4HHHBAeOKJJ2KAQ0sKrQ09e/aMQQIByplnnhlX8eCDD4b3338/8Ot1rrnmCrPMMksgsOE1rVBsk0SrCK0UpNQlwyMHYpZZZJFF4nv8Y3kO3JdeemmcRiBFEEf617/+FQMQtskdzmnZSPPFGf73j/Kz3ueeey7MOuussXWK8TAEawRdBEGbb755LBetJiuuuGIMfAiECJ5ItDp99tlnMXjiruxHHnlkoCXroosuarMMtZatZf2/7McHAh7uVJ9aybjzOwEQ9x0j6KFVCifKs+SSS8YAidYs6o207rrrxjrjeeW6KA8BIK1bBKcHH3xwtGRekwIKKKBAOQWaKgCiVea7776LXSSpuhZbbLHw5JNPxpep2yc90u2RHbibluGRedJ8tDrQgkRLBuuaeOKJ40GWAy0tSDfffHNcdNJJJw3vvvtu/GMCXUe0VpAIagh8Tj/99Nh6RD6zKW2LadnnvN5xxx3DBRdcEFuKaKVJ63z44YfDvPPO25IXDuIEBdnEAZ1gi6CG/NLCQXdXCrxoOeKPRCBAntP2KXcajE33EAFIStjRupZsmZ6Wq3ze1rIsl5bNWqdttfZIfvljmZRoWXvkkUdiMDd48OAwZMiQ9FbLNlomZJ4Q6NL1SJcZXaMEuCYFFFBAgfIKNFUANPbYY8eaYjxMSgQlaXqa1plHuplef/31GGQRCPE33njjxbE1dE9x+vfGG28c7r777ti1xJgeEt1JRx11VAxmaNWpTCkAqJzOa1pJaLUgCKLlI81LXjhop3zwyFiZbGLsEcEOLTvZ+b799tvsbC3PUzDUMuF/TxhvRICVTdhmA4WUr+w8PG/PspXLtPa6tW2k+TfccMPYUkdrG/mrlbLrohy0FrEcA9BpDTIpoIACCpRXoKkCILp3+EsDmKk2WijWX3/9WIO0hmRTZbCQfY/WnpQ4Y4jEGJ/55psvtgTRrfTll1/GFidageh+olWHcSi0zCy33HKhd+/eaRWxu4oDMgfdt99+O7bmpDdpxSAgSfkjX9m8EcRst912Ya+99gobbbRRWiyOyaEbjOCK/LLeagduxu7stttu4YUXXohlYNB0GitEmfgjsQ66lFI+0nTeW2uttcJ//vOfOEaJ1yQCTbqWSLXK0NayrVnHFWf+0Y2YWnTII/lLeWW2r7/+OloQGJLwyL6PaarLynXRMkcZDj/88OhMa6JJAQUUUKC8Ak0VANFKcs0118QBrYxNOe200+IB/cADD4wHviuuuCLWJGctMZaHAbi0atCik01chPC9996LY2U424l1kS6//PIYwJx88snhvPPOiwORGWy9/PLLxzE4BBUMMmYsD91daUAxy26wwQbxPQIYWhsIeA466CDeiq0ODDZm4DPdL3ThEEQRsKREFw2Do2kBSmnB
BReMwR0BHl03p5xySkuwl+bhcffdd4/BGeOSGEj8/fffh9lnnz22eHCgv+uuu8Lnn38eW5jo6mMgOV1flIUAgjFPc889dywzg64Zb0NAxqByAkISLSetlaHWsrWs44oz/+aff/5oyyBufKgXgk4GpdOVSUsfwR6tQAw+p8uvX79+sZ4pA+aMRyJwza6LSxCk8U0MhiZQYnC1SQEFFFCgvAIjDP2V/N+f9+Utw3DnnCLz659xKtmAYbhXVGMBDpKMMcmOi2GQMafeM96HgyzvX3zxxeGII46Ig6ppXeE0bRItEQRsKXEgT6ebp2mVj2yTwc6ViZYPWi/4q5UI0OgG60yiDHS79Rp6Flq6vEBaX1tlqLVsWkdbjxhQv62d7s/ytZyz68+ui1Yo/mhhqmeXaXZ7zfacM/M6mgj2y5KapZxlqQ/zqUAS+OMIm6YM5yMtKrSCcMBmbEv2mjfDuarcZqebadppp+3S7RGIZIMfNsYAY069nnrodXb4o/WEU6pT0JGCH+bNBj+8biv4YZ5qwQ/T23vATvlgmY4mysAZZdVSW2WotWy19VWb1ppBdt5aztn5sutKg8Hba5ldj88VUEABBYon0KkAiOup0EXBheYIfMoQ/HRnFRwy9Ho6dLfxl7pYdthhh3jxve7Ml9tWQAEFFFCg2QQ6FQCdeuqpYZpppolnxdCiYaotwHWF+Mt2wdRewncVUEABBRRQoCsEOjUGiLObOKOKwcJnn3127OLpiky6TgUUaDwBxqZ1NJWpK7JZytnRunQ5BbpLoFMBUMo03WCcXcNp0JVjV9I8PiqggAIKKKCAAkUR6FQXWCoEt0rg+jrcaoIbUZoUUECBtgSyVwlva97K9//2t79VTirs62YpZ2ErwIwp0IpAXQIgTg8ea6yxDH5aQXayAgr8WYCrjzdDapZyNkNdWsbGEuhwAMTNMun24gJ3nM7NRfZMCiiggAIKKKBAGQQ6HABxoTuuSMxF57igoEkBBRRQQAEFFCiLQIcDIAqYvct2WQpsPhVQQAEFFFBAgaa6F1hXVDe3t+CeYvvtt1/LjTi7YjuV62TcFZcg4H5b2bvbp/m4hxk3Qn3sscdabmaa3qPljjxz77IffvghTR7mkXkuvfTSkPdNP88999x4cc1hMlPlxfPPPx+vql3lLScpoIACCijQpoABUJtErc/ADTKPPvrosPPOOwduEppnsMB2X3nllXj7i8UWWyzcc889LRnt27dv+Oabb+INOwmACM5SOvPMM+PNS7fddtswySSTxBu1ZoMgysTNUzlzhVubzDHHHGnRLnvkRqUpffzxx/FGq+l1a49cW+X9999v7e26TM+61GWFrkQBBRRQoDACdbkOUGFKk3NGuDs7dwenpSTPRJBACxC3ICFtvvnm8fk///nPeCkC7jTPPHRR8rjooovGm79yl3fu93XzzTcHLl1AmmmmmQK349hll13i/dyYzt3rDz744Ph+V/878cQTAwEctwYpUuLedltuuWW8432R8tVIeWmWm4Q2Szkbad+0LM0h0GPo/akOaY6i1reUd999dwx8aIXgRpkMBOdu6zfeeGMMRi688MIwxRRTxCCElprrr78+3gmeaeku6Z9//nnsvmJZginuPj7ZZJPFlo1+/frFDE866aR/yjjzc9mBlE4//fTYysPNV1k3XV/cbX7ZZZcNl112WZh++unDkksuGbfP7Uuo8nQl3SeeeCI8++yzYdNNN433KLvvvvvCDTfcEIOhHj16pE0M80hwQPm5Weijjz4arrvuujDRRBOF8cYbL843aNCg8PDDDwe++Lk/XLop7HvvvRfouuKu8Lz32muvhZ122ilMOOGE8e7tU045ZbyT/MsvvxwI4lK65ZZbYgsXrWzMQ6KFi0COG6+2lR9cH3/88XDHHXcETkmeYYYZ4jqoF1rtuHP8JZdcEtdD/TA/N67lCudc14qyYU5Zmfbmm2+G6aabzot+RsWO/xswYECHFyZwL0tqlnKW
pT7MpwJJwC6wJDGcj1yIjRYYupFWWWWV0LNnz7DNNtvEbqezzjorHuA5kJ9//vmB15tttlkMBOacc87YdfXhhx/G+4IRABCkELAQpNBdRXDw7rvvxlYagqTW0gcffBAvQ0DgRRCREtsj0FlqqaVidxI3XyVx3zYCJMYtpUTAxRW8SVdeeWUMrMjTMsssE8vy6aefpllbHgnmVlxxxbDvvvsGghqCgvnmmy8MHjw4BjfzzDNPWHDBBWNX2iKLLBIDC4IN8kPLEtu56KKLYssPwctCCy0UZp555hig9enTJwaDaWO0bhFg0q1Htx2BJVZse/vtt4+z1coPM1AebtxLi87xxx8fAyEu40CdsJ6rr7465nvppZeO5RlhhBHC4osvHoNXHMYdd9zY0kcZCIzuv//+2NKW8uijAgoooED5BAyAOlhnBDyjjTZaGHnkkWMrAS0FBEDc6JQDLq0N8847b9h1113D3nvvHQOPlVZaKY6p2WOPPWLwxHWU6JJinr///e8x6OBgz5iiI488MrY80ErSWqL1iTE6tOKwjZToxlpjjTViSwUtUl988UV8i1aM1VZbLQZkjKF566234sGc4IMgiKDgqKOOivd14yD/6quvhnPOOSettuVx3XXXjQEfAQRBCOOPuAXKbbfdFq8JxXRaqGgRotWG4IMrhffu3Tu2CNFVxyDr1MpDEEmQQWsOAVNKBCbkEzdao2jZ4pc/86611lpptlArP8xE+QguySMBH15cxmG55ZYLtLBRH/vss0+sC4JD5qPVipYhWpwIMBlUfvvtt8fyEcRR9yYFFFBAgfIKGADVse44UPKXLg9A9wrdPdnbgzDeJV0an5YG/lLioEsAlBIBC60drSWChd133z12WdGqwrZI66yzThzTQ3cT61h55ZVbVkGLFHngoM/ZYwQYjBGi24eUuocIONZcc82QuuJaVvC/J+SbLjAS8xJMEDQR+BDA0S1HSxbzDRkyJM6HTeoOixP+9y9rkH1OdxwtZiktvPDCsbWI19n50utq+eE98kPAResRrVQpP6wjux7qLTsgm2VT2nrrrWP3HD5PPfVUDADTez4qoIACCpRP4I+jbfnyXvgcp3E22dPUaXFI0ysLkA1+Kt+r9ZruJoIQWqPoWqO1gi46gg3GFtGqQWsPafTRRw8HHHBAuOCCC2KgxlW8ObinfL3wwgstm6Jlhi6q9iQCKVpXPvroo9jVRRBGy0x7ypQNQrLbYvpdd92VndQSvAwzscqLlB/e2nDDDeOlALhqebUxVVUWj5Oy+aLFj5Yj1kFgSGuQSQEFFFCgvAIGQJ2oO7q76OJJiatiE1CkRLcPf1yvJyVafzjNnMT82UTrQ3Z53q+ch/nZZrZliAHHnMlF1w0Hbbqz0v2H6Iai1YLBvdlE9xndPgRCBE4kXtOKkxKDo1Ne07TsI+UnkR9aRej64qw4bopLNxXjhziVPJWp0odlaRX69ttvW+ahJSq1RjHOiKDjmmuuiQ48p2uRlJ0vThj6r1p+CIToSkvlf/vtt1u2xZl02USwl7wZK5Vaisg/LVrk9fDDDw977bVXrpc8yObR5woooIAC9RHwLLAOOhLIcAo3ZwRxlhXdXCeccEI824jWgrnnnjuOIWHg77HHHhufP/TQQ/G0dMbZEKQcc8wx8UwoBt9y/R2WpwWH8SoMoOZihQQ0vE5njpFdWpQYOExLDy07HKgZQ0RrC11QjFuh64rAgm4kxv2k6/kQONEVRPfUxRdfHMfUJALWSR7pOmM5BmCTVwKrykTeOa2e0+wJnBizxPIEXKyfsUd0SREAsS5akxhPxLgiurUYg0NieQYm40cAwz3lGNxNNx1jmQhK6K477rjjYnDCc/J18sknx+CIs7EoW2v5IWihVYtB4Sw344wzxgHY5Iduw4EDB8bT/skX28ZwhRVWiPm59tprY0sPQSxngBEEUbcEUXSrOQ6ocq8YvtfNcnZUs5Rz+GrfuRXofgGvA5RDHdCqwEGT
8TjZs7U6s2laJQgu0nijauuiJYbT0LOJ4IkWoexp9Nn3eU5gQwsLB/vWEuslQJh99tlj0EMXXErkjTITOPHIX62uMMYu0TXXWqJljECoVllr5Yf1Elylli7WVS2oq9w+BuSdwdC0FvFHsNlaF2bl8r6uLUAQ3tHE2YJlSc1SzrLUh/lUIAn8+ad9esfHugnQijPttNPWbX2siICjVkDAPJXBD9M4Xb2t1N6WDQKCagFdNhii7PzVSrWCH5ajFac9qbX8sGwKfnjenuCH+dKgap4TwPFn8IOGSQEFFCi/gGOAyl+HuZeACxAytoYWILqwujsVLT/d7eH2FVBAAQXaFrAFqG0j56gQYKwPY5ZIbbVCVSzaJS+Llp8uKaQrVUABBRSoq4ABUF05m2Nl1a7l050lL1p+utPCbSuggAIKtE/ALrD2OTmXAgoooIACCjSQgAFQA1WmRVFAAQUUUECB9gkYALXPybkUUEABBRRQoIEEDIAaqDItigIKKKCAAgq0T8AAqH1OzqWAAgoooIACDSTgWWA5Vib31uLqwq+99lrc6sQTTxwWWGCBcNttt8XpXGiPm5jONNNMYdCgQfE2DFy1eJ555mm5lUWO2e3yTXEF6DvvvDM89thj8VYXHd0gtwSZaKKJwuSTT96yCty4ESzXK5ptttnCkksu2fIeT1iG+uA9biabvVgj91HjnmhcMHHllVeOt7zg1iVcIXuqqaYaZj2+UEABBRQop4AtQDnVGzcd5aah3DrihhtuCFtttVVYZJFF4tWauZdV37594/2mCH5I3FOL2zdwIE738aqVVW4X0d2JW3MMT+LKzW+88Ua49NJLh2exlnm5FtFJJ50Ur27NzV1T4vYV3EiVW4/suOOO8YKN3NMsJay/+eabWAcEX/vtt196K9x9993h6KOPjneyJzjiGkPc2oN7lRE0vfjiiy3z+kQBBRRQoLwCBkA51B3BD607tDRwm4nDDjss3lfqwQcfjFsnKOLeRtyINJvee++9sP/++2cnVX3OTVlfeumlqu/lNZEWFwKL4UkEKLRudTTR6rPbbrvF24Jk10GAiQemtOxsu+224eCDDw4ESdwfjZugzj///LFOuNs9d4tPiYCKeuJ2HjPPPHO80CM3hyWtueaa8W7w3OXepIACCihQbgEDoC6uP7p5uJM5B9qUuBs69+QiMEqJgy53JH/88cfTpHineW5cmhLdYnfddVe8C/3rr78eJ1922WXxbuzcDuKRRx6J0wicOMhzV3buVp8Sd4K//PLLwxlnnDFMwMR6mU6LE91GLEfrTGuJVhPWwQ1eSXTrbbzxxrErj2Wr3R6jWt5bW//wTK92rzG6trJdVdzNnft60d3G/dG4cOIRRxwRN8ONKjfbbLOWTRKMnnDCCTFQoiuMVq1skLb55pvHALZlAZ8ooIACCpRSwACoi6vtuuuuC1NPPfWfbujJgZSghbElJO7Sznznn39+fP3www/H7pf4Yug/AikOxARKBFN0n9Hqsswyy8RHumposeCAve+++4YNNtggjou544474ioIhNZaa62wxBJLxJaMDTfcMJx++umxW+7QQw8NW265ZTzwszwtVAQ41RItJHQDMXZptdVWi8ESQcjiiy8eW0vIz7jjjjvMoq3lfZiZ6viCgOfll1+OLqyW1hzGWyXrs846K5x66qlhqaWWimOtDjzwwJat77nnnjGgo3z77LNPwC87Pgjjf/3rX/Gu8C0L+UQBBRRQoHQCBkBdXGW06nDwrUypRYjxLwyKnn766cMWW2wR+vXrF4OdK664Iqy33notizEOhZaKscYaK44PYjwR42cmmWSSOA+PBB7vvvtuoGuNda600kphueWWi+/vtddeYZVVVokDhZn373//e/xjkDDbZX0c/Hm+9tprx4HJLRv/35MPP/wwBjw9e/YMb731VphxxhnDmWeeGQcL06oy0kgjhSmnnPJPwV5rea9cf71eE5jR4kOAx9goghhacwheSASB
a6yxRmwdu/HGG8MXX3zRsmnubs/4LLrn6Ep74YUXWt7jyRRTTBFbx1IL3DBv+kIBBRRQoDQCBkBdXFUfffRRGH300f+0FQY5c4YR3WAXXXRR7EIi+KC15JJLLoldMIxxSYnAZ+edd44HdbptaJUYMmRIerullYKDPGNf5pprrtgSNN1008V5Hn300TiQNy2w6KKLxi4vWnNSC0d65OBfbVD1k08+GYM5zqjij1YiWrHaSm3lva3lh/d9vCnvO++8E/bee+8Y4NBaxhl2pHXWWSfssssugbE9lJV6SAn7Dz74ILbIpaCRQDMlAisCWgJNkwIKKKBAeQUMgLq47qaddtqqY2LY7CabbBJeeeWVeHYRLUC0Liy//PLxoL3qqqsOkzMCKbpsOHivu+66cQBvdoYUvBBA0SpzzTXXBMYHbbfddnG2scceOx7U0zK0ArEM09ubyB8tH5wRRRDAH4Hc4MGD4ypSHirX11beK+evx2u6wU455ZTY1cWZXQQ8k002WaAVi7O5CIZotWLM03PPPRdbtNjuOeecE7v2KMtBBx0UuxrpxswmAs/U8pad7nMFFFBAgfIIGAB1cV1xCjsBQLVEFxUtEJxdlBJjcWitqAyAbr311tgqRDcXZyExVoeuJdIoo4wSvv322/iaAcoc4OkGuuWWW2IrB/PQnXbfffcFThEnMeaILiwGZKf1xDeG/mNQc7XEwG0GR++xxx6BAdUEW7QCcWbbqKOO2tIiVbm+WnlnfSlPaZu33357y7ronrr++utbxvOkedIjy1Yun97j8eKLL46DtTkLjERgw1ggusRIBHAMNCe4I3G2Xjrri9fTTDNNmHfeeXkaE2WmdSx1p6XpPiqggAIKlEugxyFDU7myXK7c0gJ03nnnxUHJI4888jCZ50J7AwcOjKdyE0SQmJ+zq9IYobQAB2rOsGLMCt0wBEAENMsuu2wMfo4//vjYMkNAxbVv2BbrXn311eMBfuGFF45Bzz333BMDHlo6CF5ozWFgM4FTr1694usjjzwyjo/p3bt3bCVJeSB4YIwPLSOnnXZaHAy99dZbx5YgxgVde+218eKNtL5kxz21lneCLwYkc+bbLLPMEliO1hW67wi2uCYSZ7NxhhnX5OH9bCI4orWLM+MoL/OnAdi0VFGOMcYYI5aTIJFEdxxlYKwVQSOGBIvpWku0DDH25/PPP4+tcwSDlDElxhNRbwwyN3VOYMCAAR1eQbpeVodXkOOCzVLOHEndlAJ1ERhh6K/n/zYJ1GV1rqSaAFd6ZhwJ16ypTLQoVI4RqjaN5WhZobo4APPIHy0WpLQMLSpMIzigi6cycTo6rTeMDWqty6pymcrXBAV0e1Wun+nkicHQlalW3ivnZWB2tmuOIIUWJs50a0/iIocEkZzSzhlgrSWuCcRp8dUS66BeKpdnnNYhQ38zZE+zr7a809oWYCxbRxPj3MqSmqWcZakP86lAErALLEl04SNdXZNOOukwY3DS5iqDH6ZXm8Z0DsYEPySClxT88Dotk6ZVBifMQ6I1hvFGHQ1+WActUNXWz/RqwQ/L1Mo772dTNvghWGNQcnuDH9ZDKw+tSJXBS3YbPG8t+OE91lG5PNcOYiyRwQ9CJgUUUKDcAgZAOdUfA5dT8JLTJhtiM3RpZS9U2F2FokuMrri55567u7LgdhVQQAEF6ijw3+aEOq7QVbUuQKuEqZwCXB/IpIACCijQOAK2ADVOXVoSBRRQQAEFFGingAFQO6GcTQEFFFBAAQUaR8AAqHHq0pIooIACCiigQDsFDIDaCeVsCiiggAIKKNA4AgZAjVOXlkQBBRRQQAEF2ilgANROKGdTQAEFFFBAgcYRMABqnLq0JAoooIACCijQTgEDoHZCOZsCCiiggAIKNI6AAVDj1KUlUUABBRRQQIF2ChgAtRPK2RRQQAEFFFCgcQQMgBqnLi2JAgoooIACCrRTwAConVDOpoACCiiggAKNI2AA1Dh1aUkUUEABBRRQoJ0C
BkDthHI2BRRQQAEFFGgcAQOgxqlLS6KAAgoooIAC7RQwAGonlLMpoIACCiigQOMI1CUA+uCDD8IDDzzQOCqWRAEFFFBAAQUaWqDTAdBvv/0Wdthhh/DII480NJSFU0ABBRRQQIHGERixs0W57LLLwsILLxx+//33zq7K5RVQoIkExhxzzKYobbOUsykq00I2lECnAqBnn302TDXVVOHHH38MX3zxRUPBWBgFFOhagaWXXrprN1CQtTdLOQvCbTYUaLdAhwOg7777Ljz55JNhu+22CwMGDGj3Bp1RAQUUQOCXX37pMMSII3b4q6vD2+zogs1Szo76uJwC3SXQ4W+RU089NbzxxhvhhRdeCK+++mpsBZp88snDpptu2l1lcbsKKFAigVtvvbXDue3Tp0+Hl817wWYpZ96ubk+Bzgp0OADq27dv+Prrr+P2+/XrFwYNGhR69+7d2fy4vAIKKKCAAgoo0OUCHQ6Axh133MAfaYIJJoiDoHk0KaCAAgoooIACRRfocACULdjWW2+dfelzBRRQQAEFFFCg0AKdvg5QoUtn5hRQQAEFFFBAgSoCBkBVUJykgAIKKKCAAo0tYADU2PVr6RRQQAEFFFCgioABUBUUJymggAIKKKBAYwsYADV2/Vo6BRRQQAEFFKgiYABUBcVJCiiggAIKKNDYAgZAjV2/lk4BBRRQQAEFqggYAFVBcZICCiiggAIKNLaAAVBj16+lU0ABBRRQQIEqAgZAVVCcpIACCiiggAKNLWAA1Nj1a+kUUEABBRRQoIqAAVAVFCcpoIACCiigQGMLGAA1dv1aOgUUUEABBRSoImAAVAXFSQoooIACCijQ2AIGQI1dv5ZOAQUUUEABBaoIGABVQXGSAgoooIACCjS2gAFQY9evpVNAAQUUUECBKgIGQFVQnKSAAgoooIACjS1gANTY9WvpFFBAAQUUUKCKgAFQFRQnKaCAAgoooEBjCxgANXb9WjoFFFBAAQUUqCJgAFQFxUkKKKCAAgoo0NgCBkCNXb+WTgEFFFBAAQWqCIxYZZqTFFBAAQUUGC6B/v37D9f82Zn79OmTfelzBXIRsAUoF2Y3ooACCiiggAJFEjAAKlJtmBcFFFBAAQUUyEXAACgXZjeigAIKKKCAAkUSMAAqUm2YFwUUUEABBRTIRcAAKBdmN6KAAgoooIACRRIwACpSbZgXBRRQQAEFFMhFwAAoF2Y3ooACCiiggAJFEjAAKlJtmBcFFFBAAQUUyEXAACgXZjeigAIKKKCAAkUSMAAqUm2YFwUUUEABBRTIRcAAKBdmN6KAAgoooIACRRIwACpSbZgXBRRQQAEFFMhFwAAoF2Y3ooACCiiggAJFEjAAKlJtmBcFFFBAAQUUyEXAACgXZjeigAIKKKCAAkUSMAAqUm2YFwUUUEABBRTIRcAAKBdmN6KAAgoooIACRRIwACpSbZgXBRRQQAEFFMhFwAAoF2Y3ooACCiiggAJFEjAAKlJtmBcFFFBAAQUUyEXAACgXZjeigAIKKKCAAkUSMAAqUm2YFwUUUEABBRTIRcAAKBdmN6KAAgoooIACRRIwACpSbZgXBRRQQAEFFMhFwAAoF2Y3ooACCiiggAJFEhixSJkxLwoooIACCijQ/QL9+/fvcCb69OnT4WXzXNAWoDy13ZYCCiiggAIKFEKgUy1A3377bbj66qvDTz/9FDbccMPQs2fPQhTKTCiggAIKKKCAArUEOtUCdN1114W55porvPrqq2G//fartR3fU0ABBRRQQAEFCiPQ4RYgWn3WX3/9MMooo4Sff/459OvXrzCFMiMKKFB8gVlmmaX4maxDDi1nHRBdRe4CzbDfjvD70NQZ2YEDB4Zdd901nHzyyWG66abrzKpcVgEFFFBAAQUUyEWgwy1AKXfPPfdcGDx4cNhoo43CE088kSb7qIACCtQUeOedd2q+X+vNXr161Xq7UO9Zzraro0z12XZpGmOOZthvOx0Arb322oG/eeaZJ3z8
8cdh0kknbYzatxQKKNClAi+++GKH11+mA6blbLuay1SfbZemMeZohv22U4Ogs9XMDjzxxBNnJ/lcAQUUUEABBRQopECHA6Cvv/46LLroouHiiy8OTz75ZDjyyCPDX/7S4dUVEsdMKaCAAgoooEBjCnS4C2zssccODzzwQFTp0aNHY+pYKgW6QaAZrsDaDaxuUgEFFBhGoMMBEGsx8BnG0hcKKKCAAgooUBIB+6xKUlFmUwEFFFBAAQXqJ2AAVD9L16SAAgoooIACJREwACpJRZlNBRRQQAEFFKifgAFQ/SxdkwIKKKCAAgqURMAAqCQVZTYVUEABBRRQoH4CBkD1s3RNCiiggAIKKFASAQOgklSU2VRAAQUUUECB+gkYANXP0jUpoIACCiigQEkEDIBKUlFmUwEFFFBAAQXqJ2AAVD9L16SAAgoooIACJREwACpJRZlNBRRQQAEFFKifgAFQ/SxdkwIKKKCAAgqURMAAqCQVZTYVUEABBRRQoH4CBkD1s3RNCiiggAIKKFASAQOgklSU2VRAAQUUUECB+gkYANXP0jUpoIACCiigQEkEDIBKUlFmUwEFFFBAAQXqJ2AAVD9L16SAAgoooIACJREwACpJRZlNBRRQQAEFFKifgAFQ/SxdkwIKKKCAAgqURMAAqCQVZTYVUEABBRRQoH4CBkD1s3RNCiiggAIKKFASAQOgklSU2VRAAQUUUECB+gkYANXP0jUpoIACCiigQEkEDIBKUlFmUwEFFFBAAQXqJ2AAVD9L16SAAgoooIACJREwACpJRZlNBRRQQAEFFKifgAFQ/SxdkwIKKKCAAgqURMAAqCQVZTYVUEABBRRQoH4CBkD1s3RNCiiggAIKKFASAQOgklSU2VRAAQUUUECB+gkYANXP0jUpoIACCiigQEkEDIBKUlFmUwEFFFBAAQXqJ2AAVD9L16SAAgoooIACJREwACpJRZlNBRRQQAEFFKifgAFQ/SxdkwIKKKCAAgqURMAAqCQVZTYVUEABBRRQoH4CBkD1s3RNCiiggAIKKFASAQOgklSU2VRAAQUUUECB+gkYANXP0jUpoIACCiigQEkEDIBKUlFmUwEFFFBAAQXqJ2AAVD9L16SAAgoooIACJREwACpJRZlNBRRQQAEFFKifgAFQ/SxdkwIKKKCAAgqURMAAqCQVZTYVUEABBRRQoH4CBkD1s3RNCiiggAIKKFASAQOgklSU2VRAAQUUUECB+gkYANXP0jUpoIACCiigQEkEDIBKUlFmUwEFFFBAAQXqJzBiZ1Y1ZMiQcP3114cRRxwxrLnmmmGUUUbpzOpcVgEFFFBAAQUUyEWgUy1AJ510Uvjll1/C1VdfHdZff/1cMuxGFFBAAQUUUECBzgp0uAXok08+CX379g0TTjhhWH311UOvXr3Cr7/+Gnr06NHZPLm8AgoooIACCijQpQIj/D40dXYL7733Xth5553DTTfd1NlVubwCTS/w448/dtigTN3QlrPtarY+2zZyjq4RaIbPZ10CoGOOOSYsueSSYYEFFuiamnCtCiiggAIKKKBAHQU63AWW8vDMM8+EWWed1eAngfioQCcFHnzwwQ6vYfHFF+/wsnkvaDnbFrc+2zZyjq4RaIbPZ6cCoIEDB4ZPP/00rLLKKrEGXnvttTDzzDN3TW24VgWaROCrr75qipJazsaq5mapz8aqtdZL0wz12eGzwBgE3bt377D77ruHGWaYIUw22WRh0KBBrWv6jgIKKKCAAgooUBCBDrcATTLJJOGNN94oSDHMhgIKKKCAAgoo0H6BDrcAtX8TzqmAAgoooIACChRLwACoWPVhbhRQQAEFFFAgBwEDoByQ3YQCCiiggAIKFEvAAKhY9WFuFFBAAQUUUCAHAQOgHJDdhAIKKKCAAgoUS8AAqFj1YW4UUEABBRRQIAeBDp8Gn0Pe3IQCwwj0799/mNftfdGnT5/2zup8CiiggAJNImALUJNUtMVUQAEFFFBAgT8EDID+
sPCZAgoooIACCjSJgAFQk1S0xVRAAQUUUECBPwQMgP6w8JkCCiiggAIKNImAAVCTVLTFVEABBRRQQIE/BAyA/rDwmQIKKKCAAgo0iYABUJNUtMVUQAEFFFBAgT8EDID+sPCZAgoooIACCjSJgBdCbJKKtpgKKKCAAp0X6OgFWdmyF2XtvH8912ALUD01XZcCCiiggAIKlELAAKgU1WQmFVBAAQUUUKCeAgZA9dR0XQoooIACCihQCgEDoFJUk5lUQAEFFFBAgXoKGADVU9N1KaCAAgoooEApBAyASlFNZlIBBRRQQAEF6ilgAFRPTdelgAIKKKCAAqUQMAAqRTWZSQUUUEABBRSop4ABUD01XZcCCiiggAIKlELAAKgU1WQmFVBAAQUUUKCeAgZA9dR0XQoooIACCihQCgEDoFJUk5lUQAEFFFBAgXoKGADVU9N1KaCAAgoooEApBAyASlFNZlIBBRRQQAEF6ilgAFRPTdelgAIKKKCAAqUQMAAqRTWZSQUUUEABBRSop4ABUD01XZcCCiiggAIKlELAAKgU1WQmFVBAAQUUUKCeAgZA9dR0XQoooIACCihQCgEDoFJUk5lUQAEFFFBAgXoKGADVU9N1KaCAAgoooEApBAyASlFNZlIBBRRQQAEF6ilgAFRPTdelgAIKKKCAAqUQMAAqRTWZSQUUUEABBRSop4ABUD01XZcCCiiggAIKlELAAKgU1WQmFVBAAQUUUKCeAgZA9dR0XQoooIACCihQCoERS5FLM1lToH///jXfb+3NPn36tPaW0xVQQAEFFGhoAVuAGrp6LZwCCiiggAIKVBNo6BYgW0aqVbnTFFBAAQUUUMAWIPcBBRRQQAEFFGg6AQOgpqtyC6yAAgoooIACBkDuAwoooIACCijQdAIGQE1X5RZYAQUUUEABBQyA3AcUUEABBRRQoOkEOh0Afffdd+G+++5rOjgLrIACCiiggALlFejUafCDBg0KRx55ZHjvvffC0ksvXV4Fc66AAgoooIACTSXQqQBovPHGC8stt1w477zzCok2/vjjFzJf9c6U5ay3aPeuz/rsXv96b936rLdo967P+uxe/3pufYTfh6bOrPCOO+6IAdC1117bmdW4rAIKKKCAAgookJtAp1qAcstlBzfE+KSOpNFHH70ji3XbMpazNn2z1CcKZSprR/dby1l7f++ud63PtuX9fLZtlOccDR0A3XPPPR2yLNtNQi1n7WpulvpEoUxl7eh+azlr7+/d9a712ba8n8+2jfKco9NngeWZWbelgAIKKKCAAgrUQ6BTAdBXX30VGAM0YMCA8Prrr9cjP65DAQUUUEABBRTocoFOdYGNM8444eSTT+7yTLoBBRRQQAEFFFCgngKdagGqZ0ZclwIKKKCAAgookJeAAVBe0m5HAQUUUEABBQojYABUmKowIwoooIACCiiQl4ABUF7SbkcBBRRQQAEFCiNgAFSYqjAjCiiggAIKKJCXgAFQXtJuRwEFFFBAAQUKI2AAVJiqMCMKKKCAAgookJeAAVBe0m5HAQUUUEABBQojYABUmKowIwoooIACCiiQl4ABUF7SbkcBBRRQQAEFCiNgAFSYqjAjCiiggAIKKJCXgAFQXtJuRwEFFFBAAQUKI2AAVJiqMCMKKKCAAgookJeAAVBe0m5HAQUUUEABBQojYABUmKowIwoooIACCiiQl4ABUF7SbkcBBRRQQAEFCiNgAFSYqjAjCiiggAIKKJCXgAFQXtJuRwEFFFBAAQUKI2AAVJiqMCMKKKCAAgookJeAAVBe0m5HAQUUUEABBQojYABUmKowIwoooIACCiiQl4ABUF7SbkcBBRRQQAEFCiNgAFSYqjAjCiiggAIKKJCXgAFQXtJuRwEFFFBAAQUKI2AAVJiqMCMKKKCAAgookJeAAVBe0m5HAQUUUEABBQojYABUmKowIwoooIACCiiQl4ABUF7SbkcBBRRQQAEFCiNgAFSYqjAjCiiggAIKKJCX
" alt="" width="373" height="280" /></p> <p>The code they use in base graphics is this (super blurry sorry, you can also <a href="http://motioninsocial.com/tufte/">go to the website</a> for a better view).</p> <p><img class="aligncenter wp-image-4646" src="http://simplystatistics.org/wp-content/uploads/2016/02/Screen-Shot-2016-02-11-at-12.56.53-PM-300x82.png" alt="Screen Shot 2016-02-11 at 12.56.53 PM" width="483" height="132" srcset="http://simplystatistics.org/wp-content/uploads/2016/02/Screen-Shot-2016-02-11-at-12.56.53-PM-300x82.png 300w, http://simplystatistics.org/wp-content/uploads/2016/02/Screen-Shot-2016-02-11-at-12.56.53-PM-768x209.png 768w, http://simplystatistics.org/wp-content/uploads/2016/02/Screen-Shot-2016-02-11-at-12.56.53-PM-1024x279.png 1024w, http://simplystatistics.org/wp-content/uploads/2016/02/Screen-Shot-2016-02-11-at-12.56.53-PM-260x71.png 260w, http://simplystatistics.org/wp-content/uploads/2016/02/Screen-Shot-2016-02-11-at-12.56.53-PM.png 1248w" sizes="(max-width: 483px) 100vw, 483px" /></p> <p>in ggplot2 the code is:</p> <p><img class="aligncenter wp-image-4647" src="http://simplystatistics.org/wp-content/uploads/2016/02/Screen-Shot-2016-02-11-at-12.56.39-PM-300x73.png" alt="Screen Shot 2016-02-11 at 12.56.39 PM" width="526" height="128" srcset="http://simplystatistics.org/wp-content/uploads/2016/02/Screen-Shot-2016-02-11-at-12.56.39-PM-300x73.png 300w, http://simplystatistics.org/wp-content/uploads/2016/02/Screen-Shot-2016-02-11-at-12.56.39-PM-768x187.png 768w, http://simplystatistics.org/wp-content/uploads/2016/02/Screen-Shot-2016-02-11-at-12.56.39-PM-1024x249.png 1024w,
http://simplystatistics.org/wp-content/uploads/2016/02/Screen-Shot-2016-02-11-at-12.56.39-PM-260x63.png 260w, http://simplystatistics.org/wp-content/uploads/2016/02/Screen-Shot-2016-02-11-at-12.56.39-PM.png 1334w" sizes="(max-width: 526px) 100vw, 526px" /></p> <p>Both require a significant amount of coding. The ggplot2 plot also takes advantage of the ggthemes package here; without that package, a plot like this would require even more coding.</p> <p>The bottom line is that for production graphics, any system requires work. So why do I still use base R like an old person? Because I learned all the stupid little tricks for that system; it was a huge pain, and it would be an equally huge pain to learn them all over again for ggplot2 just to make very similar types of plots. This is one case where neither system is particularly better, but the time-optimal solution is to stick with whichever system you learned first.</p> <p><strong>Grading student work</strong></p> <p>People I seriously respect suggest teaching ggplot2 before base graphics as a way to get people up and running quickly making pretty visualizations. This is a good solution to the <a href="http://simplystatistics.org/2014/08/13/swirl-and-the-little-data-scientists-predicament/">little data scientist’s predicament</a>. The tricky thing is that the defaults in ggplot2 are just pretty enough that they might trick you into thinking a graph made with the defaults is production ready. Say, for example, you make a plot of the latitude and longitude of the <a href="https://stat.ethz.ch/R-manual/R-devel/library/datasets/html/quakes.html">quakes</a> data in R, colored by the number of stations reporting. This is one case where ggplot2 crushes base R for simplicity because of the automated generation of a color scale.
You can make this plot with just the line:</p> <p><code>ggplot() + geom_point(data = quakes, aes(x = lat, y = long, colour = stations))</code></p> <p>And get this out:</p> <p><img class="aligncenter wp-image-4649" src="http://simplystatistics.org/wp-content/uploads/2016/02/quakes-300x264.png" alt="quakes" width="420" height="370" srcset="http://simplystatistics.org/wp-content/uploads/2016/02/quakes-300x264.png 300w, http://simplystatistics.org/wp-content/uploads/2016/02/quakes-227x200.png 227w, http://simplystatistics.org/wp-content/uploads/2016/02/quakes.png 627w" sizes="(max-width: 420px) 100vw, 420px" /></p> <p>That is a pretty amazing plot in one line of code! What often happens with students in a first serious data analysis class is that they think that plot is done. But it isn’t even close. Here are a few things you would need to do to make this plot production ready: (1) make the axes bigger, (2) make the labels bigger, (3) use full names for the labels (latitude and longitude, ideally with units when the variables need them), and (4) change the legend title to the number of stations reporting. Those are the bare minimum. But a very common move by a person who knows a little R/data analysis would be to leave that graph as it is and submit it directly. I know this from lots of experience.</p> <p>The one nice thing about teaching base R here is that the base version of this plot is either (a) a ton of work or (b) ugly. In either case, it makes the student think very hard about what they need to do to make the plot better, rather than just assuming it is ok.</p> <p><strong>Where ggplot2 is better for sure</strong></p> <p>ggplot2 is compatible with piping, has a simple system for theming, has a good animation package, and in general is an excellent platform for developers.</p> <p>Some of my colleagues think of me as super data-sciencey compared to other academic statisticians. But one place I lose tons of street cred in the data science community is when I talk about ggplot2.
For the 3 data-type people on the planet who still don’t know what that is, <a href="https://cran.r-project.org/web/packages/ggplot2/index.html">ggplot2</a> is an R package/phenomenon for data visualization. It was created by Hadley Wickham, who is (in my opinion) perhaps the most important statistician/data scientist on the planet. It is one of the best-maintained, most important, and most polished R packages, and Hadley supports R software like few other people on the planet.</p> <p>But I don’t use ggplot2, and I get nervous when other people do.</p> <p>I get no end of grief for this from <a href="https://soundcloud.com/nssd-podcast/episode-9-spreadsheet-drama">Hilary and Roger</a> and especially from <a href="https://twitter.com/drob/status/625682366913228800">drob</a>, among many others. So I thought I would explain why and defend myself from the internet hordes. To understand why I don’t use it, you have to understand the three cases where I use data visualization.</p> <ol> <li>When creating exploratory graphics - graphs that are fast to make, are not meant to be shown to anyone else, and help me explore a data set.</li> <li>When creating expository graphs - graphs that I want to put into a publication and that have to be very carefully made.</li> <li>When grading student data analyses.</li> </ol> <p>Let’s consider each case.</p> <p><strong>Exploratory graphs</strong></p> <p>Exploratory graphs don’t have to be pretty. I’m going to be the only one who looks at 99% of them. But I have to be able to make them <em>quickly</em> and I have to be able to make a <em>broad range of plots</em> <em>with minimal code</em>. There are many types of graphs, including things like heatmaps, that don’t fit neatly into the ggplot2 framework and are therefore challenging to make. The flexibility of base R comes at a price, but it means you can make all sorts of things you need without struggling against the system, which is a huge advantage for data analysts.
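</p> <p>To make the heatmap point concrete, here is a small sketch (my example, not from the original post; it assumes the ggplot2 package is installed). A quick exploratory heatmap of a matrix is a one-liner in base R, while ggplot2 first needs the matrix reshaped into a long data frame:</p> <pre><code>mat &lt;- matrix(rnorm(200), nrow = 20)

# Base R: plot the matrix directly, no reshaping needed
image(mat)

# ggplot2: melt the matrix into a long data frame first
library(ggplot2)
df &lt;- data.frame(row = as.vector(row(mat)),
                 col = as.vector(col(mat)),
                 value = as.vector(mat))
ggplot(df, aes(x = col, y = row, fill = value)) + geom_tile()</code></pre> <p>For a plot only the analyst will ever see, that extra reshaping step is pure overhead.</p> <p>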
There are some graphs (<a href="http://rafalab.dfci.harvard.edu/images/frontb300.png">like this one</a>) that are pretty straightforward in base, but require quite a bit of work in ggplot2. In many cases qplot can be used sort of interchangeably with plot, but then you really don’t get any of the advantages of the ggplot2 framework.</p> <p><strong>Expository graphs</strong></p> <p>When making graphs that are production ready or fit for publication, you can use any system: ggplot2, lattice, or base R graphics. But regardless of which system you use, it will require about the same amount of code to make a graph ready for publication. One perfect example of this is the <a href="http://motioninsocial.com/tufte/">comparison of different plotting systems</a> for creating Tufte-like graphs. To create this minimal barchart:</p> <p><img class="aligncenter" src="data:image/png;base64,
2dbwbiRbb59++mk4+eSTwyOPPDK8q8l1/gceeCDul/XY6CuvvBIuv/zyuKqnnnoq7LPPPuHnn3+ux6o7vI5LLrkkvP76620un627tmZ+8803wzHHHBN++umnllkp57XXXhvOOuuslmnd/YTvG/I6PGXr7jy7/e4TKOLxpfs08t9yQwdARxxx3+7eSQAADlFJREFURNhiiy3CCCOM0KbsX/7ylzD11FP/aV6mnXrqqeG3335rcx15zTDqqKOGJ598Mrz88su5bPKvf/1ruOaaa3LZ1vBuJFtvE088cXj++efDwIED42o++eST8MMPPwzvKrt8fvbHK6+8slPbeeedd+LyY4wxRphoooni8znmmCOce+658eDbqZV3cmHyQ77aStm6Y96333671UUmn3zy+EPkl19+aZmH/ZLPwm233dYyrbufsA9S9lplK+p+2d12zbj9Ih5f6lkPRd/XGyYA+vHHH8PgwYNj3dGKwwHigw8+iAfATTbZJP4q5sszzcOMn3/+eUtdc1DaeOONh2kB+vrrrwOtLaOMMkr8QmuZOYcn//nPf/4UdH300UexHOR1nHHGibngF3H2VzETK5fFg7ITDKRADq/KRFkr08cffxwPMnyh552++uqrltYMflFTH5T1//7v/+JzWgAq661nz54xm5Rl3XXXDV988UVcB8ullhEeswfSvMs17rjjtmxyyJAhLc/Tk2r1kOqeeY4++uhw0003xfJMNdVUYdFFF42LjjzyyLGu0np4rLau7Ptd8XyFFVYI448/fsuq2e8oJ/tesudzmK27F154Ieyyyy7DBKzZ/ZhAZ7TRRovr/Pbbb1vWPfbYY7c8T0+6o8xp2yuttFIYb7zxWi1bdr9Mn0X25/Sc9WSNeM6+y/7P55jn2XnTdrv6kTrkM/PNN98Ms6laeefzyueP7xqW5Tn5TylbT6yXfYQypu/dNN+XX34ZW+C7o9zZzx35IQ+fffZZylp8zB57mJCtvzRjpRPTUzm74/iS8pUeW/t+5f20/1EP2cR0Wt2z9ZItZ7V9Pbt8EZ7nf1TrglLT5N6vX7/Qv3//cN5558UDJM/5wN1xxx3h2WefDdNNN104/vjjwyKLLBJbCHbYYYfw/vvvh759+8Yc8QXMr8z0oTzttNMC62U9BAF5pq222ioQOe+7777htddei5s+5JBDYkC3/fbbt2TloYceCocffnjggJNaPSqX/fDDD8M888wTfz3vvffeYc455wyXXnppOOCAA8LKK68cv5z4IF5xxRXhoosuCqusskr0Y0febrvtwnPPPRe7azhY5ZnojqSraL755guUmYB2yy23jHmkO3K11VaLdVNZbymP1C3dMHfeeWc0nGKKKeL8vH/ccccF7Loz8cVx5JFHhvXWWy92Z5IXWq/OOeec8PDDD8dgILWIZOue/ZP3X3rppUD3F11+BHqVqVqdVs7TFa/5kqRurrrqqkCgQvn4jK2zzjrhzDPPHOZzSJdd+szRnfvee+8FultJlftxyivd0csvv3wg0KgM4rurzClvHBiXXXbZuM9l98ts2bL7JQf966+/Ptx6661hjTXWiF3adNtnv6v4DsKIHwMccHDJ/ohL2+6qR74HNtxww7D55psHWtTnn3/+WK9sr6280xXdq1ev+H3CQXKPPfaIdVZtP2ddGDDf6aefHrfHNvg+f+ONN8KBBx7Yrm5VlqlXyn7uWCffhXzfcjxZeumlY1BXeeyprD/8Kp1YV3ceX9h+ZSJArfb9OmDAgFiHhx56aDweLLDAAnFffPfdd+PnmTo+5ZRT4uoqy1m5r1duswivGyIAGmussULv3r1jRTE2hoh6ueWWi79C11xzzTDvvPMGfnUTKKRuo7XWWitMP/308SDLL00Cg5FGGinWCV+sHDj5RUrrEa0teQUA/AJacMEFw2yzzRa/8P7973/HR7o2yN/BBx/cst9QLj6QfClx4Ki2LF+ek002WVh88cXjh47uiWmnnTYGg6yIPmiC
H4JFyslBi3XRrUAeCJIwyDMRzN1///1h9dVXD1tvvXWgW4GDwgwzzBCzQV1Sd6RsvcUJ//s300wzxRaDtddeO9A1tOeee4bHH388vovjUkstlZ099+djjjlm2H///QNfsgSeJL5kqEsO8NjzJclBL1v3tILMPPPMgS+iueaaKz5Wa82qVqdxI138j7ph/yJRRuqHffD2228PO+200zCfQ8qaPnPsy5NMMkkMwKvtxynbu+66awwUOFDyGc2m7ipzygOfLQJtUna/zJYtu1/yPXPDDTfEAIGuEMYqzjLLLMMYEfDQysdngnpeeOGFW1p/03a78pFWRfY18sd3T/rhQQDfVt757uB7h+56WpD4Tua7utp+Pvvss8di9OjRI34GUpn4/uO7gB8LE0wwQZrc5Y+Vnzs2yIGeH6Uca9gPSZXHnsr6I3CtdOrO40vMdJV/1HO179cZZ5wx7m+bbrppuPrqq8M000wTbr755kCAT7k4pvIdW21/yO7r1Vpqq2Qj90kNEQDxRZoOFpW/CpMoX7SpEviCpgWASJ4vaT6cpBTk3HPPPS3zMp3m97wSeZh00knD2WefHejOIW/knYMgXygXXnhhS1b4siCNPvrocb5qy/I+3VdpXgzSc8pFwEMrGV+sG2ywQQwI+XV94403tnyZM1+yYX1dnThgDho0KH7IaBYniK2VauUtvUcrBK2EdIvSRdHdacQRR4xZoO7YZznoE3im4IH6uOuuu1qt+1Su1spRrU5bm7fe07N544s1dUuyneznkNfZedNzHis/A8xL4vPK+/wCp2Ugm7qzzCkf2a7iVB7eyz5Pr/meoZx87ji40vpHqjSi5faEE04It9xyS2wliTPl+I+8p/zzXUALVHvzvuOOO8Zy0ZLOD5rW9vPWinPUUUfFctMynbVtbf56Ta/8ziX4fPDBB2PdsI0+ffoEPsPVjj3Z+qvm1J3Hl474UPeMtyMttNBCsaWWY9Hf/va3QID07tDWoGrlTNtK+056XaTHhgiAiMb5cPHriw9YStnnaRqPRLIcdGgFINJPifn5m3XWWePOTl8uKfW/p/m68pEDP7+Q6PZhpyM/NO3TAkKXB1E3O1y2bOl5tWUr85rtr03v8SGm5YzEB51mTQzYqUmUv9py8c0u+MdBk9YRuhRohUu/DvlioUmZRD5Tnih/Mshmh0AvzcNBmAMNf+wr3Zmy+U3P+ZIg8KS7hERX0oorrli17jkQUCe1UrU6rTV/vd+rVh/VtpHKny1Te/Zjmtcru/66u8yV5atWNuZJ+yX55fOczjDlAFst8QOAX9y0XtMaWoTU3ryzD5Nvxlum4LXafp79bPO9m/Yfhh/wY5VufM6wyytVfucylIDjC2cdkjjTj89oa8eelM9qTt15fEn5qvaYrYPs92t2XvZDvj+ffvrpOJSAk2MI0KuVk+XSvp5dR5GeN0QARDMkB0zGRjB25r777otfLPxCpKIYR8P4AnZeDhwcYHnO2V00W9O6wnwccPkVSb81/aF0RXHqLV/OTM8j8auC5kR2Kg72NDfy4SMQoA+aJmgGmNKsTKBC9x3POXDypVG57Isvvhj7znmf9fBlRFn4YqF/ly9dzpSjK4zxNvvtt1+Ye+65w2abbRbN6JfnjCVs0piUrnbgi+Uf//hHOPHEE+NYAMaOkPjgMeaF8QC0jDGWgHEkqd4o+zPPPBNdeJ/xGHR9cbAk7bbbbvFLLA0gjxO74R/N+nRpEMjS6oMrdcM+zHt84dNaxa9nAvRs3dMdsthii8XLErAs3UCMdWJdTzzxROzKvPvuu6vWaR5FpQ74LLFf8Zz9jrFcPK/8HGY/c7R8UY+05JKYP/sZ4D1+sPCZZR/gBwFdu+zLfM75fFfbj/Moc7VttFY2Dixpv6Q1he4DDoiMSeR1pRHr5iDCvkCXZ96JYISuY743qAOe85mjvtqTd747t9lmm2G6nKvt53T58pmlDpMdQROXOGDMIvs9gVNeqdrnjh+mdAPS
+sH3DN29lccexgRljzW0kFQ6defxpZZfte/XFIhSLr6P6UHgBynHFS65wf7MsaJaOdlW2tfTd3Ct7XfHeyMMLeAfTSbdkYM6bZOKIHjgFz8furZSmp/it9ZERwBCsJTmbWud9XqfII0ykK9UHvLJL8W2Ti+utmx788UA23SmTVqGljJaovgSzivxBUhgR386ze10x3GAIPBLdZJc2spTmp/5+OKmPAQQRU4c/CeccMKW/bha3WfLVass1eq01vzd+R4HHfY19vvW9mMOksyTuhCr5bc7y8wPp2233Tb+eMrmLVs2pmfrrz35JTDkpATKXqTUnrxTl9W+P6rt5wRc1G36Dq+27+dV/mrbZhrlye5/6fhQ6zupmlPaB9LyeZWr1nZSnrJlIeChC5PvpHT8wYF8Mx/HyJRqlTPNU6TH/w5EKFKOOpiXtEOmD05bq0nztxb8sHyq2DRvW+us1/vZL4tUHvKZdr5a26m2bK35s+9VBj+8x4DyvBNnEzD2h19X6UsynVad6iS5tJU35qeFhF8v2BTponmt5b2ym6Na3SeH1taRpler0/Re0R5pgk+ptf2YVpK2UneUmSCdX/YcQGjNqEzZsvFetv5q5feioQPkadmji6FowQ/lqJV33idl6/K/U/77v9p+XulUbd/PrqMrn1fbNtMqjwfpda3vpGpOaR9Iy3dlWdq77pSnbFn40cFf9viDQ7X9sVY525uHPOdrmBagPNHcVtcK8MuC7j26hZZccsnY1dGZLRJMcd0cuvPacwDtzLZctjkF6Makq49uuqmHnjFVr8S4P7r16UowKZC3AF2fdLXT3UwXWZGCtXpYGADVQ9F1KKCAAgoooECpBNoeLFOq4phZBRRQQAEFFFCgbQEDoLaNnEMBBRRQQAEFGkzAAKjBKtTiKKCAAgoooEDbAgZAbRs5hwIKKKCAAgo0mIABUINVqMVRQAEFFFBAgbYFDIDaNnIOBRRQQAEFFGgwAQOgBqtQi6OAAgoooIACbQsYALVt5BwKKKCAAgoo0GACBkANVqEWRwEFFFBAAQXaFjAAatvIORRQQAEFFFCgwQT+Hxr6T8CDMgAiAAAAAElFTkSuQmCC" alt="" width="373" height="280" /></p> <p> </p> <p>The code they use in base graphics is this (super blurry sorry, you can also <a href="http://motioninsocial.com/tufte/">go to the website</a> for a better view).</p> <p><img class="aligncenter wp-image-4646" src="http://simplystatistics.org/wp-content/uploads/2016/02/Screen-Shot-2016-02-11-at-12.56.53-PM-300x82.png" alt="Screen Shot 2016-02-11 at 12.56.53 PM" width="483" height="132" srcset="http://simplystatistics.org/wp-content/uploads/2016/02/Screen-Shot-2016-02-11-at-12.56.53-PM-300x82.png 300w, http://simplystatistics.org/wp-content/uploads/2016/02/Screen-Shot-2016-02-11-at-12.56.53-PM-768x209.png 768w, http://simplystatistics.org/wp-content/uploads/2016/02/Screen-Shot-2016-02-11-at-12.56.53-PM-1024x279.png 1024w, http://simplystatistics.org/wp-content/uploads/2016/02/Screen-Shot-2016-02-11-at-12.56.53-PM-260x71.png 260w, 
http://simplystatistics.org/wp-content/uploads/2016/02/Screen-Shot-2016-02-11-at-12.56.53-PM.png 1248w" sizes="(max-width: 483px) 100vw, 483px" /></p> <p>in ggplot2 the code is:</p> <p><img class="aligncenter wp-image-4647" src="http://simplystatistics.org/wp-content/uploads/2016/02/Screen-Shot-2016-02-11-at-12.56.39-PM-300x73.png" alt="Screen Shot 2016-02-11 at 12.56.39 PM" width="526" height="128" srcset="http://simplystatistics.org/wp-content/uploads/2016/02/Screen-Shot-2016-02-11-at-12.56.39-PM-300x73.png 300w, http://simplystatistics.org/wp-content/uploads/2016/02/Screen-Shot-2016-02-11-at-12.56.39-PM-768x187.png 768w, http://simplystatistics.org/wp-content/uploads/2016/02/Screen-Shot-2016-02-11-at-12.56.39-PM-1024x249.png 1024w, http://simplystatistics.org/wp-content/uploads/2016/02/Screen-Shot-2016-02-11-at-12.56.39-PM-260x63.png 260w, http://simplystatistics.org/wp-content/uploads/2016/02/Screen-Shot-2016-02-11-at-12.56.39-PM.png 1334w" sizes="(max-width: 526px) 100vw, 526px" /></p> <p> </p> <p>Both require a significant amount of coding. The ggplot2 plot also takes advantage of the ggthemes package, which means that, without that package, this specific plot would require even more coding.</p> <p>The bottom line is that for production graphics, any system requires work. So why do I still use base R like an old person? Because I learned all the stupid little tricks for that system, it was a huge pain, and it would be a huge pain to learn it again for ggplot2, to make very similar types of plots. This is one case where neither system is particularly better, but the time-optimal solution is to stick with whichever system you learned first.</p> <p><strong>Grading student work</strong></p> <p>People I seriously respect suggest teaching ggplot2 before base graphics as a way to get people up and going quickly making pretty visualizations.
This is a good solution to the <a href="http://simplystatistics.org/2014/08/13/swirl-and-the-little-data-scientists-predicament/">little data scientist’s predicament</a>. The tricky thing is that the defaults in ggplot2 are just pretty enough that they might trick you into thinking the graph is production ready. Say, for example, you make a plot of the latitude and longitude of <a href="https://stat.ethz.ch/R-manual/R-devel/library/datasets/html/quakes.html">quakes</a> data in R, colored by the number of stations reporting. This is one case where ggplot2 crushes base R for simplicity because of the automated generation of a color scale. You can make this plot with just the line:</p> <p>ggplot() + geom_point(data=quakes,aes(x=lat,y=long,colour=stations))</p> <p>And get this out:</p> <p><img class="aligncenter wp-image-4649" src="http://simplystatistics.org/wp-content/uploads/2016/02/quakes-300x264.png" alt="quakes" width="420" height="370" srcset="http://simplystatistics.org/wp-content/uploads/2016/02/quakes-300x264.png 300w, http://simplystatistics.org/wp-content/uploads/2016/02/quakes-227x200.png 227w, http://simplystatistics.org/wp-content/uploads/2016/02/quakes.png 627w" sizes="(max-width: 420px) 100vw, 420px" /></p> <p>That is a pretty amazing plot in one line of code! What often happens with students in a first serious data analysis class is they think that plot is done. But it isn’t even close. Here are a few things you would need to do to make this plot production ready: (1) make the axes bigger, (2) make the labels bigger, (3) make the labels be full names (latitude and longitude, ideally with units when variables need them), (4) make the legend title be “number of stations reporting”. Those are the bare minimum. But a very common move by a person who knows a little R/data analysis would be to leave that graph as it is and submit it directly.
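To make the gap concrete, here is a sketch of what those bare-minimum fixes might look like in ggplot2 (the exact sizes and label wording are my choices, not anything canonical):

```r
library(ggplot2)

# The one-liner above, plus the bare-minimum fixes:
# bigger axis text, full-name labels with units, and a readable legend title
p <- ggplot(quakes, aes(x = lat, y = long, colour = stations)) +
  geom_point() +
  labs(x = "Latitude (degrees)",
       y = "Longitude (degrees)",
       colour = "Number of stations\nreporting") +
  theme(axis.title = element_text(size = 16),
        axis.text = element_text(size = 14),
        legend.title = element_text(size = 14))
print(p)
```

Even this version would need another look before publication; the point is that the pretty default is a starting point, not the finish line.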
I know this from lots of experience.</p> <p>The one nice thing about teaching base R here is that the base version for this plot is either (a) a ton of work or (b) ugly. In either case, it makes the student think very hard about what they need to do to make the plot better, rather than just assuming it is ok.</p> <p><strong>Where ggplot2 is better for sure</strong></p> <p>ggplot2 being compatible with piping, having a simple system for theming, having a good animation package, and in general being an excellent platform for <a href="https://ggplot2-exts.github.io/index.html">developers who create extensions</a> are all huge advantages. It is also great for getting absolute newbies up and making medium-quality graphics in a huge hurry. This is a great way to get more people engaged in data science and I’m psyched about the reach and power ggplot2 has had. Still, I probably won’t use it for my own work, even though it disappoints my data scientist friends.</p> Data handcuffs 2016-02-10T15:38:37+00:00 http://simplystats.github.io4643 <p>A few years ago, if you asked me what the top skills I got asked about for students going into industry, I’d definitely have said things like data cleaning, data transformation, database pulls, and other non-traditional statistical tasks.
But as companies have progressed from the point of storing data to actually wanting to do something with it, I would say one of the hottest skills is understanding and dealing with data from randomized trials.</p> <p>In particular I see data scientists talking more about <a href="https://medium.com/@InVisionApp/a-b-and-see-a-beginner-s-guide-to-a-b-testing-a16406f1a239#.p7hoxirwo">A/B testing</a>, <a href="http://varianceexplained.org/r/bayesian-ab-testing/">sequential stopping rules</a>, <a href="https://twitter.com/hspter/status/696820603945414656">hazard regression</a> and other ideas that are really common in Biostatistics, which has traditionally focused on the analysis of data from designed experiments in biology.</p> <p>I think it is great that companies are choosing to do experiments, as this <a href="http://simplystatistics.org/2013/07/15/yes-clinical-trials-work/">still remains</a> the gold standard for how to generate knowledge about causal effects. One interesting new development, though, is the extreme lengths it appears some organizations are going to in order to be “data-driven”. They make all decisions based on data they have collected or experiments they have performed.</p> <p>But data mostly tell you about small-scale effects and things that happened in the past. To be able to make big discoveries/improvements requires (a) having creative ideas that are not data supported and (b) trying them in experiments to see if they work. If you get too caught up in experimenting on the same set of conditions you will inevitably asymptote to a maximum and quickly reach diminishing returns. This is where the data handcuffs come in.
Data can only tell you about the conditions that existed in the past; they often can’t predict conditions in the future or ideas that may or may not work out.</p> <p>In an interesting parallel to academic research, a good strategy appears to be: (a) trying a bunch of things, including some things that have only a pretty modest chance of success, (b) doing experiments early and often when trying those things, and (c) getting very good at recognizing failure quickly and moving on to ideas that will be fruitful. The challenge in part (a) is that it is often difficult to generate really new ideas, especially if you are already doing something that has had any level of success. There will be extreme pressure not to change what you are doing. The challenge in part (c) is that if you discard ideas too quickly you might miss a big opportunity, but if you don’t discard them quickly enough you will sink a lot of time/cost into ultimately not very fruitful projects.</p> <p>Regardless, almost all of the most <a href="http://simplystatistics.org/2013/09/25/is-most-science-false-the-titans-weigh-in/">interesting projects</a> I’ve worked on in my life were not driven by data that suggested they would be successful. They were often risks where the data either wasn’t in, or the data supported not doing them at all. But as a statistician I decided to straight up ignore the data and try anyway. Then again, these ideas have also been the sources of <a href="http://simplystatistics.org/2012/01/11/healthnewsrater/">my biggest flameouts</a>.</p> Leek group guide to reading scientific papers 2016-02-09T13:59:53+00:00 http://simplystats.github.io4640 <p>The other day on Twitter Amelia requested a guide for reading papers</p> <blockquote class="twitter-tweet" data-width="550"> <p lang="en" dir="ltr"> I love <a href="https://twitter.com/jtleek">@jtleek</a>’s github guides to reviewing papers, writing R packages, giving talks, etc. Would love one on reading papers, for students.
</p> <p> &mdash; Amelia McNamara (@AmeliaMN) <a href="https://twitter.com/AmeliaMN/status/695633602751635456">February 5, 2016</a> </p> </blockquote> <p> </p> <p>So I came up with a guide, which you can find here: <a href="https://github.com/jtleek/readingpapers">Leek group guide to reading papers</a>. This actually turned out to be the guide I had the hardest time writing. I described how I tend to read a paper, but I’m not sure that is really the optimal (or even a very good) way. I’d really appreciate pull requests if you have ideas on how to improve the guide.</p> A menagerie of messed up data analyses and how to avoid them 2016-02-01T13:39:57+00:00 http://simplystats.github.io4612 <p><em>Update: I realize this may seem like I’m picking on people. I really don’t mean to; I have for sure made all of these mistakes and many more. I can give many examples, but the one I always remember is the time Rafa saved me from “I got a big one here” when I made a huge mistake as a first-year assistant professor.</em></p> <p>In any introductory statistics or data analysis class they might teach you the basics: how to load a data set, how to munge it, how to do t-tests, maybe how to write a report. But there are a whole bunch of ways that a data analysis can be screwed up that often get skipped over. Here is my first crack at creating a “menagerie” of messed up data analyses and how you can avoid them.
Depending on interest I could probably list a ton more, but as always I’m doing the non-comprehensive list :).</p> <p> </p> <p> </p> <p><span style="text-decoration: underline;"><strong><img class="alignleft wp-image-4613" src="http://simplystatistics.org/wp-content/uploads/2016/02/direction411.png" alt="direction411" width="125" height="125" srcset="http://simplystatistics.org/wp-content/uploads/2016/02/direction411-200x200.png 200w, http://simplystatistics.org/wp-content/uploads/2016/02/direction411.png 256w" sizes="(max-width: 125px) 100vw, 125px" />Outcome switching</strong></span></p> <p><em>What it is:</em> Outcome switching is where you collect data looking at, say, the relationship between exercise and blood pressure. Once you have the data, you realize that blood pressure isn’t really related to exercise. So you change the outcome and ask if HDL levels are related to exercise, and you find a relationship. It turns out that when you do this kind of switch you have now biased your analysis, because you would have just stopped if you had found the original relationship.</p> <p style="text-align: left;"> <em>An example: </em><a href="http://www.vox.com/2015/12/29/10654056/ben-goldacre-compare-trials">In this article</a> they discuss how Paxil, an anti-depressant, was originally studied for several main outcomes, none of which showed an effect - but some of the secondary outcomes did. So they switched the outcome of the trial and used this result to market the drug. </p> <p style="text-align: left;"> <em>What you can do: </em>Pre-specify your analysis plan, including which outcomes you want to look at. Then very clearly state when you are analyzing a primary outcome or a secondary analysis. That way people know to take the secondary analyses with a grain of salt. You can even get paid to pre-specify with the OSF's <a href="https://cos.io/prereg/">pre-registration challenge</a>.
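The size of the bias from outcome switching is easy to see in a little simulation (my own illustration, not from the article above): suppose each study measures 20 unrelated null outcomes and reports only whichever one looks best.

```r
set.seed(42)

# 1,000 simulated studies; each tests 20 outcomes with NO real effects,
# then reports only the smallest p-value (outcome switching)
best_p <- replicate(1000, min(replicate(20, t.test(rnorm(30))$p.value)))

# The nominal false positive rate is 5%, but with 20 chances to "win"
# the switched-outcome rate is about 1 - 0.95^20, or roughly 64%
mean(best_p < 0.05)
```

Pre-specifying the primary outcome is what keeps you at the nominal 5%.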
</p> <p><img class="alignleft wp-image-4618" src="http://simplystatistics.org/wp-content/uploads/2016/02/direction398-300x300.png" alt="direction398" width="125" height="125" srcset="http://simplystatistics.org/wp-content/uploads/2016/02/direction398-300x300.png 300w, http://simplystatistics.org/wp-content/uploads/2016/02/direction398-200x200.png 200w, http://simplystatistics.org/wp-content/uploads/2016/02/direction398.png 512w" sizes="(max-width: 125px) 100vw, 125px" /></p> <p><span style="text-decoration: underline;"><strong>Garden of forking paths</strong></span></p> <p><em>What it is:</em> In this case you may or may not have specified your outcome and stuck with it. Let’s assume you have, so you are still looking at blood pressure and exercise. But it turns out a bunch of people had apparently erroneous measures of blood pressure. So you dropped those measurements and did the analysis with the remaining values. This is a totally sensible thing to do, but if you didn’t specify in advance how you would handle bad measurements, you can make a bunch of different choices here (the forking paths). You could drop them, impute them, multiply impute them, weight them, etc. Each of these gives a different result, and you can accidentally pick the one that works best even if you are being “sensible.”</p> <p><em>An example</em>: <a href="http://www.stat.columbia.edu/~gelman/research/unpublished/p_hacking.pdf">This article</a> gives several examples of the forking paths. One is where authors report that at peak fertility women are more likely to wear red or pink shirts. They made several inclusion/exclusion choices (which women to include in which comparison group) that could easily have gone a different direction or were against stated rules.</p> <p><em>What you can do:</em> Pre-specify every part of your analysis plan, down to which observations you are going to drop, transform, etc.
To be honest this is super hard to do because almost every data set is messy in a unique way. So the best thing here is to point out steps in your analysis where you made a choice that wasn’t pre-specified and you could have made differently. Or, even better, try some of the different choices and make sure your results aren’t dramatically different.</p> <p> </p> <p><strong><img class="alignleft wp-image-4621" src="http://simplystatistics.org/wp-content/uploads/2016/02/emoticon149.png" alt="emoticon149" width="125" height="125" srcset="http://simplystatistics.org/wp-content/uploads/2016/02/emoticon149-200x200.png 200w, http://simplystatistics.org/wp-content/uploads/2016/02/emoticon149.png 256w" sizes="(max-width: 125px) 100vw, 125px" /><span style="text-decoration: underline;">P-hacking</span></strong></p> <p><em>What it is:</em> The nefarious cousin of the garden of forking paths. Basically here the person outcome switches, uses the garden of forking paths, intentionally doesn’t correct for multiple testing, or uses any of these other means to cheat and get a result that they like.</p> <p><em>An example:</em> This one gets talked about a lot and there is <a href="http://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1002106">some evidence that it happens</a>. But it is usually pretty hard to ascribe purely evil intentions to people and I’d rather not point the finger here. I think that often the garden of forking paths results in just as bad an outcome without people having to try.</p> <p><em>What to do:</em> Know how to do an analysis well and don’t cheat.</p> <p><em>Update: </em> Some <a href="http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2649230">definitions of p-hacking include</a> “when honest researchers face ambiguity about what analyses to run, and convince themselves those leading to better results are the correct ones (see e.g., Gelman &amp; Loken, 2014; John, Loewenstein, &amp; Prelec, 2012; Simmons, Nelson, &amp; Simonsohn, 2011; Vazire, 2015).” This coincides with the definition of “garden of forking paths”. I have been asked to point this out <a href="https://twitter.com/talyarkoni/status/694576205089996800">on Twitter.</a> It was never my intention to accuse anyone of accusing people of fraud. That being said, I still think that the connotation that many people think of when they think “p-hacking” corresponds to my definition above, although I agree with folks that it isn’t helpful - which is why I prefer we call the non-nefarious version the garden of forking paths.</p> <p> </p> <p><strong><img class="alignleft wp-image-4623" src="http://simplystatistics.org/wp-content/uploads/2016/02/paypal15.png" alt="paypal15" width="125" height="125" srcset="http://simplystatistics.org/wp-content/uploads/2016/02/paypal15-200x200.png 200w, http://simplystatistics.org/wp-content/uploads/2016/02/paypal15.png 256w" sizes="(max-width: 125px) 100vw, 125px" /><span style="text-decoration: underline;">Uncorrected multiple testing </span></strong></p> <p><em>What it is:</em> This one is related to the garden of forking paths and outcome switching. Most statistical methods for measuring the potential for error assume you are only evaluating one hypothesis at a time. But in reality you might be measuring a ton either on purpose (in a big genomics or neuroimaging study) or accidentally (because you consider a bunch of outcomes).
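You can recreate the problem with pure noise; here is a quick base R sketch (my own illustration, not from the fMRI paper) of both the inflation and the standard fix:

```r
set.seed(2016)

# 10,000 tests (think voxels) where the null is true in every single one
pvals <- replicate(10000, t.test(rnorm(20))$p.value)

# Uncorrected: about 5% of them, roughly 500, come out "significant" by chance
sum(pvals < 0.05)

# Benjamini-Hochberg correction (p.adjust also supports "bonferroni", etc.)
sum(p.adjust(pvals, method = "BH") < 0.05)
```

After correction the number of discoveries drops to essentially zero, which is the right answer here since there is no signal at all.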
In either case, the expected error rate changes a lot if you consider many hypotheses.</p> <p><em>An example: </em> The <a href="http://users.stat.umn.edu/~corbett/classes/5303/Bennett-Salmon-2009.pdf">most famous example</a> is when someone did an fMRI on a dead fish and showed that there were a bunch of significant regions at the P &lt; 0.05 level. The reason is that there is natural variation in the background of these measurements, and if you consider each pixel independently, ignoring that you are looking at a bunch of them, a few will have P &lt; 0.05 just by chance.</p> <p><em>What you can do</em>: Correct for multiple testing. When you calculate a large number of p-values, make sure you <a href="http://varianceexplained.org/statistics/interpreting-pvalue-histogram/">know what their distribution</a> is expected to be and you use a method like Bonferroni, Benjamini-Hochberg, or q-value to correct for multiple testing.</p> <p> </p> <p><strong><img class="alignleft wp-image-4625" src="http://simplystatistics.org/wp-content/uploads/2016/02/animal162.png" alt="animal162" width="125" height="125" srcset="http://simplystatistics.org/wp-content/uploads/2016/02/animal162-200x200.png 200w, http://simplystatistics.org/wp-content/uploads/2016/02/animal162.png 256w" sizes="(max-width: 125px) 100vw, 125px" /><span style="text-decoration: underline;">I got a big one here</span></strong></p> <p><em>What it is:</em> One of the most painful experiences for all new data analysts. You collect data and discover a huge effect. You are super excited, so you write it up and submit it to one of the best journals or convince your boss to bet the farm. The problem is that huge effects are incredibly rare and are usually due to some combination of experimental artifacts, biases, or mistakes in the analysis. Almost no effects you detect with statistics are huge.
Even the relationship between smoking and cancer is relatively weak in observational studies and requires very careful calibration and analysis.</p> <p><em>An example:</em> <a href="http://www.ncbi.nlm.nih.gov/pubmed/17206142">In a paper</a> the authors claimed that 78% of genes were differentially expressed between Asians and Europeans. But it turns out that most of the Asian samples were measured in one batch and the Europeans in another, and <a href="http://www.ncbi.nlm.nih.gov/pubmed/17597765">batch effects explained</a> a large fraction of these differences.</p> <p><em>What you can do</em>: Be deeply suspicious of big effects in data analysis. If you find something huge and counterintuitive, especially in a well-established research area, spend <em>a lot</em> of time trying to figure out why it could be a mistake. If you don’t, others definitely will, and you might be embarrassed.</p> <p><span style="text-decoration: underline;"><strong><img class="alignleft wp-image-4632" src="http://simplystatistics.org/wp-content/uploads/2016/02/man298.png" alt="man298" width="125" height="125" srcset="http://simplystatistics.org/wp-content/uploads/2016/02/man298-200x200.png 200w, http://simplystatistics.org/wp-content/uploads/2016/02/man298.png 256w" sizes="(max-width: 125px) 100vw, 125px" />Double complication</strong></span></p> <p><em>What it is</em>: When faced with a large and complicated data set, beginning analysts often feel compelled to use a big complicated method. Imagine you have collected data on thousands of genes or hundreds of thousands of voxels and you want to use this data to predict some health outcome. There is a severe temptation to use deep learning or to blend random forests, boosting, and five other methods to perform the prediction. 
The problem is that complicated methods fail for complicated reasons, which will be extra hard to diagnose if you have a really big, complicated data set.</p> <p><em>An example:</em> There are a large number of examples where people use very small training sets and complicated methods. One example (there were many other problems with this analysis, too) is when people <a href="http://www.nature.com/nm/journal/v12/n11/full/nm1491.html">tried to use complicated prediction algorithms</a> to predict which chemotherapy would work best using genomics. Ultimately this paper was retracted for many problems, but the complication of the methods plus the complication of the data made those problems hard to detect.</p> <p><em>What you can do:</em> When faced with a big, messy data set, try simple things first. Use linear regression, make simple scatterplots, check to see if there are obvious flaws with the data. If you must use a really complicated method, ask yourself if there is a reason it is outperforming the simple methods, because often with large data sets <a href="http://arxiv.org/pdf/math/0606441.pdf">even simple things work</a>.</p> <p> </p> <p><span style="text-decoration: underline;"><strong>Image credits:</strong></span></p> <ul> <li>Outcome switching. Icon made by <a href="http://hananonblog.wordpress.com" title="Hanan">Hanan</a> from <a href="http://www.flaticon.com" title="Flaticon">www.flaticon.com</a> is licensed under <a href="http://creativecommons.org/licenses/by/3.0/" title="Creative Commons BY 3.0">CC BY 3.0</a></li> <li>Forking paths. 
Icon made by <a href="http://iconalone.com" title="Popcic">Popcic</a> from <a href="http://www.flaticon.com" title="Flaticon">www.flaticon.com</a> is licensed under <a href="http://creativecommons.org/licenses/by/3.0/" title="Creative Commons BY 3.0">CC BY 3.0</a></li> <li>P-hacking. Icon made by <a href="http://www.icomoon.io" title="Icomoon">Icomoon</a> from <a href="http://www.flaticon.com" title="Flaticon">www.flaticon.com</a> is licensed under <a href="http://creativecommons.org/licenses/by/3.0/" title="Creative Commons BY 3.0">CC BY 3.0</a></li> <li>Uncorrected multiple testing. Icon made by <a href="http://www.freepik.com" title="Freepik">Freepik</a> from <a href="http://www.flaticon.com" title="Flaticon">www.flaticon.com</a> is licensed under <a href="http://creativecommons.org/licenses/by/3.0/" title="Creative Commons BY 3.0">CC BY 3.0</a></li> <li>Big one here. Icon made by <a href="http://www.freepik.com" title="Freepik">Freepik</a> from <a href="http://www.flaticon.com" title="Flaticon">www.flaticon.com</a> is licensed under <a href="http://creativecommons.org/licenses/by/3.0/" title="Creative Commons BY 3.0">CC BY 3.0</a></li> <li>Double complication. Icon made by <a href="http://www.freepik.com" title="Freepik">Freepik</a> from <a href="http://www.flaticon.com" title="Flaticon">www.flaticon.com</a> is licensed under <a href="http://creativecommons.org/licenses/by/3.0/" title="Creative Commons BY 3.0">CC BY 3.0</a></li> </ul> Exactly how risky is breathing? 2016-01-26T09:58:23+00:00 http://simplystats.github.io4607 <p>This <a href="http://nyti.ms/23nysp5">article by George Johnson</a> in the NYT describes a study by Kamen P. Simonov and Daniel S. Himmelstein that examines the hypothesis that people living at higher altitudes experience lower rates of lung cancer than people living at lower altitudes.</p> <blockquote> <p>All of the usual caveats apply. 
Studies like this, which compare whole populations, can be used only to suggest possibilities to be explored in future research. But the hypothesis is not as crazy as it may sound. Oxygen is what energizes the cells of our bodies. Like any fuel, it inevitably spews out waste — a corrosive exhaust of substances called “free radicals,” or “reactive oxygen species,” that can mutate DNA and nudge a cell closer to malignancy.</p> </blockquote> <p>I’m not so much focused on the science itself, which is perhaps intriguing, but rather on the way the article was written. First, George Johnson links to the <a href="https://peerj.com/articles/705/">paper</a> itself, <a href="http://simplystatistics.org/2015/01/15/how-to-find-the-science-paper-behind-a-headline-when-the-link-is-missing/">already a major victory</a>. Also, I thought he did a very nice job of laying out the complexity of doing a population-level study like this one–all the potential confounders, selection bias, negative controls, etc.</p> <p>I remember particulate matter air pollution epidemiology used to have this feel. You’d try to do all these different things to make the effect go away, but for some reason, under every plausible scenario, in almost every setting, there was always some association between air pollution and health outcomes. Eventually you start to believe it….</p> On research parasites and internet mobs - let's try to solve the real problem. 
2016-01-25T14:34:08+00:00 http://simplystats.github.io4602 <p>A couple of days ago one of the editors of the New England Journal of Medicine <a href="http://www.nejm.org/doi/full/10.1056/NEJMe1516564">posted an editorial</a> showing some moderate level of support for data sharing but also introducing the term “research parasite”:</p> <blockquote> <p>A second concern held by some is that a new class of research person will emerge — people who had nothing to do with the design and execution of the study but use another group’s data for their own ends, possibly stealing from the research productivity planned by the data gatherers, or even use the data to try to disprove what the original investigators had posited. There is concern among some front-line researchers that the system will be taken over by what some researchers have characterized as “research parasites.”</p> </blockquote> <p>While this is obviously the most inflammatory statement in the article, I think that there are several more important and overlooked misconceptions. The biggest problems are:</p> <ol> <li><strong>“</strong><strong>The first concern is that someone not involved in the generation and collection of the data may not understand the choices made in defining the parameters.</strong><strong>“ </strong>This almost certainly would be the fault of the investigators who published the data. 
If the authors adhere to good <a href="https://github.com/jtleek/datasharing">data sharing</a> policies and respond to queries from people using their data promptly, then this should not be a problem at all.</li> <li><strong>“… but use another group’s data for their own ends, possibly stealing from the research productivity planned by the data gatherers, or even use the data to try to disprove what the original investigators had posited.” </strong>The idea that no one should be able to try to disprove ideas with the authors’ data has been covered in other blogs/on Twitter. One thing I do think is worth considering here is the concern about credit. 
I think that the traditional way credit has accrued to authors has been citations. But if you get a major study funded, say for 50 million dollars, run that study carefully, sit on a million conference calls, and end up with a single major paper, that could be frustrating. Which is why I think that a better policy would be to have the people who run massive studies get credit in a way that <em>is not papers</em>. They should get some kind of formal administrative credit. But then the data should be immediately and publicly available to anyone to publish on. That allows people who run massive studies to get credit and science to proceed normally.</li> <li><strong>“</strong><strong>The new investigators arrived on the scene with their own ideas and worked symbiotically, rather than parasitically, with the investigators holding the data, moving the field forward in a way that neither group could have done on its own.” </strong> The story that follows about a group of researchers who collaborated with the NSABP to validate their gene expression signature is very encouraging. But it isn’t the only way science should work. Researchers shouldn’t be constrained to one model or another. Sometimes collaboration is necessary, sometimes it isn’t, but in neither case should we label the researchers “symbiotic” or “parasitic”, terms that have extreme connotations.</li> <li><strong>“How would data sharing work best? We think it should happen symbiotically, not parasitically.”</strong> I think that it should happen <em>automatically</em>. If you generate a data set with public funds, you should be required to immediately make it available to researchers in the community. But you should <em>get credit for generating the data set and the hypothesis that led to the data set</em>. The problem is that people who generate data will almost never be as fast at analyzing it as people who know how to analyze data. 
But both deserve credit, whether they are working together or not.</li> <li><strong>“Start with a novel idea, one that is not an obvious extension of the reported work. Second, identify potential collaborators whose collected data may be useful in assessing the hypothesis and propose a collaboration. Third, work together to test the new hypothesis. Fourth, report the new findings with relevant coauthorship to acknowledge both the group that proposed the new idea and the investigative group that accrued the data that allowed it to be tested.”</strong> The trouble with this framework is that it preferentially accrues credit to data generators and doesn’t accurately describe the role of either party. To flip this argument around, you could just as easily say that anyone who uses <a href="http://salzberg-lab.org/">Steven Salzberg</a>’s software for aligning or assembling short reads should make him a co-author. I think Dr. Drazen would agree that not everyone who aligned reads should add Steven as co-author, despite his contribution being critical for the completion of their work.</li> </ol> <p>After the piece was posted there was predictable internet rage from <a href="https://twitter.com/dataparasite">data parasites</a>, a <a href="https://twitter.com/hashtag/researchparasite?src=hash">dedicated hashtag</a>, and half a dozen angry blog posts written about the piece. These inspired a <a href="http://www.nejm.org/doi/full/10.1056/NEJMe1601087">follow up piece</a> from Drazen. I recognize why these folks were upset - the “research parasites” thing was unnecessarily inflammatory. 
But <a href="http://simplystatistics.org/2014/03/05/plos-one-i-have-an-idea-for-what-to-do-with-all-your-profits-buy-hard-drives/">I also sympathize with data creators</a> who are also subject to a tough environment - particularly when they are junior scientists.</p> <p>I think the response to the internet outrage also misses the mark and comes off as a defense of people with angry perspectives on data sharing. I would have much rather seen a more pro-active approach from a leading journal of medicine. I’d like to see something that acknowledges different contributions appropriately and doesn’t slow down science. Something like:</p> <ol> <li>We will require all data, including data from clinical trials, to be made public immediately on publication as long as it poses minimal risk to the patients involved or the patients have been consented to broad sharing.</li> <li>When data are not made publicly available they are still required to be deposited with a third party such as the NIH or Figshare to be held available for request from qualified/approved researchers.</li> <li>We will require that all people who use data give appropriate credit to the original data generators in terms of data citations.</li> <li>We will require that all people who use software/statistical analysis tools give credit to the original tool developers in terms of software citations.</li> <li>We will include a new designation for leaders of major data collection or software generation projects that can be included to demonstrate credit for major projects undertaken and completed.</li> <li>When reviewing papers written by experimentalists with no statistical/computational co-authors we will require no fewer than 2 statistical/computational referees to ensure there has not been a mistake made by inexperienced researchers.</li> <li>When reviewing papers written by statistical/computational authors with no experimental co-authors we will require no fewer than 2 experimental referees to ensure there 
has not been a mistake made by inexperienced researchers.</li> </ol> <p> </p> Not So Standard Deviations Episode 8 - Snow Day 2016-01-24T21:41:44+00:00 http://simplystats.github.io4596 <p>Hilary and I were snowed in over the weekend, so we recorded Episode 8 of Not So Standard Deviations. In this episode, Hilary and I talk about how to get your foot in the door with data science, the New England Journal’s view on data sharing, Google’s “Cohort Analysis”, and trying to predict a movie’s box office returns based on the movie’s script.</p> <p><a href="https://itunes.apple.com/us/podcast/not-so-standard-deviations/id1040614570">Subscribe to the podcast on iTunes</a>.</p> <p>Follow <a href="https://twitter.com/nssdeviations">@NSSDeviations</a> on Twitter!</p> <p>Show notes:</p> <ul> <li><a href="http://goo.gl/eUU2AK">Remembrances of Peter Hall</a></li> <li><a href="http://goo.gl/HbMu87">Research Parasites</a> (NEJM editorial by Dan Longo and Jeffrey Drazen)</li> <li>Amazon <a href="http://goo.gl/83DvvO">review/data analysis</a> of Fifty Shades of Grey</li> <li><a href="https://youtu.be/55psWVYSbrI">Time-lapse cats</a></li> <li><a href="https://getpocket.com">Pocket</a></li> </ul> <p>Apologies for my audio on this episode. I had a bit of a problem calibrating my microphone. I promise to figure it out for the next episode!</p> <p><a href="https://api.soundcloud.com/tracks/243634673/download?client_id=02gUJC0hH2ct1EGOcYXQIzRFU91c72Ea&amp;oauth_token=1-138878-174789515-deb24181d01af">Download the audio for this episode</a>.</p> <p> </p> Parallel BLAS in R 2016-01-21T11:53:07+00:00 http://simplystats.github.io4593 <p>I’m working on a new chapter for my R Programming book and the topic is parallel computation. So, I was happy to see this tweet from David Robinson (@drob) yesterday:</p> <blockquote class="twitter-tweet" lang="en"> <p dir="ltr" lang="en"> How fast is this <a href="https://twitter.com/hashtag/rstats?src=hash">#rstats</a> code? 
x &lt;- replicate(5e3, rnorm(5e3)) x %*% t(x) For me, w/Microsoft R Open, 2.5sec. Wow. <a href="https://t.co/0SbijNxxVa">https://t.co/0SbijNxxVa</a> </p> <p> — David Robinson (@drob) <a href="https://twitter.com/drob/status/689916280233562112">January 20, 2016</a> </p> </blockquote> <p>What does this have to do with parallel computation? Briefly, the code generates 5,000 standard normal random variates, repeats this 5,000 times and stores them in a 5,000 x 5,000 matrix (x). Then it computes x x’. The second part is key, because it involves a matrix multiplication.</p> <p>Matrix multiplication in R is handled, at a very low level, by the library that implements the Basic Linear Algebra Subroutines, or BLAS. The stock R that you download from CRAN comes with what’s known as a reference implementation of BLAS. It works, it produces what everyone agrees are the right answers, but it is in no way optimized. Here’s what I get when I run this code on my Mac using RStudio and the CRAN version of R for Mac OS X:</p> <pre>system.time({ x &lt;- replicate(5e3, rnorm(5e3)); tcrossprod(x) })
   user  system elapsed
 59.622   0.314  59.927
</pre> <p>Note that the “user” time and the “elapsed” time are roughly the same. Note also that I use the tcrossprod() function instead of the otherwise equivalent expression x %*% t(x). Both crossprod() and tcrossprod() are generally faster than using the %*% operator.</p> <p>Now, when I run the same code on my built-from-source version of R (version 3.2.3), here’s what I get:</p> <pre>system.time({ x &lt;- replicate(5e3, rnorm(5e3)); tcrossprod(x) })
   user  system elapsed
 14.378   0.276   3.344
</pre> <p>Overall, it’s faster when I don’t run the code through RStudio (14s vs. 59s). Also on this version the elapsed time is about 1/4 the user time. Why is that?</p> <p>The built-from-source version of R is linked to Apple’s Accelerate framework, which is a large library that includes an optimized BLAS library for Intel chips.
This optimized BLAS, in addition to being optimized with respect to the code itself, is designed to be multi-threaded so that it can split work off into chunks and run them in parallel on multi-core machines. Here, the tcrossprod() function was run in parallel on my machine, and so the elapsed time was about a quarter of the time that was “charged” to the CPU(s).</p> <p>David’s tweet indicated that when using Microsoft R Open, a custom-built binary of R, the (I assume) elapsed time is 2.5 seconds. Looking at the attached link, it appears that Microsoft’s R Open is linked against <a href="https://software.intel.com/en-us/intel-mkl">Intel’s Math Kernel Library</a> (MKL), which contains, among other things, an optimized BLAS for Intel chips. I don’t know what kind of computer David was running on, but assuming it was as high-powered as mine, it would suggest Intel’s MKL sees slightly better performance. But either way, both Accelerate and MKL achieve that speedup through custom coding of the BLAS routines and multi-threading on multi-core systems.</p> <p>If you’re going to be doing any linear algebra in R (and you will), it’s important to link to an optimized BLAS. Otherwise, you’re just wasting time unnecessarily. Besides Accelerate (Mac) and Intel MKL, there’s AMD’s <a href="http://developer.amd.com/tools-and-sdks/archive/amd-core-math-library-acml/">ACML</a> library for AMD chips and the <a href="http://math-atlas.sourceforge.net">ATLAS</a> library, which is a general-purpose tunable library.
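</p> <p>Whichever BLAS you end up linked against, the comparison above is easy to try yourself. Here is a small sketch, shrunk to a 500 x 500 matrix (rather than the 5,000 x 5,000 one above) so it finishes quickly even with the reference BLAS; the elapsed times you see will depend entirely on which BLAS your R is linked against:</p> <pre>## tcrossprod(x) computes x %*% t(x) without forming t(x) explicitly;
## the two results should agree up to floating point error.
set.seed(1)
x &lt;- replicate(500, rnorm(500))

t_slow &lt;- system.time(a &lt;- x %*% t(x))   # explicit transpose, then multiply
t_fast &lt;- system.time(b &lt;- tcrossprod(x))

stopifnot(isTRUE(all.equal(a, b)))  # same answer either way
rbind(t_slow, t_fast)               # compare user vs. elapsed times
</pre> <p>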
Also <a href="https://www.tacc.utexas.edu/research-development/tacc-software/gotoblas2">Goto’s BLAS</a> is optimized but is not under active development.</p> Profile of Hilary Parker 2016-01-14T21:15:46+00:00 http://simplystats.github.io4590 <p>If you’ve ever wanted to know more about my <a href="https://soundcloud.com/nssd-podcast">Not So Standard Deviations</a> co-host (and Johns Hopkins graduate) Hilary Parker, you can go check out the <a href="http://thisisstatistics.org/hilary-parker-gets-crafty-with-statistics-in-her-not-so-standard-job/">great profile of her</a> on the American Statistical Association’s This Is Statistics web site.</p> <blockquote> <p><strong>What advice would you give to high school students thinking about majoring in statistics?</strong></p> <p>It’s such a great field! Not only is the industry booming, but more importantly, the disciplines of statistics teaches you to think analytically, which I find helpful for just about every problem I run into. It’s also a great field to be interested in as a generalist– rather than dedicating yourself to studying one subject, you are deeply learning a set of tools that you can apply to any subject that you find interesting. Just one glance at the topics covered on The Upshot or 538 can give you a sense of that. There’s politics, sports, health, history… the list goes on! It’s a field with endless possibility for growth and exploration, and as I mentioned above, the more I explore the more excited I get about it.</p> </blockquote> Not So Standard Deviations Episode 7 - Statistical Royalty 2016-01-12T08:45:24+00:00 http://simplystats.github.io4588 <p>The latest episode of Not So Standard Deviations is out, and boy does Hilary have a story to tell.</p> <p>We also talk about Theranos and the pitfalls of diagnostic testing, Spotify’s Discover Weekly playlist generation algorithm (and the need for human product managers), and of course, a little Star Wars. 
Also, Hilary and I start a new segment where we each give some “free advertising” to something interesting that we think other people should know about.</p> <p>Show Notes:</p> <ul> <li><a href="http://goo.gl/JDk6ni">Gosset Icterometer</a></li> <li>The <a href="http://skybrudeconsulting.com/blog/2015/10/16/theranos-healthcare.html">dangers</a> of <a href="https://www.fredhutch.org/en/news/center-news/2013/11/scientists-urge-caution-personal-genetic-screenings.html">entertainment</a> <a href="http://mobihealthnews.com/35444/the-rise-of-the-seemingly-serious-but-just-for-entertainment-purposes-medical-app/">medicine</a></li> <li>Spotify’s Discover Weekly <a href="http://goo.gl/enzFeR">solves human curation</a>?</li> <li>David Robinson’s <a href="http://varianceexplained.org">Variance Explained</a></li> <li><a href="http://what3words.com">What3Words</a></li> </ul> <p><a href="https://api.soundcloud.com/tracks/241071463/download?client_id=02gUJC0hH2ct1EGOcYXQIzRFU91c72Ea&amp;oauth_token=1-138878-174789515-deb24181d01af">Download the audio for this episode</a>.</p> Jeff, Roger and Brian Caffo are doing a Reddit AMA at 3pm EST Today 2016-01-11T09:29:28+00:00 http://simplystats.github.io4585 <p>Jeff Leek, Brian Caffo, and I are doing a <a href="https://www.reddit.com/r/IAmA">Reddit AMA</a> TODAY at 3pm EST.
We’re happy to answer questions about…anything…including our roles as Co-Directors of the <a href="https://www.coursera.org/specializations/jhu-data-science">Johns Hopkins Data Science Specialization</a> as well as the <a href="https://www.coursera.org/specializations/executive-data-science">Executive Data Science Specialization</a>.</p> <p>This is one of the few pictures of the three of us together.</p> <p><a href="http://simplystatistics.org/wp-content/uploads/2016/01/IMG_0189.jpg"><img class="alignright size-large wp-image-4586" src="http://simplystatistics.org/wp-content/uploads/2016/01/IMG_0189-1024x768.jpg" alt="IMG_0189" width="990" height="743" srcset="http://simplystatistics.org/wp-content/uploads/2016/01/IMG_0189-120x90.jpg 120w, http://simplystatistics.org/wp-content/uploads/2016/01/IMG_0189-300x225.jpg 300w, http://simplystatistics.org/wp-content/uploads/2016/01/IMG_0189-1024x768.jpg 1024w, http://simplystatistics.org/wp-content/uploads/2016/01/IMG_0189-260x195.jpg 260w" sizes="(max-width: 990px) 100vw, 990px" /></a></p> A non-comprehensive list of awesome things other people did in 2015 2015-12-21T11:22:07+00:00 http://simplystats.github.io4577 <p><em>Editor’s Note: This is the third year I’m making a list of awesome things other people did this year. Just like the lists for <a href="http://simplystatistics.org/2013/12/20/a-non-comprehensive-list-of-awesome-things-other-people-did-this-year/">2013</a> and <a href="http://simplystatistics.org/2014/12/17/a-non-comprehensive-list-of-awesome-things-other-people-did-in-2014/">2014</a> I am doing this off the top of my head. I have avoided talking about stuff I worked on or that people here at Hopkins are doing because this post is supposed to be about other people’s awesome stuff. I wrote this post because a blog often feels like a place to complain, but we started Simply Stats as a place to be pumped up about the stuff people were doing with data. 
This year’s list is particularly “off the cuff” so I’d appreciate additions if you have ‘em. I have surely missed awesome things people have done.</em></p> <ol> <li>I hear the <a href="http://sml.princeton.edu/tukey">Tukey conference</a> put on by my former advisor John S. was amazing. Out of it came this really good piece by David Donoho on <a href="https://dl.dropboxusercontent.com/u/23421017/50YearsDataScience.pdf">50 years of Data Science</a>.</li> <li>Sherri Rose wrote really accurate and readable guides on <a href="http://drsherrirose.com/academic-cvs-for-statistical-science-faculty-positions">academic CVs</a>, <a href="http://drsherrirose.com/academic-cover-letters-for-statistical-science-faculty-positions">academic cover letters</a>, and <a href="http://drsherrirose.com/how-to-be-an-effective-phd-researcher">how to be an effective PhD researcher</a>.</li> <li>I am not 100% sold on the deep learning hype, but Michael Nielson wrote this awesome book on <a href="http://neuralnetworksanddeeplearning.com/">deep learning and neural networks</a>. I like how approachable it is and how un-hypey it is. 
I also thought Andrej Karpathy’s <a href="http://karpathy.github.io/2015/10/25/selfie/">blog post</a> on whether you have a good selfie or not was fun.</li> <li>Thomas Lumley continues to be a must-read regardless of which blog he writes for, with a ton of snarky fun posts debunking the latest ridiculous health headlines on <a href="http://www.statschat.org.nz/2015/11/27/to-find-the-minds-construction-near-the-face/">statschat</a> and more in-depth posts like this one on pre-filtering multiple tests on <a href="http://notstatschat.tumblr.com/post/131478660126/prefiltering-very-large-numbers-of-tests">notstatschat</a>.</li> <li>David Robinson is making a strong case for top data science blogger with his series of <a href="http://varianceexplained.org/r/bayesian_fdr_baseball/">awesome</a> <a href="http://varianceexplained.org/r/credible_intervals_baseball/">posts</a> on <a href="http://varianceexplained.org/r/empirical_bayes_baseball/">empirical Bayes</a>.</li> <li>Hadley Wickham doing Hadley Wickham things again.
The p-hacking graphic is just icing on the cake.</li> <li>I’m excited about the new <a href="http://blog.revolutionanalytics.com/2015/06/r-consortium.html">R Consortium</a> and the idea of having more organizations that support folks in the R community.</li> <li>Emma Pierson’s blog and writeups in various national level news outlets continue to impress. I thought <a href="https://www.washingtonpost.com/news/grade-point/wp/2015/10/15/a-better-way-to-gauge-how-common-sexual-assault-is-on-college-campuses/">this one</a> on changing the incentives for sexual assault surveys was particularly interesting/good.</li> <li>Amanda Cox and co. created this <a href="http://www.nytimes.com/interactive/2015/05/28/upshot/you-draw-it-how-family-income-affects-childrens-college-chances.html">interactive graphic</a>, which is an amazing way to teach people about pre-conceived biases in the way we think about relationships and correlations. I love the crowd-sourcing view on data analysis this suggests.</li> <li>As usual Philip Guo was producing gold over on his blog. I appreciate this piece on <a href="http://www.pgbovine.net/tips-for-data-driven-research.htm">twelve tips for data driven research</a>.</li> <li>I am really excited about the new field of adaptive data analysis.
Basically understanding how we can let people be “real data analysts” and still get reasonable estimates at the end of the day. <a href="http://www.sciencemag.org/content/349/6248/636.abstract">This paper</a> from Cynthia Dwork and co. was one of the initial salvos that came out this year.</li> <li>Datacamp <a href="https://www.datacamp.com/courses/intro-to-python-for-data-science?utm_source=growth&amp;utm_campaign=python&amp;utm_medium=button">incorporated Python</a> into their platform. The idea of interactive education for R/Python/Data Science is a very cool one and has tons of potential.</li> <li>I was really into the idea of <a href="http://projecteuclid.org/euclid.aoas/1430226098">Cross-Study validation</a> that got proposed this year. With the growth of public data in a lot of areas we can really start to get a feel for generalizability.</li> <li>The Open Science Foundation did this <a href="http://www.sciencemag.org/content/349/6251/aac4716">incredible replication of 100 different studies</a> in psychology with attention to detail and care that deserves a ton of attention.</li> <li>Florian’s piece “<a href="http://www.ncbi.nlm.nih.gov/pubmed/26402330">You are not working for me; I am working with you.</a>” should be required reading for all students/postdocs/mentors in academia. This is something I still hadn’t fully figured out until I read Florian’s piece.</li> <li>I think Karl Broman’s post on why <a href="https://kbroman.wordpress.com/2015/09/09/reproducibility-is-hard/">reproducibility is hard</a> is a great introduction to the real issues in making data analyses reproducible.</li> <li>This was the year of the f1000 post-publication review paper. I thought <a href="http://f1000research.com/articles/4-121/v1">this one</a> from Yoav and the ensuing fallout was fascinating.</li> <li>I love pretty much everything out of Di Cook/Heike Hoffman’s groups.
This year I liked the paper on <a href="http://download.springer.com/static/pdf/611/art%253A10.1007%252Fs00180-014-0534-x.pdf?originUrl=http%3A%2F%2Flink.springer.com%2Farticle%2F10.1007%2Fs00180-014-0534-x&amp;token2=exp=1450714996~acl=%2Fstatic%2Fpdf%2F611%2Fart%25253A10.1007%25252Fs00180-014-0534-x.pdf%3ForiginUrl%3Dhttp%253A%252F%252Flink.springer.com%252Farticle%252F10.1007%252Fs00180-014-0534-x*~hmac=3c5f5c7c1b2381685437659d8ffd64e1cb2c52d1dfd10506cad5d2af1925c0ac">visual statistical inference in high-dimensional low sample size settings</a>.</li> <li>This is pretty recent, but Nathan Yau’s <a href="https://flowingdata.com/2015/12/15/a-day-in-the-life-of-americans/">day in the life graphic is mesmerizing</a>.</li> </ol> <p>This was a year where open source data people <a href="http://treycausey.com/emotional_rollercoaster_public_work.html">described</a> their <a href="https://twitter.com/johnmyleswhite/status/666429299327569921">pain</a> from people being demanding/mean to them for their contributions. As the year closes I just want to give a big thank you to everyone who did awesome stuff I used this year and have completely ungraciously failed to acknowledge.</p> <p> </p> Not So Standard Deviations: Episode 6 - Google is the New Fisher 2015-12-18T13:08:10+00:00 http://simplystats.github.io4575 <p>Episode 6 of Not So Standard Deviations is now posted. In this episode Hilary and I talk about the analytics of our own podcast, and analyses that seem easy but are actually hard.</p> <p>If you haven’t already, you can subscribe to the podcast through <a href="https://itunes.apple.com/us/podcast/not-so-standard-deviations/id1040614570">iTunes</a>.</p> <p>This will be our last episode for 2015 so see you in 2016!</p> <p>Notes</p> <ul> <li><a href="https://goo.gl/X0TFt9">Roger’s books on Leanpub</a></li> <li><a href="https://goo.gl/VO0ckP">KPIs</a></li> <li><a href="http://replyall.soy">Reply All</a>, a great podcast</li> <li><a href="http://user2016.org">Use R! 
2016 conference</a> where Don Knuth is an invited speaker!</li> <li><a href="http://goo.gl/wUcTBT">Liz Stuart’s directory of propensity score software</a></li> <li><a href="https://goo.gl/CibhJ0">A/B testing</a></li> <li><a href="https://goo.gl/qMyksb">iid</a></li> <li><a href="https://goo.gl/qHVzWQ">R 3.2.3 release notes</a></li> <li><a href="http://www.pqr-project.org/">pqR</a></li> <li><a href="https://goo.gl/pFOVkx">John Myles White’s tweet</a></li> </ul> <p><a href="https://api.soundcloud.com/tracks/237909534/download?client_id=02gUJC0hH2ct1EGOcYXQIzRFU91c72Ea&amp;oauth_token=1-138878-174789515-deb24181d01af">Download the audio file for this episode</a>.</p> Instead of research on reproducibility, just do reproducible research 2015-12-11T12:18:33+00:00 http://simplystats.github.io4563 <p>Right now reproducibility, replicability, false positive rates, biases in methods, and other problems with science are the hot topic. As I mentioned in a previous post, pointing out a flaw with a scientific study is way easier to do correctly than generating a new scientific study. Some folks have noticed that right now there is a huge market for papers pointing out how science is flawed. The combination of the relative ease of pointing out flaws and the huge payout for writing these papers is helping to generate the hype around the “reproducibility crisis”.</p> <p>I <a href="http://www.slideshare.net/jtleek/evidence-based-data-analysis-45800617">gave a talk</a> a little while ago at an NAS workshop where I stated that all the tools for reproducible research exist (the caveat being really large analyses - although that is changing as well).
To make a paper completely reproducible, open, and available for post publication review you can use the following approach with no new tools/frameworks needed.</p> <ol> <li>Use <a href="https://github.com/">Github</a> for version control.</li> <li>Use <a href="http://rmarkdown.rstudio.com/">rmarkdown</a> or <a href="http://ipython.org/notebook.html">iPython notebooks</a> for your analysis code.</li> <li>When your paper is done post it to <a href="http://arxiv.org/">arxiv</a> or <a href="http://biorxiv.org/">biorxiv</a>.</li> <li>Post your data to an appropriate repository like <a href="http://www.ncbi.nlm.nih.gov/sra">SRA</a> or a general purpose site like <a href="https://figshare.com/">figshare</a>.</li> <li>Send any software you develop to a controlled repository like <a href="https://cran.r-project.org/">CRAN</a> or <a href="http://bioconductor.org/">Bioconductor</a>.</li> <li>Participate in the <a href="http://simplystatistics.org/2015/11/16/so-you-are-getting-crushed-on-the-internet-the-new-normal-for-academics/">post publication discussion on Twitter and with a blog</a>.</li> </ol> <p>This is also true of open science, open data sharing, reproducibility, replicability, post-publication peer review and all the other issues forming the “reproducibility crisis”. There is a lot of attention and heat that has focused on the “crisis” or on folks who make a point to take a stand on reproducibility or open science or post publication review. But in the background, outside of the hype, there is a large group of people quietly executing solid, open, reproducible science.</p> <p>I wish that this group would get more attention so I decided to point out a few of them.
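</p> <p>As a sketch of how lightweight steps 1 and 2 above can be: the file name below is hypothetical, and this assumes the rmarkdown package (which renders plain R scripts via knitr’s “spin” format, where #’ comment lines become prose) and pandoc are installed:</p> <pre>## Write a tiny analysis script in spin format and render it to HTML,
## so the code, results, and prose travel together in one document.
writeLines(c(
  "#' # A tiny reproducible analysis",
  "#' Fit a linear model to a built-in dataset.",
  "fit &lt;- lm(dist ~ speed, data = cars)",
  "summary(fit)"
), "analysis.R")

rmarkdown::render("analysis.R")  # writes analysis.html next to the script
</pre> <p>The script, its data, and the rendered report can then all be committed to the version-controlled repository from step 1.</p> <p>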
Next time somebody asks me about the research on reproducibility or open science I’ll just point them here and tell them to just follow the lead of people doing it.</p> <ul> <li><strong>Karl Broman</strong> - posts all of his <a href="http://kbroman.org/pages/talks.html">talks online </a>, generates many widely used <a href="http://kbroman.org/pages/software.html">open source packages</a>, writes <a href="http://kbroman.org/pages/tutorials.html">free/open tutorials</a> on everything from knitr to making webpages, makes his <a href="http://www.ncbi.nlm.nih.gov/pubmed/26290572">papers</a> highly <a href="https://github.com/kbroman/Paper_SampleMixups">reproducible</a>.</li> <li><strong>Jessica Li</strong> - <a href="http://www.stat.ucla.edu/~jingyi.li/software-and-data.html">posts her data online and writes open source software for her analyses</a>.</li> <li><strong>Mark Robinson - </strong>posts many of his papers as <a href="http://biorxiv.org/search/author1%3Arobinson%252C%2Bmd%20numresults%3A10%20sort%3Arelevance-rank%20format_result%3Astandard">preprints on biorxiv</a>, makes his <a href="https://github.com/markrobinsonuzh/diff_splice_paper">analyses reproducible</a>, writes <a href="http://bioconductor.org/packages/release/bioc/html/Repitools.html">open source software </a></li> <li><strong>Florian Markowetz -<a href="http://www.markowetzlab.org/software/"> </a></strong><a href="http://www.markowetzlab.org/software/">writes open source software</a>, provides <a href="http://www.markowetzlab.org/data.php">Bioconductor data for major projects</a>, links <a href="http://www.markowetzlab.org/publications.php">his papers with his code</a> nicely on his publications page.</li> <li><strong>Raphael Gottardo</strong> - <a href="http://www.rglab.org/software.html">writes/maintains many open source software packages</a>, makes <a href="https://github.com/RGLab/BNCResponse">his analyses reproducible and available via Github</a>, posts <a 
href="http://biorxiv.org/content/early/2015/06/15/020842">preprints of his papers</a>.</li> <li><strong>Genevera Allen - </strong>writes [Right now reproducibility, replicability, false positive rates, biases in methods, and other problems with science are the hot topic. As I mentioned in a previous post pointing out a flaw with a scientific study is way easier to do correctly than generating a new scientific study. Some folks have noticed that right now there is a huge market for papers pointing out how science is flawed. The combination of the relative ease of pointing out flaws and the huge payout for writing these papers is helping to generate the hype around the “reproducibility crisis”.</li> </ul> <p>I <a href="http://www.slideshare.net/jtleek/evidence-based-data-analysis-45800617">gave a talk</a> a little while ago at an NAS workshop where I stated that all the tools for reproducible research exist (the caveat being really large analyses - although that is changing as well). To make a paper completely reproducible, open, and available for post publication review you can use the following approach with no new tools/frameworks needed.</p> <ol> <li>Use <a href="https://github.com/">Github </a>for version control.</li> <li>Use <a href="http://rmarkdown.rstudio.com/">rmarkdown</a> or <a href="http://ipython.org/notebook.html">iPython notebooks</a> for your analysis code</li> <li>When your paper is done post it to <a href="http://arxiv.org/">arxiv</a> or <a href="http://biorxiv.org/">biorxiv</a>.</li> <li>Post your data to an appropriate repository like <a href="http://www.ncbi.nlm.nih.gov/sra">SRA</a> or a general purpose site like <a href="https://figshare.com/">figshare.</a></li> <li>Send any software you develop to a controlled repository like [Right now reproducibility, replicability, false positive rates, biases in methods, and other problems with science are the hot topic. 
As I mentioned in a previous post pointing out a flaw with a scientific study is way easier to do correctly than generating a new scientific study. Some folks have noticed that right now there is a huge market for papers pointing out how science is flawed. The combination of the relative ease of pointing out flaws and the huge payout for writing these papers is helping to generate the hype around the “reproducibility crisis”.</li> </ol> <p>I <a href="http://www.slideshare.net/jtleek/evidence-based-data-analysis-45800617">gave a talk</a> a little while ago at an NAS workshop where I stated that all the tools for reproducible research exist (the caveat being really large analyses - although that is changing as well). To make a paper completely reproducible, open, and available for post publication review you can use the following approach with no new tools/frameworks needed.</p> <ol> <li>Use <a href="https://github.com/">Github </a>for version control.</li> <li>Use <a href="http://rmarkdown.rstudio.com/">rmarkdown</a> or <a href="http://ipython.org/notebook.html">iPython notebooks</a> for your analysis code</li> <li>When your paper is done post it to <a href="http://arxiv.org/">arxiv</a> or <a href="http://biorxiv.org/">biorxiv</a>.</li> <li>Post your data to an appropriate repository like <a href="http://www.ncbi.nlm.nih.gov/sra">SRA</a> or a general purpose site like <a href="https://figshare.com/">figshare.</a></li> <li>Send any software you develop to a controlled repository like](https://cran.r-project.org/) or <a href="http://bioconductor.org/">Bioconductor</a>.</li> <li>Participate in the <a href="http://simplystatistics.org/2015/11/16/so-you-are-getting-crushed-on-the-internet-the-new-normal-for-academics/">post publication discussion on Twitter and with a Blog</a></li> </ol> <p>This is also true of open science, open data sharing, reproducibility, replicability, post-publication peer review and all the other issues forming the “reproducibility crisis”. 
There is a lot of attention and heat that has focused on the “crisis” or on folks who make a point to take a stand on reproducibility or open science or post publication review. But in the background, outside of the hype, there are a large group of people that are quietly executing solid, open, reproducible science.</p> <p>I wish that this group would get more attention so I decided to point out a few of them. Next time somebody asks me about the research on reproducibility or open science I’ll just point them here and tell them to just follow the lead of people doing it.</p> <ul> <li><strong>Karl Broman</strong> - posts all of his <a href="http://kbroman.org/pages/talks.html">talks online </a>, generates many widely used <a href="http://kbroman.org/pages/software.html">open source packages</a>, writes <a href="http://kbroman.org/pages/tutorials.html">free/open tutorials</a> on everything from knitr to making webpages, makes his <a href="http://www.ncbi.nlm.nih.gov/pubmed/26290572">papers</a> highly <a href="https://github.com/kbroman/Paper_SampleMixups">reproducible</a>.</li> <li><strong>Jessica Li</strong> - <a href="http://www.stat.ucla.edu/~jingyi.li/software-and-data.html">posts her data online and writes open source software for her analyses</a>.</li> <li><strong>Mark Robinson - </strong>posts many of his papers as <a href="http://biorxiv.org/search/author1%3Arobinson%252C%2Bmd%20numresults%3A10%20sort%3Arelevance-rank%20format_result%3Astandard">preprints on biorxiv</a>, makes his <a href="https://github.com/markrobinsonuzh/diff_splice_paper">analyses reproducible</a>, writes <a href="http://bioconductor.org/packages/release/bioc/html/Repitools.html">open source software </a></li> <li><strong>Florian Markowetz -<a href="http://www.markowetzlab.org/software/"> </a></strong><a href="http://www.markowetzlab.org/software/">writes open source software</a>, provides <a href="http://www.markowetzlab.org/data.php">Bioconductor data for major projects</a>, links <a 
href="http://www.markowetzlab.org/publications.php">his papers with his code</a> nicely on his publications page.</li> <li><strong>Raphael Gottardo</strong> - <a href="http://www.rglab.org/software.html">writes/maintains many open source software packages</a>, makes <a href="https://github.com/RGLab/BNCResponse">his analyses reproducible and available via Github</a>, and posts <a href="http://biorxiv.org/content/early/2015/06/15/020842">preprints of his papers</a>.</li> <li><strong>Genevera Allen</strong> - writes <a href="https://cran.r-project.org/web/packages/TCGA2STAT/index.html">software</a> to make data easier to access, and posts <a href="http://biorxiv.org/content/early/2015/09/24/027516">preprints on biorxiv</a> and <a href="http://arxiv.org/pdf/1502.03853v1.pdf">on arxiv</a>.</li> <li><strong>Lorena Barba</strong> - <a href="http://openedx.seas.gwu.edu/courses/GW/MAE6286/2014_fall/about">teaches open source moocs</a>, with lessons as <a href="https://github.com/barbagroup/CFDPython">open source iPython modules</a>, and <a href="https://github.com/barbagroup/pygbe">reproducible code for her analyses</a>.</li> <li><strong>Alicia Oshlack</strong> - writes papers with <a href="http://www.genomemedicine.com/content/7/1/43">completely reproducible analyses</a>, <a href="http://bioconductor.org/packages/release/bioc/html/missMethyl.html">publishes lots of open source software</a>, and publishes <a href="http://biorxiv.org/content/early/2015/01/23/013698">preprints</a> for her papers.</li> <li><strong>Baggerly and Coombes</strong> - although they are famous for a <a href="https://projecteuclid.org/euclid.aoas/1267453942">highly public piece of reproducible research</a>, they have also quietly implemented policies like <a href="http://magazine.amstat.org/blog/2011/01/01/scipolicyjan11/">making all reports reproducible for their consulting center</a>.</li> </ul> <p>This list was made completely haphazardly, as all my lists are, but it should indicate that there are a ton of people out there doing this. 
It is also clear that grad students and postdocs are adopting the approach I described at a very high rate.</p> <p>Moreover, there are people who have been doing parts of this for a long time (like the <a href="http://arxiv.org/">physics</a> or <a href="http://biostats.bepress.com/jhubiostat/">biostatistics</a> communities with preprints, or how people have <a href="https://projecteuclid.org/euclid.aoas/1267453942">used Sweave for a long time</a>). I purposely left off the list people like Titus and Ethan who have gone all in, even posting their <a href="http://ivory.idyll.org/blog/grants-posted.html">grants</a> <a href="http://jabberwocky.weecology.org/2012/08/10/a-list-of-publicly-available-grant-proposals-in-the-biological-sciences/">online</a>. I did this because they are very loud advocates of open science, but I wanted to highlight quieter contributors and point out that while there is a lot of noise going on over in one corner, many people are quietly doing really good science in another.</p> By opposing tracking well-meaning educators are hurting disadvantaged kids 2015-12-09T10:10:02+00:00 http://simplystats.github.io4505 <div class="page" title="Page 2"> <div class="layoutArea"> <div class="column"> <p> An unfortunate fact about the US K-12 system is that the education gap between poor and rich is growing. One manifestation of this trend is that we rarely see US kids from disadvantaged backgrounds become tenure track faculty, especially in the STEM fields. In my experience, the ones that do make it, when asked how they overcame the suboptimal math education their school district provided, often respond "I was <a href="https://en.wikipedia.org/wiki/Tracking_(education)">tracked</a>" or "I went to a <a href="https://en.wikipedia.org/wiki/Magnet_school">magnet school</a>". Magnet schools filter students with admission tests and then teach at a higher level than an average school, so essentially the entire school is an advanced track. 
</p> </div> </div> </div> <p>Twenty years of classroom instruction experience has taught me that classes with diverse academic abilities present one of the most difficult teaching challenges. Typically, one is forced to focus on only a sub-group of students, usually the second quartile. As a consequence, the lower and higher quartiles are not properly served. At the university level, we minimize this problem by offering different levels: remedial math versus math for engineers, probability for the Masters program versus probability for PhD students, co-ed intramural sports versus the varsity basketball team, intro to World Music versus a spot in the orchestra, etc. In K-12, tracking seems like the obvious solution to teaching to an array of student levels.</p> <p>Unfortunately, there has recently been a trend to move away from tracking, and several school districts now forbid it. The motivation seems to be a series of <a href="http://www.tandfonline.com/doi/abs/10.1207/s15430421tip4501_9">observational</a> <a href="http://files.eric.ed.gov/fulltext/ED329615.pdf">studies</a> that note that “low-track classes tend to be primarily composed of low-income students, usually minorities, while upper-track classes are usually dominated by students from socioeconomically successful groups.” Tracking opponents infer that this unfortunate reality is due to bias (conscious or unconscious) in the informal referrals that are typically used to decide which students are advanced. However, <strong>this is a critique of the referral system, not of tracking itself.</strong> A simple fix is to administer an objective test or use the percentiles from <a href="http://www.doe.mass.edu/mcas/overview.html">state assessment tests</a>. In fact, such exams have been developed and implemented. 
A recent study (summarized <a href="http://www.vox.com/2015/11/23/9784250/card-giuliano-gifted-talented">here</a>) examined the data from a district that for a period of time implemented an objective assessment and found that</p> <blockquote> <p>[t]he number of Hispanic students [in the advanced track increased] by 130 percent and the number of black students by 80 percent.</p> </blockquote> <p>Unfortunately, instead of maintaining the placement criteria, which benefited underrepresented minorities without relaxing standards, these school districts reverted to the old, flawed system due to budget cuts.</p> <p>Another argument against tracking is that students benefit more from being in classes with higher-achieving peers, rather than being in a class with students with similar subject mastery and a teacher focused on their level. However, a <a href="http://web.stanford.edu/~pdupas/Tracking_rev.pdf">randomized controlled study</a> (and the only one of which I am aware) finds that tracking helps all students:</p> <blockquote> <p>We find that tracking students by prior achievement raised scores for all students, even those assigned to lower achieving peers. On average, after 18 months, test scores were 0.14 standard deviations higher in tracking schools than in non-tracking schools (0.18 standard deviations higher after controlling for baseline scores and other control variables). After controlling for the baseline scores, students in the top half of the pre-assignment distribution gained 0.19 standard deviations, and those in the bottom half gained 0.16 standard deviations. <strong>Students in all quantiles benefited from tracking.</strong></p> </blockquote> <p>I believe that without tracking, the achievement gap between disadvantaged children and their affluent peers will continue to widen since involved parents will seek alternative educational opportunities, including private schools or subject-specific extracurricular acceleration programs. 
With limited or no access to advanced classes in the public system, disadvantaged students will be less prepared to enter the very competitive STEM fields. Note that competition comes not only from within the US, but from other countries including many with educational systems that track.</p> <p>To illustrate the extreme gap, the following exercises are from a 7th grade public school math class (in a high performing school district):</p> <table style="width: 100%;"> <tr> <td> <a href="http://simplystatistics.org/wp-content/uploads/2016/12/Screen-Shot-2015-12-07-at-11.49.41-AM.png"><img src="http://simplystatistics.org/wp-content/uploads/2016/12/Screen-Shot-2015-12-07-at-11.49.41-AM.png" alt="Screen Shot 2015-12-07 at 11.49.41 AM" width="275" /></a> </td> <td> <a href="http://simplystatistics.org/wp-content/uploads/2016/12/Screen-Shot-2015-12-09-at-9.00.57-AM.png"><img src="http://simplystatistics.org/wp-content/uploads/2016/12/Screen-Shot-2015-12-09-at-9.00.57-AM.png" alt="Screen Shot 2015-12-09 at 9.00.57 AM" width="275" /></a> </td> </tr> </table> <p>(Click to enlarge). There is no tracking so all students must work on these problems. 
Meanwhile, in an advanced 7th grade math class at a private school, a student can be working on problems like these:<a href="http://simplystatistics.org/wp-content/uploads/2016/12/Screen-Shot-2015-12-07-at-11.47.45-AM.png"><img class="alignnone size-full wp-image-4511" src="http://simplystatistics.org/wp-content/uploads/2016/12/Screen-Shot-2015-12-07-at-11.47.45-AM.png" alt="Screen Shot 2015-12-07 at 11.47.45 AM" width="1165" height="341" srcset="http://simplystatistics.org/wp-content/uploads/2016/12/Screen-Shot-2015-12-07-at-11.47.45-AM-300x88.png 300w, http://simplystatistics.org/wp-content/uploads/2016/12/Screen-Shot-2015-12-07-at-11.47.45-AM-1024x300.png 1024w, http://simplystatistics.org/wp-content/uploads/2016/12/Screen-Shot-2015-12-07-at-11.47.45-AM-260x76.png 260w, http://simplystatistics.org/wp-content/uploads/2016/12/Screen-Shot-2015-12-07-at-11.47.45-AM.png 1165w" sizes="(max-width: 1165px) 100vw, 1165px" /></a>Let me stress that there is nothing wrong with the first example if it is at the appropriate level for the student. However, a student who can work at the level of the second example should be provided with the opportunity to do so, notwithstanding their family’s ability to pay. Poorer kids in districts which do not offer advanced classes will not only be less equipped to compete with their richer peers, but many of the academically advanced ones may, I suspect, dismiss academics due to lack of challenge and boredom. Educators need to consider evidence when making decisions regarding policy. Tracking can be applied unfairly, but that aspect can be remedied. Eliminating tracking altogether takes away a crucial tool for disadvantaged students to move into the STEM fields and, according to the empirical evidence, hurts all students.</p> Not So Standard Deviations: Episode 5 - IRL Roger is Totally With It 2015-12-03T09:52:47+00:00 http://simplystats.github.io4490 <p>I just posted Episode 5 of Not So Standard Deviations, so check your feeds! 
Sorry for the long delay since the last episode but we got a bit tripped up by the Thanksgiving holiday.</p> <p>In this episode, Hilary and I open up the mailbag and go through some of the feedback we’ve gotten on the previous episodes. The rest of the time is spent talking about the importance of reproducibility in data analysis both in academic research and in industry settings.</p> <p>If you haven’t already, you can subscribe to the podcast through <a href="https://itunes.apple.com/us/podcast/not-so-standard-deviations/id1040614570">iTunes</a>. Or you can use the <a href="http://feeds.soundcloud.com/users/soundcloud:users:174789515/sounds.rss">SoundCloud RSS feed</a> directly.</p> <p>Notes:</p> <ul> <li>Hilary’s <a href="https://youtu.be/7B3n-5atLxM">talk on reproducible analysis in production</a> at the New York R Conference</li> <li>Hilary’s <a href="https://youtu.be/zlSOckFpYqg">Ignite presentation</a> at Strata 2013</li> <li>Roger’s <a href="https://youtu.be/aH8dpcirW1U">talk on “Computational and Policy Tools for Reproducible Research”</a> at the Applied Mathematics Perspectives Workshop in Vancouver, 2011</li> <li>Duke Scandal <a href="http://goo.gl/rEO5QD">Starter Set</a></li> <li><a href="https://youtu.be/7gYIs7uYbMo">Keith Baggerly’s talk</a> on Duke Scandal</li> <li>The <a href="https://goo.gl/RtpBZa">Web of Trust</a></li> <li><a href="https://goo.gl/MlM0gu">testdat</a> R package</li> </ul> <p><a href="https://api.soundcloud.com/tracks/235689361/download?client_id=02gUJC0hH2ct1EGOcYXQIzRFU91c72Ea&amp;oauth_token=1-138878-174789515-deb24181d01af">Download the audio file for this episode</a>.</p> <p>Or you can listen right here:</p> Thinking like a statistician: the importance of investigator-initiated grants 2015-12-01T11:40:29+00:00 http://simplystats.github.io4480 <p>A substantial amount of scientific research is funded by investigator-initiated grants. A researcher has an idea, writes it up and sends a proposal to a funding agency. 
The agency then elicits help from a group of peers to evaluate competing proposals. Grants are awarded to the most highly ranked ideas. The percent awarded depends on how much funding gets allocated to these types of proposals. At the NIH, the largest funder of these types of grants, the success rate recently <a href="https://nihdirectorsblog.files.wordpress.com/2013/09/sequestration-success-rates1.jpg">fell below 20% from a high above 35%</a>. Part of the reason these percentages have fallen is to make room for large collaborative projects. Large projects seem to be increasing, and not just at the NIH. In Europe, for example, the <a href="https://www.humanbrainproject.eu/">Human Brain Project</a> has an estimated cost of over 1 billion US dollars over 10 years. To put this in perspective, 1 billion dollars can fund over 500 <a href="http://grants.nih.gov/grants/funding/r01.htm">NIH R01s</a>; the R01 is the NIH mechanism most appropriate for investigator-initiated proposals.</p> <p>The merits of big science have been widely debated (for example <a href="http://www.michaeleisen.org/blog/?p=1179">here</a> and <a href="http://simplystatistics.org/2013/02/27/please-save-the-unsolicited-r01s/">here</a>), and most agree that some big projects have been successful. However, in this post I present a statistical argument highlighting the importance of investigator-initiated awards. 
The idea is summarized in the graph below.</p> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/12/Rplot.png"><img class="alignnone size-full wp-image-4483" src="http://simplystatistics.org/wp-content/uploads/2015/12/Rplot.png" alt="Rplot" width="1112" height="551" srcset="http://simplystatistics.org/wp-content/uploads/2015/12/Rplot-300x149.png 300w, http://simplystatistics.org/wp-content/uploads/2015/12/Rplot-1024x507.png 1024w, http://simplystatistics.org/wp-content/uploads/2015/12/Rplot-260x129.png 260w, http://simplystatistics.org/wp-content/uploads/2015/12/Rplot.png 1112w" sizes="(max-width: 1112px) 100vw, 1112px" /></a></p> <p>The two panels above represent two different funding strategies: fund many R01s (left) or reduce R01s to fund several large projects (right). The grey crosses represent investigators and the gold dots represent potential paradigm-shifting geniuses. Location on the Cartesian plane represents research areas, with the blue circles denoting areas that are prime for an important scientific advance. The largest scientific contributions occur when a gold dot falls in a blue circle. Large contributions also result from the accumulation of incremental work produced by grey crosses in the blue circles.</p> <p>Although not perfect, the peer review approach implemented by most funding agencies appears to work quite well at weeding out unproductive researchers and unpromising ideas. It also seems to do well at spreading funds across general areas. For example, the NIH spreads funds across <a href="https://www.nih.gov/institutes-nih/list-nih-institutes-centers-offices">diseases and public health challenges</a> (for example cancer, mental health, genomics, and heart and lung disease) 
as well as <a href="https://www.nigms.nih.gov/Pages/default.aspx">general medicine</a>, <a href="https://www.genome.gov/">genomics</a> and <a href="https://www.nlm.nih.gov/">information</a>. However, precisely predicting who will be a gold dot or what specific area will be a blue circle seems like an impossible endeavor. Increasing the number of tested ideas and researchers therefore increases our chance of success. When a funding agency decides to invest big in a specific area (green dollar signs), it is predicting the location of a blue circle. As funding flows into these areas, so do investigators (note the clusters). The total number of funded lead investigators also drops. The risk here is that if the dollar sign lands far from a blue circle, we pull researchers away from potentially fruitful areas. If, after 10 years of funding, the <a href="https://www.humanbrainproject.eu/">Human Brain Project</a> doesn’t <a href="https://www.humanbrainproject.eu/mission">“achieve a multi-level, integrated understanding of brain structure and function”</a>, we will have missed out on trying 500 ideas by hundreds of different investigators. With a sample size this large, we expect at least a handful of these attempts to result in the type of impactful advance that justifies funding scientific research.</p> <p>The simulation presented here (code below) is clearly an oversimplification, but it depicts the statistical reason why I favor investigator-initiated grants.  
The simulation clearly depicts why the strategy of funding many investigator-initiated grants is key to the continued success of scientific research.</p> <pre><code class="language-r">set.seed(2)
library(rafalib)
library(MASS)  ## for mvrnorm
thecol = "gold3"
mypar(1,2,mar=c(0.5,0.5,2,0.5))
###
## Start with the many-R01s model
###
## generate location of 2,000 investigators
N = 2000
x = runif(N)
y = runif(N)
## 1% are geniuses
Ng = N*0.01
g = rep(4,N); g[1:Ng] = 16
## generate location of important areas of research
M0 = 10
x0 = runif(M0)
y0 = runif(M0)
r0 = rep(0.03,M0)
## make the plot
nullplot(xaxt="n",yaxt="n",main="Many R01s")
symbols(x0,y0,circles=r0,fg="black",bg="blue",
        lwd=3,add=TRUE,inches=FALSE)
points(x,y,pch=g,col=ifelse(g==4,"grey",thecol))
points(x,y,pch=g,col=ifelse(g==4,NA,thecol))
### Generate the location of 5 big projects
M1 = 5
x1 = runif(M1)
y1 = runif(M1)
## make initial plot
nullplot(xaxt="n",yaxt="n",main="A Few Big Projects")
symbols(x0,y0,circles=r0,fg="black",bg="blue",
        lwd=3,add=TRUE,inches=FALSE)
### Generate location of investigators attracted to the location
### of the big projects: 200 around each, 1,000 total
Sigma = diag(2)*0.005
N1 = 200
Ng1 = round(N1*0.01)
g1 = rep(4,N1); g1[1:Ng1] = 16
for(i in 1:M1){
  xy = mvrnorm(N1,c(x1[i],y1[i]),Sigma)
  points(xy[,1],xy[,2],pch=g1,col=ifelse(g1==4,"grey",thecol))
}
### Generate location of investigators that ignore big projects:
### now 500 instead of 2,000, since large projects leave funding
### for fewer lead investigators
N = 500
x = runif(N)
y = runif(N)
Ng = N*0.01
g = rep(4,N); g[1:Ng] = 16
points(x,y,pch=g,col=ifelse(g==4,"grey",thecol))
## "$": the green dollar signs described in the text
points(x1,y1,pch="$",col="darkgreen",cex=2,lwd=2)
</code></pre> A thanksgiving dplyr Rubik's cube puzzle for you 2015-11-25T12:14:06+00:00 http://simplystats.github.io4473 <p><a href="http://nickcarchedi.com/">Nick Carchedi</a> is back visiting from <a href="https://www.datacamp.com/">DataCamp</a> and for fun we came up with a <a href="https://cran.rstudio.com/web/packages/dplyr/vignettes/introduction.html">dplyr</a> Rubik’s cube puzzle. Here is how it works. To solve the puzzle you have to make a 4 x 3 data frame that spells Thanksgiving like this:</p> <div class="oembed-gist"> <noscript> View the code on <a href="https://gist.github.com/jtleek/4d4b63a035973231e6d4">Gist</a>. </noscript> </div> <p><span style="line-height: 1.5;">To solve the puzzle you need to pipe this data frame in </span></p> <div class="oembed-gist"> <noscript> View the code on <a href="https://gist.github.com/jtleek/aae1218a8f4d1220e07d">Gist</a>. </noscript> </div> <p>and pipe out the Thanksgiving data frame using only the dplyr commands <em>arrange</em>, <em>mutate</em>, <em>slice</em>, <em>filter</em> and <em>select</em>. For advanced users you can try our slightly more complicated puzzle:</p> <div class="oembed-gist"> <noscript> View the code on <a href="https://gist.github.com/jtleek/b82531d9dac78ba3c60a">Gist</a>. </noscript> </div> <p>See if you can do it <a href="http://www.theguardian.com/technology/video/2015/nov/24/boy-completes-rubiks-cube-in-49-seconds-word-recordvideo">this fast</a>. 
Post your solutions in the comments and Happy Thanksgiving!</p> 20 years of Data Science: from Music to Genomics 2015-11-24T10:00:56+00:00 http://simplystats.github.io4018 <p>I finally got around to reading David Donoho’s <a href="https://dl.dropboxusercontent.com/u/23421017/50YearsDataScience.pdf">50 Years of Data Science</a> paper. I highly recommend it. The following quote seems to summarize the sentiment that motivated the paper, as well as why it has resonated among academic statisticians:</p> <div class="page" title="Page 5"> <div class="layoutArea"> <div class="column"> <blockquote> <p> The statistics profession is caught at a confusing moment: the activities which preoccupied it over centuries are now in the limelight, but those activities are claimed to be bright shiny new, and carried out by (although not actually invented by) upstarts and strangers. </p> </blockquote> </div> </div> </div> <p>The reason we started this blog over four years ago was because, as Jeff wrote in his inaugural post, we were “<a href="http://simplystatistics.org/2011/09/07/first-things-first/">fired up about the new era where data is abundant and statisticians are scientists</a>”. It was clear that many disciplines were becoming data-driven and that interest in data analysis was growing rapidly. We were further motivated because, despite this <a href="http://simplystatistics.org/2014/09/15/applied-statisticians-people-want-to-learn-what-we-do-lets-teach-them/">new found interest in our work</a>, academic statisticians were, in general, more interested in the development of context free methods than in leveraging applied statistics to take <a href="http://simplystatistics.org/2012/06/22/statistics-and-the-science-club/">leadership roles</a> in data-driven projects. Meanwhile, great and highly visible applied statistics work was occurring in other fields such as astronomy, computational biology, computer science, political science and economics. 
So it was not completely surprising that some (bio)statistics departments were being left out from larger university-wide data science initiatives. Some of <a href="http://simplystatistics.org/2014/07/25/academic-statisticians-there-is-no-shame-in-developing-statistical-solutions-that-solve-just-one-problem/">our</a> <a href="http://simplystatistics.org/2013/04/15/data-science-only-poses-a-threat-to-biostatistics-if-we-dont-adapt/">posts</a> exhorted academic departments to embrace larger numbers of applied statisticians:</p> <blockquote> <p>[M]any of the giants of our discipline were very much interested in solving specific problems in genetics, agriculture, and the social sciences. In fact, many of today’s most widely-applied methods were originally inspired by insights gained by answering very specific scientific questions. I worry that the balance between application and theory has shifted too far away from applications. An unfortunate consequence is that our flagship journals, including our applied journals, are publishing too many methods seeking to solve many problems but actually solving none. By shifting some of our efforts to solving specific problems we will get closer to the essence of modern problems and will actually inspire more successful generalizable methods.</p> </blockquote> <p>Donoho points out that John Tukey had a similar preoccupation 50 years ago:</p> <div class="page" title="Page 10"> <div class="layoutArea"> <div class="column"> <blockquote> <p> For a long time I have thought I was a statistician, interested in inferences from the particular to the general. But as I have watched mathematical statistics evolve, I have had cause to wonder and to doubt. ... 
All in all I have come to feel that my central interest is in data analysis, which I take to include, among other things: procedures for analyzing data, techniques for interpreting the results of such procedures, ways of planning the gathering of data to make its analysis easier, more precise or more accurate, and all the machinery and results of (mathematical) statistics which apply to analyzing data </p> </blockquote> <p> Many applied statisticians do the things Tukey mentions above. In the blog we have encouraged them to <a href="http://simplystatistics.org/2014/09/15/applied-statisticians-people-want-to-learn-what-we-do-lets-teach-them/">teach the gory details of what they do</a>, along with the general methodology we currently teach. With all this in mind, several months ago, when I was invited to give a talk at a department that was, at the time, deciphering its role in its university's data science initiative, I gave a talk titled <em>20 years of Data Science: from Music to Genomics</em>. The goal was to explain why <em>applied statistician</em> is not considered synonymous with <em>data scientist</em> even when we focus on the same goal: to <a href="https://en.wikipedia.org/wiki/Data_science">extract knowledge or insights from data</a>. </p> <p> The first example in the talk related to how academic applied statisticians tend to emphasize the parts that will be most appreciated by our math stat colleagues and ignore the aspects that are today being heralded as the linchpins of data science. I used my thesis papers as examples. <a href="http://archive.cnmat.berkeley.edu/Research/1998/Rafael/tesis.pdf">My dissertation work</a> was about finding meaningful parametrizations of musical sound signals that<img class="wp-image-4449 alignright" src="http://www.biostat.jhsph.edu/~ririzarr/Demo/img7.gif" alt="Spectrogram" width="380" height="178" /> my collaborators could use to manipulate sounds to create new ones. 
To do this, I prepared a database of sounds, wrote code to extract and import the digital representations from CDs into S-plus (yes, I'm that old), visualized the data to motivate models, wrote code in C (or was it Fortran?) to make the analysis go faster, and tested these models with residual analysis by ear (you can listen to them <a href="http://www.biostat.jhsph.edu/~ririzarr/Demo/">here</a>). None of these data science aspects were highlighted in the <a href="http://www3.stat.sinica.edu.tw/statistica/oldpdf/A10n42.pdf">papers</a> <a href="http://www.tandfonline.com/doi/abs/10.1198/000313001300339969#.Vk4_ht-rQUE">I</a> <a href="http://www.tandfonline.com/doi/abs/10.1198/016214501750332875#.Vk4_mN-rQUE">wrote </a><a href="http://www.tandfonline.com/doi/abs/10.1198/016214501753168082#.Vk4_qt-rQUE">about</a> my <a href="http://onlinelibrary.wiley.com/doi/10.1111/1467-9892.01515/abstract?userIsAuthenticated=false&amp;deniedAccessCustomisedMessage=">thesis</a>. Here is a screen shot from <a href="http://onlinelibrary.wiley.com/doi/10.1111/1467-9892.01515/abstract">this paper</a>: </p> </div> </div> </div> <p><a href="http://simplystatistics.org/wp-content/uploads/2016/05/Screen-Shot-2015-04-15-at-12.24.40-PM.png"><img class="wp-image-4449 aligncenter" src="http://simplystatistics.org/wp-content/uploads/2016/05/Screen-Shot-2015-04-15-at-12.24.40-PM.png" alt="Screen Shot 2015-04-15 at 12.24.40 PM" width="320" height="342" srcset="http://simplystatistics.org/wp-content/uploads/2016/05/Screen-Shot-2015-04-15-at-12.24.40-PM-957x1024.png 957w, http://simplystatistics.org/wp-content/uploads/2016/05/Screen-Shot-2015-04-15-at-12.24.40-PM-187x200.png 187w, http://simplystatistics.org/wp-content/uploads/2016/05/Screen-Shot-2015-04-15-at-12.24.40-PM.png 1204w" sizes="(max-width: 320px) 100vw, 320px" /></a></p> <p>I am actually glad I wrote out and published all the technical details of this work. It was great training. 
My point was simply that based on the focus of these papers, this work would not be considered data science.</p> <p>The rest of my talk described some of the work I did once I transitioned into applications in Biology. I was fortunate to have a <a href="http://www.jhsph.edu/faculty/directory/profile/3859/scott-zeger">department chair</a> who appreciated lead-author papers in the subject matter journals as much as statistical methodology papers. This opened the door for me to become a full-fledged applied statistician/data scientist. In the talk I described how <a href="http://bioinformatics.oxfordjournals.org/content/20/3/307.short">developing software packages,</a> <a href="http://www.nature.com/nmeth/journal/v2/n5/abs/nmeth756.html">planning</a> the <a href="http://www.nature.com/nmeth/journal/v4/n11/abs/nmeth1102.html">gathering of data</a> to <a href="http://www.ncbi.nlm.nih.gov/pubmed/?term=16108723">aid method development</a>, developing <a href="http://www.ncbi.nlm.nih.gov/pubmed/14960458">web tools</a> to assess data analysis techniques in the wild, and facilitating <a href="http://www.ncbi.nlm.nih.gov/pubmed/19151715">data-driven discovery</a> in biology has been very gratifying and, simultaneously, helped my career. However, at some point, early in my career, senior members of my department encouraged me to write and submit a methods paper to a statistical journal to go along with every paper I sent to the subject matter journals. Although I do write methods papers when I think the ideas add to the statistical literature, I did not follow the advice to simply write papers for the sake of publishing in statistics journals. Note that if (bio)statistics departments require applied statisticians to do this, then it becomes harder to have an impact as data scientists.
Departments that are not producing widely used methodology or successful and visible applied statistics projects (or both) should not be surprised when they are not included in data science initiatives. So, applied statistician, read that Tukey quote again, listen to <a href="https://youtu.be/vbb-AjiXyh0">President Obama</a>, and go do some great data science.</p> <p> </p> <p> </p> Some Links Related to Randomized Controlled Trials for Policymaking 2015-11-19T12:49:03+00:00 http://simplystats.github.io4445 <div> <p> In response to <a href="http://simplystatistics.org/2015/11/17/why-are-randomized-trials-not-used-by-policymakers/">my previous post</a>, <a href="https://gspp.berkeley.edu/directories/faculty/avi-feller">Avi Feller</a> sent me these links related to efforts promoting the use of RCTs and evidence-based approaches for policymaking: </p> <ul> <li> The theme of this year's just-concluded APPAM conference (the national public policy research organization) was "evidence-based policymaking," with a headline panel on using experiments in policy (see <a href="http://www.appam.org/events/fall-research-conference/2015-fall-research-conference-information/" target="_blank">here</a> and <a href="http://www.appam.org/2015appam-student-summary-using-experiments-for-evidence-based-policy-lessons-from-the-private-sector/" target="_blank">here</a>). </li> </ul> <ul> <li> Jeff Liebman has written extensively about the use of randomized experiments in policy (see <a href="http://govinnovator.com/ten_year_challenge/" target="_blank">here</a> for a recent interview). </li> </ul> <ul> <li> The White House now has an entire office devoted to running randomized trials to improve government performance (the so-called "nudge unit"). Check out their recent annual report <a href="https://www.whitehouse.gov/sites/default/files/microsites/ostp/sbst_2015_annual_report_final_9_14_15.pdf" target="_blank">here</a>.
</li> </ul> <ul> <li> JPAL North America just launched a major initiative to help state and local governments run randomized trials (see <a href="https://www.povertyactionlab.org/about-j-pal/news/j-pal-north-america-state-and-local-innovation-initiative-release" target="_blank">here</a>). </li> </ul> </div> Given the history of medicine, why are randomized trials not used for social policy? 2015-11-17T10:42:24+00:00 http://simplystats.github.io4429 <p>Policy changes can have substantial societal effects. For example, clean water and hygiene policies have saved millions, if not billions, of lives. But effects are not always positive. For example, <a href="https://en.wikipedia.org/wiki/Prohibition_in_the_United_States">prohibition</a>, or the “noble experiment”, boosted organized crime, slowed economic growth and increased deaths caused by tainted liquor. Good intentions do not guarantee desirable outcomes.</p> <p>The medical establishment is well aware of the danger of basing decisions on the good intentions of doctors or biomedical researchers. For this reason, randomized controlled trials (RCTs) are the standard approach to determining if a new treatment is safe and effective. In these trials an objective assessment is achieved by assigning patients at random to a treatment or control group, and then comparing the outcomes in these two groups. Probability calculations are used to summarize the evidence in favor of or against the new treatment. Modern RCTs are considered <a href="http://abcnews.go.com/Health/TenWays/story?id=3605442&amp;page=1">one of the greatest medical advances of the 20th century</a>.</p> <p>Despite their unprecedented success in medicine, RCTs have not been fully adopted outside of scientific fields. In <a href="http://www.badscience.net/2011/05/we-should-so-blatantly-do-more-randomised-trials-on-policy/">this post</a>, Ben Goldacre advocates for politicians to learn from scientists and base policy decisions on RCTs.
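The recipe described above, assign at random and then summarize the comparison with a probability calculation, can be sketched briefly. A minimal illustration with made-up outcome counts, using a standard two-proportion z-test to stand in for the "probability calculations" (Python used only for concreteness):

```python
import math

def two_proportion_z(success_t, n_t, success_c, n_c):
    """Compare outcome rates in the treatment vs. control arms of an RCT."""
    p_t, p_c = success_t / n_t, success_c / n_c
    # pooled rate under the null hypothesis of no treatment effect
    p_pool = (success_t + success_c) / (n_t + n_c)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_t + 1 / n_c))
    z = (p_t - p_c) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# hypothetical trial: 120 of 200 treated improve vs. 90 of 200 controls
z, p = two_proportion_z(120, 200, 90, 200)
print(round(z, 2), round(p, 4))
```

The point of the randomization is that this calculation is then an honest summary of the evidence, rather than a comparison contaminated by who was selected into each group.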
He provides several examples in which results contradicted conventional wisdom. In <a href="https://www.ted.com/talks/esther_duflo_social_experiments_to_fight_poverty?language=en">this TED talk</a> Esther Duflo convincingly argues that RCTs should be used to determine what interventions are best at fighting poverty. Although some RCTs are being conducted, they are still rare and oftentimes ignored by policymakers. For example, despite at least <a href="http://peabody.vanderbilt.edu/research/pri/VPKthrough3rd_final_withcover.pdf">two</a> <a href="http://www.acf.hhs.gov/sites/default/files/opre/executive_summary_final.pdf">RCT</a>s finding that universal pre-K programs are not effective, policymakers in New York <a href="http://www.npr.org/sections/ed/2015/09/08/438584249/new-york-city-mayor-goes-all-in-on-free-preschool">are implementing a $400 million a year program</a>. Supporters of this noble endeavor defend their decision by pointing to observational studies and “expert” opinion that support their preconceived views. Before the 1950s, indifference to RCTs was common among medical doctors as well, and the outcomes were at times devastating.</p> <p>Today, when we <a href="http://www.ncbi.nlm.nih.gov/pubmed/7058834">compare conclusions from non-RCT studies to RCTs</a>, we note the unintended strong effects that preconceived notions can have. The first chapter in <a href="http://www.amazon.com/Statistics-4th-Edition-David-Freedman/dp/0393929728">this book</a> provides a summary and some examples. One example comes from <a href="http://www.jameslindlibrary.org/grace-nd-muench-h-chalmers-tc-1966/">a study</a> of 51 studies on the effectiveness of the portacaval shunt.
Here is a table summarizing the conclusions of the 51 studies:</p> <table> <tr> <td> Design </td> <td> Marked Improvement </td> <td> Moderate Improvement </td> <td> None </td> </tr> <tr> <td> No control </td> <td> 24 </td> <td> 7 </td> <td> 1 </td> </tr> <tr> <td> Controls, but not randomized </td> <td> 10 </td> <td> 3 </td> <td> 2 </td> </tr> <tr> <td> Randomized </td> <td> 0 </td> <td> 1 </td> <td> 3 </td> </tr> </table> <p>Compare the first and last column to appreciate the importance of the randomized trials.</p> <p>A particularly troubling example relates to the studies on Diethylstilbestrol (DES). DES is a drug that was used to prevent spontaneous abortions. Five out of five studies using historical controls found the drug to be effective, yet all three randomized trials found the opposite. Before the randomized trials convinced doctors to stop using this drug, it was given to thousands of women. This turned out to be a tragedy as later studies showed DES has <a href="http://diethylstilbestrol.co.uk/des-side-effects/">terrible side effects</a>. Despite the doctors having the best intentions in mind, ignoring the randomized trials resulted in unintended consequences.</p> <p>Well meaning experts are regularly implementing policies without really testing their effects. Although randomized trials are not always possible, it seems that they are rarely considered, in particular when the intentions are noble.
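The contrast in that table is stark enough to compute directly. A quick sketch, with the counts transcribed from the table above (Python used only for illustration):

```python
# Conclusions of the 51 portacaval-shunt studies, by design,
# as (marked, moderate, none) counts transcribed from the table above.
studies = {
    "no control":               (24, 7, 1),
    "controls, not randomized": (10, 3, 2),
    "randomized":               (0, 1, 3),
}

for design, (marked, moderate, none) in studies.items():
    total = marked + moderate + none
    # share of studies reporting marked improvement under each design
    print(f"{design}: {marked}/{total} report marked improvement")
```

The weaker the design, the more enthusiastic the conclusions: three quarters of the uncontrolled studies reported marked improvement, while none of the randomized trials did.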
<span style="line-height: 1.5;">Just like well-meaning turn-of-the-20th-century doctors, convinced that they were doing good, put their patients at risk by providing ineffective treatments, well intentioned policies may end up hurting society.</span></p> <p><strong>Update: </strong>A reader pointed me to <a href="http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2534811">these</a> <a href="http://eml.berkeley.edu//~crwalters/papers/kline_walters.pdf">preprints</a> which point out that the control group in <a href="http://www.acf.hhs.gov/sites/default/files/opre/executive_summary_final.pdf">one of the cited</a> early education RCTs included children that receive care in a range of different settings, not just staying at home. This implies that the signal is attenuated if what we want to know is if the program is effective for children that would otherwise stay at home. In <a href="http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2534811">this preprint</a> they use statistical methodology (principal stratification framework) to obtain separate estimates: the effect for children that would otherwise go to other center-based care and the effect for children that would otherwise stay at home. They find no effect for the former group but a significant effect for the latter. Note that in this analysis the effect being estimated is no longer based on groups assigned at random. Instead, model assumptions are used to infer the two effects. To avoid dependence on these assumptions we will have to perform an RCT with better defined controls. Also note that the<span style="line-height: 1.5;"> RCT data facilitated the principal stratification framework analysis. I also want to restate what <a href="http://simplystatistics.org/2014/04/17/correlation-does-not-imply-causation-parental-involvement-edition/">I’ve posted before</a>, “I am not saying that observational studies are uninformative. If properly analyzed, observational data can be very valuable. 
For example, the data supporting smoking as a cause of lung cancer is all observational. Furthermore, there is an entire subfield within statistics (referred to as causal inference) that develops methodologies to deal with observational data. But unfortunately, observational data are commonly misinterpreted.”</span></p> So you are getting crushed on the internet? The new normal for academics. 2015-11-16T09:49:04+00:00 http://simplystats.github.io4426 <p>Roger and I were just talking about all the discussion around the <a href="http://www.pnas.org/content/early/2015/10/29/1518393112.full.pdf">Case and Deaton paper</a> on death rates for middle class people. Andrew Gelman <a href="http://www.slate.com/articles/health_and_science/science/2015/11/death_rates_for_white_middle_aged_americans_are_not_increasing.html">discussed it</a> among many others. They noticed a potential bias in the analysis and did some re-analysis. Just yesterday <a href="http://noahpinionblog.blogspot.com/2015/11/gelman-vs-case-deaton-academics-vs.html">Noah Smith</a> wrote a piece about academics versus blogs and how many academics are taken by surprise when they see their paper being discussed so rapidly on the internet.
Much of the debate comes down to the speed, tone, and ferocity of internet discussion of academic work - along with the fact that sometimes it isn’t fully fleshed out.</p> <p>I have been seeing this play out not just in the case of this specific paper, but many times that folks have been confronted with blogs or the quick publication process of <a href="http://f1000research.com/">f1000Research</a>. I think it is pretty scary for folks who aren’t used to “internet speed” to see this play out and I thought it would be helpful to make a few points.</p> <ol> <li><strong>Everyone is an internet scientist now.</strong> The internet has arrived as part of academics and if you publish a paper that is of interest (or if you are a Nobel prize winner, or if you dispute a claim, etc.) you will see discussion of that paper within a day or two on the blogs. This is now a fact of life.</li> <li><strong>The internet loves a fight</strong>. The internet responds best to personal/angry blog posts or blog posts about controversial topics like p-values, errors, and bias. Almost certainly if someone writes a blog post about your work or an f1000 paper it will be about an error/bias/correction or something personal.</li> <li><strong>Takedowns are easier than new research and happen faster</strong>. It is much, much easier to critique a paper than to design an experiment, collect data, figure out what question to ask, ask it quantitatively, analyze the data, and write it up. This doesn’t mean the critique won’t be good/right; it just means it will happen much much faster than it took you to publish the paper because it is easier to do. All it takes is noticing one little bug in the code or one error in the regression model.
So be prepared for speed in the response.</li> </ol> <p>In light of these three things, you have a couple of options about how to react if you write an interesting paper and people are discussing it - which they will certainly do (point 1), in a way that will likely make you uncomfortable (point 2), and faster than you’d expect (point 3). The first thing to keep in mind is that the internet wants you to “fight back” and wants to declare a “winner”. Reading about amicable disagreements doesn’t build audience. That is why there is reality TV. So there will be pressure for you to score points, be clever, be fast, and refute every point or be declared the loser. I have found from my own experience that is what I feel like doing too. I think that resisting this urge is both (a) very very hard and (b) the right thing to do. I find the best solution is to be proud of your work, but be humble, because no paper is perfect and that’s ok. If you do the best you can, sensible people will acknowledge that.</p> <p>I think these are the three ways to respond to rapid internet criticism of your work.</p> <ul> <li><strong>Option 1: Respond on internet time.</strong> This means if you publish a big paper that you think might be controversial, you should block off a day or two to spend time on the internet responding. You should be ready to do new analysis quickly, be prepared to admit mistakes quickly if they exist, and you should be prepared to make it clear when there aren’t. You will need social media accounts and you should probably have a blog so you can post longer form responses. Github/Figshare accounts make it better for quickly sharing quantitative/new analyses. Again your goal is to avoid the personal and stick to facts, so I find that Twitter/Facebook are best for disseminating your more long form responses on blogs/Github/Figshare.
If you are going to go this route you should try to respond to as many of the major criticisms as possible, but usually they cluster into one or two specific comments, which you can address all in one.</li> <li><strong>Option 2: Respond in academic time.</strong> You might have spent a year writing a paper to have people respond to it essentially instantaneously. Sometimes they will have good points, but they will rarely have carefully thought out arguments given the internet-speed response (although remember point 3 that good critiques can be faster than good papers). One approach is to collect all the feedback, ignore the pressure for an immediate response, and write a careful, scientific response which you can publish in a journal or in a fast outlet like f1000Research. I think this route can be the most scientific and productive if executed well. But this will be hard because people will treat that like “you didn’t have a good answer so you didn’t respond immediately”. The internet wants a quick winner/loser and that is terrible for science. Even if you choose this route though, you should make sure you have a way of publicizing your well thought out response - through blogs, social media, etc. once it is done.</li> <li><strong>Option 3: Do not respond.</strong> This is what a lot of people do and I’m unsure if it is ok or not. Clearly internet facing commentary can have an impact on you/your work/how it is perceived for better or worse. So if you ignore it, you are ignoring those consequences. This may be ok, but depending on the severity of the criticism may be hard to deal with and it may mean that you have a lot of questions to answer later. Honestly, I think as time goes on if you write a big paper under a lot of scrutiny Option 3 is going to go away.</li> </ul> <p>All of this only applies if you write a paper that a ton of people care about/is controversial.
Many technical papers won’t have this issue and if you keep your claims small, this also probably won’t apply. But I thought it was useful to try to work out how to act under this “new normal”.</p> Prediction Markets for Science: What Problem Do They Solve? 2015-11-10T20:29:19+00:00 http://simplystats.github.io4422 <p>I’ve recently seen a bunch of press on <a href="http://www.pnas.org/content/early/2015/11/04/1516179112.abstract">this paper</a>, which describes an experiment with developing a prediction market for scientific results. From FiveThirtyEight:</p> <blockquote> <p>Although <a href="http://fivethirtyeight.com/datalab/psychology-is-starting-to-deal-with-its-replication-problem/">replication is essential for verifying results</a>, the <a href="http://fivethirtyeight.com/features/science-isnt-broken/">current scientific culture does little to encourage it in most fields</a>. That’s a problem because it means that misleading scientific results, like those from the “shades of gray” study, <a href="http://pss.sagepub.com/content/22/11/1359.short?rss=1&amp;ssource=mfr">could be common in the scientific literature</a>. Indeed, a 2005 study claimed that <a href="http://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124">most published research findings are false.</a></p> <p>[…]</p> <p>The researchers began by selecting some studies slated for replication in the <a href="https://osf.io/ezcuj/wiki/home/">Reproducibility Project: Psychology</a> — a project that aimed to reproduce 100 studies published in three high-profile psychology journals in 2008. They then recruited psychology researchers to take part in <a href="https://osf.io/yjmht/">two prediction markets</a>. These are the same types of markets that people use <a href="http://www.nytimes.com/2015/10/24/upshot/betting-markets-call-marco-rubio-front-runner-in-gop.html?_r=0">to bet on who’s going to be president</a>. 
In this case, though, researchers were betting on whether a study would replicate or not.</p> </blockquote> <p>There are all kinds of prediction markets these days–for politics, general ideas–so having one for scientific ideas is not too controversial. But I’m not sure I see exactly what problem is solved by having a prediction market for science. In the paper, they claim that the market-based bets were better predictors of the general survey that was administered to the scientists. I’ll admit that’s an interesting result, but I’m not yet convinced.</p> <p>First off, it’s worth noting that this work comes out of the massive replication project conducted by the Center for Open Science, where I believe they <a href="http://simplystatistics.org/2015/10/01/a-glass-half-full-interpretation-of-the-replicability-of-psychological-science/">have a</a> <a href="http://simplystatistics.org/2015/10/20/we-need-a-statistically-rigorous-and-scientifically-meaningful-definition-of-replication/">fundamentally flawed definition of replication</a>. So I’m not sure I can really agree with the idea of basing a prediction market on such a definition, but I’ll let that go for now.</p> <p>The purpose of most markets is some general notion of “price discovery”. One popular market is the stock market and I think it’s instructive to see how that works. Basically, people continuously bid on the shares of certain companies and markets keep track of all the bids/offers and the completed transactions. If you are interested in finding out what people are willing to pay for a share of Apple, Inc., then it’s probably best to look at…what people are willing to pay. That’s exactly what the stock market gives you. You only run into trouble when there’s no liquidity, so no one shows up to bid/offer, but that would be a problem for any market.</p> <p>Now, suppose you’re interested in finding out what the “true fundamental value” of Apple, Inc. is.
Some people think the stock market gives you that at every instance, while <a href="http://www.econ.yale.edu/~shiller/">others</a> think that the stock market can behave irrationally for long periods of time. Perhaps in the very long run, you get a sense of the fundamental value of a company, but that may not be useful information at that point.</p> <p>What does the market for scientific hypotheses give you? Well, it would be one thing if granting agencies participated in the market. Then, we would never have to write grant applications. The granting agencies could then signal what they’d be willing to pay for different ideas. But that’s not what we’re talking about.</p> <p>Here, we’re trying to get at whether a given hypothesis is <em>true or not</em>. The only real way to get information about that is to conduct an experiment. How many people betting in the markets will have conducted an experiment? Likely the minority, given that the whole point is to save money by not having people conduct experiments investigating hypotheses that are likely false.</p> <p>But if market participants aren’t contributing real information about an hypothesis, what are they contributing? Well, they’re contributing their <em>opinion</em> about an hypothesis. How is that related to science? I’m not sure. Of course, participants could be experts in the field (although not necessarily) and so their opinions will be informed by past results. And ultimately, it’s consensus amongst scientists that determines, after repeated experiments, whether an hypothesis is true or not. But at the early stages of investigation, it’s not clear how valuable people’s opinions are.</p> <p>In a way, this reminds me of a time a while back when the EPA was soliciting “expert opinion” about the health effects of outdoor air pollution, as if that were a reasonable substitute for collecting actual data on the topic. 
At least it cost less money–just the price of a conference call.</p> <p>There’s a version of this playing out in the health tech market right now. Companies like <a href="http://simplystatistics.org/2015/10/28/discussion-of-the-theranos-controversy-with-elizabeth-matsui/">Theranos</a> and 23andMe are selling health products that they claim are better than some current benchmark. In particular, Theranos claims its blood tests are accurate when only using a tiny sample of blood. Is this claim true or not? No one outside Theranos knows for sure, but we can look to the financial markets.</p> <p>Theranos can point to the marketplace and show that people are willing to pay for its products. Indeed, the $9 billion valuation of the private company is another indicator that people…highly value the company. But ultimately, <em>we still don’t know if their blood tests are accurate</em> because we don’t have any data. If we were to go by the financial markets alone, we would necessarily conclude that their tests are good, because why else would anyone invest so much money in the company?</p> <p>I think there may be a role to play for prediction markets in science, but I’m not sure discovering the truth about nature is one of them.</p> Biostatistics: It's not what you think it is 2015-11-09T10:00:20+00:00 http://simplystats.github.io4415 <p><a href="http://www.hsph.harvard.edu/biostatistics/">My department</a> recently sent me on a recruitment trip for our graduate program. I had the opportunity to chat with undergrads interested in pursuing a career related to data analysis. I found that several did not know about the existence of Departments of <em>Biostatistics</em> and most of the rest <em>thought</em> <em>Biostatistics</em> was the study of clinical trials.
We <a href="http://simplystatistics.org/2012/08/14/statistics-statisticians-need-better-marketing/">have</a> <a href="http://simplystatistics.org/2011/11/02/we-need-better-marketing/">posted</a> on the need for better marketing for Statistics, but Biostatistics needs it even more. So this post is for students considering a career as applied statisticians or data scientists and who are considering PhD programs.</p> <p>There are dozens of Biostatistics departments and most run PhD programs. You may have never heard of it because they are usually in schools that undergrads don’t regularly frequent: Public Health and Medicine. However, they are very active in research and teaching graduate students. In fact, the 2014 US News &amp; World Report <a href="http://US News and R">ranking of Statistics Departments</a> includes three Biostat departments in the top five spots. Although clinical trials are a popular area of interest in these departments, there are now many other areas of research. With so many fields of science shifting to data intensive research, Biostatistics has adapted to work in these areas. Today pretty much any Biostat department will have people working on projects related to genetics, genomics, computational biology, electronic medical records, neuroscience, environmental sciences, epidemiology, health-risk analysis, and clinical decision making. Through collaborations, academic biostatisticians have early access to the cutting edge datasets produced by public health scientists and biomedical researchers. Our research usually revolves around either developing statistical methods that are used by researchers working in these fields or working directly with a collaborator in data-driven discovery.</p> <p><strong>How is it different from Statistics? </strong>In the grand scheme of things, they are not very different. As implied by the name, Biostatisticians focus on data related to biology while statisticians tend to be more general.
However, the underlying theory and skills we learn are similar. In my view, the major difference is that Biostatisticians, in general, tend to be more interested in data and the subject matter, while in Statistics Departments more emphasis is given to the mathematical theory.</p> <p><strong>What type of job can I get with a PhD in Biostatistics? </strong><a href="http://fortune.com/2015/04/27/best-worst-graduate-degrees-jobs/">A well paying one</a>. And you will have many options to choose from. Our graduates tend to go to academia, industry or government. Also, the <strong>Bio </strong>in the name does not keep our graduates from landing non-bio related jobs, such as in high tech. The reason for this is that the training our students receive and what they learn from research experiences can be widely applied to data analysis challenges.</p> <p><strong>How should I prepare if I want to apply to a PhD program? </strong>First you need to decide if you are going to like it. One way to do this is to participate in one of the <a href="http://www.nhlbi.nih.gov/research/training/summer-institute-biostatistics-t15">summer programs in Biostatistics</a> where you get a glimpse of what we do. My department runs <a href="http://www.hsph.harvard.edu/biostatistics/diversity/summer-program/">one of these as well</a>. However, as an undergrad I would mainly focus on courses. Undergraduate research experiences are a good way to get an idea of what it’s like, but it is difficult to do real research unless you can set aside several hours a week for several consecutive months. This is difficult as an undergrad because you have to make sure to do well in your courses, prepare for the GRE, and get a solid mathematical and computing foundation in order to conduct research later. This is why these programs are usually in the summer.</p> <p>If you decide to apply to a PhD program, I recommend you take advanced math courses such as Real Analysis and Matrix Algebra. If you plan to develop software for complex datasets, I recommend CS courses that cover algorithms and optimization. Note that programming skills are not the same thing as the theory taught in these CS courses. Programming skills in R will serve you well if you plan to analyze data regardless of what academic route you follow.
Python and a low-level language such as C++ are more powerful languages that many biostatisticians use these days.</p> <p>I think the demand for well-trained researchers who can make sense of data will continue to be on the rise. If you want a fulfilling job where you analyze data for a living, you should consider a PhD in Biostatistics.</p> <p> </p> Not So Standard Deviations: Episode 4 - A Gajillion Time Series 2015-11-07T11:46:49+00:00 http://simplystats.github.io4413 <p>Episode 4 of Not So Standard Deviations is hot off the audio editor. In this episode Hilary first explains to me what the heck DevOps is and then we talk about the statistical challenges in detecting rare events in an enormous set of time series data. There’s also some discussion of Ben and Jerry’s and the t-test, so you’ll want to hang on for that.</p> <p>Notes:</p> <ul> <li><a href="https://goo.gl/259VKI">Nobody Loves Graphite Anymore</a></li> <li><a href="http://goo.gl/zB7wM9">A response</a></li> <li><a href="https://goo.gl/7PgLKY">Why Gosset is awesome</a></li> </ul> <p> </p> How I decide when to trust an R package 2015-11-06T13:41:02+00:00 http://simplystats.github.io4409 <p>One thing that I’ve given a lot of thought to recently is the process that I use to decide whether I trust an R package or not. Kasper Hansen took a break from <a href="https://twitter.com/KasperDHansen/status/657589509975076864">trolling me</a> <a href="https://twitter.com/KasperDHansen/status/621315346633519104">on Twitter</a> to talk about how he trusts packages on Github less than packages that are on CRAN and particularly Bioconductor. He made a couple of points that I think are very relevant. First, that having a package on CRAN/Bioconductor raises trust in that package:</p> <blockquote class="twitter-tweet" width="550"> <p lang="en" dir="ltr"> .<a href="https://twitter.com/michaelhoffman">@michaelhoffman</a> But it's not on Bioconductor or CRAN. This decreases trust substantially. 
</p> <p> &mdash; Kasper Daniel Hansen (@KasperDHansen) <a href="https://twitter.com/KasperDHansen/status/659777449098637312">October 29, 2015</a> </p> </blockquote> <p>The primary reason is that Bioc/CRAN demonstrate something about the developer’s willingness to do the boring but critically important parts of package development like documentation, vignettes, minimum coding standards, and being sure that their code isn’t just a rehash of something else. The other big point Kasper made was the difference between a repository - which is user oriented and should provide certain guarantees - and Github - which is a developer platform and makes things easier/better for developers but doesn’t have a user guarantee system in place.</p> <blockquote class="twitter-tweet" width="550"> <p lang="en" dir="ltr"> .<a href="https://twitter.com/StrictlyStat">@StrictlyStat</a> CRAN is a repository, not a development platform. It is user oriented, not developer oriented. GH is the reverse. </p> <p> &mdash; Kasper Daniel Hansen (@KasperDHansen) <a href="https://twitter.com/KasperDHansen/status/661746848437243904">November 4, 2015</a> </p> </blockquote> <p>This discussion got me thinking about when/how I depend on R packages and how I make that decision. The scenarios where I depend on R packages are:</p> <ol> <li>Quick and dirty analyses for myself</li> <li>Shareable data analyses that I hope are reproducible</li> <li>As dependencies of R packages I maintain</li> </ol> <p>As you move from 1-3 it is more and more of a pain if the package I’m depending on breaks. If it is just something I was doing for fun, it’s not that big a deal. But if it means I have to rewrite/recheck/rerelease my R package then that is a much bigger headache.</p> <p>So my scale for how stringent I am about relying on packages varies by the type of activity, but what are the criteria I use to measure how trustworthy a package is? 
For me, the criteria are in this order:</p> <ol> <li><strong>People prior </strong></li> <li><strong>Forced competence</strong></li> <li><strong>Indirect data</strong></li> </ol> <p>I’ll explain each criterion in a minute, but the main purpose of using these criteria is (a) to ensure that I’m using a package that works and (b) to ensure that if the package breaks I can trust it will be fixed or at least I can get some help from the developer.</p> <p><strong>People prior</strong></p> <p>The first thing I do when I look at a package I might depend on is look at who the developer is. If that person is someone I know has developed widely used, reliable software and who quickly responds to requests/feedback then I immediately trust the package. I have a list of people like <a href="https://en.wikipedia.org/wiki/Brian_D._Ripley">Brian</a>, or <a href="https://github.com/hadley">Hadley,</a> or <a href="https://github.com/jennybc">Jenny</a>, or <a href="http://rafalab.dfci.harvard.edu/index.php/software-and-data">Rafa</a>, who could post their package just as a link to their website and I would trust it. It turns out almost all of these folks end up putting their packages on CRAN/Bioconductor anyway. But even if they didn’t I assume that the reason is either (a) the package is very new or (b) they have a really good reason for not distributing it through the normal channels.</p> <p><strong>Forced competence</strong></p> <p>For people who I don’t know about or whose software I’ve never used, I have very little confidence in the package a priori. This is because there are a ton of people developing R packages now with highly variable levels of commitment to making them work. So as a placeholder for all the variables I don’t know about them, I use the repository they choose as a surrogate. 
My personal prior on the trustworthiness of a package from someone I don’t know goes something like:</p> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/11/Screen-Shot-2015-11-06-at-1.25.01-PM.png"><img class="aligncenter wp-image-4410 size-full" src="http://simplystatistics.org/wp-content/uploads/2015/11/Screen-Shot-2015-11-06-at-1.25.01-PM.png" alt="Screen Shot 2015-11-06 at 1.25.01 PM" width="843" height="197" srcset="http://simplystatistics.org/wp-content/uploads/2015/11/Screen-Shot-2015-11-06-at-1.25.01-PM-300x70.png 300w, http://simplystatistics.org/wp-content/uploads/2015/11/Screen-Shot-2015-11-06-at-1.25.01-PM-260x61.png 260w, http://simplystatistics.org/wp-content/uploads/2015/11/Screen-Shot-2015-11-06-at-1.25.01-PM.png 843w" sizes="(max-width: 843px) 100vw, 843px" /></a></p> <p>This prior is based on the idea of forced competence. In general, you have to do more to get a package approved on Bioconductor than on CRAN (for example you have to have a good vignette) and you have to do more to get a package on CRAN (pass R CMD CHECK and survive the review process) than to put it on Github.</p> <p>This prior isn’t perfect, but it does tell me something about how much the person cares about their package. If they go to the work of getting it on CRAN/Bioc, then at least they cared enough to document it. They are at least forced to be minimally competent - at least at the time of submission and enough for the packages to still pass checks.</p> <p><strong>Indirect data</strong></p> <p>After I’ve applied my priors I then typically look at the data. For Bioconductor I look at the badges, like how downloaded it is, whether it passes the checks, and how well it is covered by tests. I’m already inclined to trust it a bit since it is on that platform, but I use the data to adjust my prior a bit. For CRAN I might look at the <a href="http://cran-logs.rstudio.com/">download stats</a> provided by Rstudio. 
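</p>

<p>For example, a quick way to pull in that download signal is the <code>cranlogs</code> package, which queries the RStudio CRAN log server. The sketch below keeps the network call commented out so it runs offline; the counts at the end are made up for illustration:</p>

```r
# Summarize a package's daily CRAN download counts as one "indirect data"
# signal. With a network connection, the cranlogs package can fetch the
# real counts (call shown but commented out below).
download_signal <- function(counts) {
  c(total = sum(counts), daily_mean = mean(counts))
}

# Real data, if you are online:
# library(cranlogs)
# dl <- cran_downloads("ggplot2", when = "last-month")
# download_signal(dl$count)

# Toy three-day history (made-up numbers):
download_signal(c(1200, 950, 1400))
```

<p>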
The interesting thing is that as John Muschelli points out, Github actually has the most indirect data available for a package:</p> <blockquote class="twitter-tweet" width="550"> <p lang="en" dir="ltr"> .<a href="https://twitter.com/KasperDHansen">@KasperDHansen</a> Flipside: CRAN has no issue pages, stars/ratings, outdated limits on size, and limited development cycle/turnover. </p> <p> &mdash; John Muschelli (@StrictlyStat) <a href="https://twitter.com/StrictlyStat/status/661746348409114624">November 4, 2015</a> </p> </blockquote> <p>If I’m going to use a package that is on Github from a person who isn’t on my prior list of people to trust then I look at a few things. The number of stars/forks/watchers is one thing that is a quick and dirty estimate of how used a package is. I also look very carefully at how many commits the person has submitted to both the package in question and in general all other packages over the last couple of months. If the person isn’t actively developing either the package or anything else on Github, that is a bad sign. I also look to see how quickly they have responded to issues/bug reports on the package in the past if possible. One idea I haven’t used but I think is a good one is to submit an issue for a trivial change to the package and see if I get a response very quickly. Finally I look to see if they have some demonstration that their package works across platforms (say with a <a href="https://travis-ci.org/">travis badge</a>). If the package is highly starred, frequently maintained, has all issues responded to and up to date, and passes checks on all platforms, then that data might overwhelm my prior and I’d go ahead and trust the package.</p> <p><strong>Summary</strong></p> <p>In general one of the best things about the R ecosystem is being able to rely on other packages so that you don’t have to write everything from scratch. But there is a hard balance to strike with keeping the dependency list small. 
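</p>

<p>To caricature the whole process in code: start from a repository-based prior and let the Github signals adjust it. The numeric weights below are invented purely for illustration - they are not from any real analysis:</p>

```r
# Toy scoring of package trustworthiness: a repository-based prior plus
# adjustments for indirect data (stars, recent commits, CI checks).
# All weights and thresholds are invented for illustration.
trust_score <- function(repo = c("bioconductor", "cran", "github"),
                        stars = 0, recent_commits = 0, ci_passing = FALSE) {
  repo <- match.arg(repo)
  prior <- c(bioconductor = 0.8, cran = 0.6, github = 0.2)[[repo]]
  evidence <- 0.1 * (stars > 50) +
    0.1 * (recent_commits > 10) +
    0.2 * ci_passing
  min(prior + evidence, 1)
}

# A heavily starred, actively developed Github package with passing CI
# can overwhelm the low prior and score like a typical CRAN package:
trust_score("github", stars = 200, recent_commits = 30, ci_passing = TRUE)
```

<p>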
One way I maintain this balance is to use the strategy I’ve outlined to pick dependencies I can trust and worry less about.</p> The Statistics Identity Crisis: Am I a Data Scientist 2015-10-30T14:21:08+00:00 http://simplystats.github.io4404 <p>The joint ASA/Simply Statistics webinar on the statistics identity crisis is now live!</p> Faculty/postdoc job opportunities in genomics across Johns Hopkins 2015-10-30T10:33:06+00:00 http://simplystats.github.io4400 <p>It’s pretty exciting to be in genomics at Hopkins right now with three new Bloomberg professors in genomics areas, a ton of stellar junior faculty, and a really fun group of students/postdocs. If you want to get in on the action here is a non-comprehensive list of great opportunities.</p> <h2 id="span-styletext-decoration-underlinestrongfaculty-jobsstrongspan"><span style="text-decoration: underline;"><strong>Faculty Jobs</strong></span></h2> <p><strong>Job: </strong>Multiple tenure track faculty positions in all areas including in genomics</p> <p><strong>Department: </strong> Biostatistics</p> <p><strong>To apply</strong>: <a href="http://www.jhsph.edu/departments/biostatistics/_docs/faculty-ad-2016-combined-large-final.pdf">http://www.jhsph.edu/departments/biostatistics/_docs/faculty-ad-2016-combined-large-final.pdf</a></p> <p><strong>Deadline:</strong> Review ongoing</p> <p><strong>Job:</strong> Tenure track position in data intensive biology</p> <p><strong>Department: </strong> Biology</p> <p><strong>To apply</strong>: <a href="http://apply.interfolio.com/31146">http://apply.interfolio.com/31146</a></p> <p><strong>Deadline: </strong>Nov 1st and ongoing</p> <p><strong>Job:</strong> Tenure track positions in bioinformatics, with focus on proteomics or sequencing data analysis</p> <p><strong>Department: </strong> Oncology Biostatistics</p> <p><strong>To apply</strong>: <a href="https://www.research-it.onc.jhmi.edu/DBB/PhD_Statistician.pdf">https://www.research-it.onc.jhmi.edu/DBB/PhD_Statistician.pdf</a></p> 
<p><strong>Deadline:</strong> Review ongoing</p> <p> </p> <h2 id="span-styletext-decoration-underlinestrongpostdoc-jobsstrongspan"><span style="text-decoration: underline;"><strong>Postdoc Jobs</strong></span></h2> <p><strong>Job:</strong> Postdoc(s) in statistical methods/software development for RNA-seq</p> <p><strong>Employer: </strong> Jeff Leek</p> <p><strong>To apply</strong>: email Jeff (<a href="http://jtleek.com/jobs/">http://jtleek.com/jobs/</a>)</p> <p><strong>Deadline:</strong> Review ongoing</p> <p><strong>Job:</strong> Data scientist for integrative genomics in the human brain (MS/PhD)</p> <p><strong>Employer: </strong> Andrew Jaffe</p> <p><strong>To apply</strong>: email Andrew (<a href="http://www.aejaffe.com/jobs.html">http://www.aejaffe.com/jobs.html</a>)</p> <p><strong>Deadline:</strong> Review ongoing</p> <p><strong>Job:</strong> Research associate for genomic data processing and analysis (BA+)</p> <p><strong>Employer: </strong> Andrew Jaffe</p> <p><strong>To apply</strong>: email Andrew (<a href="http://www.aejaffe.com/jobs.html">http://www.aejaffe.com/jobs.html</a>)</p> <p><strong>Deadline:</strong> Review ongoing</p> <p><strong>Job:</strong> PhD developing scalable software and algorithms for analyzing sequencing data</p> <p><strong>Employer: </strong> Ben Langmead</p> <p><strong>To apply</strong>: http://www.cs.jhu.edu/graduate-studies/phd-program/</p> <p><strong>Deadline:</strong> See site</p> <p><strong>Job:</strong> Postdoctoral researcher developing scalable software and algorithms for analyzing sequencing data</p> <p><strong>Employer: </strong> Ben Langmead</p> <p><strong>To apply</strong>: email Ben (<a href="http://www.langmead-lab.org/open-positions/">http://www.langmead-lab.org/open-positions/</a>)</p> <p><strong>Deadline:</strong> Review ongoing</p> <p><strong>Job:</strong> Postdoctoral researcher developing algorithms for challenging problems in large-scale genomics: whole-genome assembly, RNA-seq analysis, and microbiome 
analysis</p> <p><strong>Employer: </strong> Steven Salzberg</p> <p><strong>To apply</strong>: email Steven (<a href="http://salzberg-lab.org/">http://salzberg-lab.org/</a>)</p> <p><strong>Deadline:</strong> Review ongoing</p> <p><strong>Job:</strong> Research associate for genomic data processing and analysis (BA+) in cancer</p> <p><strong>Employer: </strong> Luigi Marchionni (with Don Geman)</p> <p><strong>To apply</strong>: email Luigi (<a href="http://luigimarchionni.org/">http://luigimarchionni.org/</a>)</p> <p><strong>Deadline:</strong> Review ongoing</p> <p><strong>Job: </strong>Postdoctoral researcher developing algorithms for biomarker development and precision medicine applications in cancer</p> <p><strong>Employer: </strong> Luigi Marchionni (with Don Geman)</p> <p><strong>To apply</strong>: email Luigi (<a href="http://luigimarchionni.org/">http://luigimarchionni.org/</a>)</p> <p><strong>Deadline:</strong> Review ongoing</p> <p><strong>Job:</strong> Postdoctoral researcher developing methods in machine learning, genomics, and regulatory variation</p> <p><strong>Employer: </strong> Alexis Battle</p> <p><strong>To apply</strong>: email Alexis (<a href="http://battlelab.jhu.edu/join_us.html">http://battlelab.jhu.edu/join_us.html</a>)</p> <p><strong>Deadline:</strong> Review ongoing</p> <p><strong>Job: </strong>Postdoctoral fellow with interests in biomarker discovery for Alzheimer’s disease</p> <p><strong>Employer: </strong> Madhav Thambisetty / Ingo Ruczinski</p> <p><strong>To apply</strong>: <a href="http://www.alzforum.org/jobs/postdoctoral-research-fellow-alzheimers-disease-biomarkers"> http://www.alzforum.org/jobs/postdoctoral-research-fellow-alzheimers-disease-biomarkers</a></p> <p><strong>Deadline:</strong> Review ongoing</p> <p><strong>Job: </strong>Postdoctoral positions for research at the interface of statistical genetics, precision medicine and big data</p> <p><strong>Employer: </strong> Nilanjan Chatterjee</p> <p><strong>To apply</strong>: <a 
href="http://www.jhsph.edu/departments/biostatistics/_docs/postdoc-ad-chatterjee.pdf">http://www.jhsph.edu/departments/biostatistics/_docs/postdoc-ad-chatterjee.pdf</a></p> <p><strong>Deadline:</strong> Review ongoing</p> <p><strong>Job: </strong>Postdoctoral research developing algorithms and software for time course pattern detection in genomics data</p> <p><strong>Employer: </strong> Elana Fertig</p> <p><strong>To apply</strong>: email Elana (ejfertig@jhmi.edu)</p> <p><strong>Deadline:</strong> Review ongoing</p> <p><strong>Job: </strong>Postdoctoral fellow to develop novel methods for large-scale DNA and RNA sequence analysis related to human and/or plant genetics, such as developing methods for discovering structural variations in cancer or for assembling and analyzing large complex plant genomes.</p> <p><strong>Employer: </strong> Mike Schatz</p> <p><strong>To apply</strong>: email Mike (<a href="http://schatzlab.cshl.edu/apply/">http://schatzlab.cshl.edu/apply/</a>)</p> <p><strong>Deadline:</strong> Review ongoing</p> <h2 id="span-styletext-decoration-underlinestrongstudentsstrongspan"><span style="text-decoration: underline;"><strong>Students</strong></span></h2> <p>We are all always on the hunt for good Ph.D. students. At Hopkins students are admitted to specific departments. So if you find a faculty member you want to work with, you can apply to their department. Here are the application details for the various departments admitting students to work on genomics:<a href="https://ccb.jhu.edu/students.shtml"> https://ccb.jhu.edu/students.shtml</a></p> <p> </p> <p> </p> <p> </p> The statistics identity crisis: am I really a data scientist? 
2015-10-29T13:32:13+00:00 http://simplystats.github.io4396 <p> </p> <p> </p> <p> </p> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/10/crisis.png"><img class="aligncenter wp-image-4397" src="http://simplystatistics.org/wp-content/uploads/2015/10/crisis-300x75.png" alt="crisis" width="508" height="127" srcset="http://simplystatistics.org/wp-content/uploads/2015/10/crisis-300x75.png 300w, http://simplystatistics.org/wp-content/uploads/2015/10/crisis-260x65.png 260w, http://simplystatistics.org/wp-content/uploads/2015/10/crisis.png 720w" sizes="(max-width: 508px) 100vw, 508px" /></a></p> <p> </p> <p><em>Tl;dr: We will host a Google Hangout of our popular JSM session October 30th 2-4 PM EST. </em></p> <p> </p> <p>I organized a session at JSM 2015 called <em>“The statistics identity crisis: am I really a data scientist?”</em> The session turned out to be pretty popular:</p> <blockquote class="twitter-tweet" width="550"> <p lang="en" dir="ltr"> Packed room of statisticians with identity crises at <a href="https://twitter.com/hashtag/JSM2015?src=hash">#JSM2015</a> session: are we really data scientists? <a href="http://t.co/eLsGosoTCt">pic.twitter.com/eLsGosoTCt</a> </p> <p> &mdash; Dr Ruth Etzioni (@retzioni) <a href="https://twitter.com/retzioni/status/631134032357502978">August 11, 2015</a> </p> </blockquote> <p>but it turns out not everyone fit in the room:</p> <blockquote class="twitter-tweet" width="550"> <p lang="en" dir="ltr"> This is the closest I can get to <a href="https://twitter.com/statpumpkin">@statpumpkin</a>'s talk. <a href="https://twitter.com/hashtag/jsm2015?src=hash">#jsm2015</a> still had no clue how to predict session attendance. 
<a href="http://t.co/gTb4OqdAo3">pic.twitter.com/gTb4OqdAo3</a> </p> <p> &mdash; sandy griffith (@sgrifter) <a href="https://twitter.com/sgrifter/status/631134590229442560">August 11, 2015</a> </p> </blockquote> <p>Thankfully, Steve Pierson at the ASA had the awesome idea to re-run the session for people who couldn’t be there. So we will be hosting a Google Hangout with the following talks:</p> <table width="100%" cellspacing="0" cellpadding="4" bgcolor="white"> <tr> <td align="right" valign="top" width="110"> </td> <td> <a href="https://www.amstat.org/meetings/jsm/2015/onlineprogram/AbstractDetails.cfm?abstractid=314339">'Am I a Data Scientist?': The Applied Statistics Student's Identity Crisis</a> — <b>Alyssa Frazee, Stripe</b> </td> </tr> <tr> <td align="right" valign="top" width="110"> </td> <td> <a href="https://www.amstat.org/meetings/jsm/2015/onlineprogram/AbstractDetails.cfm?abstractid=314376">How Industry Views Data Science Education in Statistics Departments</a> — <b>Chris Volinsky, AT&amp;T</b> </td> </tr> <tr> <td align="right" valign="top" width="110"> </td> <td> <a href="https://www.amstat.org/meetings/jsm/2015/onlineprogram/AbstractDetails.cfm?abstractid=314414">Evaluating Data Science Contributions in Teaching and Research</a> — <b>Lance Waller, Emory University</b> </td> </tr> <tr> <td align="right" valign="top" width="110"> </td> <td> <a href="https://www.amstat.org/meetings/jsm/2015/onlineprogram/AbstractDetails.cfm?abstractid=314641">Teach Data Science and They Will Come</a> — <b>Jennifer Bryan, The University of British Columbia</b> </td> </tr> </table> <p>You can watch it on Youtube or Google Plus. Here is the link:</p> <p>https://plus.google.com/events/chuviltukohj2inbqueap9h7228</p> <p>The session will be held October 30th (tomorrow!) from 2-4PM EST. 
You can watch it live and discuss the talks using the hashtag <a href="https://twitter.com/search?q=%23jsm2015">#JSM2015</a> or you can watch later as the video will remain on Youtube.</p> Discussion of the Theranos Controversy with Elizabeth Matsui 2015-10-28T14:54:50+00:00 http://simplystats.github.io4391 <p>Theranos is a Silicon Valley diagnostic testing company that has been in the news recently. The story of Theranos has fascinated me because I think it represents a perfect collision of the tech startup culture and the health care culture and how combining them together can generate unique problems.</p> <p>I talked with Elizabeth Matsui, a Professor of Pediatrics in the Division of Allergy and Immunology here at Johns Hopkins, to discuss Theranos, the realities of diagnostic testing, and the unique challenges that a health-tech startup faces with respect to doing good science and building products people want to buy.</p> <p>Notes:</p> <ul> <li>Original <a href="http://www.wsj.com/articles/theranos-has-struggled-with-blood-tests-1444881901">Wall Street Journal story</a> on Theranos (paywalled)</li> <li>Related stories in <a href="http://www.wired.com/2015/10/theranos-scandal-exposes-the-problem-with-techs-hype-cycle/">Wired</a> and NYT’s <a href="http://www.nytimes.com/2015/10/28/business/dealbook/theranos-under-fire.html">Dealbook</a> (not paywalled)</li> <li>Theranos <a href="https://www.theranos.com/news/posts/custom/theranos-facts">response</a> to WSJ story</li> </ul> Not So Standard Deviations: Episode 3 - Gilmore Girls 2015-10-24T23:17:18+00:00 http://simplystats.github.io4389 <p>I just uploaded Episode 3 of <a href="https://soundcloud.com/nssd-podcast">Not So Standard Deviations</a> so check your feeds. In this episode Hilary and I talk about our jobs and the life of the data scientist in both academia and the tech industry. 
It turns out that they’re not as different as I would have thought.</p> <p><a href="https://api.soundcloud.com/tracks/229957578/download?client_id=02gUJC0hH2ct1EGOcYXQIzRFU91c72Ea&amp;oauth_token=1-138878-174789515-deb24181d01af">Download the audio file for this episode</a>.</p> We need a statistically rigorous and scientifically meaningful definition of replication 2015-10-20T10:05:22+00:00 http://simplystats.github.io4365 <p>Replication and confirmation are indispensable concepts that help define scientific facts. However, the way in which we reach scientific consensus on a given finding is rather complex. Although <a href="http://simplystatistics.org/2015/06/24/how-public-relations-and-the-media-are-distorting-science/">some press releases try to convince us otherwise</a>, rarely is one publication enough. In fact, most published results go unnoticed and no attempts to replicate them are made. These are not debunked either; they simply get discarded to the dustbin of history. The very few results that garner enough attention for others to spend time and energy on them are assessed by an ad-hoc process involving a community of peers. The assessments are usually a combination of deductive reasoning, direct attempts at replication, and indirect checks obtained by attempting to build on the result in question. This process eventually leads to a result either being accepted by consensus or not. For particularly important cases, an official scientific consensus report may be commissioned by a national academy or an established scientific society. Examples of results that have become part of the scientific consensus in this way include smoking causing lung cancer, HIV causing AIDS, and climate change being caused by humans. In contrast, the published result that vaccines cause autism has been thoroughly debunked by several follow up studies. In none of these four cases was a simple definition of replication used to confirm or falsify a result. 
The same is true for most results for which there is consensus. Yet science moves on, and continues to be an incomparable force at improving our quality of life.</p> <p>Regulatory agencies, such as the FDA, are an exception since they clearly spell out a <a href="http://www.fda.gov/downloads/Drugs/.../Guidances/ucm078749.pdf">definition</a> of replication. For example, to approve a drug they may require two independent clinical trials, adequately powered, to show statistical significance at some predetermined level. They also require a large enough effect size to justify the cost and potential risks associated with treatment. This is not to say that FDA approval is equivalent to scientific consensus, but they do provide a clear-cut definition of replication.</p> <p>In response to a growing concern over a <em><a href="http://www.nature.com/news/reproducibility-1.17552">reproducibility crisis</a></em>, projects such as the <a href="http://osc.centerforopenscience.org/">Open Science Collaboration</a> have begun to systematically replicate published results. In a <a href="http://simplystatistics.org/2015/10/01/a-glass-half-full-interpretation-of-the-replicability-of-psychological-science/">recent post</a>, Jeff described one of their <a href="http://www.sciencemag.org/content/349/6251/aac4716">recent papers</a> on estimating the reproducibility of psychological science (they really mean replicability; see note below). This Science paper led to lay press reports with eye-catching headlines such as “only 36% of psychology experiments replicate”. Note that the 36% figure comes from a definition of replication that mimics the definition used by regulatory agencies: results are considered replicated if a p-value &lt; 0.05 was reached in both the original study and the replicated one. Unfortunately, this definition ignores both effect size and statistical power. 
If power is not controlled, then the expected proportion of correct findings that replicate can be quite small. For example, if I try to replicate the smoking-causes-lung-cancer result with a sample size of 5, there is a good chance it will not replicate. In his post, Jeff notes that for several of the studies that did not replicate, the 95% confidence intervals intersected. So should intersecting confidence intervals be our definition of replication? This too has a flaw since it favors imprecise studies with very large confidence intervals. If effect size is ignored, we may waste our time trying to replicate studies reporting practically meaningless findings. Defining replication for published studies is, in general, not as easy as for highly controlled clinical trials. However, one clear improvement over what is currently being done is to consider statistical power and effect sizes.</p> <p>To further illustrate this, let’s consider a very concrete example with real life consequences. Imagine a loved one has a disease with high mortality rates and asks for your help in evaluating the scientific evidence on treatments. Four experimental drugs are available, all with promising clinical trials resulting in p-values &lt;0.05. However, a replication project redoes the experiments and finds that only the drug A and drug B studies replicate (p&lt;0.05). So which drug do you take? Let’s give a bit more information to help you decide. Here are the p-values for both original and replication trials:</p> <table style="width: 100%;"> <tr> <td> Drug </td> <td> Original </td> <td> Replication </td> <td> Replicated </td> </tr> <tr> <td> A </td> <td> 0.0001 </td> <td> 0.001 </td> <td> Yes </td> </tr> <tr> <td> B </td> <td> &lt;0.000001 </td> <td> 0.03 </td> <td> Yes </td> </tr> <tr> <td> C </td> <td> 0.03 </td> <td> 0.06 </td> <td> No </td> </tr> <tr> <td> D </td> <td> &lt;0.000001 </td> <td> 0.10 </td> <td> No </td> </tr> </table> <p>Which drug would you take now? 
The information I have provided is based on p-values and therefore is missing a key piece of information: the effect sizes. Below I show the confidence intervals for the four original studies (left) and the four replication studies (right). Note that except for drug B, all confidence intervals intersect. In light of the figure below, which one would you choose?</p> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/10/replication.png"><img class=" wp-image-4368 alignright" src="http://simplystatistics.org/wp-content/uploads/2015/10/replication.png" alt="replication" width="359" height="338" srcset="http://simplystatistics.org/wp-content/uploads/2015/10/replication-300x283.png 300w, http://simplystatistics.org/wp-content/uploads/2015/10/replication-212x200.png 212w, http://simplystatistics.org/wp-content/uploads/2015/10/replication.png 617w" sizes="(max-width: 359px) 100vw, 359px" /></a></p> <p>I would be inclined to go with drug D because it has a large effect size, a small p-value, and the replication experiment effect estimate fell inside a 95% confidence interval. I would definitely not go with A since it provides marginal benefits, even if the trial found a statistically significant effect and was replicated. So the p-value-based definition of replication is worthless from a practical standpoint.</p> <p>It seems that before continuing the debate over replication, and certainly before declaring that we are in a <a href="http://www.nature.com/news/reproducibility-1.17552">reproducibility crisis</a>, we need a statistically rigorous and scientifically meaningful definition of replication. This definition does not necessarily need to be dichotomous (replicated or not) and it will probably require more than one replication experiment and more than one summary statistic: one for effect size and one for uncertainty. 
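</p>

<p>The role of power is easy to see in a small simulation. With a true effect of half a standard deviation (all numbers below are invented for illustration), a replication attempt with 5 subjects per arm will usually miss p &lt; 0.05, while one with 100 per arm will reach it most of the time:</p>

```r
# Fraction of simulated replication attempts on a TRUE effect that
# "succeed" under the p < 0.05 definition. The effect size, sample
# sizes, and cutoff are made up for illustration.
set.seed(20151020)
replication_rate <- function(n_per_arm, effect = 0.5, sims = 2000) {
  pvals <- replicate(sims, {
    t.test(rnorm(n_per_arm, mean = effect), rnorm(n_per_arm))$p.value
  })
  mean(pvals < 0.05)
}

replication_rate(5)    # underpowered: the true effect usually "fails to replicate"
replication_rate(100)  # adequately powered: it replicates most of the time
```

<p>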
In the meantime, we should be careful not to dismiss the current scientific process, which seems to be working rather well at either ignoring or debunking false positive results while producing useful knowledge and discovery.</p> <hr /> <p>Footnote on reproducible versus replication: As Jeff pointed out, the cited Open Science Collaboration paper is about replication, not reproducibility. A study is considered reproducible if an independent researcher can recreate the tables and figures from the original raw data. Replication is not nearly as simple to define because it involves probability. To replicate the experiment it has to be performed again, with a different random sample and new set of measurement errors.</p> Theranos runs head first into the realities of diagnostic testing 2015-10-16T08:42:11+00:00 http://simplystats.github.io4359 <p>The Wall Street Journal has published a <a href="http://www.wsj.com/articles/theranos-has-struggled-with-blood-tests-1444881901">lengthy investigation</a> into the diagnostic testing company Theranos.</p> <blockquote> <p>The company offers more than 240 tests, ranging from cholesterol to cancer. It claims its technology can work with just a finger prick. Investors have poured more than400 million into Theranos, valuing it at 9 billion and her majority stake at more than half that. The 31-year-old Ms. Holmes’s bold talk and black turtlenecks draw comparisons to Apple<span class="company-name-type"> Inc.</span> cofounder Steve Jobs.</p> </blockquote> <p>If ever there were a warning sign, the comparison to Steve Jobs has got to be it.</p> <blockquote> <p>But Theranos has struggled behind the scenes to turn the excitement over its technology into reality. 
At the end of 2014, the lab instrument developed as the linchpin of its strategy handled just a small fraction of the tests then sold to consumers, according to four former employees.</p> <div class=" media-object wrap scope-web|mobileapps " data-layout="wrap "> One former senior employee says Theranos was routinely using the device, named Edison after the prolific inventor, for only 15 tests in December 2014. Some employees were leery about the machine’s accuracy, according to the former employees and emails reviewed by The Wall Street Journal. </div> <div class=" media-object wrap scope-web|mobileapps " data-layout="wrap "> </div> <div class=" media-object wrap scope-web|mobileapps " data-layout="wrap "> In a complaint to regulators, one Theranos employee accused the company of failing to report test results that raised questions about the precision of the Edison system. Such a failure could be a violation of federal rules for laboratories, the former employee said. </div> </blockquote> <div class=" media-object wrap scope-web|mobileapps " data-layout="wrap "> With these kinds of stories, it's always hard to tell whether there's reality here or it's just a bunch of axe grinding. But one thing that's for sure is that people are talking, and probably not for good reasons. </div> Minimal R Package Check List 2015-10-14T08:21:48+00:00 http://simplystats.github.io4355 <p>A little while back I had the pleasure of flying in a small Cessna with a friend and for the first time I got to see what happens in the cockpit with a real pilot. One thing I noticed was that basically you don’t lift a finger without going through some sort of check list. This starts before you even roll the airplane out of the hangar. 
It makes sense because flying is a pretty dangerous hobby and you want to prevent problems from occurring when you’re in the air.</p> <p>That experience got me thinking about what might be the minimal check list for building an R package, a somewhat less dangerous hobby. First off, much has changed (for the better) since I started making R packages and I wanted to have some clean documentation of the process, particularly with using RStudio’s tools. So I wiped off my installations of both R and RStudio and started from scratch to see what it would take to get someone to build their first R package.</p> <p>The list is basically a “pre-flight” list-–the presumption here is that you actually know the important details of building packages, but need to make sure that your environment is setup correctly so that you don’t run into errors or problems. I find this is often a problem for me when teaching students to build packages because I focus on the details of actually making the packages (i.e. DESCRIPTION files, Roxygen, etc.) 
and forget that way back when I actually configured my environment to do this.</p> <p><strong>Pre-flight Procedures for R Packages</strong></p> <ol> <li>Install most recent version of R</li> <li>Install most recent version of RStudio</li> <li>Open RStudio</li> <li>Install <strong>devtools</strong> package</li> <li>Click on Project –&gt; New Project… –&gt; New Directory –&gt; R package</li> <li>Enter package name</li> <li>Delete boilerplate code and “hello.R” file</li> <li>Goto “man” directory an delete “hello.Rd” file</li> <li>In File browser, click on package name to go to the top level directory</li> <li>Click “Build” tab in environment browser</li> <li>Click “Configure Build Tools…”</li> <li>Check “Generate documentation with Roxygen”</li> <li>Check “Build &amp; Reload” when Roxygen Options window opens –&gt; Click OK</li> <li>Click OK in Project Options window</li> </ol> <p>At this point, you’re clear to build your package, which obviously involves writing R code, Roxygen documentation, writing package metadata, and building/checking your package.</p> <p>If I’m missing a step or have too many steps, I’d like to hear about it. 
But I think this is the minimum number of steps you need to configure your environment for building R packages in RStudio.</p> <p>UPDATE: I’ve made some changes to the check list and will be posting future updates/modifications to my <a href="https://github.com/rdpeng/daprocedures/blob/master/lists/Rpackage_preflight.md">GitHub repository</a>.</p> Profile of Data Scientist Shannon Cebron 2015-10-03T09:32:20+00:00 http://simplystats.github.io4340 <p>The “This is Statistics” campaign has a nice <a href="http://thisisstatistics.org/interview-with-shannon-cebron-from-pegged-software/">profile of Shannon Cebron</a>, a data scientist working at the Baltimore-based Pegged Software.</p> <blockquote> <p><strong>What advice would you give to someone thinking of a career in data science?</strong></p> <p>Take some advanced statistics courses if you want to see what it’s like to be a statistician or data scientist. By that point, you’ll be familiar with enough statistical methods to begin solving real-world problems and understanding the power of statistical science. I didn’t realize I wanted to be a data scientist until I took more advanced statistics courses, around my third year as an undergraduate math major.</p> </blockquote> Not So Standard Deviations: Episode 2 - We Got it Under 40 Minutes 2015-10-02T09:00:29+00:00 http://simplystats.github.io4348 <p>Episode 2 of my podcast with Hilary Parker, <a href="https://soundcloud.com/nssd-podcast">Not So Standard Deviations</a>, is out! In this episode, we talk about user testing for statistical methods, navigating the Hadleyverse, the crucial significance of rename(), and the secret reason for creating the podcast (hint: it rhymes with “bee”). Also, I erroneously claim that <a href="http://www.stat.purdue.edu/~wsc/">Bill Cleveland</a> is <em>way</em> older than he actually is. 
Sorry Bill.</p> <p>In other news, <a href="https://itunes.apple.com/us/podcast/not-so-standard-deviations/id1040614570">we are finally on iTunes</a> so you can subscribe from there directly if you want (just search for “Not So Standard Deviations” or paste the link directly into your podcatcher.</p> <p><a href="https://api.soundcloud.com/tracks/226538106/download?client_id=02gUJC0hH2ct1EGOcYXQIzRFU91c72Ea&amp;oauth_token=1-138878-174789515-deb24181d01af">Download the audio file for this episode</a>.</p> <p>Notes:</p> <ul> <li><a href="http://www.sciencemag.org/content/229/4716/828.short">Bill Cleveland’s paper in Science</a>, on graphical perception, <strong>published in 1985</strong></li> <li><a href="https://www.eventbrite.com/e/statistics-making-a-difference-a-conference-in-honor-of-tom-louis-tickets-16248614042">TomFest</a></li> </ul> A glass half full interpretation of the replicability of psychological science 2015-10-01T10:00:53+00:00 http://simplystats.github.io4336 <p style="line-height: 18.0pt;"> <em>tl;dr: 77% of replication effects from the psychology replication study were in (or above) the 95% prediction interval based on the original effect size. This isn't perfect and suggests (a) there is still room for improvement, (b) the scientists who did the replication study are pretty awesome at replicating, (c) we need a better definition of replication that respects uncertainty but (d) the scientific sky isn't falling. We wrote this up in a <a href="http://arxiv.org/abs/1509.08968">paper on arxiv</a>; <a href="https://github.com/jtleek/replication_paper">the code is here.</a> </em> </p> <p style="line-height: 18.0pt;"> <span style="font-size: 12.0pt; font-family: Georgia; color: #333333;">A week or two ago a paper came out in Science on<span class="apple-converted-space"> </span><a href="http://www.sciencemag.org/content/349/6251/aac4716">Estimating the reproducibility of psychological science</a>. 
The basic behind the study was to take a sample of studies that appeared in a particular journal in 2008 and try to replicate each of these studies. Here I'm using the definition that reproducibility is the ability to recalculate all results given the raw data and code from a study and replicability is the ability to re-do the study and get a consistent result. </span> </p> <p style="line-height: 18.0pt;"> <span style="font-size: 12.0pt; font-family: Georgia; color: #333333;">The paper is pretty incredible and the authors did an amazing job of going back to the original sources and trying to be faithful to the original study designs. I have to admit when I first heard about the study design I was incredibly pessimistic about the results (I suppose grouchy is a natural default state for many statisticians –especially those with sleep deprivation). I mean 2008 was well before the push toward reproducibility had really taken off (Biostatistics was one of the first journals to adopt a policy on reproducible research and that didn't happen <a href="http://biostatistics.oxfordjournals.org/content/10/3/405.full">until 2009</a>). More importantly, the student researchers from those studies had possibly moved on, study populations may change, there could be any number of minor variations in the study design and so forth. I thought the chances of getting any effects in the same range was probably pretty low. </span> </p> <p style="line-height: 18.0pt;"> So when the results were published I was pleasantly surprised. I wasn’t the only one: </p> <blockquote class="twitter-tweet" width="550"> <p lang="en" dir="ltr"> Someone has to say it, but this plot shows that science is, in fact, working. <a href="http://t.co/JUy10xHfbH">http://t.co/JUy10xHfbH</a> <a href="http://t.co/lJSx6IxPw2">pic.twitter.com/lJSx6IxPw2</a> </p> <p> &mdash; Roger D. 
Peng (@rdpeng) <a href="https://twitter.com/rdpeng/status/637009904289452032">August 27, 2015</a> </p> </blockquote> <blockquote class="twitter-tweet" width="550"> <p lang="en" dir="ltr"> Looks like psychologists are in a not-too-bad spot on the ROC curves of science (<a href="http://t.co/fPsesCn2yK">http://t.co/fPsesCn2yK</a>) <a href="http://t.co/9rAOdZWvzv">http://t.co/9rAOdZWvzv</a> </p> <p> &mdash; Joe Pickrell (@joe_pickrell) <a href="https://twitter.com/joe_pickrell/status/637304244538896384">August 28, 2015</a> </p> </blockquote> <p>But that was definitely not the prevailing impression that the paper left on social and mass media. A lot of the discussion around the paper focused on the <a href="https://github.com/jtleek/replication_paper/blob/gh-pages/in_the_media.md">idea that only 36% of the studies</a> had a p-value less than 0.05 in both the original and replication study. But many of the sample sizes were small and the effects were modest. So the first question I asked myself was, “Well what would we expect to happen if we replicated these studies?” The original paper measured replicability in several ways and tried hard to calibrate expected coverage of confidence intervals for the measured effects.</p> <p>With <a href="http://www.biostat.jhsph.edu/~rpeng/">Roger</a> and <a href="http://www.biostat.jhsph.edu/~prpatil/">Prasad</a> we tried a little different approach. 
We estimated the 95% prediction interval for the replication effect given the original effect size.</p> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/10/pi_figure_nofilter.png"><img class="aligncenter wp-image-4337" src="http://simplystatistics.org/wp-content/uploads/2015/10/pi_figure_nofilter-300x300.png" alt="pi_figure_nofilter" width="397" height="397" srcset="http://simplystatistics.org/wp-content/uploads/2015/10/pi_figure_nofilter-300x300.png 300w, http://simplystatistics.org/wp-content/uploads/2015/10/pi_figure_nofilter-1024x1024.png 1024w, http://simplystatistics.org/wp-content/uploads/2015/10/pi_figure_nofilter-200x200.png 200w, http://simplystatistics.org/wp-content/uploads/2015/10/pi_figure_nofilter.png 1050w" sizes="(max-width: 397px) 100vw, 397px" /></a></p> <p> </p> <p>72% of the replication effects were within the 95% prediction interval and 2 were above the interval (showed a stronger signal in replication in than predicted from original study). This definitely shows that there is still room for improvement in replication of these studies - we would expect 95% of the effects to fall into the 95% prediction interval. But at least my opinion is that 72% (or 77% if you count the 2 above the P.I.) of studies falling in the prediction interval is (a) not bad and (b) a testament to the authors of the reproducibility paper and their efforts to get the studies right.</p> <p>An important point here is that replication and reproducibility aren’t the same thing. When reproducing a study we expect the numbers and figures to be <em>exactly the same. _But a replication involves recollection of data and is subject to variation and so _we don’t expect the answer to be exactly the same in the replication</em>. This is of course made more confusing by regression to the mean, publication bias, and <a href="http://www.stat.columbia.edu/~gelman/research/unpublished/p_hacking.pdf">the garden of forking paths</a>. 
Our use of a prediction interval measures both the variation expected in the original study and in the replication. One thing we noticed when re-analyzing the data is how many of the studies had very low sample sizes. <a href="http://simplystatistics.org/wp-content/uploads/2015/10/samplesize_figure_nofilter.png"><img class="aligncenter wp-image-4339" src="http://simplystatistics.org/wp-content/uploads/2015/10/samplesize_figure_nofilter-300x300.png" alt="samplesize_figure_nofilter" width="450" height="450" srcset="http://simplystatistics.org/wp-content/uploads/2015/10/samplesize_figure_nofilter-300x300.png 300w, http://simplystatistics.org/wp-content/uploads/2015/10/samplesize_figure_nofilter-1024x1024.png 1024w, http://simplystatistics.org/wp-content/uploads/2015/10/samplesize_figure_nofilter-200x200.png 200w, http://simplystatistics.org/wp-content/uploads/2015/10/samplesize_figure_nofilter.png 1050w" sizes="(max-width: 450px) 100vw, 450px" /></a></p> <p> </p> <p>Sample sizes were generally bigger in the replication, but often very low regardless. This makes it more difficult to disentangle what didn’t replicate from what is just expected variation for a small sample size study. The point remains whether those small studies should be trusted in general, but for the purposes of measuring replication it makes the problem more difficult.</p> <p>One thing I have been thinking about a lot and this study drove home is that if we are measuring replication we need a definition that incorporates uncertainty directly. Suppose that you collect a data set <strong>D0</strong> from an original study and <strong>D1</strong> from a replication. Then replication means that the data from a study replicates if <strong>D0 ~ F </strong>and <strong>D1 ~ F. </strong>Informally, if the data are generated from the same distribution in both experiments then the study replicates. To get an estimate you apply a pipeline to the data set to get an estimate <strong>e0 = p(D0). 
</strong>If the study is also reproducible than <strong>p</strong><strong>()</strong> is the same for both studies and <strong>p</strong><strong>(D0) ~ G </strong>and <strong>p</strong><strong>(D1)</strong> <strong>~ G</strong>, subject to some conditions on <strong>p</strong><strong>(). </strong></p> <p>One interesting consequence of this definition is that each complete replication data set represents <em>only a single data point</em> for measuring replication. To measure replication with this definition you either need to make assumptions about the data generating distribution for <strong>D0</strong> and <strong>D1</strong> or you need to perform a complete replication of a study many times to determine if it replicates. However, it does mean that we can define replication even for studies with very small number of replicates as the data generating distribution may be arbitrarily variable in each case.</p> <p>Regardless of this definition I was excited that the <a href="https://osf.io/">OSF </a>folks did the study and pulled it off as well as they did and was a bit bummed about the most common reaction. I think there is an easy narrative that “science is broken” which I think isn’t a positive thing for a number of reasons. I love the way that {reproducibility/replicability/open science/open publication} are becoming more and more common, but often think we fall into the same trap in wanting to report these results as clear cut as we do when reporting exaggerations or oversimplifications of scientific discoveries in headlines. I’m excited to see how these kinds of studies look in 10 years when Github/open science/pre-prints/etc. are all the standards.</p> Apple Music's Moment of Truth 2015-09-30T07:38:08+00:00 http://simplystats.github.io4332 <p>Today is the day when Apple, Inc. learns whether it’s brand new streaming music service, Apple Music, is going to be a major contributor to the bottom line or just another streaming service (JASS?). 
Apple Music launched 3 months ago and all new users are offered a 3-month free trial. Today, that free trial ends and the big question is how many people will start to <strong>pay</strong> for their subscription, as opposed to simply canceling it. My guess is that most people (&gt; 50%) will opt to pay, but that’s a complete guess. For what it’s worth, I’ll be paying for my subscription. After adding all this music to my library, I’d hate to see it all go away.</p> <p>Back on August 18, 2015, consumer market research firm MusicWatch <a href="http://www.businesswire.com/news/home/20150818005755/en#.VddbR7Scy6F">released a study</a> that claimed, among other things, that</p> <blockquote> <p>Among people who had tried Apple Music, 48 percent reported they are not currently using the service.</p> </blockquote> <p>This would suggest that almost half of people who had signed up for the free trial period of Apple Music were not interested in using it further and would likely not pay for it once the trial ended. If it were true, it would be a blow to the newly launched service.</p> <p>But how did MusicWatch arrive at its number? It claimed to have surveyed 5,000 people in its study. Shortly before the survey by MusicWatch was released, Apple claimed that about 11 million people had signed up for their new Apple Music service (because the service had just launched, everyone who had signed up was in the free trial period). Clearly, 5,000 people do not make up the entire population, so we have but a small sample of users.</p> <p>What is the target that MusicWatch was trying to answer? It seems that they wanted to know the percentage of <strong>all people who had signed up for Apple Music</strong> that were still using the service. 
Can they make inference about the entire population from the sample of 5,000?</p> <p>If the sample is representative and the individuals are independent, we could use the number 48% as an estimate of the percentage in the population who no longer use the service. The press release from MusicWatch did not indicate any measure of uncertainty, so we don’t know how reliable the number is.</p> <p>Interestingly, soon after the MusicWatch survey was released, Apple released a statement to the publication <em>The Verge</em>, stating that 79% of users who had signed up were still using the service (i.e. only 21% had stopped using it, as opposed to 48% reported by MusicWatch). In other words, Apple just came out and <em>gave us the truth</em>! This was unusual because Apple typically does not make public statements about newly launched products. I just found this amusing because I’ve never been in a situation where I was trying to estimate a parameter and then someone later just told me what its value was.</p> <p>If we believe that Apple and MusicWatch were measuring the same thing in their analyses (and it’s not clear that they were), then it would suggest that MusicWatch’s estimate of the population percentage (48%) was quite far off from the true value (21%). What would explain this large difference?</p> <ol> <li><strong>Random variation</strong>. It’s true that MusicWatch’s survey was a small sample relative to the full population, but the sample was still big with 5,000 people. Furthermore, the analysis was fairly simple (just taking the proportion of users still using the service), so the uncertainty associated with that estimate is unlikely to be that large.</li> <li><strong>Selection bias</strong>. Recall that it’s not clear how MusicWatch sampled its respondents, but it’s possible that the way that they did it led them to capture a set of respondents who were less inclined to use Apple Music. 
Beyond this, we can’t really say more without knowing the details of the survey process.</li> <li><strong>Respondents are not independent</strong>. It’s possible that the survey respondents are not independent of each other. This would primiarily affect the uncertainty about the estimate, making it larger than we might expect if the respondents were all independent. However, since we do not know what MusicWatch’s uncertainty about their estimate was in the first place, it’s difficult to tell if dependence between respondents could play a role. Apple’s number, of course, has no uncertainty.</li> <li><strong>Measurement differences</strong>. This is the big one, in my opinion. We don’t know is how either MusicWatch or Apple defined “still using the service”. You could imagine a variety of ways to determine whether a person was still using the service. You could ask “Have you used it in the last week?” or perhaps “Did you use it yesterday?” Responses to these questions would be quite different and would likely lead to different overall percentages of usage.</li> </ol> We Used Data to Improve our HarvardX Courses: New Versions Start Oct 15 2015-09-29T09:53:31+00:00 http://simplystats.github.io4323 <p>You can sign up following links <a href="http://genomicsclass.github.io/book/pages/classes.html">here</a></p> <p>Last semester we successfully [You can sign up following links <a href="http://genomicsclass.github.io/book/pages/classes.html">here</a></p> <p>Last semester we successfully](http://simplystatistics.org/2014/11/25/harvardx-biomedical-data-science-open-online-training-curriculum-launches-on-january-19/) of my <a href="http://simplystatistics.org/2014/03/31/data-analysis-for-genomic-edx-course/">Data Analysis course</a>. To create the second version, the first was split into eight courses. Over 2,000 students successfully completed the first of these, but, as expected, the numbers were lower for the more advanced courses. 
We wanted to remove any structural problems keeping students from maximizing what they get from our courses, so we studied the assessment questions data, which included completion rate and time, and used the findings to make improvements. We also used qualitative data from the discussion board. The major changes to version 3 are the following:</p> <ul> <li>We no longer use R packages that Microsoft Windows users had trouble installing in the first course.</li> <li>All courses are now designed to be completed in 4 weeks.</li> <li>We added new assessment questions.</li> <li>We improved the assessment questions determined to be problematic.</li> <li>We split the two courses that students took the longest to complete into smaller modules. Students now have twice as much time to complete these.</li> <li>We consolidated the case studies into one course.</li> <li>We combined the materials from the statistics courses into a <a href="http://simplystatistics.org/2015/09/23/data-analysis-for-the-life-sciences-a-book-completely-written-in-r-markdown/">book</a>, which you can download <a href="https://leanpub.com/dataanalysisforthelifesciences">here</a>. The material in the book match the materials taught in class so you can use it to follow along.</li> </ul> <p>You can enroll into any of the seven courses following the links below. 
We will be on the discussion boards starting October 15, and we hope to see you there.</p> <ol> <li><a href="https://www.edx.org/course/data-analysis-life-sciences-1-statistics-harvardx-ph525-1x">Statistics and R for the Life Sciences</a> starts October 15.</li> <li><a href="https://www.edx.org/course/data-analysis-life-sciences-2-harvardx-ph525-2x">Introduction to Linear Models and Matrix Algebra</a> starts November 15.</li> <li><a href="https://www.edx.org/course/data-analysis-life-sciences-3-harvardx-ph525-3x">Statistical Inference and Modeling for High-throughput Experiments</a> starts December 15.</li> <li><a href="https://www.edx.org/course/data-analysis-life-sciences-4-harvardx-ph525-4x">High-Dimensional Data Analysis</a> starts January 15.</li> <li><a href="https://www.edx.org/course/data-analysis-life-sciences-5-harvardx-ph525-5x">Introduction to Bioconductor: Annotation and Analysis of Genomes and Genomic Assays</a> starts February 15.</li> <li><a href="https://www.edx.org/course/data-analysis-life-sciences-6-high-harvardx-ph525-6x">High-performance Computing for Reproducible Genomics</a> starts March 15.</li> <li><a href="https://www.edx.org/course/data-analysis-life-sciences-7-case-harvardx-ph525-7x">Case Studies in Functional Genomics</a> start April 15.</li> </ol> <p>The landing page for the series continues to be <a href="http://genomicsclass.github.io/book/pages/classes.html">here</a>.</p> Data Analysis for the Life Sciences - a book completely written in R markdown 2015-09-23T09:37:27+00:00 http://simplystats.github.io4311 <p class="p1"> The book <em>Data Analysis for the Life Sciences</em> is now available on <a href="https://leanpub.com/dataanalysisforthelifesciences">Leanpub</a>. 
</p> <p class="p1"> <span class="s1"><img class="wp-image-4313 alignright" src="http://simplystatistics.org/wp-content/uploads/2015/09/title_page-232x300.jpg" alt="title_page" width="222" height="287" srcset="http://simplystatistics.org/wp-content/uploads/2015/09/title_page-232x300.jpg 232w, http://simplystatistics.org/wp-content/uploads/2015/09/title_page-791x1024.jpg 791w" sizes="(max-width: 222px) 100vw, 222px" />Data analysis is now part of practically every research project in the life sciences. In this book we use data and computer code to teach the necessary statistical concepts and programming skills to become a data analyst. Following in the footsteps of <a href="https://www.stat.berkeley.edu/~statlabs/">Stat Labs</a>, instead of showing theory first and then applying it to toy examples, we start with actual applications and describe the theory as it becomes necessary to solve specific challenges.<span class="Apple-converted-space"> We use simulations and data analysis examples to teach statistical concepts. </span></span><span class="s1">The book includes links to computer code that readers can use to program along as they read the book.</span> </p> <p class="p1"> It includes the following chapters: Inference, Exploratory Data Analysis, Robust Statistics, Matrix Algebra, Linear Models, Inference for High-Dimensional Data, Statistical Modeling, Distance and Dimension Reduction, Practical Machine Learning, and Batch Effects. </p> <p class="p1"> The text was completely written in R markdown and every section contains a link to the document that was used to create that section. This means that you can use <a href="http://yihui.name/knitr/">knitr</a> to reproduce any section of the book on your own computer. You can also access all these markdown documents directly from <a href="https://github.com/genomicsclass/labs">GitHub</a>. Please send a pull request if you fix a typo or other mistake! 
For now we are keeping the R markdowns for the exercises private since they contain the solutions. But you can see the solutions if you take our <a href="http://genomicsclass.github.io/book/pages/classes.html">online course</a> quizzes. If we find that most readers want access to the solutions, we will open them up as well. </p> <p class="p1"> The material is based on the online courses I have been teaching with <a href="http://mikelove.github.io/">Mike Love</a>. As we created the course, Mike and I wrote R markdown documents for the students and put them on GitHub. We then used<a href="http://www.stephaniehicks.com/githubPages_tutorial/pages/githubpages-jekyll.html"> jekyll</a> to create a <a href="http://genomicsclass.github.io/book/">webpage</a> with html versions of the markdown documents. Jeff then convinced us to publish it on <del>Leanbup</del><a href="https://leanpub.com/dataanalysisforthelifesciences">Leanpub</a>. So we wrote a shell script that compiled the entire book into a Leanpub directory, and after countless hours of editing and tinkering we have a 450+ page book with over 200 exercises. The entire book compiles from scratch in about 20 minutes. We hope you like it. </p> The Leek group guide to writing your first paper 2015-09-18T10:57:26+00:00 http://simplystats.github.io4307 <blockquote class="twitter-tweet" width="550"> <p lang="en" dir="ltr"> The <a href="https://twitter.com/jtleek">@jtleek</a> guide to writing your first academic paper <a href="https://t.co/APLrEXAS46">https://t.co/APLrEXAS46</a> </p> <p> &mdash; Stephen Turner (@genetics_blog) <a href="https://twitter.com/genetics_blog/status/644540432534368256">September 17, 2015</a> </p> </blockquote> <p>I have written guides on <a href="https://github.com/jtleek/reviews">reviewing papers</a>, <a href="https://github.com/jtleek/datasharing">sharing data</a>, and <a href="https://github.com/jtleek/rpackages">writing R packages</a>. 
One thing I haven’t touched on until now has been writing papers. Certainly for me, and I think for a lot of students, the hardest transition in graduate school is between taking classes and doing research.</p> <p>There are several hard parts to this transition including trying to find a problem, trying to find an advisor, and having a ton of unstructured time. One of the hardest things I’ve found is knowing (a) when to start writing your first paper and (b) how to do it. So I wrote a guide for students in my group:</p> <p><a href="https://github.com/jtleek/firstpaper">https://github.com/jtleek/firstpaper</a></p> <p>On how to write your first paper. It might be useful for other folks as well so I put it up on Github. Just like with the other guides I’ve written this is a very opinionated (read: doesn’t apply to everyone) guide. I also would appreciate any feedback/pull requests people have.</p> Not So Standard Deviations: The Podcast 2015-09-17T10:57:45+00:00 http://simplystats.github.io4299 <p>I’m happy to announce that I’ve started a brand new podcast called <a href="https://soundcloud.com/nssd-podcast">Not So Standard Deviations</a> with Hilary Parker at Etsy. Episode 1 “RCatLadies Origin Story” is available through SoundCloud. In this episode we talk about the origins of RCatLadies, evidence-based data analysis, my new book, and the Python vs. R debate.</p> <p>You can subscribe to the podcast using the <a href="http://feeds.soundcloud.com/users/soundcloud:users:174789515/sounds.rss">RSS feed</a> from SoundCloud. 
We’ll be getting it up on iTunes hopefully very soon.</p> <p><a href="https://api.soundcloud.com/tracks/224180667/download?client_id=02gUJC0hH2ct1EGOcYXQIzRFU91c72Ea&amp;oauth_token=1-138878-174789515-deb24181d01af">Download the audio file</a>.</p> <p>Show Notes:</p> <ul> <li><a href="https://twitter.com/rcatladies">RCatLadies Twitter account</a></li> <li>Hilary’s <a href="http://hilaryparker.com/2013/01/30/hilary-the-most-poisoned-baby-name-in-us-history/">analysis of the name Hilary</a></li> <li><a href="https://leanpub.com/artofdatascience">The Art of Data Science</a></li> <li>What is <a href="http://www.amstat.org/meetings/jsm.cfm">JSM</a>?</li> <li><a href="https://en.wikipedia.org/wiki/A_rising_tide_lifts_all_boats">A rising tide lifts all boats</a></li> </ul> Interview with COPSS award Winner John Storey 2015-08-25T09:25:28+00:00 http://simplystats.github.io4288 <p><a href="http://simplystatistics.org/wp-content/uploads/2015/08/jdstorey.jpg"><img class="aligncenter wp-image-4289 size-medium" src="http://simplystatistics.org/wp-content/uploads/2015/08/jdstorey-198x300.jpg" alt="jdstorey" width="198" height="300" srcset="http://simplystatistics.org/wp-content/uploads/2015/08/jdstorey-198x300.jpg 198w, http://simplystatistics.org/wp-content/uploads/2015/08/jdstorey-132x200.jpg 132w" sizes="(max-width: 198px) 100vw, 198px" /></a></p> <p> </p> <p><em>Editor’s Note: We are again pleased to interview the COPSS President’s award winner. The <a href="https://en.wikipedia.org/wiki/COPSS_Presidents%27_Award">COPSS Award</a> is one of the most prestigious in statistics, sometimes called the Nobel Prize in statistics. This year the award went to <a href="http://www.genomine.org/">John Storey</a> who also won the <a href="http://sml.princeton.edu/news/john-storey-receives-2015-mortimer-spiegelman-award">Mortimer Spiegelman award</a> for his outstanding contribution to public health statistics. 
This interview is a <a href="https://twitter.com/simplystats/status/631607146572988417">particular pleasure</a> since John was my Ph.D. advisor and has been a major role model and incredibly supportive mentor for me throughout my career. He also <a href="https://github.com/jdstorey/simplystatistics">did the whole interview in markdown and put it under version control at Github</a> so it is fully reproducible. </em></p> <p><strong>SimplyStats: Do you consider yourself to be a statistician, data scientist, machine learner, or something else?</strong></p> <p>JS: For the most part I consider myself to be a statistician, but I’m also very serious about genetics/genomics, data analysis, and computation. I was trained in statistics and genetics, primarily statistics. I was also exposed to a lot of machine learning during my training since Rob Tibshirani was my <a href="http://genealogy.math.ndsu.nodak.edu/id.php?id=69303">PhD advisor</a>. However, I consider my research group to be a data science group. We have the <a href="http://drewconway.com/zia/2013/3/26/the-data-science-venn-diagram">Venn diagram</a> reasonably well covered: experimentalists, programmers, data wranglers, and developers of theory and methods; biologists, computer scientists, and statisticians.</p> <p><strong>SimplyStats: How did you find out you had won the COPSS Presidents’ Award?</strong></p> <p>JS: I received a phone call from the chairperson of the awards committee while I was visiting the Department of Statistical Science at Duke University to <a href="https://stat.duke.edu/events/15731.html">give a seminar</a>. It was during the seminar reception, and I stepped out into the hallway to take the call. It was really exciting to get the news!</p> <p><strong>SimplyStats:</strong> One of the areas where you have had a big impact is inference in massively parallel problems.
How do you feel high-dimensional inference is different from more traditional statistical inference?</p> <p>JS: My experience is that the most productive way to approach high-dimensional inference problems is to first think about a given problem in the scenario where the parameters of interest are random, and the joint distribution of these parameters is incorporated into the framework. In other words, I first gain an understanding of the problem in a Bayesian framework. Once this is well understood, it is sometimes possible to move in a more empirical and nonparametric direction. However, I have found that I can be most successful if my first results are in this Bayesian framework.</p> <p>As an example, Theorem 1 from <a href="http://genomics.princeton.edu/storeylab/papers/Storey_Annals_2003.pdf">Storey (2003) Annals of Statistics</a> was the first result I obtained in my work on false discovery rates. This paper <a href="https://statistics.stanford.edu/research/false-discovery-rate-bayesian-interpretation-and-q-value">first appeared as a technical report in early 2001</a>, and the results spawned further work on a <a href="http://genomics.princeton.edu/storeylab/papers/directfdr.pdf">point estimation approach</a> to false discovery rates, the <a href="http://genomics.princeton.edu/storeylab/papers/ETST_JASA_2001.pdf">local false discovery rate</a>, <a href="http://www.bioconductor.org/packages/release/bioc/html/qvalue.html">q-value</a> and its <a href="http://www.pnas.org/content/100/16/9440.full">application to genomics</a>, and a <a href="http://genomics.princeton.edu/storeylab/papers/623.pdf">unified theoretical framework</a>.</p> <p>Besides false discovery rates, this approach has been useful in my work on the <a href="http://genomics.princeton.edu/storeylab/papers/Storey_JRSSB_2007.pdf">optimal discovery procedure</a> as well as <a href="http://journals.plos.org/plosgenetics/article?id=10.1371/journal.pgen.0030161">surrogate variable analysis</a> (in
particular, <a href="http://amstat.tandfonline.com/doi/abs/10.1080/01621459.2011.645777#.VdxderxVhBc">Desai and Storey 2012</a> for surrogate variable analysis). For high-dimensional inference problems, I have also found it is important to consider whether there are any plausible underlying causal relationships among variables, even if causal inference is not the goal. For example, causal model considerations provided some key guidance in a <a href="http://www.nature.com/ng/journal/v47/n5/full/ng.3244.html">recent paper of ours</a> on testing for genetic associations in the presence of arbitrary population structure. I think there is a lot of insight to be gained by considering what is the appropriate approach for a high-dimensional inference problem under different causal relationships among the variables.</p> <p><strong>SimplyStats: Do you have a process when you are tackling a hard problem or working with students on a hard problem?</strong></p> <p>JS: I like to work on statistics research that is aimed at answering a specific scientific problem (usually in genomics). My process is to try to understand the why in the problem as much as the how. The path to success is often found in the former. I try first to find solutions to research problems by using simple tools and ideas. I like to get my hands dirty with real data as early as possible in the process. I like to incorporate some theory into this process, but I prefer methods that work really well in practice over those that have beautiful theory justifying them without demonstrated success on real-world applications.
In terms of what I do day-to-day, listening to music is integral to my process, for both concentration and creative inspiration: typically <a href="https://en.wikipedia.org/wiki/King_Crimson">King Crimson</a> or some <a href="http://www.metal-archives.com/">variant of metal</a> or <a href="https://en.wikipedia.org/wiki/Brian_Eno">ambient</a> – which Simply Statistics co-founder <a href="http://jtleek.com/">Jeff Leek</a> got to <del>endure</del> enjoy for years during his PhD in my lab.</p> <p><strong>SimplyStats: You are the founding Director of the Center for Statistics and Machine Learning at Princeton. What parts of the new gig are you most excited about?</strong></p> <p>JS: Princeton closed its Department of Statistics in the early 1980s.
Because of this, the style of statistician and machine learner we have here today is one who’s comfortable being appointed in a field outside of statistics or machine learning. Examples include myself in genomics, Kosuke Imai in political science, Jianqing Fan in finance and economics, and Barbara Engelhardt in computer science. Nevertheless, statistics and machine learning here is strong, albeit too small at the moment (which will be changing soon). This is an interesting place to start, very different from most universities.</p> <p>What I’m most excited about is that we get to answer the question: “What’s the best way to build a faculty, educate undergraduates, and create a PhD program starting now, focusing on the most important problems of today?”</p> <p>For those who are interested, we’ll be releasing a <a href="http://www.princeton.edu/strategicplan/taskforces/sml/">public version of our strategic plan</a> within about six months. We’re trying to do something unique and forward-thinking, which will hopefully make Princeton an influential member of the statistics, machine learning, and data science communities.</p> <p><strong>SimplyStats: You are organizing the Tukey conference at Princeton (to be held September 18, <a href="http://csml.princeton.edu/tukey">details here</a>).</strong> <strong>Do you think Tukey’s influence will affect your vision for re-building statistics at Princeton?</strong></p> <p>JS: Absolutely, Tukey has been and will be a major influence in how we re-build. He made so many important contributions, and his approach was extremely forward thinking and tied into real-world problems. I strongly encourage everyone to read Tukey’s 1962 paper titled <a href="https://projecteuclid.org/euclid.aoms/1177704711">The Future of Data Analysis</a>. Here he’s 50 years into the future, foreseeing the rise of data science. 
This paper has truly amazing insights, including:</p> <blockquote> <p>For a long time I have thought I was a statistician, interested in inferences from the particular to the general. But as I have watched mathematical statistics evolve, I have had cause to wonder and to doubt.</p> <p>All in all, I have come to feel that my central interest is in data analysis, which I take to include, among other things: procedures for analyzing data, techniques for interpreting the results of such procedures, ways of planning the gathering of data to make its analysis easier, more precise or more accurate, and all the machinery and results of (mathematical) statistics which apply to analyzing data.</p> <p>Data analysis is a larger and more varied field than inference, or incisive procedures, or allocation.</p> <p>By and large, the great innovations in statistics have not had correspondingly great effects upon data analysis. . . . Is it not time to seek out novelty in data analysis?</p> </blockquote> <p>In this regard, another paper that has been influential in how we are re-building is Leo Breiman’s titled <a href="http://projecteuclid.org/euclid.ss/1009213726">Statistical Modeling: The Two Cultures</a>. We’re building something at Princeton that includes both cultures and seamlessly blends them into a bigger picture community concerned with data-driven scientific discovery and technology development.</p> <p><strong>SimplyStats:</strong> <strong>What advice would you give young statisticians getting into the discipline now?</strong></p> <p>JS: My most general advice is don’t isolate yourself within statistics. Interact with and learn from other fields. Work on problems that are important to practitioners of science and technology development. 
I recommend that students master both “traditional statistics” and at least one of the following: (1) computational and algorithmic approaches to data analysis, especially those more frequently studied in machine learning or data science; (2) a substantive scientific area where data-driven discovery is extremely important (e.g., social sciences, economics, environmental sciences, genomics, neuroscience, etc.). I also recommend that students consider publishing in scientific journals or computer science conference proceedings, in addition to traditional statistics journals. I agree with a lot of the constructive advice and commentary given on the Simply Statistics blog, such as encouraging students to learn about reproducible research, problem-driven research, software development, improving data analyses in science, and outreach to non-statisticians. These things are very important for the future of statistics.</p> The Next National Library of Medicine Director Can Help Define the Future of Data Science 2015-08-24T10:00:26+00:00 http://simplystats.github.io4277 <p>The main motivation for starting this blog was to share our enthusiasm about the increased importance of data and data analysis in science, industry, and society in general. Based on recent initiatives, such as <a href="https://datascience.nih.gov/bd2k">BD2K</a>, it is clear that the NIH is also enthusiastic and very much interested in supporting data science. For those that don’t know, the National Institutes of Health (NIH) is the largest public funder of biomedical research in the world. This federal agency has an annual budget of about $30 billion.</p> <p>The NIH has <a href="http://www.nih.gov/icd/icdirectors.htm">several institutes</a>, each with its own budget and capability to guide funding decisions. Currently, the missions of most of these institutes relate to a specific disease or public health challenge.
Many of them fund research in statistics and computing because these topics are important components of achieving their specific mission. Currently, however, there is no institute directly tasked with supporting data science per se. This is about to change.</p> <p>The National Library of Medicine (NLM) is one of the few NIH institutes that is not focused on a particular disease or public health challenge. Apart from the important task of maintaining an actual library, it supports, among many other initiatives, indispensable databases such as PubMed, GenBank and GEO. After over 30 years of successful service as NLM director, Dr. Donald Lindberg stepped down this year and, as is customary, an advisory board was formed to advise the NIH on what’s next for NLM. One of the main recommendations of <a href="http://acd.od.nih.gov/reports/Report-NLM-06112015-ACD.pdf">the report</a> is the following:</p> <blockquote> <p>NLM should be the intellectual and programmatic epicenter for data science at NIH and stimulate its advancement throughout biomedical research and application.</p> </blockquote> <p>Data science features prominently throughout the report, making it clear the NIH is very much interested in further supporting this field. The next director can therefore have an enormous influence on the future of data science.
So, if you love data, have administrative experience, and a vision about the future of data science as it relates to the medical and related sciences, consider this exciting opportunity.</p> <p>Here is the <a href="http://www.jobs.nih.gov/vacancies/executive/nlm_director.htm">ad</a>.</p> Interview with Sherri Rose and Laura Hatfield 2015-08-21T13:20:14+00:00 http://simplystats.github.io4272 <p><a href="http://simplystatistics.org/wp-content/uploads/2015/08/hatfieldrose.png"><img class="aligncenter wp-image-4273 size-medium" src="http://simplystatistics.org/wp-content/uploads/2015/08/hatfieldrose-300x200.png" alt="Sherri Rose and Laura Hatfield" width="300" height="200" srcset="http://simplystatistics.org/wp-content/uploads/2015/08/hatfieldrose-300x200.png 300w, http://simplystatistics.org/wp-content/uploads/2015/08/hatfieldrose-260x173.png 260w, http://simplystatistics.org/wp-content/uploads/2015/08/hatfieldrose.png 975w" sizes="(max-width: 300px) 100vw, 300px" /></a></p> <p style="text-align: center;"> Rose/Hatfield © Savannah Bergquist </p> <p><em><a href="http://www.hcp.med.harvard.edu/faculty/core/laura-hatfield-phd">Laura Hatfield</a> and <a href="http://www.drsherrirose.com/">Sherri Rose</a> are Assistant Professors specializing in biostatistics at Harvard Medical School in the <a href="http://www.hcp.med.harvard.edu">Department of Health Care Policy</a>. Laura received her PhD in Biostatistics from the University of Minnesota and Sherri completed her PhD in Biostatistics at UC Berkeley. They are developing novel statistical methods for health policy problems.</em></p> <p><strong><em>SimplyStats: Do you consider yourselves statisticians, data scientists, machine learners, or something else?</em></strong></p> <p><strong>Rose</strong>: I’d definitely say a statistician.
Even when I’m working on things that fall into the categories of data science or machine learning, there’s underlying statistical theory guiding that process, be it for methods development or applications. Basically, there’s a statistical foundation to everything I do.</p> <p><strong>Hatfield</strong>: When people ask what I do, I start by saying that I do research in health policy. Then I say I’m a statistician by training and I work with economists and physicians. People have mistaken ideas about what a statistician or professor does, so describing my context and work seems more informative. If I’m at a party, I usually wrap it up in a bow as, “I crunch numbers to study how Obamacare is working.” [laughs]</p> <p><strong><em>SimplyStats: What is the</em></strong> <a href="http://www.healthpolicydatascience.org/"><strong><em>Health Policy Data Science Lab</em></strong></a><strong><em>? How did you decide to start that?</em></strong></p> <p><strong>Hatfield</strong>: We wanted to give our trainees a venue to promote their work and get feedback from their peers. And it helps me keep up on the cool projects Sherri and her students are working on.</p> <p><strong>Rose</strong>: This grew out of us starting to jointly mentor trainees. It’s been a great way for us to make intellectual contributions to each other’s work through Lab meetings. Laura and I approach statistics from <em>completely</em> different frameworks, but work on related applications, so that’s a unique structure for a lab.</p> <p><strong><em>SimplyStats: What kinds of problems are your groups working on these days? Are they mostly focused on health policy?</em></strong></p> <p><strong>Rose</strong>: One of the fun things about working in health policy is that it is quite expansive. Statisticians can have an even bigger impact on science and public health if we take that next step: thinking about the policy implications of our research.
And then, who needs to see the work in order to influence relevant policies. A couple projects I’m working on that demonstrate this breadth include a machine learning framework for risk adjustment in insurance plan payment and a new estimator for causal effects in a complex epidemiologic study of chronic disease. The first might be considered more obviously health policy, but the second will have important policy implications as well.</p> <p><strong>Hatfield</strong>: When I start an applied collaboration, I’m also thinking, “Where is the methods paper?” Most of my projects use messy observational data, so there is almost always a methods paper. For example, many studies here need to find a control group from an administrative data source. I’ve been keeping track of challenges in this process. One of our Lab students is working with me on a pathological case of a seemingly benign control group selection method gone bad. I love the creativity required in this work; my first 10 analysis ideas may turn out to be infeasible given the data, but that’s what makes this fun!</p> <p><strong><em>SimplyStats: What are some particular challenges of working with large health data?</em></strong></p> <p><strong>Hatfield</strong>: When I first heard about the huge sample sizes, I was excited! Then I learned that data not collected for research purposes…</p> <p><strong>Rose</strong>: This was going to be my answer!</p> <p><strong>Hatfield</strong>: …are <em>very</em> hard to use for research! In a recent project, I’ve been studying how giving people a tool to look up prices for medical services changes their health care spending. But the data set we have leaves out [painful pause] a lot of variables we’d like to use for control group selection and… a lot of the prices. But as I said, these gaps in the data are begging to be filled by new methods.</p> <p><strong>Rose</strong>: I think the fact that we have similar answers is important.
I’ve repeatedly seen “big data” not have a strong signal for the research question, since they weren’t collected for that purpose. It’s easy to get excited about thousands of covariates in an electronic health record, but so much of it is noise, and then you end up with an R<sup>2</sup> of 10%. It can be difficult enough to generate an effective prediction function, even with innovative tools, let alone try to address causal inference questions. It goes back to basics: what’s the research question and how can we translate that into a statistical problem we can answer given the limitations of the data.</p> <p><strong><em>SimplyStats: You both have very strong data science skills but are in academic positions. Do you have any advice for students considering the tradeoff between academia and industry?</em></strong></p> <p><strong>Hatfield</strong>: I think there is more variance within academia and within industry than between the two.</p> <p><strong>Rose</strong>: Really? That’s surprising to me…</p> <p><strong>Hatfield</strong>: I had stereotypes about academic jobs, but my current job defies those.</p> <p><strong>Rose</strong>: What if a larger component of your research platform included programming tools and R packages? My immediate thought was about computing and its role in academia. Statisticians in genomics have navigated this better than some other areas. It can surely be done, but there are still challenges folding that into an academic career.</p> <p><strong>Hatfield</strong>: I think academia imposes few restrictions on what you can disseminate compared to industry, where there may be more privacy and intellectual property concerns. But I take your point that R packages do not impress most tenure and promotion committees.</p> <p><strong>Rose</strong>: You want to find a good match between how you like spending your time and what’s rewarded. Not all academic jobs are the same and not all industry jobs are alike either.
I wrote a more detailed <a href="http://simplystatistics.org/2015/02/18/navigating-big-data-careers-with-a-statistics-phd/">guest post</a> on this topic for <em>Simply Statistics</em>.</p> <p><strong>Hatfield</strong>: I totally agree you should think about how you’d actually spend your time in any job you’re considering, rather than relying on broad ideas about industry versus academia. Do you love writing? Do you love coding? etc.</p> <p><strong><em>SimplyStats: You are both adopters of social media as a mechanism of disseminating your work and interacting with the community. What do you think of social media as a scientific communication tool? Do you find it is enhancing your careers?</em></strong></p> <p><strong>Hatfield</strong>: Sherri is my social media mentor!</p> <p><strong>Rose</strong>: I think social media can be a useful tool for networking, finding and sharing neat articles and news, and putting your research out there to a broader audience. I’ve definitely received speaking invitations and started collaborations because people initially “knew me from Twitter.” It’s become a way to recruit students as well. Prospective students are more likely to “know me” from a guest post or Twitter than traditional academic products, like journal articles.</p> <p><strong>Hatfield</strong>: I’m grateful for our <a href="https://twitter.com/HPDSLab">Lab’s new Twitter</a> because it’s a purely academic account. My personal account has been awkwardly transitioning to include professional content; I still tweet silly things there.</p> <p><strong>Rose</strong>: My timeline might have <a href="https://twitter.com/sherrirose/status/569613197600272386">a cat picture</a> or <a href="https://twitter.com/sherrirose/status/601822958491926529">two</a>.</p> <p><strong>Hatfield</strong>: My very favorite thing about academic Twitter is discovering things I wouldn’t have even known to search for, especially packages and tricks in R.
For example, that’s how I got converted to tidy data and dplyr.</p> <p><strong>Rose</strong>: I agree. I think it’s a fantastic place to become exposed to work that’s incredibly related to your own but in another field, and you wouldn’t otherwise find it preparing a typical statistics literature review.</p> <p><strong><em>SimplyStats: What would you change in the statistics community?</em></strong></p> <p><strong>Rose</strong>: Mentoring. I was tremendously lucky to receive incredible mentoring as a graduate student and now as a new faculty member. Not everyone gets this, and trainees don’t know where to find guidance. I’ve actively reached out to trainees during conferences and university visits, erring on the side of offering too much unsolicited help, because I feel there’s a need for that. I also have a <a href="http://drsherrirose.com/resources">resources page</a> on my website that I continue to update. I wish I had a more global solution beyond encouraging statisticians to take an active role in mentoring not just your own trainees. We shouldn’t lose good people because they didn’t get the support they needed.</p> <p><strong>Hatfield</strong>: I think we could make conferences much better! Being in the same physical space at the same time is very precious. I would like to take better advantage of that at big meetings to do work that requires face time. Talks are not an example of this. Workshops and hackathons and panels and working groups – these all make better use of face-to-face time. And are a lot more fun!</p> If you ask different questions you get different answers - one more way science isn't broken it is just really hard 2015-08-20T14:52:34+00:00 http://simplystats.github.io4268 <p>If you haven’t already read the amazing piece by Christie Aschwanden on why <a href="http://fivethirtyeight.com/features/science-isnt-broken/">Science isn’t Broken</a> you should do so immediately.
It does an amazing job of capturing the nuance of statistics as applied to real data sets and how that can be misconstrued as science being “broken” without falling for the easy “everything is wrong” meme.</p> <p>One thing that caught my eye was how the piece highlighted a crowd-sourced data analysis of soccer red cards. The key figure for that analysis is this one:</p> <p> </p> <p><a href="http://fivethirtyeight.com/features/science-isnt-broken/"><img class="aligncenter" src="https://espnfivethirtyeight.files.wordpress.com/2015/08/truth-vigilantes-soccer-calls2.png?w=1024&amp;h=597" alt="" width="1024" height="597" /></a></p> <p>I think the figure and <a href="https://osf.io/qix4g/">underlying data</a> for this figure are fascinating in that they really highlight the human behavioral variation in data analysis and you can even see some <a href="http://simplystatistics.org/2015/04/29/data-analysis-subcultures/">data analysis subcultures </a>emerging from the descriptions of how people did the analysis and justified or not the use of covariates.</p> <p>One subtlety of the figure that I missed on the original reading is that not all of the estimates being reported are measuring the same thing. For example, if some groups adjusted for the country of origin of the referees and some did not, then the estimates for those two groups are measuring different things (the association conditional on country of origin or not, respectively). In this case the estimates may be different, but entirely consistent with each other, since they are just measuring different things.</p> <p>If you ask two people to do the analysis and you only ask them the simple question: <em>Are referees more likely to give  red cards to dark skinned players?</em> then you may get a different answer based on those two estimates. 
But the reality is the answers the analysts are reporting are actually to the questions:</p> <ol> <li>Are referees more likely to give  red cards to dark skinned players holding country of origin fixed?</li> <li>Are referees more likely to give  red cards to dark skinned players averaging over country of origin (and everything else)?</li> </ol> <p>The subtlety lies in the fact that changes to covariates in the analysis are actually changing the hypothesis you are studying.</p> <p>So in fact the conclusions in that figure may all be entirely consistent after you condition on asking the same question. I’d be interested to see the same plot, but only for the groups that conditioned on the same set of covariates, for example. This is just one more reason that science is really hard and why I’m so impressed at how well the FiveThirtyEight piece captured this nuance.</p> <p> </p> <p> </p> P > 0.05? I can make any p-value statistically significant with adaptive FDR procedures 2015-08-19T10:38:31+00:00 http://simplystats.github.io4236 <p>Everyone knows now that you have to correct for multiple testing when you calculate many p-values otherwise this can happen:</p> <div style="width: 550px" class="wp-caption aligncenter"> <a href="http://xkcd.com/882/"><img class="" src=" http://imgs.xkcd.com/comics/significant.png" alt="" width="540" height="1498" /></a> <p class="wp-caption-text"> http://xkcd.com/882/ </p> </div> <p> </p> <p>One of the most popular ways to correct for multiple testing is to estimate or control the <a href="https://en.wikipedia.org/wiki/False_discovery_rate">false discovery rate</a>. The false discovery rate attempts to quantify the fraction of made discoveries that are false. 
If we call all p-values less than some threshold <em>t</em> significant, then, borrowing notation from this <a href="http://www.ncbi.nlm.nih.gov/pubmed/12883005">great introduction to false discovery rates</a>:</p> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/08/fdr3.gif"><img class="aligncenter size-full wp-image-4246" src="http://simplystatistics.org/wp-content/uploads/2015/08/fdr3.gif" alt="fdr3" width="285" height="40" /></a></p> <p> </p> <p>So <em>F(t)</em> is the (unknown) total number of null hypotheses called significant and <em>S(t)</em> is the total number of hypotheses called significant. The FDR is the expected ratio of these two quantities, which, under certain assumptions, can be approximated by the ratio of the expectations.</p> <p> </p> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/08/fdr4.gif"><img class="aligncenter size-full wp-image-4247" src="http://simplystatistics.org/wp-content/uploads/2015/08/fdr4.gif" alt="fdr4" width="246" height="44" /></a></p> <p> </p> <p>To get an estimate of the FDR we just need an estimate for <em>E[F(t)]</em> and <em>E[S(t)]</em>. The latter is pretty easy to estimate as just the total number of rejections (the number of <em>p &lt; t</em>). If you assume that the p-values follow the expected distribution then <em>E[F(t)]</em> can be approximated by multiplying the fraction of null hypotheses by the total number of hypotheses and by <em>t</em>, since the p-values are uniform. To do this, we need an estimate for <span class="MathJax_Preview"><img src="http://simplystatistics.org/wp-content/plugins/latex/cache/tex_d4c98d75e25f5d28461f1da221eb7a95.gif" style="vertical-align: middle; border: none; padding-bottom:1px;" class="tex" alt="\pi_0" /></span>, the proportion of null hypotheses. There are a large number of ways to estimate this quantity but it is almost always estimated using the full distribution of computed p-values in an experiment.
The most popular estimator compares the fraction of p-values greater than some cutoff to the number you would expect if every single hypothesis were null. This fraction is approximately the fraction of null hypotheses.</p> <p>Combining the above equation with our estimates for <em>E[F(t)]</em> and <em>E[S(t)]</em> we get:</p> <p> </p> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/08/fdr5.gif"><img class="aligncenter size-full wp-image-4250" src="http://simplystatistics.org/wp-content/uploads/2015/08/fdr5.gif" alt="fdr5" width="238" height="42" /></a></p> <p> </p> <p>The q-value is a multiple testing analog of the p-value and is defined as:</p> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/08/fdr61.gif"><img class="aligncenter size-full wp-image-4258" src="http://simplystatistics.org/wp-content/uploads/2015/08/fdr61.gif" alt="fdr6" width="163" height="26" /></a></p> <p> </p> <p>This is of course a very loose version and you can get a more technical description <a href="http://www.genomine.org/papers/directfdr.pdf">here</a>. But the main thing to notice is that the q-value depends on the estimated proportion of null hypotheses, which depends on the distribution of the observed p-values. The smaller the estimated fraction of null hypotheses, the smaller the FDR estimate and the smaller the q-value. This suggests a way to make any p-value significant by altering its “testing partners”. Here is a quick example. Suppose that we have done a test and have a p-value of 0.8. Not super significant.
Suppose we perform this test in conjunction with a number of hypotheses that are null generating a p-value distribution like this.</p> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/08/uniform-pvals.png"><img class="aligncenter size-medium wp-image-4260" src="http://simplystatistics.org/wp-content/uploads/2015/08/uniform-pvals-300x300.png" alt="uniform-pvals" width="300" height="300" srcset="http://simplystatistics.org/wp-content/uploads/2015/08/uniform-pvals-300x300.png 300w, http://simplystatistics.org/wp-content/uploads/2015/08/uniform-pvals-200x200.png 200w, http://simplystatistics.org/wp-content/uploads/2015/08/uniform-pvals.png 480w" sizes="(max-width: 300px) 100vw, 300px" /></a></p> <p>Then you get a q-value greater than 0.99 as you would expect. But if you test that exact same p-value with a ton of other non-null hypotheses that generate tiny p-values in a distribution that looks like this:</p> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/08/significant-pvals.png"><img class="aligncenter size-medium wp-image-4261" src="http://simplystatistics.org/wp-content/uploads/2015/08/significant-pvals-300x300.png" alt="significant-pvals" width="300" height="300" srcset="http://simplystatistics.org/wp-content/uploads/2015/08/significant-pvals-300x300.png 300w, http://simplystatistics.org/wp-content/uploads/2015/08/significant-pvals-200x200.png 200w, http://simplystatistics.org/wp-content/uploads/2015/08/significant-pvals.png 480w" sizes="(max-width: 300px) 100vw, 300px" /></a></p> <p> </p> <p>Then you get a q-value of 0.0001 for that same p-value of 0.8. The reason is that the estimate of the fraction of null hypotheses goes essentially to zero, which drives down the q-value. 
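To make the two scenarios concrete, here is a minimal sketch of the calculation (my quick illustration using the plug-in estimates described above, with a cutoff of 0.5 for estimating the fraction of nulls; the gist appended below has the more careful version):

```r
# Sketch of the adaptive FDR estimate described above. The fraction of null
# hypotheses is estimated by comparing the share of p-values above 0.5 to the
# 50% you would expect if every hypothesis were null.
estimate_fdr <- function(p, t) {
  pi0 <- min(1, mean(p > 0.5) / 0.5)  # estimated proportion of null hypotheses
  pi0 * length(p) * t / sum(p <= t)   # estimated FDR at threshold t
}

set.seed(1)
p_null <- c(0.8, runif(1000))           # partners consistent with the null
p_tiny <- c(0.8, runif(1000, 0, 1e-4))  # partners with tiny p-values

estimate_fdr(p_null, 0.8)  # near 1, as you would expect
estimate_fdr(p_tiny, 0.8)  # tiny, for the exact same p-value of 0.8
```

With null partners the estimate for p = 0.8 is essentially 1; with tiny-p partners the estimated fraction of null hypotheses collapses toward zero and the same p-value gets a tiny value.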
You can do this with any p-value, if you make its testing partners have sufficiently low p-values then the q-value will also be as small as you like.</p> <p>A couple of things to note:</p> <ul> <li>Obviously doing this on purpose to change the significance of a calculated p-value is cheating and shouldn’t be done.</li> <li>For correctly calculated p-values on a related set of hypotheses this is actually a sensible property to have - if you have almost all very small p-values and one very large p-value, you are doing a set of tests where almost everything appears to be alternative and you should weight that in some sensible way.</li> <li>This is the reason that sometimes a “multiple testing adjusted” p-value (or q-value) is smaller than the p-value itself.</li> <li>This doesn’t affect non-adaptive FDR procedures - but those procedures still depend on the “testing partners” of any p-value through the total number of tests performed. This is why people talk about the so-called “multiple testing burden”. But that is a subject for a future post. It is also the reason non-adaptive procedures can be severely underpowered compared to adaptive procedures when the p-values are correct.</li> <li>I’ve appended the code to generate the histograms and calculate the q-values in this post in the following gist.</li> </ul> <p> </p> UCLA Statistics 2015 Commencement Address 2015-08-12T10:34:03+00:00 http://simplystats.github.io4229 <p>I was asked to speak at the <a href="http://www.stat.ucla.edu">UCLA Department of Statistics</a> Commencement Ceremony this past June. As one of the first graduates of that department back in 2003, I was tremendously honored to be invited to speak to the graduates. When I arrived I was just shocked at how much the department had grown. When I graduated I think there were no more than 10 of us between the PhD and Master’s programs. Now they have ~90 graduates per year with undergrad, Master’s and PhD. 
It was just stunning.</p> <p>Here’s the text of what I said, which I think I mostly stuck to in the actual speech.</p> <p> </p> <p><strong>UCLA Statistics Graduation: Some thoughts on a career in statistics</strong></p> <p>When I asked Rick [Schoenberg] what I should talk about, he said to “talk for 95 minutes on asymptotic properties of maximum likelihood estimators under nonstandard conditions”. I thought this is a great opportunity! I busted out Tom Ferguson’s book and went through my old notes. Here we go. Let X be a complete normed vector space….</p> <p>I want to thank the department for inviting me here today. It’s always good to be back. I entered the UCLA stat department in 1999, only the second entering class, and graduated from UCLA Stat in 2003. Things were different then. Jan was the chair and there were not many classes so we could basically do whatever we wanted. Things are different now and that’s a good thing. Since 2003, I’ve been at the Department of Biostatistics at the Johns Hopkins Bloomberg School of Public Health, where I was first a postdoctoral fellow and then joined the faculty. It’s been a wonderful place for me to grow up and I’ve learned a lot there.</p> <p>It’s just an incredible time to be a statistician. You guys timed it just right. I’ve been lucky enough to witness two periods like this, the first time being when I graduated from college at the height of the dot-com boom. Today, it’s not computer programming skills that the world needs, but rather it’s statistical skills. I wish I were in your shoes today, just getting ready to start up. But since I’m not, I figured the best thing I could do is share some of the things I’ve learned and talk about the role that these things have played in my own life.</p> <p>Know your edge: What’s the one thing that you know that no one else seems to know? You’re not a clone—you have original ideas and skills. You might think they’re not valuable, but you’re wrong.
Be proud of these ideas and use them to your advantage. As an example, I’ll give you my one thing. Right now, I believe the greatest challenge facing the field of statistics today is getting the entire world to know what we in this room already know. Data are everywhere today and the biggest barrier to progress is our collective inability to process and analyze those data to produce useful information. The need for the things that we know has absolutely exploded and we simply have not caught up. That’s why I created, along with Jeff Leek and Brian Caffo, the Johns Hopkins Data Science Specialization, which is currently the most successful massive open online course program ever. Our goal is to teach the entire world statistics, which we think is an essential skill. We’re not quite there yet, but—assuming you guys don’t steal my idea—I’m hopeful that we’ll get there sometime soon.</p> <p>At some point the edge you have will no longer work: That sounds like a bad thing, but it’s actually good. If what you’re doing really matters, then at some point everyone will be doing it. So you’ll need to find something else. I’ve been confronted with this problem at least 3 times in my life so far. Before college, I was pretty good at the violin, and it opened a lot of doors for me. It got me into Yale. But when I got to Yale, I quickly realized that there were a lot of really good violinists here. Suddenly, my talent didn’t have so much value. This was when I started to pick up computer programming and in 1998 I learned an obscure little language called R. When I got to UCLA I realized I was one of the only people who knew R. So I started a little brown bag lunch series where I’d talk about some feature of R to whomever would show up (which wasn’t many people usually). Picking up on R early on turned out to be really important because it was a small community back then and it was easy to have a big impact. 
Also, as more and more people wanted to learn R, they’d usually call on me. It’s always nice to feel needed. Over the years, the R community exploded and R’s popularity got to the point where it was being talked about in the New York Times. But now you see the problem. Saying that you know R doesn’t exactly distinguish you anymore, so it’s time to move on again. These days, I’m realizing that the one useful skill that I have is the ability to make movies. Also, my experience being a performer on the violin many years ago is coming in handy. My ability to quickly record and edit movies was one of the key factors that enabled me to create an entire online data science program in 2 months last year.</p> <p>Find the right people, and stick with them forever. Being a statistician means working with other people. Choose those people wisely and develop a strong relationship. It doesn’t matter how great the project is or how famous or interesting the other person is, if you can’t get along then bad things will happen. Statistics and data analysis is a highly verbal process that requires constant and very clear communication. If you’re uncomfortable with someone in any way, everything will suffer. Data analysis is unique in this way—our success depends critically on other people. I’ve only had a few collaborators in the past 12 years, but I love them like family. When I work with these people, I don’t necessarily know what will happen, but I know it will be good. In the end, I honestly don’t think I’ll remember the details of the work that I did, but I’ll remember the people I worked with and the relationships I built.</p> <p>So I hope you weren’t expecting a new asymptotic theorem today, because this is pretty much all I’ve got. As you all go on to the next phase of your life, just be confident in your own ideas, be prepared to change and learn new things, and find the right people to do them with. 
Thank you.</p> Correlation is not a measure of reproducibility 2015-08-12T10:33:25+00:00 http://simplystats.github.io4192 <p>Biologists make wide use of correlation as a measure of reproducibility. Specifically, they quantify reproducibility with the correlation between measurements obtained from replicated experiments. For example, <a href="https://genome.ucsc.edu/ENCODE/protocols/dataStandards/ENCODE_RNAseq_Standards_V1.0.pdf">the ENCODE data standards document</a> states</p> <blockquote> <p>A typical R<sup>2</sup> (Pearson) correlation of gene expression (RPKM) between two biological replicates, for RNAs that are detected in both samples using RPKM or read counts, should be between 0.92 to 0.98. Experiments with biological correlations that fall below 0.9 should be either be repeated or explained.</p> </blockquote> <p>However, for reasons I will explain here, correlation is not necessarily informative with regard to reproducibility. The mathematical results described below are not inconsequential theoretical details, and understanding them will help you assess new technologies, experimental procedures and computational methods.</p> <p>Suppose you have collected data from an experiment</p> <p style="text-align: center;"> <em>x</em><sub>1</sub>, <em>x</em><sub>2</sub>,..., <em>x</em><sub>n</sub> </p> <p>and want to determine if a second experiment replicates these findings. For simplicity, we represent data from the second experiment as adding unbiased (averages out to 0) and statistically independent measurement error <em>d</em> to the first:</p> <p style="text-align: center;"> <em>y</em><sub>1</sub>=<em>x</em><sub>1</sub>+<em>d</em><sub>1</sub>, <em>y</em><sub>2</sub>=<em>x</em><sub>2</sub>+<em>d</em><sub>2</sub>, ... <em>y</em><sub>n</sub>=<em>x</em><sub>n</sub>+<em>d</em><sub>n</sub>.
</p> <p>For us to claim reproducibility we want the differences</p> <p style="text-align: center;"> <em>d</em><sub>1</sub>=<em>y</em><sub>1</sub>-<em>x</em><sub>1</sub>, <em>d</em><sub>2</sub>=<em>y</em><sub>2</sub>-<em>x</em><sub>2</sub>,<em>... </em>,<em>d</em><sub>n</sub>=<em>y</em><sub>n</sub>-<em>x</em><sub>n</sub> </p> <p>to be “small”. To give this some context, imagine the <em>x</em> and <em>y</em> are log scale (base 2) gene expression measurements which implies the <em>d</em> represent log fold changes. If these differences have a standard deviation of 1, it implies that fold changes of 2 are typical between replicates. If our replication experiment produces measurements that are typically twice as big or twice as small as the original, I am not going to claim the measurements are reproduced. However, as it turns out, such terrible reproducibility can still result in correlations higher than 0.92.</p> <p>To someone basing their definition of correlation on the current common language usage this may seem surprising, but to someone basing it on math, it is not. To see this, note that the mathematical definition of correlation tells us that because <em>d</em> and <em>x</em> are independent:</p> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/08/pearsonformula.png"><img class=" aligncenter" src="http://simplystatistics.org/wp-content/uploads/2015/08/pearsonformula-300x55.png" alt="pearsonformula" width="300" height="55" /></a></p> <p>This tells us that correlation summarizes the variability of <em>d</em> relative to the variability of <em>x</em>. Because of the wide range of gene expression values we observe in practice, the standard deviation of <em>x</em> can easily be as large as 3 (variance is 9). This implies we expect to see correlations as high as 1/sqrt(1+1/9) = 0.95, despite the lack of reproducibility when comparing <em>x</em> to <em>y</em>.</p> <p>Note that using Spearman correlation does not fix this problem. 
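A quick simulation makes the point for both Pearson and Spearman (a sketch with made-up log base 2 measurements; the sample sizes and means are arbitrary, only the standard deviations matter):

```r
# Sketch: replicates that typically differ two-fold can still correlate > 0.92
set.seed(1)
x <- rnorm(10000, mean = 8, sd = 3)  # first experiment, wide dynamic range
d <- rnorm(10000, mean = 0, sd = 1)  # log2 fold changes between replicates
y <- x + d                           # second experiment

cor(x, y)                            # about 1/sqrt(1 + 1/9) = 0.95
cor(x, y, method = "spearman")       # also high
sqrt(mean(d^2))                      # about 1: 2-fold differences are typical
```

Despite replicate measurements that are routinely twice as big or twice as small, both correlation measures come out well above 0.9.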
A Spearman correlation of 1 tells us that the ranks of <em>x</em> and <em>y</em> are preserved, yet does not summarize the actual differences. The problem comes down to the fact that we care about the variability of <em>d</em>, and correlation, whether Pearson or Spearman, does not provide an optimal summary. While correlation relates to the preservation of ranks, a much more appropriate summary of reproducibility is the distance between <em>x</em> and <em>y</em>, which is related to the standard deviation of the differences <em>d</em>. A very simple R command you can use to generate this summary statistic is:</p> <pre>sqrt(mean(d^2))</pre> <p>or the robust version:</p> <pre>median(abs(d)) ##multiply by 1.4826 for unbiased estimate of true sd </pre> <p>The equivalent suggestion for plots is to make an <a href="https://en.wikipedia.org/wiki/MA_plot">MA-plot</a> instead of a scatterplot.</p> <p>But aren’t correlations and distances directly related? Sort of, and this actually brings up another problem. If the <em>x</em> and <em>y</em> are standardized to have average 0 and standard deviation 1 then, yes, correlation and distance are directly related:</p> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/08/distcorr.png"><img class=" size-medium wp-image-4202 aligncenter" src="http://simplystatistics.org/wp-content/uploads/2015/08/distcorr-300x51.png" alt="distcorr" width="300" height="51" srcset="http://simplystatistics.org/wp-content/uploads/2015/08/distcorr-300x51.png 300w, http://simplystatistics.org/wp-content/uploads/2015/08/distcorr-260x44.png 260w, http://simplystatistics.org/wp-content/uploads/2015/08/distcorr.png 878w" sizes="(max-width: 300px) 100vw, 300px" /></a></p> <p>However, if instead <em>x</em> and <em>y</em> have different average values, which would put into question reproducibility, then distance is sensitive to this problem while correlation is not.
If the standard deviation is 1, the formula is:</p> <p> </p> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/08/distcor2.png"><img class=" size-medium wp-image-4204 aligncenter" src="http://simplystatistics.org/wp-content/uploads/2015/08/distcor2-300x27.png" alt="distcor2" width="300" height="27" srcset="http://simplystatistics.org/wp-content/uploads/2015/08/distcor2-300x27.png 300w, http://simplystatistics.org/wp-content/uploads/2015/08/distcor2-1024x94.png 1024w" sizes="(max-width: 300px) 100vw, 300px" /></a></p> <p>Once we consider units (standard deviations different from 1) then the relationship becomes even more complicated. Two advantages of distance you should be aware of are:</p> <ol> <li>it is in the same units as the data, while correlations have no units, making it hard to interpret and select thresholds, and</li> <li>distance accounts for bias (differences in average), while correlation does not.</li> </ol> <p>A final important point relates to the use of correlation with data that is not approximately normal. The useful interpretation of correlation as a summary statistic stems from the bivariate normal approximation: for every standard unit increase in the first variable, the second variable increases <em>r</em> standard units, with <em>r</em> the correlation. A summary of this is <a href="http://genomicsclass.github.io/book/pages/exploratory_data_analysis_2.html">here</a>. However, when data is not normal this interpretation no longer holds. Furthermore, heavy-tailed distributions, which are common in genomics, can lead to instability. Here is an example of uncorrelated data with a single point added that leads to correlations close to 1.
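A toy version of that example (hypothetical simulated data, not the code used for the figure):

```r
# Sketch: one extreme observation can manufacture a high correlation
set.seed(1)
x <- rnorm(100)
y <- rnorm(100)          # x and y are independent
cor(x, y)                # near 0
cor(c(x, 50), c(y, 50))  # one added point pushes the correlation near 1
```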
This is quite common with RNAseq data.</p> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/08/supp_figure_2.png"><img class=" size-medium wp-image-4208 aligncenter" src="http://simplystatistics.org/wp-content/uploads/2015/08/supp_figure_2-300x300.png" alt="supp_figure_2" width="300" height="300" srcset="http://simplystatistics.org/wp-content/uploads/2015/08/supp_figure_2-300x300.png 300w, http://simplystatistics.org/wp-content/uploads/2015/08/supp_figure_2-1024x1024.png 1024w, http://simplystatistics.org/wp-content/uploads/2015/08/supp_figure_2-200x200.png 200w" sizes="(max-width: 300px) 100vw, 300px" /></a></p> <p> </p> rafalib package now on CRAN 2015-08-10T10:00:26+00:00 http://simplystats.github.io4165 <p>For the last several years I have been <a href="https://github.com/ririzarr/rafalib">collecting functions</a> I routinely use during exploratory data analysis in a private R package. <a href="http://mike-love.net/">Mike Love</a> and I used some of these in our HarvardX course and now, due to popular demand, I have created man pages and added the <a href="https://cran.r-project.org/web/packages/rafalib/">rafalib</a> package to CRAN. Mike has made several improvements and added some functions of his own. Here are quick descriptions of the rafalib functions I use most:</p> <p>mypar - Before making a plot in R I almost always type <tt>mypar()</tt>. This basically gets around the suboptimal defaults of <tt>par</tt>. For example, it makes the margins (<tt>mar</tt>, <tt>mgp</tt>) smaller and defines RColorBrewer colors as defaults.  It is optimized for the RStudio window. Another advantage is that you can type <tt>mypar(3,2)</tt> instead of <tt>par(mfrow=c(3,2))</tt>. <tt>bigpar()</tt> is optimized for R presentations or PowerPoint slides.</p> <p>as.fumeric - This function turns characters into factors and then into numerics.
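In spirit it does something like this (a one-line paraphrase of the idea; the package's actual implementation may differ, so I use a hypothetical name):

```r
# Sketch of the idea behind as.fumeric: characters -> factor -> numeric codes
as.fumeric_sketch <- function(x) as.numeric(factor(x))
as.fumeric_sketch(c("a", "b", "a", "c"))  # 1 2 1 3
```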
This is useful, for example, if you want to plot values <tt>x,y</tt> with colors defined by their corresponding categories saved in a character vector <tt>labs</tt>: <tt>plot(x,y,col=as.fumeric(labs))</tt>.</p> <p>shist (smooth histogram, pronounced <em>shitz</em>) - I wrote this function because I have a hard time interpreting the y-axis of <tt>density</tt>. The height of the curve drawn by <tt>shist</tt> can be interpreted as the height of a histogram if you used the units shown on the plot. Also, it automatically draws a smooth histogram for each entry in a matrix on the same plot.</p> <p>splot (subset plot) - The datasets I work with are typically large enough that <tt>plot(x,y)</tt> involves millions of points, which is <a href="http://stackoverflow.com/questions/7714677/r-scatterplot-with-too-many-points">a problem</a>. Several solutions are available to avoid overplotting, such as alpha-blending, hexbinning and 2d kernel smoothing. For reasons I won’t explain here, I generally prefer subsampling over these solutions. <tt>splot</tt> automatically subsamples. You can also specify an index that defines the subset.</p> <p>sboxplot (smart boxplot) - This function draws points, boxplots or outlier-less boxplots depending on sample size.
Coming soon is the kaboxplot (Karl Broman box-plots) for when you have too many boxplots.</p> <p>install_bioc - For Bioconductor users, this function simply does the <tt>source("http://www.bioconductor.org/biocLite.R")</tt> for you and then uses <tt>biocLite</tt> to install.</p> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/08/unnamed1.png"><img class="alignnone size-large wp-image-4190" src="http://simplystatistics.org/wp-content/uploads/2015/08/unnamed1-1024x773.png" alt="unnamed" width="990" height="747" srcset="http://simplystatistics.org/wp-content/uploads/2015/08/unnamed1-300x226.png 300w, http://simplystatistics.org/wp-content/uploads/2015/08/unnamed1-1024x773.png 1024w, http://simplystatistics.org/wp-content/uploads/2015/08/unnamed1-260x196.png 260w, http://simplystatistics.org/wp-content/uploads/2015/08/unnamed1.png 1035w" sizes="(max-width: 990px) 100vw, 990px" /></a></p> Interested in analyzing images of brains? Get started with open access data. 2015-08-09T21:29:17+00:00 http://simplystats.github.io4185 <div> <i>Editor's note: This is a guest post by <a href="http://www.anieloyan.com/" target="_blank">Ani Eloyan</a>. She is an Assistant Professor of Biostatistics at Brown University. Dr. Eloyan’s work focuses on</i> <i>semi-parametric likelihood based methods for matrix decompositions, statistical analyses of brain images, and the integration of various types of complex data structures for analyzing health care data</i><i>. She received her PhD in statistics from North Carolina State University and subsequently completed a postdoctoral fellowship in the <a href="http://www.biostat.jhsph.edu/">Department of Biostatistics at Johns Hopkins University</a>. Dr. Eloyan and her team won the <a>ADHD200 Competition</a></i> <i>discussed in <a href="http://journal.frontiersin.org/article/10.3389/fnsys.2012.00061/abstract" target="_blank">this</a> article.
She tweets <a href="https://twitter.com/eloyan_ani">@eloyan_ani</a>.</i> </div> <div> <i> </i> </div> <div> <div> Neuroscience is one of the exciting new fields for biostatisticians interested in real-world applications where they can contribute novel statistical approaches. Most research in brain imaging has historically included studies run for small numbers of patients. While justified by the costs of data collection, the claims based on analyzing data for such small numbers of subjects often do not hold for our populations of interest. As discussed in <a href="http://www.huffingtonpost.com/american-statistical-association/wanted-neuroquants_b_3749363.html" target="_blank">this</a> article, there is a huge demand for biostatisticians in the field of quantitative neuroscience: so-called neuroquants or neurostatisticians. However, while more statisticians are interested in the field, we are far from competing with other substantive domains. For instance, a quick search of abstract keywords in the online program of the upcoming <a href="https://www.amstat.org/meetings/jsm/2015/" target="_blank">JSM2015</a> conference of “brain imaging” and “neuroscience” results in 15 records, while a search of the words “genomics” and “genetics” generates 76 <a>records</a>. </div> <div> </div> <div> Assuming you are trained in statistics and are an aspiring neuroquant, how would you go about working with brain imaging data? As a graduate student in the <a href="http://www.stat.ncsu.edu/" target="_blank">Department of Statistics at NCSU</a> several years ago, I was very interested in working on statistical methods that would be directly applicable to solving problems in neuroscience. But I had this same question: “Where do I find the data?” I soon learned that to <i>really</i> approach substantial relevant problems I also needed to learn about the subject matter underlying these complex data structures.
</div> <div> </div> <div> In recent years, several leading groups have uploaded their lab data with the common goal of fostering the collection of high dimensional brain imaging data to build powerful models that can give generalizable results. The <a href="http://www.nitrc.org/" target="_blank">Neuroimaging Informatics Tools and Resources Clearinghouse (NITRC)</a>, founded in 2006, is a platform for public data sharing that facilitates streamlining data processing pipelines and compiling high dimensional imaging datasets for crowdsourcing the analyses. It includes data for people with neurological diseases and neurotypical children and adults. If you are interested in Alzheimer’s disease, you can check out <a href="http://adni.loni.usc.edu/" target="_blank">ADNI</a>. <a href="http://fcon_1000.projects.nitrc.org/indi/abide/" target="_blank">ABIDE</a> provides data for people with Autism Spectrum Disorder and neurotypical peers. <a href="http://fcon_1000.projects.nitrc.org/indi/adhd200/" target="_blank">ADHD200</a> was released in 2011 as a part of a competition to motivate building predictive methods for disease diagnoses using functional magnetic resonance imaging (fMRI) in addition to demographic information to predict whether a child has attention deficit hyperactivity disorder (ADHD). While the competition ended in 2011, the dataset has been widely utilized afterwards in studies of ADHD.  According to Google Scholar, the <a href="http://www.nature.com/mp/journal/v19/n6/abs/mp201378a.html" target="_blank">paper</a> introducing the ABIDE set has been cited 129 times since 2013 while the <a href="http://journal.frontiersin.org/article/10.3389/fnsys.2012.00062/full" target="_blank">paper</a> discussing the ADHD200 has been cited 51 times since <span style="font-family: Arial;">2012. These are only a few examples from the list of open access datasets that could be utilized by statisticians.
</span> </div> <div> </div> <div> Anyone can download these datasets (you may need to register and complete some paperwork in some cases); however, there are several data processing and cleaning steps to perform before the final statistical analyses. These preprocessing steps can be daunting for a statistician new to the field, especially as the tools used for preprocessing may not be available in R. <a href="https://hopstat.wordpress.com/2014/08/27/statisticians-in-neuroimaging-need-to-learn-preprocessing/" target="_blank">This</a> discussion makes the case as to why statisticians need to be involved in every step of preprocessing the data, while <u><a href="https://hopstat.wordpress.com/2014/06/17/fslr-an-r-package-interfacing-with-fsl-for-neuroimaging-analysis/" target="_blank">this R package</a></u> contains new tools linking R to a commonly used platform, <a href="http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/" target="_blank">FSL</a>. However, as a newcomer, it can be easier to start with data that are already processed. <a href="http://projecteuclid.org/euclid.ss/1242049389" target="_blank">This</a> excellent overview by Dr. Martin Lindquist provides an introduction to the different types of analyses for brain imaging data from a statistician’s point of view, while our <a href="http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0089470" target="_blank">paper</a> provides tools in R and example datasets for implementing some of these methods. At least one course on Coursera can help you get started with <a href="https://www.coursera.org/course/fmri" target="_blank">functional MRI</a> data. Talking to and reading the papers of biostatisticians working in quantitative neuroscience, as well as scientists in the field of neuroscience, is key.
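To give a feel for the data structures involved: once preprocessed, a brain volume is essentially a 3-D array of voxel intensities that base R can manipulate directly. A minimal sketch with a simulated array (a real image would be read from a NIfTI file, for example with the <code>oro.nifti</code> package, as noted in the comment):

```r
# A toy stand-in for a preprocessed brain volume: a 64 x 64 x 32 array
# of voxel intensities. A real image would be read from disk, e.g.
#   img <- oro.nifti::readNIfTI("image.nii.gz")   # not run here
set.seed(1)
vol <- array(rnorm(64 * 64 * 32, mean = 100, sd = 10),
             dim = c(64, 64, 32))

# A first "analysis": summarize intensity slice by slice
slice_means <- apply(vol, 3, mean)
length(slice_means)  # one summary per axial slice: 32
range(slice_means)   # hovers near the simulated mean of 100
```

Real volumes are larger and come with spatial metadata, but the array mental model carries over.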
</div> </div> Statistical Theory is our "Write Once, Run Anywhere" 2015-08-09T11:19:53+00:00 http://simplystats.github.io4177 <p>Having followed the software industry as a casual bystander, I periodically see the tension flare up between the idea of writing “native apps”, software that is tuned to a particular platform (Windows, Mac, etc.), and more cross-platform apps, which run on many platforms without too much modification. Over the years it has come up in many different forms, but the fundamentals are the same. Back in the day, there was Java, which was supposed to be the platform that ran on any computing device. Sun Microsystems originated the phrase “<a href="https://en.wikipedia.org/wiki/Write_once,_run_anywhere">Write Once, Run Anywhere</a>” to illustrate the cross-platform strengths of Java. More recently, Steve Jobs famously <a href="https://www.apple.com/hotnews/thoughts-on-flash/">banned Flash</a> from any iOS device. Apple is also moving away from standards like OpenGL and towards its own Metal platform.</p> <p>What’s the problem with “write once, run anywhere”, or with cross-platform development more generally, assuming it’s possible? Well, there are a <a href="https://en.wikipedia.org/wiki/Cross-platform#Challenges_to_cross-platform_development">number of issues</a>: often there are performance penalties, it may be difficult to use the native look and feel of a platform, and you may be reduced to using the “lowest common denominator” of feature sets. It seems to me that anytime a new meta-platform comes out that promises to relieve programmers of the burden of having to write for multiple platforms, it eventually gets modified or subsumed by the need to optimize apps for a given platform as much as possible. The need to squeeze as much juice out of an app seems to be too important an opportunity to pass up.</p> <p>In statistics, theory and theorems are our version of “write once, run anywhere”.
The basic idea is that theorems provide an abstract layer (a “virtual machine”) that allows us to reason across a large number of specific problems. Think of the <a href="https://en.wikipedia.org/wiki/Central_limit_theorem">central limit theorem</a>, probably our most popular theorem. It could be applied to any problem/situation where you have a notion of sample size that could in principle be increasing.</p> <p>But can it be applied to every situation, or even any situation? This might be more of a philosophical question, given that the CLT is stated asymptotically (maybe we’ll find out the answer eventually). In practice, my experience is that many people attempt to apply it to problems where it likely is not appropriate. Think of studies with a sample size of 10. Many people will use Normal-based confidence intervals in those situations, but they probably have very poor coverage.</p> <p>Because the CLT doesn’t apply in many situations (small sample, dependent data, etc.), variations of the CLT have been developed, as well as entirely different approaches to achieving the same ends, like confidence intervals, p-values, and standard errors (think bootstrap, jackknife, permutation tests). While the CLT can provide beautiful insight in a large variety of situations, in reality, one must often resort to a custom solution when analyzing a given dataset or problem. This should be a familiar conclusion to anyone who analyzes data. The promise of “write once, run anywhere” is always tantalizing, but the reality never seems to meet that expectation.</p> <p>Ironically, if you look across history and all programming languages, probably the most “cross-platform” language is C, which was originally considered to be too low-level to be broadly useful. C programs run on basically every existing platform and the language has been completely standardized so that compilers can be written to produce well-defined output.
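The poor small-sample coverage mentioned above is easy to demonstrate by simulation; here is a minimal sketch, using the exponential distribution as just one convenient skewed example:

```r
# Coverage of the Normal-based 95% CI for the mean with n = 10
# draws from a skewed distribution (Exponential with true mean 1)
set.seed(42)
n <- 10
nsim <- 10000
covered <- replicate(nsim, {
  x <- rexp(n, rate = 1)
  ci <- mean(x) + c(-1.96, 1.96) * sd(x) / sqrt(n)
  ci[1] <= 1 && 1 <= ci[2]   # does the interval cover the true mean?
})
mean(covered)  # noticeably below the nominal 0.95
```

With a larger n, or a resampling-based interval, the empirical coverage moves back toward 0.95.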
The keys to C’s success I think are that it’s a very simple/small language which gives enormous (sometimes dangerous) power to the programmer, and that an enormous toolbox (compiler toolchains, IDEs) has been developed over time to help developers write applications on all platforms.</p> <p>In a sense, we need “compilers” that can help us translate statistical theory for specific data analysis problems. In many cases, I’d imagine the compiler would “fail”, meaning the theory was not applicable to that problem. This would be a Good Thing, because right now we have no way of really enforcing the appropriateness of a theorem for specific problems.</p> <p>More practically (perhaps), we could develop <a href="http://simplystatistics.org/2012/08/27/a-deterministic-statistical-machine/">data analysis pipelines</a> that could be applied to broad classes of data analysis problems. Then a “compiler” could be employed to translate the pipeline so that it worked for a given dataset/problem/toolchain.</p> <p>The key point is to recognize that there is a “translation” process that occurs when we use theory to justify certain data analysis actions, but this translation process is often not well documented or even thought through. Having an explicit “compiler” for this would help us to understand the applicability of certain theorems and may serve to prevent bad data analysis from occurring.</p> Autonomous killing machines won't look like the Terminator...and that is why they are so scary 2015-07-30T11:09:22+00:00 http://simplystats.github.io4159 <p>Just a few days ago many of the most incredible minds in science and technology <a href="http://www.theguardian.com/technology/2015/jul/27/musk-wozniak-hawking-ban-ai-autonomous-weapons">urged governments to avoid using artificial intelligence</a> to create autonomous killing machines. 
One thing that always happens when such a warning is put into place is you see the inevitable Terminator picture:</p> <p> </p> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/07/terminator.jpeg"><img class="aligncenter wp-image-4160 size-medium" src="http://simplystatistics.org/wp-content/uploads/2015/07/terminator-300x180.jpeg" alt="terminator" width="300" height="180" srcset="http://simplystatistics.org/wp-content/uploads/2015/07/terminator-300x180.jpeg 300w, http://simplystatistics.org/wp-content/uploads/2015/07/terminator-260x156.jpeg 260w, http://simplystatistics.org/wp-content/uploads/2015/07/terminator.jpeg 620w" sizes="(max-width: 300px) 100vw, 300px" /></a></p> <p> </p> <p>The reality is that robots that walk and talk are getting better but still have a ways to go:</p> <p> </p> <p> </p> <p>Does this mean that I think all those really smart people are silly for making this plea about AI now though? No, I think they are probably just in time.</p> <p>The reason is that the first autonomous killing machines will definitely not look anything like the Terminator. 
They will more likely than not be drones, which are already in widespread use by the military, and will soon be flying over our heads <a href="http://money.cnn.com/2015/07/29/technology/amazon-drones-air-space/">delivering Amazon products</a>.</p> <p> </p> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/07/drone.jpg"><img class="aligncenter size-medium wp-image-4161" src="http://simplystatistics.org/wp-content/uploads/2015/07/drone-300x238.jpg" alt="drone" width="300" height="238" srcset="http://simplystatistics.org/wp-content/uploads/2015/07/drone-300x238.jpg 300w, http://simplystatistics.org/wp-content/uploads/2015/07/drone-1024x814.jpg 1024w, http://simplystatistics.org/wp-content/uploads/2015/07/drone.jpg 1200w" sizes="(max-width: 300px) 100vw, 300px" /></a></p> <p> </p> <p>I also think that when people think about “artificial intelligence” they also think about robots that can mimic the behaviors of a human being, including the ability to talk, hold a conversation, <a href="https://en.wikipedia.org/wiki/Turing_test">or pass the Turing test</a>. But it turns out that the “artificial intelligence” you would need to create an automated killing system is much, much simpler than that and is mostly some basic data science. The things you would need are:</p> <ol> <li>A drone with the ability to fly on its own</li> <li>The ability to make decisions about which people to target</li> <li>The ability to find those people and attack them</li> </ol> <p> </p> <p>The first issue, being able to fly on autopilot, is something that has existed for a while. You have probably flown on a plane that has <a href="https://en.wikipedia.org/wiki/Autopilot">used autopilot</a> for at least some of the flight. I won’t get into the details on this one because I think it is the least interesting - it has been around a while and we didn’t get the dire warnings about autonomous agents.</p> <p>The second issue, deciding which people to target, is already in existence as well.
We have already seen programs like <a href="https://en.wikipedia.org/wiki/PRISM_(surveillance_program)">PRISM</a> and others that collect individual-level metadata and presumably use those to make predictions. While the true and false positive rates are probably messed up by the fact that there are very, very few “true positives”, these programs are being developed, and even relatively simple statistical models can be used to build a predictor - even if those don’t work well.</p> <p>The third issue is being able to find those people in order to attack them. This is where the real “artificial intelligence” comes into play. But it isn’t artificial intelligence like you might think of it. It could be just as simple as having the drone fly around and take people’s pictures. Then we could use those pictures to match up with the people identified through metadata and attack them. Facebook has a <a href="file:///Users/jtleek/Downloads/deepface.pdf">paper</a> that demonstrates an algorithm that can identify people with near human level accuracy. This approach is based on something called deep neural nets, which sounds very intimidating, but is actually just a set of nested nonlinear <a href="https://en.wikipedia.org/wiki/Deep_learning">logistic regression models</a>. These models have gotten very good because (a) we are getting better at fitting them mathematically and computationally but mostly (b) we have much more data to train them with than we ever did before.
The speed at which this part of the process is developing is (I think) why there is so much recent concern about potentially negative applications like autonomous killing machines.</p> <p>The scary thing is that these technologies could be combined <em>right now</em> to create such a system that was not controlled directly by humans but made automated decisions and flew drones to carry out those decisions. The technology to shrink these types of deep neural net systems to identify people is so good it can even be made simple enough to <a href="http://googleresearch.blogspot.com/2015/07/how-google-translate-squeezes-deep.html">run on a phone</a> for things like language translation and could easily be embedded in a drone.</p> <p>So I am with Musk, Hawking, and others who would urge caution by governments in developing these systems. Just because we can make it doesn’t mean it will do what we want. Just look at how well Facebook/Amazon/Google make suggestions for “other things you might like” to get an idea about how potentially disastrous automated killing systems could be.</p> <p> </p> Announcing the JHU Data Science Hackathon 2015 2015-07-28T13:31:04+00:00 http://simplystats.github.io4155 <p>We are pleased to announce that the Department of Biostatistics at the Johns Hopkins Bloomberg School of Public Health will be hosting the first ever <a href="https://www.regonline.com/jhudash">JHU Data Science Hackathon</a> (DaSH) on <strong>September 21-23, 2015</strong> at the Baltimore Marriott Waterfront.</p> <p>This event will be an opportunity for data scientists and data scientists-in-training to get together and hack on real-world problems collaboratively and to learn from each other. The DaSH will feature data scientists from government, academia, and industry presenting problems and describing challenges in their respective areas. There will also be a number of networking opportunities where attendees can get to know each other.
We think this will be a fun event and we encourage people from all areas, including students (graduate and undergraduate), to attend.</p> <p>To get more details and to sign up for the hackathon, you can go to the <a href="https://www.regonline.com/jhudash">DaSH web site</a>. We will be posting more information as the event nears.</p> <p>Organizers:</p> <ul> <li>Jeff Leek</li> <li>Brian Caffo</li> <li>Roger Peng</li> <li>Leah Jager</li> </ul> <p>Funding:</p> <ul> <li>National Institutes of Health</li> <li>Johns Hopkins University</li> </ul> <p> </p> stringsAsFactors: An unauthorized biography 2015-07-24T11:04:20+00:00 http://simplystats.github.io4151 <p>Recently, I was listening in on the conversation of some colleagues who were discussing a bug in their R code. The bug was ultimately traced back to the well-known phenomenon that functions like ‘read.table()’ and ‘read.csv()’ in R convert columns that are detected to be character/strings into factor variables. This led to the spontaneous outcry from one colleague of</p> <blockquote> <p>Why does stringsAsFactors not default to FALSE????</p> </blockquote> <p>The argument ‘stringsAsFactors’ is an argument to the ‘data.frame()’ function in R. It is a logical that indicates whether strings in a data frame should be treated as factor variables or as just plain strings. The argument also appears in ‘read.table()’ and related functions because of the role these functions play in reading in table data and converting them to data frames. By default, ‘stringsAsFactors’ is set to TRUE.</p> <p>This argument dates back to May 20, 2006 when it was originally introduced into R as the ‘charToFactor’ argument to ‘data.frame()’. Soon afterwards, on May 24, 2006, it was changed to ‘stringsAsFactors’ to be compatible with S-PLUS by request from Bill Dunlap.</p> <p>Most people I talk to today who use R are completely befuddled by the fact that ‘stringsAsFactors’ is set to TRUE by default.
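A quick sketch of the behavior in question (the default described here is the one in effect when this was written; setting the argument explicitly, as below, works regardless):

```r
# The same character column, with and without the conversion
df_factor <- data.frame(x = c("a", "b", "a"), stringsAsFactors = TRUE)
df_string <- data.frame(x = c("a", "b", "a"), stringsAsFactors = FALSE)

class(df_factor$x)  # "factor"
class(df_string$x)  # "character"

# Why modeling functions care: a factor with k levels expands into
# k - 1 dummy columns (plus the intercept) in the model matrix
model.matrix(~ x, data = df_factor)
```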
First of all, it should be noted that before the ‘stringsAsFactors’ argument even existed, the behavior of R was to coerce all character strings to be factors in a data frame. If you didn’t want this behavior, you had to manually coerce each column to be character.</p> <p>So here’s the story:</p> <p>In the old days, when R was primarily being used by statisticians and statistical types, this practice of setting strings to be factors made total sense. In most tabular data, if there were a column of the table that was non-numeric, it almost certainly encoded a categorical variable. Think sex (male/female), country (U.S./other), region (east/west), etc. In R, categorical variables are represented by ‘factor’ vectors and so character columns got converted to factors.</p> <p>Why do we need factor variables to begin with? Because of modeling functions like ‘lm()’ and ‘glm()’. Modeling functions need to expand categorical variables into individual dummy variables, so that a categorical variable with 5 levels will be expanded into 4 different columns in your modeling matrix. There’s no way for R to know it should do this unless it has some extra information in the form of the factor class. From this point of view, setting ‘stringsAsFactors = TRUE’ when reading in tabular data makes total sense. If the data is just going to go into a regression model, then R is doing the right thing.</p> <p>There’s also a more obscure reason. Factor variables are encoded as integers in their underlying representation. So a variable with values “disease” and “non-disease” will be encoded as 1 and 2 in the underlying representation. Roughly speaking, since integers only require 4 bytes on most systems, the conversion from string to integer actually saved some space for long strings. All that had to be stored was the integer levels and the labels.
That way you didn’t have to repeat the strings “disease” and “non-disease” for as many observations as you had, which would have been wasteful.</p> <p>Around June of 2007, R introduced hashing of CHARSXP elements in the underlying C code thanks to Seth Falcon. What this meant was that effectively, character strings were hashed to an integer representation and stored in a global table in R. Anytime a given string was needed in R, it could be referenced by its underlying integer. This effectively put in place, globally, the factor encoding behavior of strings from before. Once this was implemented, there was little to be gained from an efficiency standpoint by encoding character variables as factors. Of course, you still needed to use ‘factors’ for the modeling functions.</p> <p>The difference nowadays is that R is being used by a very wide variety of people doing all kinds of things the creators of R never envisioned. This is, of course, wonderful, but it introduces lots of use cases that were not originally planned for. I find that most often, the people complaining about ‘stringsAsFactors’ not being FALSE are people who are doing things that are not the traditional statistical modeling things (things that old-time statisticians like me used to do). In fact, I would argue that if you’re upset about ‘stringsAsFactors = TRUE’, then it’s a pretty good indicator that you’re either not a statistician by training, or you’re doing non-traditional statistical things.</p> <p>For example, in genomics, you might have the names of the genes in one column of data. It really doesn’t make sense to encode these as factors because they won’t be used in any modeling function. They’re just labels, essentially.
And because of CHARSXP hashing, you don’t gain anything from an efficiency standpoint by converting them to factors either.</p> <p>But of course, given the long-standing behavior of R, many people depend on the default conversion of characters to factors when reading in tabular data. Changing this default would likely result in an equal number of people complaining about ‘stringsAsFactors’.</p> <p>I fully expect that this blog post will now make all R users happy. If you think I’ve missed something from this unauthorized biography, please let me know on Twitter (@rdpeng).</p> The statistics department Moneyball opportunity 2015-07-17T09:21:16+00:00 http://simplystats.github.io3922 <p><a href="https://en.wikipedia.org/wiki/Moneyball">Moneyball</a> is a book and a movie about Billy Beane. It makes statisticians look awesome and I loved the movie. I loved it so much I’m putting the movie trailer right here:</p> <p>The basic idea behind Moneyball was that the Oakland Athletics were able to build a very successful baseball team on a tight budget by valuing skills that many other teams undervalued. In baseball those skills were things like on-base percentage and slugging percentage. By correctly valuing these skills and their impact on a team’s winning percentage, the A’s were able to build one of the most successful regular-season teams on a minimal budget.
This graph, from a nice <a href="http://fivethirtyeight.com/features/billion-dollar-billy-beane/">fivethirtyeight analysis</a>, shows what an outlier they were.</p> <p> </p> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/07/oakland.png"><img class="aligncenter wp-image-4146" src="http://simplystatistics.org/wp-content/uploads/2015/07/oakland-1024x818.png" alt="oakland" width="500" height="400" srcset="http://simplystatistics.org/wp-content/uploads/2015/07/oakland-1024x818.png 1024w, http://simplystatistics.org/wp-content/uploads/2015/07/oakland-250x200.png 250w, http://simplystatistics.org/wp-content/uploads/2015/07/oakland.png 1150w" sizes="(max-width: 500px) 100vw, 500px" /></a></p> <p> </p> <p>I think that the data science/data analysis revolution that we have seen over the last decade has created a similar Moneyball opportunity for statistics and biostatistics departments. Traditionally in these departments the highest-value activities have been publishing in a select set of important statistics journals (JASA, JRSS-B, Annals of Statistics, Biometrika, Biometrics and more recently journals like Biostatistics and Annals of Applied Statistics). But there are some hugely valuable ways to contribute to statistics/data science that don’t necessarily end with papers in those journals, like:</p> <ol> <li>Creating good, well-documented, and widely used software</li> <li>Being primarily an excellent collaborator who brings in grant money and is a major contributor to science through statistics</li> <li>Publishing in top scientific journals rather than statistics journals</li> <li>Being a good scientific communicator who can attract talent</li> <li>Being a statistics educator who can build programs</li> </ol> <p>Another thing that is undervalued is not having a Ph.D. in statistics or biostatistics.
The fact that these skills are undervalued right now means that up-and-coming departments could identify and recruit talented people who might be missed by other departments and have a huge impact on the world. One tricky thing is that the rankings of departments are based on the votes of people from other departments who may or may not value these same skills. Another tricky thing is that many industry data science positions put incredibly high value on these skills and so you might end up competing with them for people - a competition that will definitely drive up the market value of these data scientist/statisticians. But for the folks who want to stay in academia, now is a prime opportunity.</p> The Mozilla Fellowship for Science 2015-07-10T11:10:26+00:00 http://simplystats.github.io4139 <p>This looks like an <a href="https://www.mozillascience.org/fellows">interesting opportunity</a> for grad students, postdocs, and early career researchers:</p> <blockquote> <p>We’re looking for researchers with a passion for open source and data sharing, already working to shift research practice to be more collaborative, iterative and open. Fellows will spend 10 months starting September 2015 as community catalysts at their institutions, mentoring the next generation of open data practitioners and researchers and building lasting change in the global open science community.</p> <p>Throughout their fellowship year, chosen fellows will receive training and support from Mozilla to hone their skills around open source and data sharing.
They will also craft code, curriculum and other learning resources that help their local communities learn open data practices, and teach forward to their peers.</p> </blockquote> <p>Here’s what you get:</p> <blockquote> <p>Fellows will receive:</p> <ul> <li>A stipend of $60,000 USD, paid in 10 monthly installments.</li> <li>One-time health insurance supplement for Fellows and their families, ranging from $3,500 for single Fellows to $7,000 for a couple with two or more children.</li> <li>One-time childcare allotment for families with children of up to $6,000.</li> <li>Allowance of up to $3,000 towards the purchase of a laptop computer, digital cameras, recorders and computer software; fees for continuing studies or other courses, research fees or payments, to the extent related to the fellowship.</li> <li>All approved fellowship trips – domestic and international – are covered in full.</li> </ul> </blockquote> <p>Deadline is August 14.</p> JHU, UMD researchers are getting a really big Big Data center 2015-07-08T16:26:45+00:00 http://simplystats.github.io4137 <p>From <a href="http://technical.ly/baltimore/2015/07/07/jhu-umd-big-data-maryland-advanced-research-computing-center-marcc/">Technical.ly Baltimore</a>:</p> <blockquote> <p>A nondescript, 3,700-square-foot building on Johns Hopkins’ Bayview campus will house a new data storage and computing center for university researchers. The $30 million Maryland Advanced Research Computing Center (MARCC) will be available to faculty from JHU and the University of Maryland, College Park.</p> </blockquote> <p>The web site has a pretty cool time-lapse video of the construction of the computing center. There’s also a bit more detail at the <a href="http://hub.jhu.edu/2015/07/06/computing-center-bayview">JHU Hub</a> site.</p> The Massive Future of Statistics Education 2015-07-03T10:17:24+00:00 http://simplystats.github.io4133 <p><em>NOTE: This post was written as a chapter for the not-yet-released Handbook on Statistics Education.
</em></p> <p>Data are eating the world, but our collective ability to analyze data is going on a starvation diet.</p> <div id="content"> <p> Everywhere you turn, data are being generated somehow. By the time you read this piece, you’ll probably have collected some data. (For example this piece has 2,072 words). You can’t avoid data—it’s coming from all directions. </p> <p> So what do we do with it? For the most part, nothing. There’s just too much data being spewed about. But for the data that we <em>are</em> interested in, we need to know the appropriate methods for thinking about and analyzing them. And by “we”, I mean pretty much everyone. </p> <p> In the future, everyone will need some data analysis skills. People are constantly confronted with data and the need to make choices and decisions from the raw data they receive. Phones deliver information about traffic, we have ratings about restaurants or books, and even rankings of hospitals. High school students can obtain complex and rich information about the colleges to which they’re applying while admissions committees can get real-time data on applicants’ interest in the college. </p> <p> Many people already have heuristic algorithms to deal with the data influx—and these algorithms may serve them well—but real statistical thinking will be needed for situations beyond choosing which restaurant to try for dinner tonight. </p> <p> <strong>Limited Capacity</strong> </p> <p> The McKinsey Global Institute, in a <a href="http://www.mckinsey.com/insights/americas/us_game_changers">highly cited report</a>, predicted that there would be a shortage of “data geeks” and that by 2018 there would be between 140,000 and 190,000 unfilled positions in data science. In addition, there will be an estimated 1.5 million people in managerial positions who will need to be trained to manage data scientists and to understand the output of data analysis. 
If history is any guide, it’s likely that these positions will get filled by people, regardless of whether they are properly trained. The potential consequences are disastrous as untrained analysts interpret complex big data coming from myriad sources of varying quality. </p> <p> Who will provide the necessary training for all these unfilled positions? The field of statistics’ current system of training people and providing them with master’s degrees and PhDs is woefully inadequate to the task. In 2013, the top 10 largest statistics master’s degree programs in the U.S. graduated a total of <a href="http://community.amstat.org/blogs/steve-pierson/2014/02/09/largest-graduate-programs-in-statistics">730 people</a>. At this rate we will never train the people needed. While statisticians have greatly benefited from the sudden and rapid increase in the amount of data flowing around the world, our capacity for scaling up the needed training for analyzing those data is essentially nonexistent. </p> <p> On top of all this, I believe that the McKinsey report is a gross underestimation of how many people will need to be trained in <em>some</em> data analysis skills in the future. Given how much data is being generated every day, and how critical it is for everyone to be able to intelligently interpret these data, I would argue that it’s necessary for <em>everyone</em> to have some data analysis skills. Needless to say, it’s foolish to suggest that everyone go get a master’s or even bachelor’s degrees in statistics. We need an alternate approach that is both high-quality and scalable to a large population over a short period of time. </p> <p> <strong>Enter the MOOCs</strong> </p> <p> In April of 2014, Jeff Leek, Brian Caffo, and I launched the <a href="https://www.coursera.org/specialization/jhudatascience/1">Johns Hopkins Data Science Specialization</a> on the Coursera platform. 
This is a sequence of nine courses that intends to provide a “soup-to-nuts” training in data science for people who are highly motivated and have some basic mathematical and computing background. The sequence of nine courses follows what we believe is the essential “data science process”, which is </p> <ol> <li> Formulating a question that can be answered with data </li> <li> Assembling, cleaning, tidying data relevant to a question </li> <li> Exploring data, checking, eliminating hypotheses </li> <li> Developing a statistical model </li> <li> Making statistical inference </li> <li> Communicating findings </li> <li> Making the work reproducible </li> </ol> <p> We took these basic steps and designed courses around each one of them. </p> <p> Each course is provided in a massive open online format, which means that many thousands of people typically enroll in each course every time it is offered. The learners in the courses do homework assignments, take quizzes, and peer-assess the work of others in the class. All grading and assessment is handled automatically so that the process can scale to arbitrarily large enrollments. As an example, the April 2015 session of the R Programming course had nearly 45,000 learners enrolled. Each class is exactly 4 weeks long and every class runs every month. </p> <p> We developed this sequence of courses in part to address the growing demand for data science training and education across the globe. Our background as biostatisticians was very closely aligned with the training needs of people interested in data science because, essentially, data science is <em>what we do every single day</em>. Indeed, one curriculum rule that we had was that we couldn’t include something if we didn’t in fact use it in our own work. </p> <p> The sequence has a substantial amount of standard statistics content, such as probability and inference, linear models, and machine learning. 
It also has non-standard content, such as git, GitHub, R programming, Shiny, and Markdown. Together, the sequence covers the full spectrum of tools that we believe will be needed by the practicing data scientist. </p> <p> For those who complete the nine courses, there is a capstone project at the end that involves taking all of the skills in the course and developing a data product. For our first capstone project we partnered with <a href="http://swiftkey.com/en/">SwiftKey</a>, a predictive text analytics company, to develop a project where learners had to build a statistical model for predicting words in a sentence. This project involves taking unstructured, messy data, processing it into an analyzable form, developing a statistical model while making tradeoffs for efficiency and accuracy, and creating a Shiny app to show off their model to the public. </p> <p> <strong>Degree Alternatives</strong> </p> <p> The Data Science Specialization is not a formal degree program offered by Johns Hopkins University—learners who complete the sequence do not get any Johns Hopkins University credit—and so one might wonder what the learners get out of the program (besides, of course, the knowledge itself). To begin with, the sequence is completely portfolio based, so learners complete projects that are immediately viewable by others. This allows others to evaluate a learner’s ability on the spot with real code or data analysis. </p> <p> All of the lecture content is openly available and hosted on GitHub, so outsiders can view the content and see for themselves what is being taught. This gives outsiders an opportunity to evaluate the program directly rather than have to rely on the sterling reputation of the institution teaching the courses. </p> <p> Each learner who completes a course using Coursera’s “Signature Track” (which currently costs $49 per course) can get a badge on their LinkedIn profile, which shows that they completed the course. 
This can often be as valuable as a degree or other certification as recruiters scouring LinkedIn for data scientist positions will be able to see our completers’ certifications in various data science courses. </p> <p> Finally, the scale and reach of our specialization immediately creates a large alumni social network that learners can take advantage of. As of March 2015, there were approximately 700,000 people who had taken at least one course in the specialization. These 700,000 people have a shared experience that, while not quite at the level of a college education, still is useful for forging connections between people, especially when people are searching around for jobs. </p> <p> <strong>Early Numbers</strong> </p> <p> So far, the sequence has been wildly successful. It averaged 182,507 enrollees a month for the first year in existence. The overall course completion rate was about 6% and the completion rate amongst those in the “Signature Track” (i.e. paid enrollees) was 67%. In October of 2014, barely 7 months since the start of the specialization, we had 663 learners enroll in the capstone project. </p> <p> <strong>Some Early Lessons</strong> </p> <p> From running the Data Science Specialization for over a year now, we have learned a number of lessons, some of which were unexpected. Here, I summarize the highlights of what we’ve learned. </p> <p> <strong>Data Science as Art and Science. </strong>Ironically, although the word “Science” appears in the name “Data Science”, there’s actually quite a bit about the practice of data science that doesn’t really resemble science at all. Much of what statisticians do in the act of data analysis is intuitive and ad hoc, with each data analysis being viewed as a unique flower. </p> <p> When attempting to design data analysis assignments that could be graded at scale with tens of thousands of people, we discovered that designing the rubrics for grading these assignments was not trivial. 
The reason is that our understanding of what makes a “good” analysis different from a bad one is not well articulated. We could not identify any community-wide understanding of what the components of a good analysis are. What are the “correct” methods to use in a given data analysis situation? What is definitely the “wrong” approach? </p> <p> Although each one of us had been doing data analysis for the better part of a decade, none of us could succinctly write down what the process was and how to recognize when it was being done wrong. To paraphrase Daryl Pregibon from his <a href="http://www.nap.edu/catalog/1910/the-future-of-statistical-software-proceedings-of-a-forum">1991 talk at the National Academies of Science</a>, we had a process that we regularly espoused but barely understood. </p> <p> <strong>Content vs. Curation.</strong> Much of the content that we put online is available elsewhere. With YouTube, you can find high-quality videos on almost any topic, and our videos are not really that much better. Furthermore, the subject matter that we were teaching was in no way proprietary. The linear models that we teach are the same linear models taught everywhere else. So what exactly was the value we were providing? </p> <p> Searching on YouTube requires that you know what you are looking for. This is a problem for people who are just getting into an area. Effectively, what we provided was a <em>curation</em> of all the knowledge that’s out there on the topic of data science (we also added our own quirky spin). Curation is hard, because you need to make definitive choices between what is and is not a core element of a field. But curation is essential for learning a field for the uninitiated. </p> <p> <strong>Skill sets vs. Certification</strong>. 
Because we knew that we were not developing a true degree program, we knew we had to develop the program in a manner so that the learners could quickly see for themselves the value they were getting out of it. This led us to take a portfolio approach where learners produced things that could be viewed publicly. </p> <p> In part because of the self-selection of the population seeking to learn data science skills, our learners were more interested in being able to demonstrate the skills taught in the course rather than an abstract (but official) certification as might be obtained in a degree program. This is not unlike going to a music conservatory, where the output is your ability to play an instrument rather than the piece of paper you receive upon graduation. We feel that giving people the ability to demonstrate skills and skill sets is perhaps more important than official degrees in some instances because it gives employers a concrete sense of what a person is capable of doing. </p> <p> <strong>Conclusions</strong> </p> <p> As of April 2015, we had a total of 1,158 learners complete the entire specialization, including the capstone project. Given these numbers and our rate of completion for the specialization as a whole, we believe we are on our way to achieving our goal of creating a highly scalable program for training people in data science skills. Of course, this program alone will not be sufficient for all of the data science training needs of society. But we believe that the approach that we’ve taken, using non-standard MOOC channels, focusing on skill sets instead of certification, and emphasizing our role in curation, is a rich opportunity for the field of statistics to explore in order to educate the masses about our important work. 
</p> </div> Looks like this R thing might be for real 2015-07-02T10:01:45+00:00 http://simplystats.github.io4131 <p>Not sure how I missed this, but the Linux Foundation just announced the <a href="http://www.linuxfoundation.org/news-media/announcements/2015/06/linux-foundation-announces-r-consortium-support-millions-users">R Consortium</a> for supporting the “world’s most popular language for analytics and data science and support the rapid growth of the R user community”. From the Linux Foundation:</p> <blockquote> <p>The R language is used by statisticians, analysts and data scientists to unlock value from data. It is a free and open source programming language for statistical computing and provides an interactive environment for data analysis, modeling and visualization. The R Consortium will complement the work of the R Foundation, a nonprofit organization based in Austria that maintains the language. The R Consortium will focus on user outreach and other projects designed to assist the R user and developer communities.</p> <p>Founding companies and organizations of the R Consortium include The R Foundation, Platinum members Microsoft and RStudio; Gold member TIBCO Software Inc.; and Silver members Alteryx, Google, HP, Mango Solutions, Ketchum Trading and Oracle.</p> </blockquote> How Airbnb built a data science team 2015-07-01T08:39:29+00:00 http://simplystats.github.io4129 <p>From <a href="http://venturebeat.com/2015/06/30/how-we-scaled-data-science-to-all-sides-of-airbnb-over-5-years-of-hypergrowth/">Venturebeat</a>:</p> <blockquote> <p>Back then we knew so little about the business that any insight was groundbreaking; data infrastructure was fast, stable, and real-time (I was querying our production MySQL database); the company was so small that everyone was in the loop about every decision; and the data team (me) was aligned around a singular set of metrics and methodologies.</p> <p>But five years and 43,000 percent growth later, things have gotten a bit 
more complicated. I’m happy to say that we’re also more sophisticated in the way we leverage data, and there’s now a lot more of it. The trick has been to manage scale in a way that brings together the magic of those early days with the growing needs of the present — a challenge that I know we aren’t alone in facing.</p> </blockquote> How public relations and the media are distorting science 2015-06-24T10:07:45+00:00 http://simplystats.github.io4109 <p>Throughout history, engineers, medical doctors and other applied scientists have helped convert basic science discoveries into products, public goods and policy that have greatly improved our quality of life. With rare exceptions, it has taken years if not decades to establish these discoveries. And even the exceptions stand on the shoulders of incremental contributions. The researchers that produce this knowledge go through a slow and painstaking process to reach these achievements.</p> <p>In contrast, most science-related media reports that grab the public’s attention fall into three categories:</p> <ol> <li>The <em>exaggerated big discovery</em>: Recent examples include the discovery of <a href="http://www.cbsnews.com/news/dangerous-pathogens-and-mystery-microbes-ride-the-subway/">the bubonic plague in the NYC subway</a>, <a href="http://www.bbc.com/news/science-environment-32287609">liquid water on Mars</a>, and <a href="http://www.nytimes.com/2015/05/24/opinion/sunday/infidelity-lurks-in-your-genes.html?ref=opinion&amp;_r=3">the infidelity gene</a>.</li> <li><em>Over-promising</em>: These try to explain a complicated basic science finding and, in the case of biomedical research, then speculate without much explanation that the finding will “lead to a deeper understanding of diseases and new ways to treat or cure them.”</li> <li><em>Science is broken</em>: These tend to report an anecdote about an allegedly corrupt scientist, maybe cite the “Why Most Published Research Findings are False” paper, and then extrapolate 
speculatively.</li> </ol> <p>In my estimation, despite the attention-grabbing headlines, the great majority of the subject matter included in these reports will not have an impact on our lives and will not even make it into scientific textbooks. So does science still have anything to offer? Reports of the third category have even scientists particularly worried. I, however, remain optimistic. First, I do not see any empirical evidence showing that the negative effects of the lack of reproducibility are worse now than 50 years ago. Furthermore, although not widely reported in the lay press, I continue to see bodies of work built by several scientists over several years or decades with much promise of leading to tangible improvements to our quality of life. Recent advances that I am excited about include <a href="http://scienceblogs.com/principles/2010/07/20/whats-a-topological-insulator/">topological</a> <a href="http://physics.gmu.edu/~pnikolic/articles/Topological%20insulators%20(Physics%20World,%20February%202011).pdf">insulators</a>, <a href="http://www.ncbi.nlm.nih.gov/pubmed/24955707">PD-1 pathway inhibitors</a>, <a href="https://en.wikipedia.org/wiki/CRISPR">clustered regularly interspaced short palindromic repeats</a>, advances in solar energy technology, and prosthetic robotics.</p> <p>However, there is one general aspect of science that I do believe has become worse. Specifically, it’s a shift in how much scientists jockey for media attention, even if it’s short-lived. Instead of striving to have a sustained impact on our field, which may take decades to achieve, an increasing number of scientists seem to be placing more value on appearing in the New York Times, giving a Ted Talk or having a blog or tweet go viral. As a consequence, too many of us end up working on superficial short-term challenges that, with the help of a professionally crafted press release, may result in an attention-grabbing media report. NB: I fully support science communication efforts, but not when the primary purpose is garnering attention, rather than educating.</p> <p>My concern spills over to funding agencies and philanthropic organizations as well. Consider the following two options. Option 1: be the funding agency representative tasked with organizing a big science project with a well-oiled PR machine. Option 2: be the funding agency representative in charge of several small projects, one of which may with low, but non-negligible, probability result in a Nobel Prize 30 years down the road. In the current environment, I see a preference for option 1.</p> <p>I am also concerned about how this atmosphere may negatively affect societal improvements within science. 
Publicly shaming transgressors on Twitter or expressing one’s outrage on a blog post can garner many social media clicks. However, these may have a smaller positive impact than mundane activities such as serving on a committee that, after several months of meetings, implements incremental, yet positive, changes. Time and energy spent on trying to increase internet clicks is time and energy we don’t spend on the tedious administrative activities that are needed to actually effect change.</p> <p>Because so many of the scientists that thrive in this atmosphere of short-lived media reports are disproportionately rewarded, I imagine investigators starting their careers feel some pressure to garner some media attention of their own. Furthermore, their view of how they are evaluated may be highly biased because evaluators that ignore media reports and focus more on the specifics of the scientific content tend to be less visible. So if you want to spend your academic career slowly building a body of work with the hopes of being appreciated decades from now, you should not think that it is hopeless based on what is perhaps a distorted view of how we are currently being evaluated.</p> <p>Update: changed topological insulators links to <a href="http://scienceblogs.com/principles/2010/07/20/whats-a-topological-insulator/">these</a> <a href="http://physics.gmu.edu/~pnikolic/articles/Topological%20insulators%20(Physics%20World,%20February%202011).pdf">two</a>. <a href="http://spectrum.ieee.org/semiconductors/materials/topological-insulators">Here</a> is one more. Via David S.</p> Interview at Leanpub 2015-06-16T21:49:33+00:00 http://simplystats.github.io4106 <p>A few weeks ago I sat down with Len Epp over at Leanpub to talk about my recently published book <em><a href="https://leanpub.com/rprogramming">R Programming for Data Science</a></em>. So far, I’ve only published one book through Leanpub but I’m a huge fan. 
They’ve developed a system that is, in my opinion, perfect for academic publishing. The book’s written in Markdown and they compile it into PDF, ePub, and mobi formats automatically.</p> <p>The full interview transcript is over at the <a href="http://blog.leanpub.com/2015/06/roger-peng.html">Leanpub blog</a>. If you want to listen to the audio of the interview, you can subscribe to the Leanpub <a href="https://itunes.apple.com/ca/podcast/id517117137?mt=2">podcast on iTunes</a>.</p> <p><a href="https://leanpub.com/rprogramming"><em>R Programming for Data Science</em></a> is available at Leanpub for a suggested price of $15 (but you can get it for free if you want). R code files, datasets, and video lectures are available through the various add-on packages. Thanks to all of you who’ve already bought a copy!</p> Johns Hopkins Data Science Specialization Capstone 2 Top Performers 2015-06-10T14:33:09+00:00 http://simplystats.github.io4102 <p><em>The second capstone session of the <a href="https://www.coursera.org/specialization/jhudatascience/1?utm_medium=listingPage">Johns Hopkins Data Science Specialization</a> concluded recently. This time, we had 1,040 learners sign up to participate in the session, which again featured a project developed in collaboration with the amazingly innovative folks at <a href="http://swiftkey.com/en/">SwiftKey</a>. </em></p> <p><em>We’ve identified the learners listed below as the top performers in this capstone session. This is an incredibly talented group of people who have worked very hard throughout the entire nine-course specialization.  Please take some time to read their stories and look at their work. 
</em></p> <h1 id="ben-apple">Ben Apple</h1> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/06/Ben_Apple.jpg"><img class="aligncenter size-medium wp-image-4091" src="http://simplystatistics.org/wp-content/uploads/2015/06/Ben_Apple-300x285.jpg" alt="Ben_Apple" width="300" height="285" srcset="http://simplystatistics.org/wp-content/uploads/2015/06/Ben_Apple-300x285.jpg 300w, http://simplystatistics.org/wp-content/uploads/2015/06/Ben_Apple.jpg 360w" sizes="(max-width: 300px) 100vw, 300px" /></a></p> <p>Ben Apple is a Data Scientist and Enterprise Architect with the Department of Defense.  Mr. Apple holds an MS in Information Assurance and is a PhD candidate in Information Sciences.</p> <h4 id="why-did-you-take-the-jhu-data-science-specialization"><strong>Why did you take the JHU Data Science Specialization?</strong></h4> <p>As a self-trained data scientist I was looking for a program that would formalize my established skills while expanding my data science knowledge and toolbox.</p> <h4 id="what-are-you-most-proud-of-doing-as-part-of-the-jhu-data-science-specialization"><strong>What are you most proud of doing as part of the JHU Data Science Specialization?</strong></h4> <p>The capstone project was the most demanding aspect of the program.  As such, I am most proud of the final project.  
The project stretched each of us beyond the standard coursework of the program and was quite satisfying.</p> <h4 id="how-are-you-planning-on-using-your-data-science-specialization-certificate"><strong>How are you planning on using your Data Science Specialization Certificate?</strong></h4> <p>To open doors so that I may further my research into the operational value of applying data science thought and practice to analytics of my domain.</p> <p><strong>Final Project: </strong><a href="https://bengapple.shinyapps.io/coursera_nlp_capstone">https://bengapple.shinyapps.io/coursera_nlp_capstone</a></p> <p><strong>Project Slide Deck: </strong><a href="http://rpubs.com/bengapple/71376">http://rpubs.com/bengapple/71376</a></p> <p> </p> <h1 id="ivan-corneillet">Ivan Corneillet</h1> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/06/Ivan.Corneillet.jpg"><img class="aligncenter size-medium wp-image-4092" src="http://simplystatistics.org/wp-content/uploads/2015/06/Ivan.Corneillet-300x300.jpg" alt="Ivan.Corneillet" width="300" height="300" srcset="http://simplystatistics.org/wp-content/uploads/2015/06/Ivan.Corneillet-300x300.jpg 300w, http://simplystatistics.org/wp-content/uploads/2015/06/Ivan.Corneillet-200x200.jpg 200w, http://simplystatistics.org/wp-content/uploads/2015/06/Ivan.Corneillet.jpg 400w" sizes="(max-width: 300px) 100vw, 300px" /></a></p> <p>A technologist, thinker, and tinkerer, Ivan facilitates the establishment of start-up companies by advising these companies about the hiring process, product development, and technology development, including big data, cloud computing, and cybersecurity. In his 17-year career, Ivan has held a wide range of engineering and management positions at various Silicon Valley companies. 
Ivan is a recent Wharton MBA graduate, and he previously earned his master’s degree in computer science from the Ensimag, and his master’s degree in electrical engineering from Université Joseph Fourier, both located in France.</p> <p><strong>Why did you take the JHU Data Science Specialization?</strong></p> <p>There are three reasons why I decided to enroll in the JHU Data Science Specialization. First, fresh from college, my formal education was best suited for scaling up the Internet’s infrastructure. However, because every firm in every industry now creates products and services from analyses of data, I challenged myself to learn about Internet-scale datasets. Second, I am a big supporter of MOOCs. I do not believe that MOOCs should replace traditional education; however, I do believe that MOOCs and traditional education will eventually coexist in the same way that open-source and closed-source software does (read my blog post for more information on this topic: http://ivantur.es/16PHild). Third, the Johns Hopkins University brand certainly motivated me to choose their program. With a great name comes a great curriculum and fantastic professors, right?</p> <p>Once I had completed the program, I was not disappointed at all. I had read a blog post that explained that the JHU Data Science Specialization was only a start to learning about data science. I certainly agree, but I would add that this program is a great start, because the curriculum emphasizes information that is crucial, while providing additional resources to those who wish to deepen their understanding of data science. My thanks to Professors Caffo, Leek, and Peng, the TAs, and Coursera for building and delivering this track!</p> <p><strong>What are you most proud of doing as part of the JHU Data Science Specialization?</strong></p> <p>The capstone project made for a very rich and exhilarating learning experience, and was my favorite course in the specialization. 
Because I did not have prior knowledge in natural language processing (NLP), I had to conduct a fair amount of research. However, the program’s minimal-guidance approach mimicked a real-world environment, and gave me the opportunity to leverage my experience with developing code and designing products to get the most out of the skillset taught in the track. The result was that I created a data product that implemented state-of-the-art NLP algorithms using what I think are the best technologies (i.e., C++, JavaScript, R, Ruby, and SQL), given the choices that I had made. Bringing everything together is what made me the most proud. Additionally, my product’s capabilities are a far cry from IBM’s Watson, but while I am well versed in supercomputer hardware, this track helped me to gain a much deeper appreciation of Watson’s AI.</p> <h4 id="how-are-you-planning-on-using-your-data-science-specialization-certificate-1"><strong>How are you planning on using your Data Science Specialization Certificate?</strong></h4> <p>Thanks to the broad skillset that the specialization covered, I feel confident wearing a data science hat. The concepts and tools covered in this program helped me to better understand the concerns that data scientists have and the challenges they face. 
From a business standpoint, I am also better equipped to identify the opportunities that lie ahead.</p> <p><strong>Final Project: </strong><a href="https://paspeur.shinyapps.io/wordmaster-io/">https://paspeur.shinyapps.io/wordmaster-io/</a></p> <p><strong>Project Slide Deck: </strong><a href="http://rpubs.com/paspeur/wordmaster-io">http://rpubs.com/paspeur/wordmaster-io</a></p> <h1 id="oscar-de-len">Oscar de León</h1> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/06/Oscar_De_Leon.jpg"><img class="aligncenter size-medium wp-image-4093" src="http://simplystatistics.org/wp-content/uploads/2015/06/Oscar_De_Leon-300x225.jpg" alt="Oscar_De_Leon" width="300" height="225" srcset="http://simplystatistics.org/wp-content/uploads/2015/06/Oscar_De_Leon-120x90.jpg 120w, http://simplystatistics.org/wp-content/uploads/2015/06/Oscar_De_Leon-300x225.jpg 300w, http://simplystatistics.org/wp-content/uploads/2015/06/Oscar_De_Leon-1024x768.jpg 1024w, http://simplystatistics.org/wp-content/uploads/2015/06/Oscar_De_Leon-260x195.jpg 260w" sizes="(max-width: 300px) 100vw, 300px" /></a></p> <p>Oscar is an assistant researcher at a research institute in a developing country, and graduated as a licentiate in biochemistry and microbiology in 2010 from the same university which hosts the institute. He has always loved technology, programming and statistics and has engaged in self-learning of these subjects from an early age, finally using his abilities in the health-related research in which he has been involved since 2008. 
He is now working on the design, execution and analysis of various research projects, consulting for other researchers and students, and is looking forward to developing his academic career in biostatistics.</p> <h4 id="why-did-you-take-the-jhu-data-science-specialization-1"><strong>Why did you take the JHU Data Science Specialization?</strong></h4> <p>I wanted to integrate my R experience into a more comprehensive data analysis workflow, which is exactly what this specialization offers. This was in line with the objectives of my position at the research institute in which I work, so I presented a study plan to my supervisor and she approved it. I also wanted to engage in an activity which enabled me to document my abilities in a verifiable way, and a Coursera Specialization seemed like a good option.</p> <p>Additionally, I’ve followed the JHSPH group’s courses since the first offering of Mathematical Biostatistics Bootcamp in November 2012. They have proved the standards and quality of education at their institution, and it was not something to let go by.</p> <h4 id="what-are-you-most-proud-of-doing-as-part-of-the-jhu-data-science-specialization-1"><strong>What are you most proud of doing as part of the JHU Data Science Specialization?</strong></h4> <p>I’m not one to usually interact with other students, and certainly didn’t do it during most of the specialization courses, but I decided to try out the fora on the Capstone project. It was wonderful; sharing ideas with, and receiving criticism from, my peers provided a very complete learning experience. After all, my contributions ended up being appreciated by the community, and a few posts saying so were very rewarding. 
This rekindled my passion for teaching, and I’ll try to engage in it more from now on.</p> <h4 id="how-are-you-planning-on-using-your-data-science-specialization-certificate-2"><strong>How are you planning on using your Data Science Specialization Certificate?</strong></h4> <p>First, I’ll file it with HR at my workplace, since our research projects paid for the specialization <img src="http://simplystatistics.org/wp-includes/images/smilies/simple-smile.png" alt=":)" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p> <p>I plan to use the certificate as a credential for data analysis with R when it is relevant. For example, I’ve been interested in offering an R workshop for life sciences students and researchers at my University, and this certificate (and the projects I prepared during the specialization) could help me show I have a working knowledge of the subject.</p> <p><strong>Final Project: </strong><a href="https://odeleon.shinyapps.io/ngram/">https://odeleon.shinyapps.io/ngram/</a></p> <p><strong>Project Slide Deck: </strong><a href="http://rpubs.com/chemman/n-gram">http://rpubs.com/chemman/n-gram</a></p> <h1 id="jeff-hedberg">Jeff Hedberg</h1> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/06/Jeff_Hedberg.jpg"><img class="aligncenter size-full wp-image-4094" src="http://simplystatistics.org/wp-content/uploads/2015/06/Jeff_Hedberg.jpg" alt="Jeff_Hedberg" width="200" height="200" /></a></p> <p>I am passionate about turning raw data into actionable insights that solve relevant business problems. I also greatly enjoy leading large, multi-functional projects with impact in areas pertaining to machine and/or sensor data.
I have a Mechanical Engineering Degree and an MBA, in addition to a wide range of Data Science (IT/Coding) skills.</p> <h4 id="why-did-you-take-the-jhu-data-science-specialization-2"><strong>Why did you take the JHU Data Science Specialization?</strong></h4> <p>I was looking to gain additional exposure to Data Science as a current practitioner, and thought this would be a great program.</p> <h4 id="what-are-you-most-proud-of-doing-as-part-of-the-jhu-data-science-specialization-2"><strong>What are you most proud of doing as part of the JHU Data Science Specialization?</strong></h4> <p>I am most proud of completing all courses with distinction (top of peers). I’m also proud to have achieved full points on my Capstone project despite having no prior experience in Natural Language Processing.</p> <h4 id="how-are-you-planning-on-using-your-data-science-specialization-certificate-3"><strong>How are you planning on using your Data Science Specialization Certificate?</strong></h4> <p>I am going to add this to my resume and LinkedIn profile.
I will use it to solidify my credibility as a data science practitioner of value.</p> <p><strong>Final Project: </strong><a href="https://hedbergjeffm.shinyapps.io/Next_Word_Prediction/">https://hedbergjeffm.shinyapps.io/Next_Word_Prediction/</a></p> <p><strong>Project Slide Deck: </strong><a href="https://rpubs.com/jhedbergfd3s/74960">https://rpubs.com/jhedbergfd3s/74960</a></p> <h1 id="hernn-martnez-foffani">Hernán Martínez-Foffani</h1> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/06/Hernán_Martínez-Foffani.jpg"><img class="aligncenter size-medium wp-image-4095" src="http://simplystatistics.org/wp-content/uploads/2015/06/Hernán_Martínez-Foffani-300x225.jpg" alt="Hernán_Martínez-Foffani" width="300" height="225" srcset="http://simplystatistics.org/wp-content/uploads/2015/06/Hernán_Martínez-Foffani-120x90.jpg 120w, http://simplystatistics.org/wp-content/uploads/2015/06/Hernán_Martínez-Foffani-300x225.jpg 300w, http://simplystatistics.org/wp-content/uploads/2015/06/Hernán_Martínez-Foffani-1024x768.jpg 1024w, http://simplystatistics.org/wp-content/uploads/2015/06/Hernán_Martínez-Foffani-260x195.jpg 260w, http://simplystatistics.org/wp-content/uploads/2015/06/Hernán_Martínez-Foffani.jpg 1256w" sizes="(max-width: 300px) 100vw, 300px" /></a></p> <p>I was born in Argentina but now I’m settled in Spain. I’ve been working in computer technology since the eighties, in digital networks, programming, consulting, project management. Now, as CTO in a software company, I lead a small team of programmers developing a supply chain management app.</p> <h4 id="why-did-you-take-the-jhu-data-science-specialization-3"><strong>Why did you take the JHU Data Science Specialization?</strong></h4> <p>In my opinion the curriculum is carefully designed with a nice balance between theory and practice. The JHU authorship and the teachers’ widely known prestige ensure the quality of the content.
The ability to choose the learning pace, one course per month in my case, fits everyone’s schedule.</p> <h4 id="what-are-you-most-proud-of-doing-as-part-of-the-jhu-data-science-specialization-3"><strong>What are you most proud of doing as part of the JHU Data Science Specialization?</strong></h4> <p>The capstone definitely. It resulted in a fresh and interesting challenge. I sweated a lot, learned much more and in the end had a lot of fun.</p> <h4 id="how-are-you-planning-on-using-your-data-science-specialization-certificate-4"><strong>How are you planning on using your Data Science Specialization Certificate?</strong></h4> <p>While for the time being I don’t have any specific plan for the certificate, it’s a beautiful reward for the effort put in.</p> <p><strong>Final Project: </strong><a href="https://herchu.shinyapps.io/shinytextpredict/">https://herchu.shinyapps.io/shinytextpredict/</a></p> <p><strong>Project Slide Deck: </strong><a href="http://rpubs.com/herchu1/shinytextprediction">http://rpubs.com/herchu1/shinytextprediction</a></p> <h1 id="francois-schonken">Francois Schonken</h1> <p> </p> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/06/Francois-Schonken1.jpg"><img class="aligncenter size-medium wp-image-4097" src="http://simplystatistics.org/wp-content/uploads/2015/06/Francois-Schonken1-197x300.jpg" alt="Francois Schonken" width="197" height="300" /></a></p> <p>I’m a 36-year-old South African male born and raised. I recently (4 years now) immigrated to lovely Melbourne, Australia. I wrapped up a BSc (Hons) Computer Science with specialization in Computer Systems back in 2001. Next I co-founded a small boutique Software Development house operating from South Africa.
I wrapped up my MBA from Melbourne Business School in 2013, and now I consult for my small boutique Software Development house and 2 (very) small internet start-ups.</p> <h4 id="why-did-you-take-the-jhu-data-science-specialization-4"><strong>Why did you take the JHU Data Science Specialization?</strong></h4> <p>One of the core subjects in my MBA was Data Analysis, basically an MBA take on undergrad Statistics with a focus on application over theory (not that there was any shortage of theory). Waiting in a lobby some 6 months later, I was paging through the financial section of a business-focused weekly. I came across an article explaining how a Melbourne local applied a language called R to solve a grammatically and statistically challenging issue. The rest, as they say, is history.</p> <h4 id="what-are-you-most-proud-of-doing-as-part-of-the-jhu-data-science-specialization-4"><strong>What are you most proud of doing as part of the JHU Data Science Specialization?</strong></h4> <p>I’m quite proud of both my Developing Data Products and Capstone projects, but for me these tangible outputs merely served as a vehicle to better understand a different way of thinking about data. I’ve spent most of my Software Development life dealing with one form or another of RDBMS (Relational Database Management System). This, in my experience, leads to a very set-oriented way of thinking about data.</p> <p>I’m most proud of developing a new tool in my “Skills Toolbox” that I consider highly complementary to both my Software and Business outlook on projects.</p> <h4 id="how-are-you-planning-on-using-your-data-science-specialization-certificate-5"><strong>How are you planning on using your Data Science Specialization Certificate?</strong></h4> <p>Honestly, I had not planned on using my Certificate in and of itself.
The skills I’ve acquired have already helped shape my thinking in designing an in-house web-based consulting collaboration platform.</p> <p>I do not foresee this being the last time I’ll be applying Data Science thinking moving forward on my journey.</p> <p><strong>Final Project: </strong><a href="https://schonken.shinyapps.io/WordPredictor">https://schonken.shinyapps.io/WordPredictor</a></p> <p><strong>Project Slide Deck: </strong><a href="http://rpubs.com/schonken/sentence-builder">http://rpubs.com/schonken/sentence-builder</a></p> <h1 id="david-j-tagler">David J. Tagler</h1> <p> </p> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/06/David-J.-Tagler.jpg"><img class="aligncenter size-medium wp-image-4098" src="http://simplystatistics.org/wp-content/uploads/2015/06/David-J.-Tagler-300x300.jpg" alt="David J. Tagler" width="300" height="300" srcset="http://simplystatistics.org/wp-content/uploads/2015/06/David-J.-Tagler-300x300.jpg 300w, http://simplystatistics.org/wp-content/uploads/2015/06/David-J.-Tagler-200x200.jpg 200w, http://simplystatistics.org/wp-content/uploads/2015/06/David-J.-Tagler.jpg 384w" sizes="(max-width: 300px) 100vw, 300px" /></a></p> <p>David is passionate about solving the world’s most important and challenging problems. His expertise spans chemical/biomedical engineering, regenerative medicine, healthcare technology management, information technology/security, and data science/analysis. David earned his Ph.D. in Chemical Engineering from Northwestern University and B.S. in Chemical Engineering from the University of Notre Dame.</p> <h4 id="why-did-you-take-the-jhu-data-science-specialization-5"><strong>Why did you take the JHU Data Science Specialization?</strong></h4> <p>I enrolled in this specialization in order to advance my statistics, programming, and data analysis skills. I was interested in taking a series of courses that covered the entire data science pipeline.
I believe that these skills will be critical for success in the future.</p> <h4 id="what-are-you-most-proud-of-doing-as-part-of-the-jhu-data-science-specialization-5"><strong>What are you most proud of doing as part of the JHU Data Science Specialization?</strong></h4> <p>I am most proud of the R programming and modeling skills that I developed throughout this specialization. Previously, I had no experience with R. Now, I can effectively manage complex data sets, perform statistical analyses, build prediction models, create publication-quality figures, and deploy web applications.</p> <h4 id="how-are-you-planning-on-using-your-data-science-specialization-certificate-6"><strong>How are you planning on using your Data Science Specialization Certificate?</strong></h4> <p>I look forward to utilizing these skills in future research projects. Furthermore, I plan to take additional courses in data science, machine learning, and bioinformatics.</p> <p><strong>Final Project: </strong><a href="http://dt444.shinyapps.io/next-word-predict">http://dt444.shinyapps.io/next-word-predict</a></p> <p><strong>Project Slide Deck: </strong><a href="http://rpubs.com/dt444/next-word-predict">http://rpubs.com/dt444/next-word-predict</a></p> <h1 id="melissa-tan">Melissa Tan</h1> <p> </p> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/06/MelissaTan.png"><img class="aligncenter size-medium wp-image-4099" src="http://simplystatistics.org/wp-content/uploads/2015/06/MelissaTan-300x198.png" alt="MelissaTan" width="300" height="198" srcset="http://simplystatistics.org/wp-content/uploads/2015/06/MelissaTan-300x198.png 300w, http://simplystatistics.org/wp-content/uploads/2015/06/MelissaTan-260x172.png 260w" sizes="(max-width: 300px) 100vw, 300px" /></a></p> <p>I’m a financial journalist from Singapore.
I did philosophy and computer science at the University of Chicago, and I’m keen on picking up more machine learning and data viz skills.</p> <h4 id="why-did-you-take-the-jhu-data-science-specialization-6"><strong>Why did you take the JHU Data Science Specialization?</strong></h4> <p>I wanted to keep up with coding, while learning new tools and techniques for wrangling and analyzing data that I could potentially apply to my job. Plus, it sounded fun. <img src="http://simplystatistics.org/wp-includes/images/smilies/simple-smile.png" alt=":)" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p> <h4 id="what-are-you-most-proud-of-doing-as-part-of-the-jhu-data-science-specialization-6"><strong>What are you most proud of doing as part of the JHU Data Science Specialization?</strong></h4> <p>Building a word prediction app pretty much from scratch (with a truckload of forum reading). The capstone project seemed insurmountable initially and ate up all my weekends, but getting the app to work passably was worth it.</p> <h4 id="how-are-you-planning-on-using-your-data-science-specialization-certificate-7"><strong>How are you planning on using your Data Science Specialization Certificate?</strong></h4> <p>It’ll go on my CV, but I think it’s more important to be able to actually do useful things.
I’m keeping an eye out for more practical opportunities to apply and sharpen what I’ve learnt.</p> <p><strong>Final Project: </strong><a href="https://melissatan.shinyapps.io/word_psychic/">https://melissatan.shinyapps.io/word_psychic/</a></p> <p><strong>Project Slide Deck: </strong><a href="https://rpubs.com/melissatan/capstone">https://rpubs.com/melissatan/capstone</a></p> <h1 id="felicia-yii">Felicia Yii</h1> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/06/FeliciaYii.jpg"><img class="aligncenter size-medium wp-image-4100" src="http://simplystatistics.org/wp-content/uploads/2015/06/FeliciaYii-232x300.jpg" alt="FeliciaYii" width="232" height="300" srcset="http://simplystatistics.org/wp-content/uploads/2015/06/FeliciaYii-232x300.jpg 232w, http://simplystatistics.org/wp-content/uploads/2015/06/FeliciaYii-793x1024.jpg 793w" sizes="(max-width: 232px) 100vw, 232px" /></a></p> <p>Felicia likes to dream, think and do. With over 20 years in the IT industry, her current fascination is at the intersection of people, information and decision-making. Ever inquisitive, she has acquired expertise in subjects ranging from coding to cookery to costume making to cosmetics chemistry. It’s not apparent that there is anything she can’t learn to do, apart from housework. Felicia lives in Wellington, New Zealand with her husband, two children and two cats.</p> <h4 id="why-did-you-take-the-jhu-data-science-specialization-7"><strong>Why did you take the JHU Data Science Specialization?</strong></h4> <p>Well, I love learning and the JHU Data Science Specialization appealed to my thirst for a new challenge. I’m really interested in how we can use data to help people make better decisions. There’s so much data out there these days that it is easy to be overwhelmed by it all. Data visualisation was at the heart of my motivation when starting out.
As I got into the nitty-gritty of the course, I really began to see the power of making data accessible and appealing to the data-agnostic world. There’s so much potential for data science thinking in my professional work.</p> <h4 id="what-are-you-most-proud-of-doing-as-part-of-the-jhu-data-science-specialization-7"><strong>What are you most proud of doing as part of the JHU Data Science Specialization?</strong></h4> <p>Getting through it, for starters, while also working and looking after two children. Seriously though, being able to say I know what ‘practical machine learning’ is all about. Before I started the course, I had limited knowledge of statistics, let alone knowing how to apply it in a machine learning context. I was thrilled to be able to use what I learned to test a cool game concept in my final project.</p> <h4 id="how-are-you-planning-on-using-your-data-science-specialization-certificate-8"><strong>How are you planning on using your Data Science Specialization Certificate?</strong></h4> <p>I want to use what I have learned in as many ways as possible. Firstly, I see opportunities to apply my skills to my day-to-day work in information technology. Secondly, I would like to help organisations that don’t have the skills or expertise in-house to apply data science thinking to help their decision making and communication. Thirdly, it would be cool one day to have my own company consulting on data science. I’ve more work to do to get there though!</p> <p><strong>Final Project: </strong><a href="https://micasagroup.shinyapps.io/nwpgame/">https://micasagroup.shinyapps.io/nwpgame/</a></p> <p><strong>Project Slide Deck: </strong><a href="https://rpubs.com/MicasaGroup/74788">https://rpubs.com/MicasaGroup/74788</a></p> <p> </p> Batch effects are everywhere! Deflategate edition 2015-06-09T11:47:27+00:00 http://simplystats.github.io4075 <p>In my opinion, batch effects are the biggest challenge faced by genomics research, especially in precision medicine.
As we point out in <a href="http://www.ncbi.nlm.nih.gov/pubmed/20838408">this review</a>, they are everywhere among high-throughput experiments. But batch effects are not specific to genomics technology. In fact, in <a href="http://amstat.tandfonline.com/doi/abs/10.1080/00401706.1972.10488878">this 1972 paper</a> (paywalled), <a href="http://en.wikipedia.org/wiki/William_J._Youden">WJ Youden</a> describes batch effects in the context of measurements made by physicists. Check out this plot of <a href="https://en.wikipedia.org/wiki/Astronomical_unit">astronomical unit</a> <del>speed of light</del> estimates <strong>with an estimate of spread <del>confidence intervals</del></strong> (red and green are the same lab).</p> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/06/Rplot.png"><img class=" wp-image-4295 aligncenter" src="http://simplystatistics.org/wp-content/uploads/2015/06/Rplot.png" alt="Rplot" width="467" height="290" srcset="http://simplystatistics.org/wp-content/uploads/2015/06/Rplot-300x186.png 300w, http://simplystatistics.org/wp-content/uploads/2015/06/Rplot.png 903w" sizes="(max-width: 467px) 100vw, 467px" /></a></p> <p style="text-align: center;"> <p> &nbsp; </p> <p> Sometimes you find batch effects where you least expect them. For example, in the <a href="http://en.wikipedia.org/wiki/Deflategate">deflategate</a> debate. Here is a quote from the New England Patriots’ deflategate <a href="http://www.boston.com/sports/football/patriots/2015/05/14/key-takeaways-from-the-patriots-deflategate-report-rebuttal/hK0J0J9abNgtGyhTwlW53L/story.html">rebuttal</a> (written with help from Nobel Prize winner <a href="http://en.wikipedia.org/wiki/Roderick_MacKinnon">Roderick MacKinnon</a>): </p> <blockquote> <p> in other words, the Colts balls were measured after the Patriots balls and had warmed up more. For the above reasons, the Wells Report conclusion that physical law cannot explain the pressures is incorrect.
</p> </blockquote> <p style="text-align: left;"> Here is another one: </p> <blockquote> <p style="text-align: left;"> In the pressure measurements physical conditions were not very well-defined and major uncertainties, such as which gauge was used in pre-game measurements, affect conclusions. </p> </blockquote> <p style="text-align: left;"> So NFL, please read <a href="http://www.ncbi.nlm.nih.gov/pubmed/20838408">our paper</a> before you accuse a player of cheating. </p> <p style="text-align: left;"> Disclaimer: I live in New England but I am a <a href="http://www.urbandictionary.com/define.php?term=Ball+so+Hard+University">Ravens</a> fan. </p> </p> I'm a data scientist - mind if I do surgery on your heart? 2015-06-08T14:15:39+00:00 http://simplystats.github.io4068 <p>There has been a lot of recent interest from scientific journals and from other folks in creating checklists for data science and data analysis. The idea is that the checklist will help prevent results that won’t reproduce or replicate from entering the literature. One analogy that I’m frequently hearing is the analogy with checklists for surgeons that <a href="http://www.nejm.org/doi/full/10.1056/NEJMsa0810119">can help reduce patient mortality</a>.</p> <p>The one major difference between checklists for surgeons and checklists I’m seeing for research purposes is the difference in credentialing between people allowed to perform surgery and people allowed to perform complex data analysis. You would never let me do surgery on you. I have no medical training at all. But I’m frequently asked to review papers that include complicated and technical data analyses, but have no trained data analysts or statisticians. The most common approach is that a postdoc or graduate student in the group is assigned to do the analysis, even if they don’t have much formal training. Whenever this happens, red flags go up all over the place.
Just like I wouldn’t trust someone without years of training and a medical license to do surgery on me, I wouldn’t let someone without years of training and credentials in data analysis draw major conclusions from complex data analysis.</p> <p>You might argue that the consequences for surgery and for complex data analysis are on completely different scales. I’d agree with you, but not in the direction that you might think. I would argue that high-pressure and complex data analysis can have much larger consequences than surgery. In surgery there is usually only one person who can be hurt. But if you do a bad data analysis, claiming, say, that <a href="http://www.ncbi.nlm.nih.gov/pubmed/9500320">vaccines cause autism</a>, that can have massive consequences for hundreds or even thousands of people. So complex data analysis, especially for important results, should be treated with at least as much care as surgery.</p> <p>The reason why I don’t think checklists alone will solve the problem is that they are likely to be used by people without formal training. One obvious (and recent) example that I think makes this really clear is the <a href="https://developer.apple.com/healthkit/">HealthKit</a> data we are about to start seeing. A ton of people signed up for studies on their iPhones and it has been all over the news. The checklist will (almost certainly) say to have a big sample size. HealthKit studies will certainly pass the checklist, but they are going to get <a href="http://en.wikipedia.org/wiki/Dewey_Defeats_Truman">Truman/Deweyed</a> big time if they aren't careful about biased sampling.</p> <div> If I walked into an operating room and said I'm going to start dabbling in surgery I would be immediately thrown out. But people do that with statistics and data analysis all the time. What we really need is to require careful training and expertise in data analysis on each paper that analyzes data.
Until we treat it as a first-class component of the scientific process we'll continue to see retractions, falsifications, and irreproducible results flourish. </div> Interview with Class Central 2015-06-04T09:27:20+00:00 http://simplystats.github.io4063 <p>Recently I sat down with Class Central to do an interview about the Johns Hopkins Data Science Specialization. We talked about the motivation for designing the sequence and the capstone project. With the demand for data science skills greater than ever, the importance of the specialization is only increasing.</p> <p>See the <a href="https://www.class-central.com/report/data-science-specialization/">full interview</a> at the Class Central site. Below is a short excerpt.</p> Interview with Chris Wiggins, chief data scientist at the New York Times 2015-06-01T09:00:27+00:00 http://simplystats.github.io4056 <p><em>Editor’s note: We are trying something a little new here and doing an interview with Google Hangouts on Air. The interview will be live at 11:30am EST. I have some questions lined up for Chris, but if you have others you’d like to ask, you can tweet them @simplystats and I’ll see if I can work them in. After the livestream we’ll leave the video on Youtube so you can check out the interview if you can’t watch the live stream.
I’m embedding the Youtube video here but if you can’t see the live stream when it is running go check out the event page: <a href="https://plus.google.com/events/c7chrkg0ene47mikqrvevrg3a4o">https://plus.google.com/events/c7chrkg0ene47mikqrvevrg3a4o</a>.</em></p> Science is a calling and a career, here is a career planning guide for students and postdocs 2015-05-28T10:16:47+00:00 http://simplystats.github.io4054 <p><em>Editor’s note: This post was inspired by a really awesome career planning guide that Ben Langmead</em> <a href="https://github.com/BenLangmead/langmead-lab/blob/master/postdoc_questionnaire.md"><em>created</em></a><em>, which you should go check out right now. You can also find the slightly adapted</em> <a href="https://github.com/jtleek/careerplanning"><em>Leek group career planning guide</em></a> <em>here.</em></p> <p>The most common reason that people go into science is altruistic. They loved dinosaurs and spaceships when they were a kid and that never wore off. On some level this is one of the reasons I love this field so much: it is an area where, if you can get past all the hard parts, you can really keep introducing wonder into what you work on every day.</p> <p>Sometimes I feel like this altruism has negative consequences. For example, I think that there is less emphasis on the career planning and development side in the academic community. I don’t think this is malicious, but I do think that sometimes people think of the career part of science as unseemly. But if you have any job that you want people to pay you to do, then there will be parts of that job that will be career oriented. So if you want to be a professional scientist, being brilliant and good at science is not enough.
You also need to pay attention to and plan carefully your career trajectory.</p> <p>A colleague of mine, Ben Langmead, created a really nice guide for his postdocs to thinking about and planning the career side of a postdoc <a href="https://github.com/BenLangmead/langmead-lab/blob/master/postdoc_questionnaire.md">which he has over on Github</a>. I thought it was such a good idea that I immediately modified it and asked all of my graduate students and postdocs to fill it out. It is kind of long so there was no penalty if they didn’t finish it, but I think it is an incredibly useful tool for thinking about how to strategize a career in the sciences. I think that the more we are concrete about the career side of graduate school and postdocs, including being honest about all the realistic options available, the better prepared our students will be to succeed on the market.</p> <p>You can find the <a href="https://github.com/jtleek/careerplanning">Leek Group Guide to Career Planning</a> here and make sure you also go <a href="https://github.com/BenLangmead/langmead-lab/blob/master/postdoc_questionnaire.md">check out Ben’s</a> since it was his idea and his is great.</p> <p> </p> Is it species or is it batch? They are confounded, so we can't know 2015-05-20T11:11:18+00:00 http://simplystats.github.io4031 <p>In a 2005 OMICS <a href="http://online.liebertpub.com/doi/abs/10.1089/153623104773547462" target="_blank">paper</a>, an analysis of human and mouse gene expression microarray measurements from several tissues led the authors to conclude that “any tissue is more similar to any other human tissue examined than to its corresponding mouse tissue”. Note that this was a rather surprising result given how similar tissues are between species. 
For example, both mice and humans see with their eyes, breathe with their lungs, pump blood with their hearts, etc… Two follow-up papers (<a href="http://mbe.oxfordjournals.org/content/23/3/530.abstract?ijkey=2c3d98666afbc99949fdcf514f10e3fedadee259&amp;keytype2=tf_ipsecsha" target="_blank">here</a> and <a href="http://mbe.oxfordjournals.org/content/24/6/1283.abstract?ijkey=366fdf09da56a5dd0cfdc5f74082d9c098ae7801&amp;keytype2=tf_ipsecsha" target="_blank">here</a>) demonstrated that platform-specific technical variability was the cause of this apparent dissimilarity. The arrays used for the two species were different and thus measurement platform and species were completely <strong>confounded</strong>. In a 2010 paper, we confirmed that once this technical variability was accounted for, the number of genes expressed in common between the same tissue across the two species was much higher than those expressed in common between two species across different tissues (see Figure 2 <a href="http://nar.oxfordjournals.org/content/39/suppl_1/D1011.full" target="_blank">here</a>).</p> <p>So <a href="http://genomicsclass.github.io/book/pages/confounding.html">what is confounding</a> and <a href="http://www.nature.com/ng/journal/v39/n7/full/ng0707-807.html">why is it a problem</a>? This topic has been discussed broadly. We wrote a <a href="http://www.nature.com/nrg/journal/v11/n10/full/nrg2825.html">review</a> some time ago. But based on recent discussions I’ve participated in, it seems that there is still some confusion. Here I explain, aided by some math, how confounding leads to problems in the context of estimating species effects in genomics.
We will use</p> <ul> <li><em>X<sub>i</sub></em> to represent the gene expression measurements for human tissue <em>i</em>,</li> <li><em>a<sub>X</sub></em> to represent the level of expression that is specific to humans and</li> <li><em>b<sub>X</sub></em> to represent the batch effect introduced by the use of the human microarray platform.</li> <li>Therefore <em>X<sub>i</sub></em> = <em>a<sub>X</sub></em> + <em>b<sub>X</sub></em> + <em>e<sub>i</sub></em>, with <em>e<sub>i</sub></em> the tissue <em>i</em> effect and other uninteresting sources of variability.</li> </ul> <p>Similarly, we will use:</p> <ul> <li><em>Y<sub>i</sub></em> to represent the measurements for mouse tissue <em>i</em>,</li> <li><em>a<sub>Y</sub></em> to represent the mouse specific level and</li> <li><em>b<sub>Y</sub></em> the batch effect introduced by the use of the mouse microarray platform.</li> <li>Therefore <em>Y<sub>i</sub></em> = <em>a<sub>Y</sub></em> + <em>b<sub>Y</sub></em> + <em>f<sub>i</sub></em>, with <em>f<sub>i</sub></em> the tissue <em>i</em> effect and other uninteresting sources of variability.</li> </ul> <p>If we are interested in estimating a species effect that is general across tissues, then we are interested in the following quantity:</p> <p style="text-align: center;"> <em>a<sub>Y</sub> - a<sub>X</sub></em> </p> <p>Naively, we would think that we can estimate this quantity using the observed differences between the species that cancel out the tissue effect. We observe a difference for each tissue: <em>Y<sub>1</sub></em> - <em>X<sub>1</sub></em>, <em>Y<sub>2</sub></em> - <em>X<sub>2</sub></em>, etc… The problem is that <em>a<sub>X</sub></em> and <em>b<sub>X</sub></em> always appear together, as do <em>a<sub>Y</sub></em> and <em>b<sub>Y</sub></em>. We say that the batch effect <em>b<sub>X</sub></em> is <strong>confounded</strong> with the species effect <em>a<sub>X</sub></em>. Therefore, on average, the observed differences include both the species and the batch effects.
To estimate the difference above we would write a model like this:</p> <p style="text-align: center;"> <em>Y<sub>i</sub></em> - <em>X<sub>i</sub></em> = (<em>a<sub>Y</sub> - a<sub>X</sub></em>) + (<em>b<sub>Y</sub> - b<sub>X</sub></em>) + other sources of variability </p> <p style="text-align: left;"> and then estimate the unknown quantities of interest, (<em>a<sub>Y</sub> - a<sub>X</sub></em>) and (<em>b<sub>Y</sub> - b<sub>X</sub></em>), from the observed data <em>Y<sub>1</sub></em> - <em>X<sub>1</sub></em>, <em>Y<sub>2</sub></em> - <em>X<sub>2</sub></em>, etc… The problem is that we can estimate the aggregate effect (<em>a<sub>Y</sub> - a<sub>X</sub></em>) + (<em>b<sub>Y</sub> - b<sub>X</sub></em>), but, mathematically, we can't tease apart the two differences. To see this, note that if we are using least squares, the estimates (<em>a<sub>Y</sub> - a<sub>X</sub></em>) = 7, (<em>b<sub>Y</sub> - b<sub>X</sub></em>) = 3 will fit the data exactly as well as (<em>a<sub>Y</sub> - a<sub>X</sub></em>) = 3, (<em>b<sub>Y</sub> - b<sub>X</sub></em>) = 7 since </p> <p style="text-align: center;"> <em>{(Y-X) - (7+3)}<sup>2</sup> = {(Y-X) - (3+7)}<sup>2</sup>.</em> </p> <p style="text-align: left;"> In fact, under these circumstances, there are an infinite number of solutions to the standard statistical estimation approaches. A simple analogy is to try to find a unique solution to the single equation m + n = 0. If batch and species are not confounded then we are able to tease apart the differences just as if we were given another equation: m + n = 0; m - n = 2. You can learn more about this in <a href="https://www.edx.org/course/introduction-linear-models-matrix-harvardx-ph525-2x">this linear models course</a>. </p> <p style="text-align: left;"> Note that the above derivation applies to each gene affected by the batch effect. In practice we commonly see hundreds of genes affected.
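As a quick sanity check, the non-identifiability described here can be reproduced in a few lines of R (the values 3 and 7 are the made-up numbers from the text; under perfect confounding the species and batch covariates are the same column, so least squares cannot recover the split):

```r
set.seed(1)
n <- 10                       # number of tissues measured in both species
species_diff <- 3             # true a_Y - a_X (hypothetical)
batch_diff   <- 7             # true b_Y - b_X (hypothetical)

# Observed per-tissue differences Y_i - X_i: only the SUM of the two
# effects ever enters the data
d <- species_diff + batch_diff + rnorm(n)

# With perfect confounding, "species" and "batch" are identical columns,
# so the design matrix is rank deficient
species <- rep(1, n)
batch   <- rep(1, n)
fit <- lm(d ~ 0 + species + batch)
coef(fit)     # lm reports NA for the aliased coefficient: not estimable

# Only the aggregate (a_Y - a_X) + (b_Y - b_X) is identifiable
mean(d)       # close to 10, but the 3/7 split is unrecoverable
```

Any pair of coefficients summing to 10 fits the data equally well, which is exactly the infinite-solutions situation described in the text.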
As a consequence, when we compute distances between two samples from different species we may see large differences even when there is no species effect. This is because the <em>b<sub>Y</sub> - b<sub>X</sub></em> differences for each gene are squared and added up. </p> <p style="text-align: left;"> In summary, if you completely confound your variable of interest, in this case species, with a batch effect, you will not be able to estimate the effect of either. In fact, in a <a href="http://www.nature.com/nrg/journal/v11/n10/full/nrg2825.html">2010 Nature Genetics Review</a> about batch effects we warned about "cases in which batch effects are confounded with an outcome of interest and result in misleading biological or clinical conclusions". We also warned that none of the existing solutions for batch effects (ComBat, SVA, RUV, etc...) can save you from a situation with perfect confounding. Because we can't always predict what will introduce unwanted variability, we recommend randomization as an experimental design approach. </p> <p style="text-align: left;"> Almost a decade after the OMICS paper was published, the same surprising conclusion was reached in <a href="http://www.pnas.org/content/111/48/17224.abstract" target="_blank">this PNAS paper</a>: "tissues appear more similar to one another within the same species than to the comparable organs of other species". This time RNAseq was used for both species and therefore the different platform issue was not considered<sup>*</sup>. Therefore, the authors implicitly assumed that (<em>b<sub>Y</sub> - b<sub>X</sub></em>) = 0. However, in a recent F1000 Research <a href="http://f1000research.com/articles/4-121/v1" target="_blank">publication</a>, Gilad and Mizrahi-Man describe an exercise in <a href="http://projecteuclid.org/euclid.aoas/1267453942">forensic bioinformatics</a> that led them to discover that the mouse and human samples were run in different lanes or on different instruments.
The confounding was near perfect (see <a href="https://f1000researchdata.s3.amazonaws.com/manuscripts/7019/9f5f4330-d81d-46b8-9a3f-d8cb7aaf577e_figure1.gif">Figure 1</a>). As pointed out by these authors, with this experimental design we can't simply accept that (<em>b<sub>Y</sub> - b<sub>X</sub></em>) = 0, which implies that we can't estimate a species effect. Gilad and Mizrahi-Man then apply a <a href="http://biostatistics.oxfordjournals.org/content/8/1/118.abstract">linear model</a> (ComBat) to account for the batch/species effect and find that <a href="https://f1000researchdata.s3.amazonaws.com/manuscripts/7019/9f5f4330-d81d-46b8-9a3f-d8cb7aaf577e_figure3.gif">samples cluster almost perfectly by tissue</a>. However, Gilad and Mizrahi-Man correctly note that, due to the confounding, if there is in fact a species effect, this approach will remove it along with the batch effect. Unfortunately, due to the experimental design it will be hard or impossible to determine whether it is batch or species. More data and more analyses are needed. </p> <p>Confounded designs ruin experiments. Current batch effect removal methods will not save you. If you are designing a large genomics experiment, learn about randomization.</p> <p style="text-align: left;"> * The fact that RNAseq was used does not necessarily mean there is no platform effect. The species have different genomes, with different sequences, which can lead to different biases during the experimental protocols. </p> <p style="text-align: left;"> <strong>Update: </strong>Shin Lin has repeated a small version of the experiment described in the <a href="http://www.pnas.org/content/111/48/17224.abstract" target="_blank">PNAS paper</a>. The new experimental design does not confound lane/instrument with species. The new data confirm the original results, pointing to the fact that lane/instrument does not explain the clustering by species.
You can see his response in the comments <a href="http://f1000research.com/articles/4-121/v1" target="_blank">here</a>. </p> Residual expertise - or why scientists are amateurs at most of science 2015-05-18T10:21:18+00:00 http://simplystats.github.io4024 <p><em>Editor’s note: I have been unsuccessfully attempting to finish a book I started 3 years ago about how and why everyone should get pumped about reading and understanding scientific papers. I’ve adapted part of one of the chapters into this blogpost. It is pretty raw but hopefully gets the idea across.</em></p> <p>An episode of <em>The Daily Show with Jon Stewart</em> featured Lisa Randall, an incredible physicist and noted scientific communicator, as the invited guest.</p> <p>Near the end of the interview, Stewart asked Randall why, with all the scientific progress we have made, we have been unable to move away from fossil fuel-based engines. The question led to the exchange:</p> <blockquote> <p><em>Randall: “So this is part of the problem, because I’m a scientist doesn’t mean I know the answer to that question.”</em></p> </blockquote> <blockquote> <p><em>Stewart: “Oh is that true? Here’s the thing, here’s what’s part of the answer.
You could say anything and I would have no idea what you are talking about.”</em></p> </blockquote> <p>Professor Randall is a world-leading physicist, the first woman to achieve tenure in physics at Princeton, Harvard, and MIT, and a member of the National Academy of Sciences. But when it comes to the science of fossil fuels, she is just an amateur. Her response to this question is just perfect - it shows that even brilliant scientists can be interested amateurs on topics outside of their expertise. Despite Professor Randall’s over-the-top qualifications, she is an amateur on a whole range of scientific topics from medicine, to computer science, to nuclear engineering. Being an amateur isn’t a bad thing, and recognizing where you are an amateur may be the truest indicator of genius. That doesn’t mean Professor Randall can’t know a little bit about fossil fuels or be curious about why we don’t all have nuclear-powered hovercrafts yet. It just means she isn’t the authority.</p> <p>Stewart’s response is particularly telling and indicative of what a lot of people think about scientists. It takes years of experience to become an expert in a scientific field - some have suggested as many as 10,000 hours of dedicated time. Professor Randall is a scientist - so she must have more information about any scientific problem than an informed amateur like Jon Stewart. But of course this isn’t true: Jon Stewart (and you) could quickly learn as much about fossil fuels as a scientist if the scientist wasn’t already an expert in the area. Sure, a background in physics would help, but there are a lot of moving parts in our dependence on fossil fuels, including social, political, and economic problems in addition to the physics involved.</p> <p>This is an example of “residual expertise” - when people without deep scientific training are willing to attribute expertise to scientists even if it is outside their primary area of focus.
It is closely related to the logical fallacy behind the <a href="http://en.wikipedia.org/wiki/Argument_from_authority">argument from authority</a>:</p> <blockquote> <p>A is an authority on a particular topic</p> <p>A says something about that topic</p> <p>A is probably correct</p> </blockquote> <p>The difference is that with residual expertise you assume that since A is an authority on a particular topic, if they say something about another, potentially related topic, they will probably be correct. This idea is critically important: it is how quacks make their living. The logical leap of faith from “that person is a doctor” to “that person is a doctor so of course they understand epidemiology, or vaccination, or risk communication” is exactly the leap empowered by the idea of residual expertise. It is also how you can line up scientific experts against any well-established doctrine like evolution or climate change. Experts in the field will know all of the relevant information that supports key ideas in the field and what it would take to overturn those ideas. But experts outside of the field can be lined up and their residual expertise used to call into question even the most supported ideas.</p> <p>What does this have to do with you?</p> <p>Most people aren’t experts in the scientific disciplines they care about. But becoming a successful amateur requires a much smaller time commitment than becoming an expert, and it can still be incredibly satisfying, fun, and useful. This book is designed to help you become a fired-up amateur in the science of your choice. Think of it like a hobby, but one where you get to learn about some of the coolest new technologies and ideas coming out in the scientific literature.
If you can ignore the way residual expertise makes you feel silly for reading scientific papers you don’t fully understand - you can still learn a ton and have a pretty fun time doing it.</p> The tyranny of the idea in science 2015-05-08T11:58:51+00:00 http://simplystats.github.io4014 <p>There are a lot of analogies between <a href="http://simplystatistics.org/2012/09/20/every-professor-is-a-startup/">startups and academic science labs</a>. One thing that is definitely very different is the relative value of ideas in the startup world and in the academic world. For example, <a href="http://simplystatistics.org/2012/09/20/every-professor-is-a-startup/">Paul Graham has said:</a></p> <blockquote> <p>Actually, startup ideas are not million dollar ideas, and here’s an experiment you can try to prove it: just try to sell one. Nothing evolves faster than markets. The fact that there’s no market for startup ideas suggests there’s no demand. Which means, in the narrow sense of the word, that startup ideas are worthless.</p> </blockquote> <p>In academics, almost the opposite is true. There is huge value to being first with an idea, even if you haven’t gotten all the details worked out or stable software in place.
Here are a couple of extreme examples illustrated with Nobel prizes:</p> <ol> <li><strong>Higgs Boson</strong> - Peter Higgs <a href="http://journals.aps.org/pr/abstract/10.1103/PhysRev.145.1156">postulated the Boson in 1964</a>; <a href="http://www.symmetrymagazine.org/article/october-2013/nobel-prize-in-physics-honors-prediction-of-higgs-boson">he won the Nobel Prize in 2013 for that prediction</a>. In between, tons of people did follow-on work, someone convinced Europe to build one of the <a href="http://en.wikipedia.org/wiki/Large_Hadron_Collider">most expensive pieces of scientific equipment ever built</a>, and conservatively thousands of scientists and engineers had to do a ton of work to get the equipment to (a) work and (b) confirm the prediction.</li> <li><strong>Human genome</strong> - <a href="http://en.wikipedia.org/wiki/Molecular_Structure_of_Nucleic_Acids:_A_Structure_for_Deoxyribose_Nucleic_Acid">Watson and Crick postulated the structure of DNA</a> in 1953; <a href="http://www.nobelprize.org/nobel_prizes/medicine/laureates/1962/">they won the Nobel Prize in medicine in 1962</a> for this work. But the real value of the human genome was realized when the <a href="http://en.wikipedia.org/wiki/Human_Genome_Project">largest biological collaboration in history sequenced the human genome</a>, along with all of the subsequent work in the genomics revolution.</li> </ol> <p>These are two large scale examples where the academic scientific community (as represented by the Nobel committee, mostly because it is a concrete example) rewards the original idea and not the hard work to achieve that idea. I call this “the tyranny of the idea.” I notice a similar issue on a much smaller scale, for example when people <a href="http://ivory.idyll.org/blog/2015-software-as-a-primary-product-of-science.html">don’t recognize software as a primary product of science</a>. I feel like these decisions devalue the real work it takes to make any scientific idea a reality.
Sure the ideas are good, and it isn’t clear that some ideas wouldn’t eventually be discovered by someone else - but surely we aren’t going to build another Large Hadron Collider. I’d like to see the scales swing back the other way a little bit so we put at least as much emphasis on the science it takes to follow through on an idea as on discovering it in the first place.</p> Mendelian randomization inspires a randomized trial design for multiple drugs simultaneously 2015-05-07T11:30:09+00:00 http://simplystats.github.io3991 <p>Joe Pickrell has an interesting new paper out about <a href="http://biorxiv.org/content/early/2015/04/16/018150.full-text.pdf+html">Mendelian randomization.</a> He discusses some of the interesting issues that come up with these studies and performs a mini-review of previously published studies using the technique.</p> <p>The basic idea behind Mendelian randomization is the following. In a simple, randomly mating population, Mendel’s laws tell us that at any genomic locus (a measured spot in the genome) the allele (the genetic material) you get is assigned at random. At the chromosome level this is very close to true due to properties of meiosis (here is an example of how this looks in very cartoonish form in yeast). A very famous example of this was an experiment performed by Leonid Kruglyak’s group where they took two strains of yeast and repeatedly mated them, then measured genetics and gene expression data.
The experimental design looked like this:</p> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/05/Slide06.jpg"><img class="aligncenter wp-image-4009 size-medium" src="http://simplystatistics.org/wp-content/uploads/2015/05/Slide06-300x224.jpg" alt="Slide06" width="300" height="224" srcset="http://simplystatistics.org/wp-content/uploads/2015/05/Slide06-300x224.jpg 300w, http://simplystatistics.org/wp-content/uploads/2015/05/Slide06-260x194.jpg 260w" sizes="(max-width: 300px) 100vw, 300px" /></a></p> <p>If you look at the allele inherited from the two parental strains (BY, RM) at two separate genes on different chromosomes in each of the 112 segregants (yeast offspring), they do appear to be random and independent:</p> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/05/Screen-Shot-2015-05-07-at-11.20.46-AM.png"><img class="aligncenter wp-image-4010 size-medium" src="http://simplystatistics.org/wp-content/uploads/2015/05/Screen-Shot-2015-05-07-at-11.20.46-AM-235x300.png" alt="Screen Shot 2015-05-07 at 11.20.46 AM" width="235" height="300" /></a></p> <p>So this is a randomized trial in yeast where the yeast were each randomized to many many genetic “treatments” simultaneously. Now this isn’t strictly true, since genes near each other on the same chromosome aren’t exactly random, and in humans it is definitely not true since there is population structure, non-random mating, and a host of other issues. But you can still do cool things to try to infer causality from the genetic “treatments” to downstream things like gene expression (<a href="http://genomebiology.com/2007/8/10/r219">and even do a reasonable job in the model organism case</a>).</p> <p>In my mind this raises a potentially interesting study design for clinical trials. Suppose that there are 10 treatments for a disease that we know about.
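A quick simulation shows why independently randomized "treatments", as in the yeast cross above, can be teased apart even though every individual receives several of them at once. This is only a sketch (Python with NumPy; the sample size and effect sizes are invented for illustration): assign 10 binary treatments at random and recover each effect with a single least-squares fit.

```python
import numpy as np

rng = np.random.default_rng(42)

# 500 subjects, 10 binary "treatments" assigned independently at random,
# like alleles at unlinked loci in the yeast cross
n, k = 500, 10
treatments = rng.integers(0, 2, size=(n, k))

# Hypothetical true effects (most treatments do nothing) and a noisy outcome
true_effects = np.array([2.0, 0.0, -1.5, 0.0, 1.0, 0.0, 0.0, 3.0, 0.0, -0.5])
outcome = treatments @ true_effects + rng.normal(0.0, 1.0, size=n)

# Because the assignments are independent, one regression separates the effects
X = np.column_stack([np.ones(n), treatments])  # intercept + treatment columns
estimates = np.linalg.lstsq(X, outcome, rcond=None)[0][1:]
print(np.round(estimates, 1))  # close to true_effects
```

If two treatments were always given together, the corresponding columns of the design matrix would be identical, the fit would no longer be unique, and their effects could not be separated.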
We design a study where each of the patients in the trial is randomized to receive treatment or placebo for each of the 10 treatments. So on average each person would get 5 treatments. Then you could try to tease apart the effects using methods developed for the Mendelian randomization case. Of course, this is ignoring potential interactions, side effects of taking multiple drugs simultaneously, etc. But I’m seeing lots of <a href="http://www.nature.com/news/personalized-medicine-time-for-one-person-trials-1.17411">interesting proposals</a> for new trial designs (<a href="http://notstatschat.tumblr.com/post/118102423391/precise-answers-but-not-necessarily-to-the-right">which may or may not work</a>), so I thought I’d contribute my own interesting idea.</p> Rafa's citations above replacement in statistics journals is crazy high. 2015-05-01T11:18:47+00:00 http://simplystats.github.io3989 <p><em>Editor’s note: I thought it would be fun to do some bibliometrics on a Friday. This is super hacky and the CAR/Y stat should not be taken seriously.</em></p> <p>I downloaded data on the 400 most cited papers between 2000-2010 in some statistical journals from <a href="http://webofscience.com/">Web of Science</a>.
Here is a boxplot of the average number of citations per year (from publication date to 2015) to these papers in the journals Annals of Statistics, Biometrics, Biometrika, Biostatistics, JASA, Journal of Computational and Graphical Statistics, Journal of Machine Learning Research, and Journal of the Royal Statistical Society Series B.</p> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/05/journals.png"><img class="aligncenter wp-image-4001" src="http://simplystatistics.org/wp-content/uploads/2015/05/journals-300x300.png" alt="journals" width="500" height="500" srcset="http://simplystatistics.org/wp-content/uploads/2015/05/journals-300x300.png 300w, http://simplystatistics.org/wp-content/uploads/2015/05/journals-1024x1024.png 1024w, http://simplystatistics.org/wp-content/uploads/2015/05/journals-200x200.png 200w, http://simplystatistics.org/wp-content/uploads/2015/05/journals.png 1050w" sizes="(max-width: 500px) 100vw, 500px" /></a></p> <p>There are several interesting things about this graph right away. One is that JASA has the highest median number of citations, but has fewer “big hits” (papers with 100+ citations/year) than Annals of Statistics, JMLR, or JRSS-B. Another thing is how much of a lottery developing statistical methods seems to be. Most papers, even among the 400 most cited, have around 3 citations/year on average. But a few lucky winners have 100+ citations per year. One interesting thing for me is the papers that get 10 or more citations per year but aren’t huge hits. I suspect these are the papers that <a href="http://simplystatistics.org/2014/07/25/academic-statisticians-there-is-no-shame-in-developing-statistical-solutions-that-solve-just-one-problem/">solve one problem well but don’t solve the most general problem ever</a>.</p> <p>Something that jumps out from that plot is the outlier for the journal Biostatistics. One of their papers is cited 367.85 times per year.
The next nearest competitor is 67.75, and it is 19 standard deviations above the mean! The paper in question is: “Exploration, normalization, and summaries of high density oligonucleotide array probe level data”, which is the paper that introduced RMA, one of the most popular methods for pre-processing microarrays ever created. It was written by Rafa and colleagues. It made me think of the statistic “<a href="http://www.fangraphs.com/library/misc/war/">wins above replacement</a>”, which quantifies how many extra wins a baseball team gets by playing a specific player in place of a league average replacement.</p> <p>What about a “citations/year above replacement” statistic where you calculate for each journal:</p> <blockquote> <p>Median number of citations/year to a paper with Author X - Median number of citations/year to an average paper in that journal</p> </blockquote> <p>Then average this number across journals. This attempts to quantify how many extra citations/year a person’s papers generate compared to the “average” paper in that journal. For Rafa the numbers look like this:</p> <ul> <li>Biostatistics: Rafa = 15.475, Journal = 1.855, CAR/Y = 13.62</li> <li>JASA: Rafa = 74.5, Journal = 5.2, CAR/Y = 69.3</li> <li>Biometrics: Rafa = 4.33, Journal = 3.38, CAR/Y = 0.95</li> </ul> <p>So Rafa’s citations above replacement is (13.62 + 69.3 + 0.95)/3 = 27.96! There are a couple of reasons why this isn’t a completely accurate picture. One is the low sample size; the second is the fact that I only took the 400 most cited papers in each journal. Rafa has a few papers that didn’t make the top 400 for journals like JASA, which would bring down his CAR/Y.</p> Figuring Out Learning Objectives the Hard Way 2015-04-30T11:10:06+00:00 http://simplystats.github.io3999 <p>When building the <a href="https://www.coursera.org/specialization/genomics/41" title="Genomic Data Science Specialization">Genomic Data Science Specialization</a> (which starts in June!)
we had to figure out the learning objectives for each course. We initially set our ambitions high, but as you can see in this video below, Steven Salzberg brought us back to Earth.</p> Data analysis subcultures 2015-04-29T10:23:57+00:00 http://simplystats.github.io3992 <p>Roger and I responded to the controversy around the journal that banned p-values today <a href="http://www.nature.com/news/statistics-p-values-are-just-the-tip-of-the-iceberg-1.17412">in Nature.</a> A piece like this requires a lot of information packed into very little space but I thought one idea that deserved to be talked about more was the idea of data analysis subcultures. From the paper:</p> <blockquote> <p>Data analysis is taught through an apprenticeship model, and different disciplines develop their own analysis subcultures. Decisions are based on cultural conventions in specific communities rather than on empirical evidence. For example, economists call data measured over time ‘panel data’, to which they frequently apply mixed-effects models. Biomedical scientists refer to the same type of data structure as ‘longitudinal data’, and often go at it with generalized estimating equations.</p> </blockquote> <p>I think this is one of the least appreciated components of modern data analysis. Data analysis is almost entirely taught through an apprenticeship culture with completely different behaviors taught in different disciplines. All of these disciplines agree about the mathematical optimality of specific methods under very specific conditions. 
That is why you see <a href="http://psychclassics.yorku.ca/Peirce/small-diffs.htm">methods</a> like <a href="http://en.wikipedia.org/wiki/Statistical_Methods_for_Research_Workers">randomized trials</a> <a href="http://www.ted.com/talks/esther_duflo_social_experiments_to_fight_poverty?language=en">used</a> across <a href="http://www.badscience.net/category/evidence-based-policy/">multiple disciplines</a>.</p> <p>But any real data analysis is always a multi-step process involving data cleaning and tidying, exploratory analysis, model fitting and checking, summarization and communication. If you gave someone from economics, biostatistics, statistics, and applied math an identical data set they’d give you back <strong>very</strong> different reports on what they did, why they did it, and what it all meant. Here are a few examples I can think of off the top of my head:</p> <ul> <li>Economics calls longitudinal data panel data and uses mostly linear mixed effects models, while generalized estimating equations are more common in biostatistics (this is the example from Roger/my paper).</li> <li>In genome wide association studies the family wise error rate is the most common error rate to control. In gene expression studies people frequently use the false discovery rate.</li> <li>This is changing a bit, but if you learned statistics at Duke you are probably a Bayesian and if you learned at Berkeley you are probably a frequentist.</li> <li>Psychology has a history of using <a href="http://en.wikipedia.org/wiki/Psychological_statistics">parametric statistics</a>, genomics is big into <a href="http://www.bioconductor.org/packages/release/bioc/html/limma.html">empirical Bayes</a>, and you see a lot of Bayesian statistics in <a href="https://www1.ethz.ch/iac/people/knuttir/papers/meinshausen09nat.pdf">climate studies</a>.</li> <li>You see <a href="http://en.wikipedia.org/wiki/White_test">formal tests of model assumptions</a> used a lot in econometrics, but that is hardly ever done through formal hypothesis testing in biostatistics.</li> <li>Training sets and test sets are used in machine learning for prediction, but rarely used for inference.</li> </ul> <p>This is just a partial list I thought of off the top of my head; there are a ton more. These decisions matter <strong>a lot</strong> in a data analysis. The problem is that the behavioral component of a data analysis is incredibly strong, no matter how much we’d like to think of the process as mathematico-theoretical. Until we acknowledge that the most common reason a method is chosen is because, “I saw it in a widely-cited paper in journal XX from my field”, it is likely that little progress will be made on resolving the statistical problems in science.</p> Why is there so much university administration? We kind of asked for it. 2015-04-13T17:13:16+00:00 http://simplystats.github.io3985 <p>The latest commentary on the rising cost of college tuition is by Paul F. Campos and is titled <a href="http://www.nytimes.com/2015/04/05/opinion/sunday/the-real-reason-college-tuition-costs-so-much.html">The Real Reason College Tuition Costs So Much</a>. There has been much debate about this article and whether Campos is right or wrong…and I don’t plan to add to that.
However, I wanted to pick up on a major point of the article that I felt got left hanging out there: the rising levels of administrative personnel at universities.</p> <p>Campos argues that the reason college tuition is on the rise is not that colleges get less and less money from the government (mostly state government for state schools), but rather that there is an increasing number of administrators at universities that need to be paid in dollars and cents. He cites a study showing that, for the California State University system, over a 34-year period the number of faculty rose by about 3% whereas the number of administrators rose by 221%.</p> <p>My initial thinking when I saw the 221% number was “only that much?” I’ve been a faculty member at Johns Hopkins now for about 10 years, and just in that short period I’ve seen the amount of administrative work I need to do go up what feels like at least 221%. Partially, of course, that is a result of climbing up the ranks. As you get more qualified to do administrative work, you get asked to do it! But even adjusting for that, there are quite a few things that faculty need to do now that they weren’t required to do before. Frankly, I’m grateful for the few administrators that we do have around here to help me out with various things.</p> <p>Campos seems to imply (but doesn’t come out and say) that the bulk of administrators are not necessary, and that if we were to cut these people from the payrolls we could reduce tuition down to what it was in the old days. Or at least, it would be cheaper. This argument reminds me of debates over the federal budget: everyone thinks the budget is too big, but no one wants to suggest something to cut.</p> <p>My point here is that the reason there are so many administrators is that there’s actually quite a bit of administration to do.
And the amount of administration that needs to be done has increased over the past 30 years.</p> <p>Just for fun, I decided to go to the <a href="http://webapps.jhu.edu/jhuniverse/administration/">Johns Hopkins University Administration</a> web site to see who all these administrators were. This site shows the President’s Cabinet and the Deans of the individual schools, which isn’t everybody, but it represents a large chunk. I don’t know all of these people, but I have met and worked with a few of them.</p> <p>For the moment I’m going to skip over individual people because, as much as you might think they are overpaid, no individual’s salary is large enough to move the needle on college tuition. So I’ll stick with people who actually represent large offices with staff. Here’s a sample.</p> <ul> <li><strong>University President</strong>. Call me crazy, but I think the university needs a President. In the U.S. the university President tends to focus on outward-facing activities like raising money from various sources, liaising with the government(s), and pushing university initiatives around the world. This is not something I want to do (but I think it’s necessary); I’d rather have the President take care of it for me.</li> <li> <p><strong>University Provost</strong>. At most universities in the U.S. the Provost is the “senior academic officer”, which means that he/she runs the university. This is a big job, especially at big universities, and requires coordinating across a variety of constituencies. Also, at JHU, the Provost’s office deals with a number of compliance-related issues like Title IX, accreditation, the Americans with Disabilities Act, and many others. I suppose we could save some money by violating federal law, but that seems short-sighted. The people in this office do tough work involving a ton of paper. One example involves online education. Most states in the U.S.
say that if you’re going to run an education program in their state, it needs to be approved by some regulatory body. Some states have essentially a reciprocal agreement, so if it’s okay in your state, then it’s okay in their state. But many states require an entire approval process for a program to run in that state. And by “a program” I mean something like an M.S. in Mathematics. If you want to run an M.S. in English, that’s another approval, etc. So someone has to go to all 50 states and D.C. and get approval for every online program that JHU runs in order to enroll students into that program from that state. I think Arkansas actually requires that someone come to Arkansas and testify in person about a program asking for approval.</p> <p>I support online education programs, and I’m glad the Provost’s office is getting all those approvals for us.</p> <ul> <li><strong>Corporate Security</strong>. This may be a difficult one for some people to understand, but bear in mind that much of Johns Hopkins is located in East Baltimore. If you’ve ever seen the TV show <a href="http://en.wikipedia.org/wiki/The_Wire">The Wire</a>, then you know why we need corporate security.</li> <li><strong>Facilities and Real Estate</strong>. Johns Hopkins owns and deals with a lot of real estate; it’s a big organization. Who is supposed to take care of all that? For example, we just installed a brand new supercomputer jointly with the University of Maryland, called <a href="http://marcc.jhu.edu">MARCC</a>. I’m really excited to use this supercomputer for research, but systems like this require a bit of space. A lot of space actually. So we needed to get some land to put it on. If you’ve ever bought a house, you know how much paperwork is involved.</li> <li><strong>Development and Alumni Relations</strong>.
I have a new appreciation for this office now that I co-direct a <a href="https://www.coursera.org/specialization/jhudatascience/1">program</a> that has enrolled over 1.5 million people in just over a year. It’s critically important that we keep track of our students for many reasons: tracking student careers and success, tapping them to mentor current students, and developing relationships with organizations they’re connected to are just a few.</li> <li><strong>General Counsel</strong>. I’m not the lawbreaking type, so I need lawyers to help me out.</li> <li><strong>Enterprise Development</strong>. This office involves, among other things, technology transfer, which I have recently been involved with quite a bit because of my role in the Data Science Specialization offered through Coursera. This is just to say that I personally benefit from this office. I’ve heard people say that universities shouldn’t be involved in tech transfer, but Bayh-Dole is what it is and I think Johns Hopkins should play by the same rules as everyone else. I’m not interested in filing patents, trademarks, and copyrights, so it’s good to have people doing that for me.</li> </ul> <p>Okay, that’s just a few offices, but you get the point. These administrators seem to be doing a real job (imagine that!) and actually helping out the university. Many of these people are actually helping <em>me</em> out. Some of these jobs are essentially required by the existence of federal laws, and so we need people like this.</p> <p>So, just to recap, I think there are in fact more administrators in universities than there used to be. Is this causing an increase in tuition? It’s possible, but it’s probably not the only cause. If you believe the CSU study, there was about a 3.5% annual increase in the number of administrators from 1975 to 2008.
College tuition during that time period went up <a href="http://trends.collegeboard.org/college-pricing/figures-tables/average-rates-growth-published-charges-decade">around 4% per year</a> (inflation adjusted). But even so, much of this administration needs to be done (because faculty don’t want to do it), so this is a difficult path to go down if you’re looking for ways to lower tuition.</p> <p>Even if we’ve found the smoking gun, the question is what do we do about it?</p> </li> </ul> Genomics Case Studies Online Courses Start in Two Weeks (4/27) 2015-04-13T10:00:29+00:00 http://simplystats.github.io3973 <p>The last month of the <a href="http://genomicsclass.github.io/book/pages/classes.html">HarvardX Data Analysis for Genomics series</a> starts on 4/27. We will cover case studies on RNA-seq, variant calling, ChIP-seq, and DNA methylation. Faculty include Shirley Liu, Mike Love, Oliver Hofmann, and the HSPH Bioinformatics Core. Although taking the previous courses in the series will help, the four case study courses were developed as stand-alone courses and you can obtain a certificate for each one without taking any other course.</p> <p>Each course is presented over two weeks but will remain open until June 13 to give students an opportunity to take them all if they wish.
For more information follow the links listed below.</p> <ol> <li><a href="https://www.edx.org/course/case-study-rna-seq-data-analysis-harvardx-ph525-5x">RNA-seq data analysis</a> will be led by Mike Love</li> <li><a href="https://www.edx.org/course/case-study-variant-discovery-and-genotyping-harvardx-ph525-6x">Variant Discovery and Genotyping</a> will be taught by Shannan Ho Sui, Oliver Hofmann, Radhika Khetani and Meeta Mistry (from the HSPH Bioinformatics Core)</li> <li><a href="https://www.edx.org/course/case-study-chip-seq-data-analysis-harvardx-ph525-7x">ChIP-seq data analysis</a> will be led by Shirley Liu</li> <li><a href="https://www.edx.org/course/case-study-dna-methylation-data-analysis-harvardx-ph525-8x">DNA methylation data analysis</a> will be led by Rafael Irizarry</li> </ol> A blessing of dimensionality often observed in high-dimensional data sets 2015-04-09T15:19:13+00:00 http://simplystats.github.io3967 <p><a href="http://www.jstatsoft.org/v59/i10/paper">Tidy data sets</a> have one observation per row and one variable per column. Using this definition, big data sets can be either:</p> <ol> <li><strong>Wide</strong> - a wide data set has a large number of measurements per observation, but fewer observations. This type of data set is typical in neuroimaging, genomics, and other biomedical applications.</li> <li><strong>Tall</strong> - a tall data set has a large number of observations, but fewer measurements. This is the typical setting in a large clinical trial or in a basic social network analysis.</li> </ol> <p>The <a href="http://en.wikipedia.org/wiki/Curse_of_dimensionality">curse of dimensionality</a> tells us that estimating some quantities gets harder as the number of dimensions of a data set increases - as the data gets taller or wider.
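</p>

<p>One facet of the curse is easy to demonstrate numerically: as the number of dimensions grows, distances between random points concentrate around a common value, so the “nearest” point is barely nearer than the average one. Below is a minimal sketch of that effect (a Python illustration with made-up simulation settings, not the original interactive visualization).</p>

```python
import math
import random

def distance_concentration(n_points=200, dims=(2, 100, 1000), seed=42):
    """For each dimension d, draw n_points uniform points in [0, 1]^d and
    return min(distance) / mean(distance) from a random reference point.
    As d grows this ratio approaches 1: no point is much closer than any other."""
    rng = random.Random(seed)
    ratios = {}
    for d in dims:
        ref = [rng.random() for _ in range(d)]
        dists = []
        for _ in range(n_points):
            point = [rng.random() for _ in range(d)]
            dists.append(math.sqrt(sum((a - b) ** 2 for a, b in zip(ref, point))))
        ratios[d] = min(dists) / (sum(dists) / len(dists))
    return ratios
```

<p>With these settings the ratio should sit well below 1 in 2 dimensions and climb toward 1 by 1,000 dimensions, which is one way of seeing why distance-based estimation degrades as data get wider.</p>

<p>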
An example of this was <a href="http://simplystatistics.org/2014/10/24/an-interactive-visualization-to-teach-about-the-curse-of-dimensionality/">nicely illustrated</a> by my student Prasad (although it looks like his quota may be up on RStudio).</p> <p>For wide data sets there is also a blessing of dimensionality. The basic reason for the blessing of dimensionality is that:</p> <blockquote> <p>No matter how many new measurements you take on a small set of observations, the number of observations and all of their characteristics are fixed.</p> </blockquote> <p>As an example, suppose that we make measurements on 10 people. We start out by making one measurement (blood pressure), then another (height), then another (hair color), and we keep going and going until we have one million measurements on those same 10 people. The blessing occurs because the measurements on those 10 people will all be related to each other. If 5 of the people are women and 5 are men, then any measurement that has a relationship with sex will be highly correlated with any other measurement that has a relationship with sex. So by knowing one small bit of information, you can learn a lot about many of the different measurements.</p> <p>This blessing of dimensionality is the key idea behind many of the statistical approaches to wide data sets whether it is stated explicitly or not. I thought I’d make a very short list of some of these ideas:</p> <p><strong>1. Idea: </strong><a href="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3841439/">De-convolving mixed observations from high-dimensional data. </a></p> <p><strong>How the blessing plays a role: </strong>The measurements for each observation are assumed to be a mixture of values measured from different observation types. The proportion of each observation type is assumed to be fixed across measurements, so you can take advantage of the multiple measurements to estimate the mixing percentage and perform the deconvolution.
(<a href="http://odin.mdacc.tmc.edu/~wwang7/">Wenyi Wang</a> came and gave an excellent seminar on this idea at JHU a couple of days ago, which inspired this post).</p> <p><strong>2. Idea:</strong> <a href="http://biostatistics.oxfordjournals.org/content/5/2/155.short">The two groups model for false discovery rates</a>.</p> <p><strong>How the blessing plays a role: </strong> The models assume that a hypothesis test is performed for each observation and that the probability any observation is drawn from the null, the null distribution, and the alternative distributions are common across observations. If the null is assumed known, then it is possible to use the known null distribution to estimate the common probability that an observation is drawn from the null.</p> <p> </p> <p><strong>3. Idea: </strong><a href="http://www.degruyter.com/view/j/sagmb.2004.3.issue-1/sagmb.2004.3.1.1027/sagmb.2004.3.1.1027.xml">Empirical Bayes variance shrinkage for linear models</a></p> <p><strong>How the blessing plays a role: </strong> A linear model is fit for each observation and the means and variances of the log ratios calculated from the model are assumed to follow a common distribution across observations. The method estimates the hyper-parameters of these common distributions and uses them to adjust any individual measurement’s estimates.</p> <p> </p> <p><strong>4. Idea: </strong><a href="http://journals.plos.org/plosgenetics/article?id=10.1371/journal.pgen.0030161">Surrogate variable analysis</a></p> <p><strong>How the blessing plays a role: </strong> Each observation is assumed to be influenced by a single variable of interest (a primary variable) and multiple unmeasured confounders. Since the observations are fixed, the values of the unmeasured confounders are the same for each measurement and a supervised PCA can be used to estimate surrogates for the confounders. 
(<a href="http://www.slideshare.net/jtleek/jhu-feb2009">see my JHU job talk for more on the blessing</a>)</p> <p> </p> <p>The blessing of dimensionality I’m describing here is related to the idea that <a href="http://andrewgelman.com/2004/10/27/the_blessing_of/">Andrew Gelman refers to in this 2004 post.</a> Basically, since an increasingly large number of measurements are made on the same observations, there is an inherent structure to those observations. If you take advantage of that structure, then as the dimensionality of your problem increases you actually get <strong>better estimates</strong> of the structure in your high-dimensional data - a nice blessing!</p> How to Get Ahead in Academia 2015-04-09T13:38:01+00:00 http://simplystats.github.io3969 <p>This video on how to make it in academia was produced over 10 years ago by Steven Goodman for the ENAR Junior Researchers Workshop. Now the whole world can benefit from its wisdom.</p> <p>The movie features current and former JHU Biostatistics faculty, including Francesca Dominici, Giovanni Parmigiani, Scott Zeger, and Tom Louis. You don’t want to miss Scott Zeger’s secret formula for getting promoted!</p> Why You Need to Study Statistics 2015-04-02T21:42:06+00:00 http://simplystats.github.io3964 <p>The American Statistical Association is continuing its campaign to get you to study statistics, if you haven’t already. I have to agree with them that being a statistician is a pretty good job. Their latest video highlights a wide range of statisticians working in industry, government, and academia. You can check it out here:</p> Teaser trailer for the Genomic Data Science Specialization on Coursera 2015-03-26T10:06:43+00:00 http://simplystats.github.io3957 <p> </p> <p>We have been hard at work in the studio putting together our next specialization to launch on Coursera.
It will be called the “Genomic Data Science Specialization” and includes a spectacular line-up of instructors: <a href="http://salzberg-lab.org/">Steven Salzberg</a>, <a href="http://ccb.jhu.edu/people/mpertea/">Ela Pertea</a>, <a href="http://jamestaylor.org/">James Taylor</a>, <a href="http://ccb.jhu.edu/people/florea/">Liliana Florea</a>, <a href="http://www.hansenlab.org/">Kasper Hansen</a>, and me. The specialization will cover command line tools, statistics, Galaxy, Bioconductor, and Python. There will be a capstone course at the end of the sequence featuring an in-depth genomic analysis. If you are a grad student, postdoc, or principal investigator in a group that does genomics, this specialization is for you. If you are a person looking to transition into one of the hottest areas of research with the new precision medicine initiative, this is for you. Get pumped and share the teaser-trailer with your friends!</p> Introduction to Bioconductor HarvardX MOOC starts this Monday March 30 2015-03-24T09:24:27+00:00 http://simplystats.github.io3954 <p>Bioconductor is one of the most widely used open source toolkits for biological high-throughput data. In this four-week course, co-taught with Vince Carey and Mike Love, we will introduce you to Bioconductor’s general infrastructure and then focus on two specific technologies: next generation sequencing and microarrays. The lectures and assessments will be annotated in case you want to focus only on one of these two technologies.
If you plan to be a bioinformatician, though, we recommend you learn both.</p> <p>Topics covered include:</p> <ul> <li>A short introduction to molecular biology and measurement technology</li> <li>An overview on how to leverage the platform and genome annotation packages and experimental archives</li> <li>GenomicRanges: the infrastructure for storing, manipulating and analyzing next generation sequencing data</li> <li>Parallel computing and cloud concepts</li> <li>Normalization, preprocessing and bias correction</li> <li>Statistical inference in practice: including hierarchical models and gene set enrichment analysis</li> <li>Building statistical analysis pipelines of genome-scale assays including the creation of reproducible reports</li> </ul> <p>Throughout the class we will be using data examples from both next generation sequencing and microarray experiments.</p> <p>We will assume <a href="https://www.edx.org/course/statistics-r-life-sciences-harvardx-ph525-1x">basic knowledge of Statistics and R</a>.</p> <p>For more information visit the <a href="https://www.edx.org/course/introduction-bioconductor-harvardx-ph525-4x">course website</a>.</p> A surprisingly tricky issue when using genomic signatures for personalized medicine 2015-03-19T13:06:32+00:00 http://simplystats.github.io3946 <p>My student Prasad Patil has a really nice paper that <a href="http://bioinformatics.oxfordjournals.org/content/early/2015/03/18/bioinformatics.btv157.full.pdf?keytype=ref&amp;ijkey=loVpUJfJxG2QjoE">just came out in Bioinformatics</a> (<a href="http://biorxiv.org/content/early/2014/06/06/005983">preprint</a> in case paywalled). The paper is about a surprisingly tricky normalization issue with genomic signatures. Genomic signatures are basically statistical/machine learning functions applied to the measurements for a set of genes to predict how long patients will survive, or how they will respond to therapy.
The issue is that usually when building and applying these signatures, people normalize across samples in the training and testing set.</p> <p>An example of this normalization is to mean-center the measurements for each gene in the testing/application stage, then apply the prediction rule. The problem is that if you use a different set of samples when calculating the mean you can get a totally different prediction function. The basic problem is illustrated in this graphic.</p> <p> </p> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/03/Screen-Shot-2015-03-19-at-12.58.03-PM.png"><img class="aligncenter wp-image-3947 size-medium" src="http://simplystatistics.org/wp-content/uploads/2015/03/Screen-Shot-2015-03-19-at-12.58.03-PM-300x227.png" alt="Screen Shot 2015-03-19 at 12.58.03 PM" width="300" height="227" srcset="http://simplystatistics.org/wp-content/uploads/2015/03/Screen-Shot-2015-03-19-at-12.58.03-PM-300x227.png 300w, http://simplystatistics.org/wp-content/uploads/2015/03/Screen-Shot-2015-03-19-at-12.58.03-PM-260x197.png 260w" sizes="(max-width: 300px) 100vw, 300px" /></a></p> <p> </p> <p>This seems like a pretty esoteric statistical issue, but it turns out that this one simple normalization problem can dramatically change the results of the predictions. In particular, we show that the predictions for the same patient, with the exact same data, can change dramatically if you just change the subpopulations of patients within the testing set. In this plot, Prasad made predictions for the exact same set of patients two times when the patient population varied in ER status composition. 
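</p>

<p>The mechanics of the problem can be sketched with a toy example (hypothetical numbers and a made-up thresholding rule, not the paper’s actual signature): the same raw value for the same patient lands on opposite sides of a fixed cutoff depending on who else happens to be in the cohort used for centering.</p>

```python
def center_and_predict(raw_value, cohort, threshold=0.0):
    """Mean-center one gene's measurement within a test cohort, then apply a
    fixed rule: 'high risk' if the centered value exceeds the threshold.
    The rule never changes; only the cohort used for normalization does."""
    centered = raw_value - sum(cohort) / len(cohort)
    return "high risk" if centered > threshold else "low risk"

# The same patient (raw value 5.0) normalized within two different cohorts:
cohort_low = [5.0, 2.0, 3.0, 4.0]    # mostly low-expression patients, mean 3.5
cohort_high = [5.0, 7.0, 8.0, 6.0]   # mostly high-expression patients, mean 6.5

print(center_and_predict(5.0, cohort_low))   # high risk: 5.0 - 3.5 = +1.5
print(center_and_predict(5.0, cohort_high))  # low risk:  5.0 - 6.5 = -1.5
```

<p>Nothing in the data says which cohort is the “right” one to center against, and that instability is exactly what the paper quantifies.</p>

<p>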
As many as 30% of the predictions were different for the same patient with the same data if you just varied who they were being predicted with.</p> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/03/Screen-Shot-2015-03-19-at-1.02.25-PM.png"><img class="aligncenter wp-image-3948 size-full" src="http://simplystatistics.org/wp-content/uploads/2015/03/Screen-Shot-2015-03-19-at-1.02.25-PM.png" alt="Screen Shot 2015-03-19 at 1.02.25 PM" width="494" height="277" srcset="http://simplystatistics.org/wp-content/uploads/2015/03/Screen-Shot-2015-03-19-at-1.02.25-PM-300x168.png 300w, http://simplystatistics.org/wp-content/uploads/2015/03/Screen-Shot-2015-03-19-at-1.02.25-PM.png 494w" sizes="(max-width: 494px) 100vw, 494px" /></a></p> <p> </p> <p>This paper highlights how tricky statistical issues can slow down the process of translating ostensibly really useful genomic signatures into clinical practice and lends even more weight to the idea that precision medicine is a statistical field.</p> A simple (and fair) way all statistics journals could drive up their impact factor. 
2015-03-18T16:32:10+00:00 http://simplystats.github.io3941 <p>Hypothesis:</p> <blockquote> <p>If every method in every stats journal was implemented in a corresponding R package (<a href="http://hilaryparker.com/2014/04/29/writing-an-r-package-from-scratch/">easy</a>), was required to have a  companion document that was a tutorial on how to use the software (<a href="http://www.bioconductor.org/help/package-vignettes/">easy</a>), included a reference to how to cite the paper if you used the software (<a href="http://www.inside-r.org/r-doc/utils/citation">easy</a>) and the paper/tutorial was posted to the relevant message boards for the communities of interest (<a href="http://seqanswers.com/forums/showthread.php?t=42018">easy</a>) that journal would see a dramatic bump in its impact factor.</p> </blockquote> Data science done well looks easy - and that is a big problem for data scientists 2015-03-17T10:47:12+00:00 http://simplystats.github.io3923 <p>Data science has a ton of different definitions. For the purposes of this post I’m going to use the definition of data science we used when creating our Data Science program online. Data science is:</p> <blockquote> <p>Data science is the process of formulating a quantitative question that can be answered with data, collecting and cleaning the data, analyzing the data, and communicating the answer to the question to a relevant audience.</p> </blockquote> <p>In general the data science process is iterative and the different components blend together a little bit. But for simplicity lets discretize the tasks into the following 7 steps:</p> <ol> <li>Define the question of interest</li> <li>Get the data</li> <li>Clean the data</li> <li>Explore the data</li> <li>Fit statistical models</li> <li>Communicate the results</li> <li>Make your analysis reproducible</li> </ol> <p>A good data science project answers a real scientific or business analytics question. 
In almost all of these experiments the vast majority of the analyst’s time is spent on getting and cleaning the data (steps 2-3) and communication and reproducibility (6-7). In most cases, if the data scientist has done her job right the statistical models don’t need to be incredibly complicated to identify the important relationships the project is trying to find. In fact, if a complicated statistical model seems necessary, it often means that you don’t have the right data to answer the question you really want to answer. One option is to spend a huge amount of time trying to tune a statistical model to try to answer the question, but serious data scientists usually instead try to go back and get the right data.</p> <p>The result of this process is that most well executed and successful data science projects don’t (a) use super complicated tools or (b) fit super complicated statistical models. The characteristics of the most successful data science projects I’ve evaluated or been a part of are: (a) a laser focus on solving the scientific problem, (b) careful and thoughtful consideration of whether the data is the right data and whether there are any lurking confounders or biases, and (c) relatively simple statistical models applied and interpreted skeptically.</p> <p>It turns out doing those three things is actually surprisingly hard and very, very time consuming. It is my experience that data science projects take a solid 2-3 times as long to complete as a project in theoretical statistics. The reason is that inevitably the data are a mess and you have to clean them up, then you find out the data aren’t quite what you wanted to answer the question, so you go find a new data set and clean it up, etc.
After a ton of work like that, you have a nice set of data to which you fit simple statistical models and then it looks <strong>super easy </strong>to someone who either doesn’t know about the data collection and cleaning process or doesn’t care.</p> <p>This poses a major public relations problem for serious data scientists. When you show someone a good data science project they almost invariably think “oh that is easy” or “that is just a trivial statistical/machine learning model” and don’t see all of the work that goes into solving the real problems in data science. A concrete example of this is in academic statistics. It is customary for people to show theorems in their talks and maybe even some of the proof. This gives people working on theoretical projects an opportunity to “show their stuff” and demonstrate how good they are. The equivalent for a data scientist would be showing how they found and cleaned multiple data sets, merged them together, checked for biases, and arrived at a simplified data set. Showing the “proof” would be equivalent to showing how they matched IDs. These things often don’t look nearly as impressive in talks, particularly if the audience doesn’t have experience with how incredibly delicate real data analysis is. I imagine versions of this problem play out in industry as well (candidate X did a good analysis but it wasn’t anything special, candidate Y used Hadoop to do BIG DATA!).</p> <p>The really tricky twist is that bad data science looks easy too. You can scrape a data set off the web and slap a machine learning algorithm on it no problem. So how do you judge whether a data science project is really “hard” and whether the data scientist is an expert? Just like with anything, there is no easy shortcut to evaluating data science projects. You have to ask questions about the details of how the data were collected, what kind of biases might exist, why they picked one data set over another, etc.  
In the meantime, don’t be fooled by what looks like simple data science - <a href="http://fivethirtyeight.com/interactives/senate-forecast/">it can often be pretty effective</a>.</p> <p> </p> <p><em>Editor’s note: If you like this post, you might like my pay-what-you-want book Elements of Data Analytic Style: <a href="https://leanpub.com/datastyle">https://leanpub.com/datastyle</a></em></p> <p> </p> π day special: How to use Bioconductor to find empirical evidence in support of π being a normal number 2015-03-14T10:15:10+00:00 http://simplystats.github.io3928 <p><em>Editor’s note: Today 3/14/15 at some point between 9:26:53 and 9:26:54 it was the most π day of them all. Below is a repost from last year. </em></p> <p>Happy π day everybody!</p> <p>I wanted to write some simple code (included below) to test the parallelization capabilities of my new cluster.
Further evidence for π being normal is provided by repeating this experiment for 3,4,5,6, and 7 digit patterns (for 5,6 and 7 I sampled 10,000 patterns). Note that we can perform a chi-square test for the uniform distribution as well. For patterns of size 1,2,3 the p-values were 0.84, <del>0.89,</del> 0.92, and 0.99.</p> <p><a href="http://simplystatistics.org/2014/03/14/using-bioconductor-to-find-some-empirical-evidence-in-support-of-%cf%80-being-a-normal-number/pi-3/" rel="attachment wp-att-2792"><img class="alignnone size-full wp-image-2792" src="http://simplystatistics.org/wp-content/uploads/2014/03/pi2.png" alt="pi" width="4800" height="3000" srcset="http://simplystatistics.org/wp-content/uploads/2014/03/pi2-300x187.png 300w, http://simplystatistics.org/wp-content/uploads/2014/03/pi2-1024x640.png 1024w, http://simplystatistics.org/wp-content/uploads/2014/03/pi2.png 4800w" sizes="(max-width: 4800px) 100vw, 4800px" /></a></p> <p>Another test we can perform is to divide the 1 billion digits into 100,000 non-overlapping segments of length 10,000. The vector of counts for any given pattern should also be binomial. 
Below I also include these qq-plots.</p> <p><a href="http://simplystatistics.org/2014/03/14/using-bioconductor-to-find-some-empirical-evidence-in-support-of-%cf%80-being-a-normal-number/pi2/" rel="attachment wp-att-2793"><img class="alignnone size-full wp-image-2793" src="http://simplystatistics.org/wp-content/uploads/2014/03/pi21.png" alt="pi2" width="5600" height="3000" srcset="http://simplystatistics.org/wp-content/uploads/2014/03/pi21-1024x548.png 1024w, http://simplystatistics.org/wp-content/uploads/2014/03/pi21.png 5600w" sizes="(max-width: 5600px) 100vw, 5600px" /></a></p> <p>These observed counts should also be independent, and to explore this we can look at autocorrelation plots:</p> <p><a href="http://simplystatistics.org/2014/03/14/using-bioconductor-to-find-some-empirical-evidence-in-support-of-%cf%80-being-a-normal-number/piacf-2/" rel="attachment wp-att-2794"><img class="alignnone size-full wp-image-2794" src="http://simplystatistics.org/wp-content/uploads/2014/03/piacf1.png" alt="piacf" width="5600" height="3000" srcset="http://simplystatistics.org/wp-content/uploads/2014/03/piacf1-1024x548.png 1024w, http://simplystatistics.org/wp-content/uploads/2014/03/piacf1.png 5600w" sizes="(max-width: 5600px) 100vw, 5600px" /></a></p> <p>To do this in about an hour and with just a few lines of code (included below), I used the <a href="http://www.bioconductor.org/">Bioconductor</a> <a href="http://www.bioconductor.org/packages/release/bioc/html/Biostrings.html">Biostrings</a> package to match strings and the <em>foreach</em> function to parallelize.</p> <p><em>Editor’s note: Today 3/14/15 at some point between  9:26:53 and 9:26:54 it was the most π day of them all. Below is a repost from last year. </em></p> <p>Happy π day everybody!</p> <p>I wanted to write some simple code (included below) to the test parallelization capabilities of my  new cluster. 
So, in honor of  π day, I decided to check for <a href="http://www.davidhbailey.com/dhbpapers/normality.pdf">evidence that π is a normal number</a>. A <a href="http://en.wikipedia.org/wiki/Normal_number">normal number</a> is a real number whose infinite sequence of digits has the property that picking any given random m digit pattern is 10<sup>−m</sup>. For example, using the Poisson approximation, we can predict that the pattern “123456789” should show up between 0 and 3 times in the <a href="http://stuff.mit.edu/afs/sipb/contrib/pi/">first billion digits of π</a> (it actually shows up twice starting, at the 523,551,502-th and  773,349,079-th decimal places).</p> <p>To test our hypothesis, let Y<sub>1</sub>, …, Y<sub>100</sub> be the number of “00”, “01”, …,”99” in the first billion digits of  π. If  π is in fact normal then the Ys should be approximately IID binomials with N=1 billon and p=0.01.  In the qq-plot below I show Z-scores (Y - 10,000,000) /  √9,900,000) which appear to follow a normal distribution as predicted by our hypothesis. Further evidence for π being normal is provided by repeating this experiment for 3,4,5,6, and 7 digit patterns (for 5,6 and 7 I sampled 10,000 patterns). Note that we can perform a chi-square test for the uniform distribution as well. 
For patterns of size 1,2,3 the p-values were 0.84, <del>0.89,</del> 0.92, and 0.99.</p> <p><a href="http://simplystatistics.org/2014/03/14/using-bioconductor-to-find-some-empirical-evidence-in-support-of-%cf%80-being-a-normal-number/pi-3/" rel="attachment wp-att-2792"><img class="alignnone size-full wp-image-2792" src="http://simplystatistics.org/wp-content/uploads/2014/03/pi2.png" alt="pi" width="4800" height="3000" srcset="http://simplystatistics.org/wp-content/uploads/2014/03/pi2-300x187.png 300w, http://simplystatistics.org/wp-content/uploads/2014/03/pi2-1024x640.png 1024w, http://simplystatistics.org/wp-content/uploads/2014/03/pi2.png 4800w" sizes="(max-width: 4800px) 100vw, 4800px" /></a></p> <p>Another test we can perform is to divide the 1 billion digits into 100,000 non-overlapping segments of length 10,000. The vector of counts for any given pattern should also be binomial. Below I also include these qq-plots.</p> <p><a href="http://simplystatistics.org/2014/03/14/using-bioconductor-to-find-some-empirical-evidence-in-support-of-%cf%80-being-a-normal-number/pi2/" rel="attachment wp-att-2793"><img class="alignnone size-full wp-image-2793" src="http://simplystatistics.org/wp-content/uploads/2014/03/pi21.png" alt="pi2" width="5600" height="3000" srcset="http://simplystatistics.org/wp-content/uploads/2014/03/pi21-1024x548.png 1024w, http://simplystatistics.org/wp-content/uploads/2014/03/pi21.png 5600w" sizes="(max-width: 5600px) 100vw, 5600px" /></a></p> <p>These observed counts should also be independent, and to explore this we can look at autocorrelation plots:</p> <p><a href="http://simplystatistics.org/2014/03/14/using-bioconductor-to-find-some-empirical-evidence-in-support-of-%cf%80-being-a-normal-number/piacf-2/" rel="attachment wp-att-2794"><img class="alignnone size-full wp-image-2794" src="http://simplystatistics.org/wp-content/uploads/2014/03/piacf1.png" alt="piacf" width="5600" height="3000" 
srcset="http://simplystatistics.org/wp-content/uploads/2014/03/piacf1-1024x548.png 1024w, http://simplystatistics.org/wp-content/uploads/2014/03/piacf1.png 5600w" sizes="(max-width: 5600px) 100vw, 5600px" /></a></p> <p>To do this in about an hour and with just a few lines of code (included below), I used the <a href="http://www.bioconductor.org/">Bioconductor</a> <a href="http://www.bioconductor.org/packages/release/bioc/html/Biostrings.html">Biostrings</a> package to match strings and the <em>foreach</em> function to parallelize.</p> <p>NB: A normal number has the above-stated property in any base. The examples above are for base 10.</p> De-weaponizing reproducibility 2015-03-13T10:24:05+00:00 http://simplystats.github.io3925 <div> A couple of weeks ago Roger and I went to a <a href="http://sites.nationalacademies.org/DEPS/BMSA/DEPS_153236">conference on statistical reproducibility </a>held at the National Academy of Sciences. The discussion was pretty wide-ranging and I love that the thinking about reproducibility is coming back to statistics. There was pretty widespread support for the idea that prevention is the <a href="http://arxiv.org/abs/1502.03169">right way to approach reproducibility</a>. </div> <div> </div> <div> It turns out I was the last speaker of the whole conference. This is an unenviable position to be in with so many bright folks speaking first as they covered a huge amount of what I wanted to say.
<a href="http://www.slideshare.net/jtleek/evidence-based-data-analysis">My talk focused on three key points:</a> </div> <div> </div> <ol> <li>The tools for reproducibility already exist; the barrier isn’t tools</li> <li>We need to de-weaponize reproducibility</li> <li>Prevention is the right approach to reproducibility</li> </ol> <p> </p> <p>In terms of the first point, <a href="http://simplystatistics.org/2014/09/04/why-the-three-biggest-positive-contributions-to-reproducible-research-are-the-ipython-notebook-knitr-and-galaxy/">tools like iPython, knitr, and Galaxy </a>can be used to make all but the absolute largest analyses reproducible right now. Our group does this all the time with our papers and so do many others. The problem isn’t a lack of tools.</p> <p>Speaking to point two, I think many people would agree that part of the issue is culture change. One issue that is increasingly concerning to me is the “weaponization” of reproducibility. I have been noticing that some of us (like me, my students, other folks at JHU, and lots of particularly junior computational people elsewhere) are trying really hard to be reproducible. Most of the time this results in really positive reactions from the community. But when a co-author of mine and I wrote that paper about the <a href="http://biostatistics.oxfordjournals.org/content/early/2013/09/24/biostatistics.kxt007.abstract">science-wise false discovery rate</a>, one of the discussants used our code (great), improved on it (great), identified a bug (great), and then did his level best to humiliate us both in front of the editor and the general public because of that bug (<a href="http://simplystatistics.org/2013/09/26/how-could-code-review-discourage-code-disclosure-reviewers-with-motivation/">not so great</a>).</p> <div> </div> <div> I have seen this happen several times. Most of the time if a paper is reproducible the authors get a pat on the back and their code is either ignored or used in a positive way.
But for high-profile and important problems, people largely use reproducibility to: </div> <div> </div> <ol> <li> Impose regulatory hurdles in the short term while people transition to reproducibility. One clear example of this is the <a href="https://www.congress.gov/bill/113th-congress/house-bill/4012">Secret Science Reform Act</a>, a bill that imposes strict reproducibility conditions on all science before it can be used as evidence for regulation.</li> <li>Humiliate people who aren’t good coders or who make mistakes in their code. This is what happened in my paper when I produced reproducible code for my analysis, but has also happened <a href="http://simplystatistics.org/2014/01/28/marie-curie-says-stop-hating-on-quilt-plots-already/">to other people</a>.</li> <li>Take advantage of people’s code to plagiarize/straight up steal work. I have stories about this I’d rather not put on the internet.</li> </ol> <p> </p> <p>Of the three, I feel like (1) and (2) are the most common. Plagiarism and scooping by theft I think are actually relatively rare based on my own anecdotal experience. But I think that the “weaponization” of reproducibility to block regulation or to humiliate folks who are new to computational sciences is more common than I’d like it to be. Until reproducibility is the standard for everyone - which I think is possible now and will happen as the culture changes - the people who are the early adopters are at risk of being bludgeoned with their own reproducibility. As a community, if we want widespread reproducibility adoption we have to be ferocious about not allowing this to happen.</p> The elements of data analytic style - so much for a soft launch 2015-03-03T11:22:28+00:00 http://simplystats.github.io3910 <p><em>Editor’s note: I wrote a book called Elements of Data Analytic Style.
Buy it on <a href="https://leanpub.com/datastyle">Leanpub</a> or <a href="http://www.amazon.com/Elements-Data-Analytic-Style-ebook/dp/B00U6D80YY/ref=sr_1_1?ie=UTF8&amp;qid=1425397222&amp;sr=8-1&amp;keywords=elements+of+data+analytic+style">Amazon</a>! If you buy it on Leanpub, you get all updates (there are likely to be some) for free and you can pay what you want (including zero) but the author would be appreciative if you’d throw a little scratch his way. </em></p> <p>So uh, I was going to soft launch my new book The Elements of Data Analytic Style yesterday. I figured I’d just quietly email my Coursera courses to let them know I created a new reference. It turns out that that wasn’t very quiet. First this happened:</p> <blockquote class="twitter-tweet" width="550"> <p> <a href="https://twitter.com/jtleek">@jtleek</a> <a href="https://twitter.com/albertocairo">@albertocairo</a> <a href="https://twitter.com/simplystats">@simplystats</a> Instabuy. And apparently not just for me: it looks like you just Slashdotted <a href="https://twitter.com/leanpub">@leanpub</a>'s website. 
</p> <p> &mdash; Andrew Janke (@AndrewJanke) <a href="https://twitter.com/AndrewJanke/status/572474567467401216">March 2, 2015</a> </p> </blockquote> <p> </p> <p>and sure enough the website was down:</p> <p> </p> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/03/Screen-Shot-2015-03-02-at-2.14.05-PM.png"><img class="aligncenter wp-image-3919 size-medium" src="http://simplystatistics.org/wp-content/uploads/2015/03/Screen-Shot-2015-03-02-at-2.14.05-PM-300x202.png" alt="Screen Shot 2015-03-02 at 2.14.05 PM" width="300" height="202" srcset="http://simplystatistics.org/wp-content/uploads/2015/03/Screen-Shot-2015-03-02-at-2.14.05-PM-300x202.png 300w, http://simplystatistics.org/wp-content/uploads/2015/03/Screen-Shot-2015-03-02-at-2.14.05-PM-1024x690.png 1024w, http://simplystatistics.org/wp-content/uploads/2015/03/Screen-Shot-2015-03-02-at-2.14.05-PM-260x175.png 260w" sizes="(max-width: 300px) 100vw, 300px" /></a></p> <p> </p> <p> </p> <p>then overnight it did something like 6,000+ units:</p> <p> </p> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/03/whoacoursera.png"><img class="aligncenter wp-image-3920 size-medium" src="http://simplystatistics.org/wp-content/uploads/2015/03/whoacoursera-300x300.png" alt="whoacoursera" width="300" height="300" srcset="http://simplystatistics.org/wp-content/uploads/2015/03/whoacoursera-300x300.png 300w, http://simplystatistics.org/wp-content/uploads/2015/03/whoacoursera-200x200.png 200w, http://simplystatistics.org/wp-content/uploads/2015/03/whoacoursera.png 480w" sizes="(max-width: 300px) 100vw, 300px" /></a></p> <p> </p> <p> </p> <p>So lesson learned, there is no soft open with Coursera. Here is the post I was going to write though:</p> <p> </p> <p>### Post I was gonna write</p> <p>I have been doing data analysis for something like 10 years now (gulp!) and teaching data analysis in person for 6+ years. 
One of the things we do in <a href="https://github.com/jtleek/jhsph753and4">my data analysis class at Hopkins</a> is to perform a complete data analysis (from raw data to written report) every couple of weeks. Then I grade each assignment for everything from data cleaning to the written report and reproducibility. I’ve noticed over the course of teaching this class (and classes online) that there are many common elements of data analytic style that I don’t often see in textbooks, or when I do, I see them spread across multiple books.</p> <p>I’ve posted on some of these issues in some open source guides I’ve posted to Github like:</p> <ul> <li><a href="http://simplystatistics.org/2014/05/22/10-things-statistics-taught-us-about-big-data-analysis/" target="_self">10 things statistics taught us about big data analysis</a></li> <li><a href="https://github.com/jtleek/rpackages" target="_self">The Leek Group Guide to R packages</a></li> <li><a href="https://github.com/jtleek/datasharing" target="_self">How to share data with a statistician</a></li> </ul> <p>But I decided that it might be useful to have a more complete guide to the “art” part of data analysis. One goal is to summarize in a succinct way the most common difficulties encountered by practicing data analysts. It may be a useful guide for peer reviewers who could refer to section numbers when evaluating manuscripts, for instructors who have to grade data analyses, as a supplementary text for a data analysis class, or just as a useful reference. It is modeled loosely in format and aim on the <a href="http://www.bartleby.com/141/">Elements of Style</a> by William Strunk. Just as with the EoS, both the checklist and my book cover a small fraction of the field of data analysis, but my experience is that once these elements are mastered, data analysts benefit most from hands on experience in their own discipline of application, and that many principles may be non-transferable beyond the basics. 
But just as with writing, new analysts would do better to follow the rules until they know them well enough to violate them.</p> <ul> <li><a href="https://leanpub.com/datastyle/">Buy EDAS on Leanpub</a></li> <li><a href="http://www.amazon.com/Elements-Data-Analytic-Style-ebook/dp/B00U6D80YY/ref=sr_1_1?ie=UTF8&amp;qid=1425397222&amp;sr=8-1&amp;keywords=elements+of+data+analytic+style">Buy EDAS on Amazon</a></li> </ul> <p>The book includes a basic checklist that may be useful as a guide for beginning data analysts or as a rubric for evaluating data analyses. I’m reproducing it here so you can comment/hate/enjoy on it.</p> <p> </p> <p><em><strong>The data analysis checklist</strong></em></p> <p>This checklist provides a condensed look at the information in this book. It can be used as a guide during the process of a data analysis, as a rubric for grading data analysis projects, or as a way to evaluate the quality of a reported data analysis.</p> <p><strong>I Answering the question</strong></p> <ol> <li> <p>Did you specify the type of data analytic question (e.g.
exploration, association, causality) before touching the data?</p> </li> <li> <p>Did you define the metric for success before beginning?</p> </li> <li> <p>Did you understand the context for the question and the scientific or business application?</p> </li> <li> <p>Did you record the experimental design?</p> </li> <li> <p>Did you consider whether the question could be answered with the available data?</p> </li> </ol> <p><strong>II Checking the data</strong></p> <ol> <li> <p>Did you plot univariate and multivariate summaries of the data?</p> </li> <li> <p>Did you check for outliers?</p> </li> <li> <p>Did you identify the missing data code?</p> </li> </ol> <p><strong>III Tidying the data</strong></p> <ol> <li> <p>Is each variable one column?</p> </li> <li> <p>Is each observation one row?</p> </li> <li> <p>Do different data types appear in each table?</p> </li> <li> <p>Did you record the recipe for moving from raw to tidy data?</p> </li> <li> <p>Did you create a code book?</p> </li> <li> <p>Did you record all parameters, units, and functions applied to the data?</p> </li> </ol> <p><strong>IV Exploratory analysis</strong></p> <ol> <li> <p>Did you identify missing values?</p> </li> <li> <p>Did you make univariate plots (histograms, density plots, boxplots)?</p> </li> <li> <p>Did you consider correlations between variables (scatterplots)?</p> </li> <li> <p>Did you check the units of all data points to make sure they are in the right range?</p> </li> <li> <p>Did you try to identify any errors or miscoding of variables?</p> </li> <li> <p>Did you consider plotting on a log scale?</p> </li> <li> <p>Would a scatterplot be more informative?</p> </li> </ol> <p><strong>V Inference</strong></p> <ol> <li> <p>Did you identify what large population you are trying to describe?</p> </li> <li> <p>Did you clearly identify the quantities of interest in your model?</p> </li> <li> <p>Did you consider potential confounders?</p> </li> <li> <p>Did you identify and model potential sources of
correlation such as measurements over time or space?</p> </li> <li> <p>Did you calculate a measure of uncertainty for each estimate on the scientific scale?</p> </li> </ol> <p><strong>VI Prediction</strong></p> <ol> <li> <p>Did you identify in advance your error measure?</p> </li> <li> <p>Did you immediately split your data into training and validation?</p> </li> <li> <p>Did you use cross validation, resampling, or bootstrapping only on the training data?</p> </li> <li> <p>Did you create features using only the training data?</p> </li> <li> <p>Did you estimate parameters only on the training data?</p> </li> <li> <p>Did you fix all features, parameters, and models before applying to the validation data?</p> </li> <li> <p>Did you apply only one final model to the validation data and report the error rate?</p> </li> </ol> <p><strong>VII Causality</strong></p> <ol> <li> <p>Did you identify whether your study was randomized?</p> </li> <li> <p>Did you identify potential reasons that causality may not be appropriate such as confounders, missing data, non-ignorable dropout, or unblinded experiments?</p> </li> <li> <p>If not, did you avoid using language that would imply cause and effect?</p> </li> </ol> <p><strong>VIII Written analyses</strong></p> <ol> <li> <p>Did you describe the question of interest?</p> </li> <li> <p>Did you describe the data set, experimental design, and question you are answering?</p> </li> <li> <p>Did you specify the type of data analytic question you are answering?</p> </li> <li> <p>Did you specify in clear notation the exact model you are fitting?</p> </li> <li> <p>Did you explain on the scale of interest what each estimate and measure of uncertainty means?</p> </li> <li> <p>Did you report a measure of uncertainty for each estimate on the scientific scale?</p> </li> </ol> <p><strong>IX Figures</strong></p> <ol> <li> <p>Does each figure communicate an important piece of information or address a question of interest?</p> </li> <li> <p>Do all your 
figures include plain language axis labels?</p> </li> <li> <p>Is the font size large enough to read?</p> </li> <li> <p>Does every figure have a detailed caption that explains all axes, legends, and trends in the figure?</p> </li> </ol> <p><strong>X Presentations</strong></p> <ol> <li> <p>Did you lead with a brief, understandable to everyone statement of your problem?</p> </li> <li> <p>Did you explain the data, measurement technology, and experimental design before you explained your model?</p> </li> <li> <p>Did you explain the features you will use to model data before you explain the model?</p> </li> <li> <p>Did you make sure all legends and axes were legible from the back of the room?</p> </li> </ol> <p><strong>XI Reproducibility</strong></p> <ol> <li> <p>Did you avoid doing calculations manually?</p> </li> <li> <p>Did you create a script that reproduces all your analyses?</p> </li> <li> <p>Did you save the raw and processed versions of your data?</p> </li> <li> <p>Did you record all versions of the software you used to process the data?</p> </li> <li> <p>Did you try to have someone else run your analysis code to confirm they got the same answers?</p> </li> </ol> <p><strong>XII R packages</strong></p> <ol> <li> <p>Did you make your package name “Googleable”?</p> </li> <li> <p>Did you write unit tests for your functions?</p> </li> <li> <p>Did you write help files for all functions?</p> </li> <li> <p>Did you write a vignette?</p> </li> <li> <p>Did you try to reduce dependencies to actively maintained packages?</p> </li> <li> <p>Have you eliminated all errors and warnings from R CMD CHECK?</p> </li> </ol> <p> </p> Advanced Statistics for the Life Sciences MOOC Launches Today 2015-03-02T09:37:39+00:00 http://simplystats.github.io3915 <p>In <a href="https://www.edx.org/course/advanced-statistics-life-sciences-harvardx-ph525-3x#.VPRzYSnffwc">this course</a> we will teach statistical techniques that are commonly used in the analysis of high-throughput data and their corresponding
R implementations. In Week 1 we will explain inference in the context of high-throughput data and introduce the concept of error controlling procedures. We will describe the strengths and weaknesses of the Bonferroni correction, FDR and q-values. We will show how to implement these in cases in which thousands of tests are conducted, as is typically done with genomics data. In Week 2 we will introduce the concept of mathematical distance and how it is used in exploratory data analysis, clustering, and machine learning. We will describe how techniques such as principal component analysis (PCA) and the singular value decomposition (SVD) can be used for dimension reduction in high dimensional data. During Week 3 we will describe confounding, latent variables and factor analysis in the context of high dimensional data and how this relates to batch effects. We will show how to implement methods such as SVA to perform inference on data affected by batch effects. Finally, during Week 4 we will show how statistical modeling, and empirical Bayes modeling in particular, are powerful techniques that greatly improve precision in high-throughput data. We will be using R code to explain concepts throughout the course. We will also be using exploratory data analysis and data visualization to motivate the techniques we teach during each week.</p> Navigating Big Data Careers with a Statistics PhD 2015-02-18T10:12:29+00:00 http://simplystats.github.io3904 <div> <em>Editor's note: This is a guest post by <a href="http://www.drsherrirose.com/" target="_blank">Sherri Rose</a>. She is an Assistant Professor of Biostatistics in the Department of Health Care Policy at Harvard Medical School. Her work focuses on nonparametric estimation, causal inference, and machine learning in health settings. Dr.
Rose received her BS in statistics from The George Washington University and her PhD in biostatistics from the University of California, Berkeley, where she coauthored a book on <a href="http://drsherrirose.com/targeted-learning-book/" target="_blank">Targeted Learning</a>. She tweets <a href="https://twitter.com/sherrirose" target="_blank">@sherrirose</a>.</em> </div> <div> </div> <div> A quick scan of the science and technology headlines often yields two words: big data. The amount of information we collect has continued to increase, and this data can be found in varied sectors, ranging from social media to genomics. Claims are made that big data will solve an array of problems, from understanding devastating diseases to predicting political outcomes. There is substantial “big data” hype in the press, as well as business and academic communities, but how do upcoming, current, and recent statistical science PhDs handle the array of training opportunities and career paths in this new era? <a href="http://www.amstat.org/newsroom/pressreleases/2015-StatsFastestGrowingSTEMDegree.pdf" target="_blank">Undergraduate interest in statistics degrees is exploding</a>, bringing new talent to graduate programs and the post-PhD job pipeline.  Statistics training is diversifying, with students focusing on theory, methods, computation, and applications, or a blending of these areas. A few years ago, Rafa outlined the academic career options for statistics PhDs in <a href="http://simplystatistics.org/2011/09/12/advice-for-stats-students-on-the-academic-job-market/" target="_blank">two</a> <a href="http://simplystatistics.org/2011/09/15/another-academic-job-market-option-liberal-arts/" target="_blank">posts</a>, which cover great background material I do not repeat here. The landscape for statistics PhD careers is also changing quickly, with a variety of companies attracting top statistics students in new roles.  
As a <a href="http://www.drsherrirose.com/" target="_blank">new faculty member</a> at the intersection of machine learning, causal inference, and health care policy, I've already found myself frequently giving career advice to trainees.  The choices have become much more nuanced than just academia vs. industry vs. government. </div> <div> </div> <div> </div> <div> So, you find yourself inspired by big data problems and fascinated by statistics. While you are a student, figuring out what you enjoy working on is crucial. This exploration could involve engaging in internship opportunities or collaborating with multiple faculty on different types of projects. Both positive and negative experiences can help you identify your preferences. </div> <div> </div> <div> </div> <div> Undergraduates may wish to spend a couple months at a <a href="http://www.nhlbi.nih.gov/research/training/summer-institute-biostatistics-t15" target="_blank">Summer Institute for Training in Biostatistics</a> or <a href="http://www.nsf.gov/crssprgm/reu/" target="_blank">National Science Foundation Research Experience for Undergraduates</a>. 
There are <a href="https://www.udacity.com/course/st101" target="_blank">also</a> <a href="https://www.coursera.org/course/casebasedbiostat" target="_blank">many</a> <a href="https://www.coursera.org/specialization/jhudatascience/1" target="_blank">MOOC</a> <a href="https://www.edx.org/course/statistics-r-life-sciences-harvardx-ph525-1x#.VJOhXsAAPe" target="_blank">options</a> <a href="https://www.coursera.org/course/maththink" target="_blank">to</a> <a href="https://www.udacity.com/course/ud120" target="_blank">get</a> <a href="https://www.udacity.com/course/ud359" target="_blank">a</a> <a href="https://www.udacity.com/course/ud651" target="_blank">taste</a> <a href="https://www.edx.org/course/foundations-data-analysis-utaustinx-ut-7-01x#.VNpQRd4bakA" target="_blank">of</a> <a href="https://www.edx.org/course/introduction-linear-models-matrix-harvardx-ph525-2x#.VNpQS94bakA" target="_blank">different</a> <a href="https://www.edx.org/course/scalable-machine-learning-uc-berkeleyx-cs190-1x#.VNpQU94bakA" target="_blank">areas</a> <a href="https://www.edx.org/course/introduction-computational-thinking-data-mitx-6-00-2x-0#.VNpQWd4bakA" target="_blank">of</a> <a href="https://www.edx.org/course/fundamentals-clinical-trials-harvardx-hsph-hms214x#.VNpQt94bakA" target="_blank">statistics</a>. Selecting a graduate program for PhD study can be a difficult choice, especially when your interests within statistics have yet to be identified, as is often the case for undergraduates. However, if you know that you have interests in software and programming, it can be easy to sort which statistical science PhD programs have a curricular or research focus in this area by looking at department websites. Similarly, if you know you want to work in epidemiologic methods, genomics, or imaging, specific programs are going to jump right to the top as good fits. Getting advice from faculty in your department will be important.
Competition for admissions into statistics and biostatistics PhD programs has continued to increase, and most faculty advise applying to as many relevant programs as is reasonable given the demands on your time and finances. If you end up sitting on multiple (funded) offers come April, talking to current students, student alums, and looking at alumni placement can be helpful. Don't hesitate to contact these people, selectively. Most PhD programs genuinely do want you to end up in the place that is best for you, even if it is not with them. </div> <div> </div> <div> </div> <div> Once you're in a PhD program, internship opportunities for graduate students are listed each year by the <a href="http://www.amstat.org/education/internships.cfm" target="_blank">American Statistical Association</a>. Your home department may also have ties with local research organizations and companies with openings. Internships can help you identify future positions and the types of environments where you will flourish in your career. <a href="https://www.linkedin.com/pub/lauren-kunz/a/aab/293" target="_blank">Lauren Kunz</a>, a recent PhD graduate in biostatistics from Harvard University, is currently a Statistician at the National Heart, Lung, and Blood Institute (NHLBI) of the National Institutes of Health. Dr. Kunz said, "As a previous summer intern at the NHLBI, I was able to get a feel for the day to day life of a biostatistician at the NHLBI. I found the NHLBI Office of Biostatistical Research to be a collegial, welcoming environment, and I soon learned that NHLBI biostatisticians have the opportunity to work on a variety of projects, very often collaborating with scientists and clinicians. Due to the nature of these collaborations, the biostatisticians are frequently presented with scientifically interesting and important statistical problems. This work often motivates methodological research which in turn has immediate, practical applications. 
These factors matched well with my interest in collaborative research that is both methodological and applied." </div> <div> </div> <div> </div> <div> <span style="font-family: Helvetica;">Industry is also enticing to statistics PhDs, particularly those with an applied or computational focus, like <a href="http://www.stephaniesapp.com/" target="_blank">Stephanie Sapp</a> and</span> <a href="http://alyssafrazee.com/" target="_blank">Alyssa Frazee</a><span style="font-family: Helvetica;">. Dr. Sapp has a PhD in statistics from the University of California, Berkeley, and is currently a Quantitative Analyst at <a href="http://www.google.com/" target="_blank">Google</a>. She also completed an internship there the summer before she graduated. In commenting about her choice to join Google, Dr. Sapp said,  "</span>I really enjoy both academic research and seeing my work used in practice.  Working at Google allows me to continue pursuing new and interesting research topics, as well as see my results drive more immediate impact."  <span style="font-family: Helvetica;">Dr. Frazee just finished her PhD in biostatistics at Johns Hopkins University and previously spent a summer exploring her interests in <a href="https://www.hackerschool.com/" target="_blank">Hacker School</a>.  While she applied to both academic and industry positions, receiving multiple offers, she ultimately chose to go into industry and work for <a href="https://stripe.com/" target="_blank">Stripe</a>: "</span>I accepted a tech company's offer for many reasons, one of them being that I really like programming and writing code. There are tons of opportunities to grow as a programmer/engineer at a tech company, but building an academic career on that foundation would be more of a challenge. I'm also excited about seeing my statistical work have more immediate impact. At smaller companies, much of the work done there has visible/tangible bearing on the product. 
Academic research in statistics is operating a lot closer to the boundaries of what we know and discovering a lot of cool stuff, which means researchers get to try out original ideas more often, but the impact is less immediately tangible. A new method or estimator has to go through a lengthy peer review/publication process and be integrated into the community's body of knowledge, which could take several years, before its impact can be fully observed."  One of Dr. Frazee, Dr. Sapp, and Dr. Kunz's considerations in choosing a job reflects many of those in the early career statistics community: having an impact. </div> <div> </div> <div> </div> <div> <span style="font-family: Helvetica;">Interest in both developing methods </span><i>and</i> <span style="font-family: Helvetica;">translating statistical advances into practice is a common theme in the big data statistics world, but not one that always leads to an industry or government career. There are also academic opportunities in statistics, biostatistics, and interdisciplinary departments like my own where your work can have an impact on current science. The <a href="http://www.hcp.med.harvard.edu/" target="_blank">Department of Health Care Policy</a> (HCP) at Harvard Medical School has 5 tenure-track/tenured statistics faculty members, including myself, among a total of about 20 core faculty members. The statistics faculty work on a range of theoretical and methodological problems while collaborating with HCP faculty (health economists, clinician researchers, and sociologists) and leading our own substantive projects in health care policy (e.g., <a href="http://www.massdac.org/" target="_blank">Mass-DAC</a>). I find it to be a unique and exciting combination of roles, and love that the science truly informs my statistical research, giving it broader impact. Since joining the department a year and a half ago, I've worked in many new areas, such as plan payment risk adjustment methodology.
I have also applied some of my previous work in machine learning to predicting adverse health outcomes in large datasets. Here, I immediately saw a need for new avenues of statistical research to make the optimal approach based on statistical theory align with an optimal approach in practice. My current research portfolio is diverse; example projects include the development of a double robust estimator for the study of chronic disease, leading an evaluation of a new state-wide health plan initiative, and collaborating with department colleagues on statistical issues in all-payer claims databases, physician prescribing intensification behavior, and predicting readmissions. The <a href="http://statistics.fas.harvard.edu/" target="_blank">larger</a> <a href="http://www.hsph.harvard.edu/biostatistics/" target="_blank">statistics</a> <a href="http://www.iq.harvard.edu/" target="_blank">community</a> <a href="http://bcb.dfci.harvard.edu/" target="_blank">at</a> Harvard also affords many opportunities to interact with statistics faculty across the campus, and <a href="http://www.faculty.harvard.edu/" target="_blank">university-wide junior faculty events</a> have connected me with professors in computer science and engineering. I feel an immense sense of research freedom to pursue my interests at HCP, which was a top priority when I was comparing job offers.</span> </div> <div> </div> <div> </div> <div> <a href="http://had.co.nz/" target="_blank">Hadley Wickham</a>, of <a href="http://www.amazon.com/dp/0387981403/" target="_blank">ggplot2</a> and <a href="http://www.amazon.com/dp/1466586966/" target="_blank">Advanced R</a> fame, took on a new role as Chief Scientist at <a href="http://www.rstudio.com/" target="_blank">RStudio</a> in 2013. Freedom was also a key component in his choice to move sectors: "For me, the driving motivation is freedom: I know what I want to work on, I just need the freedom (and support) to work on it.
It's pretty unusual to find an industry job that has more freedom than academia, but I've been noticeably more productive at RStudio because I don't have any meetings, and I can spend large chunks of time devoted to thinking about hard problems. It's not possible for everyone to get that sort of job, but everyone should be thinking about how they can negotiate the freedom to do what makes them happy. I really like the thesis of Cal Newport's book <a href="http://www.amazon.com/dp/1455509124/" target="_blank"><i>So </i></a><a href="http://www.amazon.com/dp/1455509124/" target="_blank"><i>Good They Can't Ignore You</i></a> - the better you are at your job, the greater your ability to negotiate for what you want." </div> <div> </div> <div> </div> <div> There continues to be a strong emphasis in the work force on the vaguely defined field of “data science,” which incorporates the collection, storage, analysis, and interpretation of big data.  Statisticians not only work in and lead teams with other scientists (e.g., clinicians, biologists, computer scientists) to attack big data challenges, but with each other. Your time as a statistics trainee is an amazing opportunity to explore your strengths and preferences, and which sectors and jobs appeal to you. Do your due diligence to figure out which employers are interested in and supportive of the type of career you want to create for yourself. Think about how you want to spend your time, and remember that you're the only person who has to live your life once you get that job. Other people's opinions are great, but your values and instincts matter too. Your definition of "best" doesn't have to match someone else's. Ask questions! Try new things! The potential for breakthroughs with novel flexible methods is strong. Statistical science training has progressed to the point where trainees are armed with thorough knowledge in design, methodology, theory, and, increasingly, data collection, applications, and computation.  
Statisticians working in data science are poised to continue making important contributions in all sectors for years to come. Now, you just need to decide where you fit. </div> Introduction to Linear Models and Matrix Algebra MOOC starts this Monday Feb 16 2015-02-13T09:00:11+00:00 http://simplystats.github.io3887 <p>Matrix algebra is the language of modern data analysis. We use it to develop and describe statistical and machine learning methods, and to code efficiently in languages such as R, MATLAB, and Python. Concepts such as principal component analysis (PCA) are best described with matrix algebra. It is particularly useful for describing linear models.</p> <p>Linear models are everywhere in data analysis. ANOVA, linear regression, limma, edgeR, DESeq, most smoothing techniques, and batch correction methods such as SVA and ComBat are based on linear models. In this two-week MOOC we will describe the basics of matrix algebra, demonstrate how linear models are used in the life sciences, and show how to implement these efficiently in R.</p> <p>Update: Here is <a href="https://www.edx.org/course/introduction-linear-models-matrix-harvardx-ph525-2x">the link</a> to the class</p> Is Reproducibility as Effective as Disclosure? Let's Hope Not. 2015-02-12T10:21:35+00:00 http://simplystats.github.io3898 <p>Jeff and I just this week published a <a href="http://www.pnas.org/content/112/6/1645.full">commentary</a> in the <em>Proceedings of the National Academy of Sciences</em> on our latest thinking on reproducible research and its ability to solve the reproducibility/replication “crisis” in science (there’s a version on <a href="http://arxiv.org/abs/1502.03169">arXiv</a> too). In a nutshell, we believe reproducibility (making data and code available so that others can recompute your results) is an essential part of science, but it is not going to end the crisis of confidence in science. In fact, I don’t think it’ll even make a dent. 
The problem is that reproducibility, as a tool for preventing poor research, comes in at the wrong stage of the research process (the end). While requiring reproducibility may deter people from committing outright fraud (a small group), it won’t stop people who just don’t know what they’re doing with respect to data analysis (a much larger group).</p> <p>In an eerie coincidence, Jesse Eisinger of the investigative journalism non-profit ProPublica, has just published a piece on the New York Times Dealbook site discussing how <a href="http://dealbook.nytimes.com/2015/02/11/an-excess-of-sunlight-a-paucity-of-rules/">requiring disclosure rules in the financial industry has produced meager results</a>. He writes</p> <blockquote> <p class="story-body-text"> Over the last century, disclosure and transparency have become our regulatory crutch, the answer to every vexing problem. We require corporations and government to release reams of information on food, medicine, household products, consumer financial tools, campaign finance and crime statistics. We have a booming “report card” industry for a range of services, including hospitals, public schools and restaurants. </p> </blockquote> <p class="story-body-text"> The rationale for all this disclosure is that </p> <blockquote> <p class="story-body-text"> someone, somewhere reads the fine print in these contracts and keeps corporations honest. It turns out what we laymen intuit is true: <a href="http://www.law.nyu.edu/news/ideas/Marotta-Wurgler-standard-form-contracts-fine-print">No one reads them</a>, according to research by a New York University law professor, Florencia Marotta-Wurgler. </p> </blockquote> <p class="story-body-text"> But disclosure is nevertheless popular because how could you be against it? </p> <blockquote> <p class="story-body-text"> The disclosure bonanza is easy to explain. Nobody is against it. It’s politically expedient. 
Companies prefer such rules, especially in lieu of actual regulations that would curtail bad products or behavior. The opacity lobby — the <a href="http://en.wikipedia.org/wiki/Remora">remora fish</a> class of lawyers, lobbyists and consultants in New York and Washington — knows that disclosure requirements are no bar to dodgy practices. You just have to explain what you’re doing in sufficiently incomprehensible language, a task that earns those lawyers a hefty fee. </p> </blockquote> <p class="story-body-text"> In the now infamous <a href="http://simplystatistics.org/2012/02/27/the-duke-saga-starter-set/">Duke Saga</a>, Keith Baggerly was able to reproduce the work of Potti et al. after roughly 2,000 hours of work because the data were publicly available (although the code was not). It's not clear how much time would have been saved if the code had been available, but it seems reasonable to assume that it would have taken some amount of time to <em>understand</em> the analysis, if not reproduce it. Once the errors in Potti's work were discovered, it took 5 years for the original Nature Medicine paper to be retracted. </p> <p class="story-body-text"> Although you could argue that the process worked in some sense, it came at tremendous cost of time and money. Wouldn't it have been better if the analysis had been done right in the first place? </p> The trouble with evaluating anything 2015-02-09T19:24:22+00:00 http://simplystats.github.io3889 <p>It is very hard to evaluate people’s productivity or work in any meaningful way. 
This problem is the source of:</p> <ol> <li><a href="http://simplystatistics.org/2013/09/26/how-could-code-review-discourage-code-disclosure-reviewers-with-motivation/">Consternation about peer review</a></li> <li><a href="http://simplystatistics.org/2014/02/21/heres-why-the-scientific-publishing-system-can-never-be-fixed/">The reason why post publication peer review doesn’t work</a></li> <li><a href="http://simplystatistics.org/2012/05/24/how-do-we-evaluate-statisticians-working-in-genomics/">Consternation about faculty evaluation</a></li> <li>Major problems at companies like <a href="http://www.bloomberg.com/bw/articles/2013-11-12/yahoos-latest-hr-disaster-ranking-workers-on-a-curve">Yahoo</a> and <a href="http://www.bloomberg.com/bw/articles/2013-11-13/microsoft-kills-its-hated-stack-rankings-dot-does-anyone-do-employee-reviews-right">Microsoft</a>.</li> </ol> <p>Roger and I were just talking about this problem in the context of evaluating the impact of software as a faculty member and Roger suggested the problem is that:</p> <blockquote> <p>Evaluating people requires real work and so people are always looking for shortcuts</p> </blockquote> <p>To evaluate a person’s work or their productivity requires three things:</p> <ol> <li>To be an expert in what they do</li> <li>To have absolutely no reason to care whether they succeed or not</li> <li>To have time available to evaluate them</li> </ol> <p>These three fundamental things are at the heart of why it is so hard to get good evaluations of people and why peer review and other systems are under such fire. The main source of the problem is the conflict between 1 and 2. The group of people in any organization or on any scale that is truly world class at any given topic from software engineering to history is small. It has to be by definition. This group of people inevitably has some reason to care about the success of the other people in that same group. 
Either they work with the other world class people and want them to succeed, or they are competing with them, intentionally or unintentionally.</p> <p>The conflict between being an expert and having no stake wouldn’t be such a problem if it wasn’t for issue number 3: the time to evaluate people. To truly get good evaluations what you need is for someone who <em>isn’t an expert in a field and so has no stake</em> to take the time to become an expert and then evaluate the person/software. But this requires a huge amount of effort on the part of a reviewer who has to become expert in a new field. Given that reviewing is often considered the least important task in people’s workflow, evidenced by the value we put on people acting as peer reviewers for journals, or the value people get for doing a good job in people’s evaluation for promotion in companies, it is no wonder people don’t take the time to become experts.</p> <p>I actually think that tenure review committees at forward thinking places may be the best at this (<a href="http://simplystatistics.org/2012/12/20/the-nih-peer-review-system-is-still-the-best-at-identifying-innovative-biomedical-investigators/">Rafa said the same thing about NIH study section</a>). They at least attempt to get outside reviews from people who are unbiased about the work that a faculty member is doing before they are promoted. This system, of course, has large and well-documented problems, but I think it is better than having a person’s direct supervisor - who clearly has a stake - being the only person evaluating them. It is also better than only using quantifiable metrics like number of papers and impact factor of the corresponding journals. I also think that most senior faculty who evaluate people take the job very seriously despite the only incentive being good citizenship.</p> <p>Since real evaluation requires hard work and expertise, most of the time people are looking for a short cut. 
These short cuts typically take the form of quantifiable metrics. In the academic world these shortcuts are things like:</p> <ol> <li>Number of papers</li> <li>Citations to academic papers</li> <li>The impact factor of a journal</li> <li>Downloads to a person’s software</li> </ol> <p>I think all of these things are associated with quality but none define quality. You could try to model the relationship, but it is very hard to come up with a universal definition for the outcome you are trying to model. In academics, some people have suggested that <a href="http://www.michaeleisen.org/blog/?p=694">open review or post-publication review</a> solves the problem. But this is only true for a very small subset of cases that violate rule number 2. The only papers that get serious post-publication review are where people have an incentive for the paper to go one way or the other. This means that papers in Science will be post-pub reviewed much much more often than equally important papers in discipline specific journals - just because people care more about Science. This will leave the vast majority of papers unreviewed - as evidenced by the relatively modest number of papers reviewed by <a href="https://pubpeer.com/">PubPeer</a> or <a href="http://www.ncbi.nlm.nih.gov/pubmedcommons/">Pubmed Commons.</a></p> <p>I’m beginning to think that the only way to do evaluation well is to hire people whose <em>only job is to evaluate something well</em>. In other words, peer reviewers who are paid to review papers full time and are only measured by how often those papers are retracted or proved false. 
Or tenure reviewers who are paid exclusively to evaluate tenure cases and are measured by how well the post-tenure process goes for the people they evaluate and whether there is any measurable bias in their reviews.</p> <p>The trouble with evaluating anything is that it is hard work and right now we aren’t paying anyone to do it.</p> <p> </p> Johns Hopkins Data Science Specialization Top Performers 2015-02-05T10:40:14+00:00 http://simplystats.github.io3866 <p><em>Editor’s note: The Johns Hopkins Data Science Specialization is the largest data science program in the world.  <a href="http://www.bcaffo.com/">Brian</a>, <a href="http://www.biostat.jhsph.edu/~rpeng/">Roger</a>, and <a href="http://jtleek.com/">I</a> conceived the program at the beginning of January 2014, then built, recorded, and launched the classes starting in April 2014 with the help of <a href="https://twitter.com/iragooding">Ira</a>.  Since April 2014 we have enrolled 1.76 million students and awarded 71,589 Signature Track verified certificates. The first capstone class ran in October - just 7 months after the first classes launched and 4 months after all classes were running. Despite this incredibly short time frame 917 students finished all 9 classes and enrolled in the Capstone Course. 478 successfully completed the course.</em></p> <p>When we first announced the Data Science Specialization, we said that the top performers would be profiled here on Simply Statistics. Well, that time has come, and we’ve got a very impressive group of participants that we want to highlight. These folks have successfully completed all nine MOOCs in the specialization and earned top marks in our first capstone session with <a href="http://swiftkey.com/en/">SwiftKey</a>. We had the pleasure of meeting some of them last week in a video conference, and we were struck by their insights and expertise. 
Check them out below.</p> <h2 id="sasa-bogdanovic"><strong>Sasa Bogdanovic</strong></h2> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/02/Sasa-Bogdanovic.jpg"><img class="size-thumbnail wp-image-3874 alignleft" src="http://simplystatistics.org/wp-content/uploads/2015/02/Sasa-Bogdanovic-120x90.jpg" alt="Sasa-Bogdanovic" width="120" height="90" /></a></p> <p> </p> <p> </p> <p> </p> <p> </p> <p>Sasa Bogdanovic is passionate about everything data. For the last 6 years, he’s been working in the iGaming industry, providing data products (integrations, data warehouse architectures and models, business intelligence tools, analyst reports and visualizations) for clients, helping them make better, data-driven business decisions.</p> <h3 id="why-did-you-take-the-jhu-data-science-specialization"><strong>Why did you take the JHU Data Science Specialization?</strong></h3> <p>Although I’ve been working with data for many years, I wanted to take a different perspective and learn more about data science concepts and get insights into the whole pipeline from acquiring data to developing final data products. I also wanted to learn more about statistical models and machine learning.</p> <h3 id="what-are-you-most-proud-of-doing-as-part-of-the-jhu-data-science-specialization"><strong>What are you most proud of doing as part of the JHU Data Science Specialization?</strong></h3> <p>I am very happy to have discovered the data science field. It is a whole new world that I find fascinating and inspiring to explore. I am looking forward to my new career in data science. This will allow me to combine all my previous knowledge and experience with my new insights and methods. I am very proud of every single quiz, assignment and project. For sure, the capstone project was a culmination, and I am very proud and happy to have succeeded in making a solid data product and to be one of the top performers in the group. 
For this I am very grateful to the instructors, community TAs, all other peers for their contributions in the forums, and Coursera for putting it all together and making it possible.</p> <h3 id="how-are-you-planning-on-using-your-data-science-specialization-certificate"><strong>How are you planning on using your Data Science Specialization Certificate?</strong></h3> <p>I have already put the certificate in motion. My company is preparing new projects, and I expect the certificate to add weight to our proposals.</p> <h2 id="alejandro-morales-gallardo">Alejandro Morales Gallardo</h2> <p> </p> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/02/Alejandro.png"><img class="size-thumbnail wp-image-3875 alignleft" src="http://simplystatistics.org/wp-content/uploads/2015/02/Alejandro-120x90.png" alt="Alejandro" width="120" height="90" /></a></p> <p> </p> <p> </p> <p> </p> <p> </p> <p>I’m a trained physicist with strong coding skills. I have a passion for dissecting datasets to find the hidden stories in data and produce insights through creative visualizations. A hackathon and open-data aficionado, I have an interest in using data (and science) to improve our lives.</p> <h3 id="why-did-you-take-the-jhu-data-science-specialization-1"><strong>Why did you take the JHU Data Science Specialization?</strong></h3> <p>I wanted to close a gap in my skills and transition into becoming a full-blown Data Scientist by learning key concepts and practices in the field. Learning R, an industry-relevant language, while creating a portfolio to showcase my abilities in the entire data science pipeline seemed very attractive.</p> <h3 id="what-are-you-most-proud-of-doing-as-part-of-the-jhu-data-science-specialization-1"><strong>What are you most proud of doing as part of the JHU Data Science Specialization?</strong></h3> <p>I’m most proud of the Predictive Text App I developed. 
With the Capstone Project, it was extremely rewarding to be able to tackle a brand new data type and learn about text mining and natural language processing while building a fun and attractive data product. I was particularly proud that the accuracy of my app was not that far off from the SwiftKey smartphone app. I’m also proud of being a top performer!</p> <h3 id="how-are-you-planning-on-using-your-data-science-specialization-certificate-1"><strong>How are you planning on using your Data Science Specialization Certificate?</strong></h3> <p>I want to apply my new set of skills to develop other products, analyze new datasets and keep growing my portfolio. It is also helpful to have Verified Certificates to show prospective employers.</p> <h2 id="nitin-gupta">Nitin Gupta</h2> <p> </p> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/02/NitinGupta.jpg"><img class="size-thumbnail wp-image-3876 alignleft" src="http://simplystatistics.org/wp-content/uploads/2015/02/NitinGupta-120x90.jpg" alt="NitinGupta" width="120" height="90" /></a></p> <p> </p> <p> </p> <p> </p> <p> </p> <p>Nitin is an independent trader and quant strategist with over 13 years of multi-faceted experience in the investment management industry. In the past he worked for a leading investment management firm where he built automated trading and risk management systems and gained complete life-cycle expertise in creating systematic investment products. He has a background in computer science with a strong interest in machine learning and its applications in quantitative modeling.</p> <h3 id="why-did-you-take-the-jhu-data-science-specialization-2"><strong>Why did you take the JHU Data Science Specialization?</strong></h3> <p>I was fortunate to have done the first Machine Learning course taught by Prof. Andrew Ng at the launch of Coursera in 2012, which really piqued my interest in the topic. The next course I did on Coursera was Prof. 
Roger Peng’s Computing For Data Analysis which introduced me to R. I realized that R was ideally suited for the quantitative modeling work I was doing. When I learned about the range of topics that the JHU DSS would cover - from the best practices in tidying and transforming data to modeling, analysis and visualization - I did not hesitate to sign up. Learning how to do all of this in an ecosystem built around R has been a huge plus.</p> <h3 id="what-are-you-most-proud-of-doing-as-part-of-the-jhu-data-science-specialization-2"><strong>What are you most proud of doing as part of the JHU Data Science Specialization?</strong></h3> <p>I am quite pleased with the web apps I built which utilize the concepts learned during the track. One of my apps visualizes and compares historical stock performance with other stocks and market benchmarks after querying the data directly from web resources. Another one showcases a predictive typing engine that dynamically predicts the next few words to use and append, as the user types a sentence. The process of building these apps provided a fantastic learning experience. Also, for the first time I built something that even my near and dear ones could use and appreciate, which is terrific.</p> <h3 id="how-are-you-planning-on-using-your-data-science-specialization-certificate-2"><strong>How are you planning on using your Data Science Specialization Certificate?</strong></h3> <p>The broad skill set developed through this specialization could be applied across multiple domains. My current focus is on building robust quantitative models for systematic trading strategies that could learn and adapt to changing market environments. This would involve the application of machine learning techniques among other skills learned during the specialization. 
Using R and Shiny to interactively analyze the results would be tremendously useful.</p> <h2 id="marc-kreyer">Marc Kreyer</h2> <p> </p> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/02/Marc-Kreyer.jpeg"><img class="size-thumbnail wp-image-3877 alignleft" src="http://simplystatistics.org/wp-content/uploads/2015/02/Marc-Kreyer-120x90.jpeg" alt="Marc Kreyer" width="120" height="90" /></a></p> <p> </p> <p> </p> <p> </p> <p> </p> <p>Marc Kreyer is an expert business analyst and software engineer with extensive experience in financial services in Austria and Liechtenstein. He successfully finishes complex projects by not only using broad IT knowledge but also outstanding comprehension of business needs. Marc loves combining his programming and database skills with his affinity for mathematics to transform data into insight.</p> <h3 id="why-did-you-take-the-jhu-data-science-specialization-3"><strong>Why did you take the JHU Data Science Specialization?</strong></h3> <p>There are many data science MOOCs, but usually they are independent 4-6 week courses. The JHU Data Science Specialization was the first offering of a series of courses that build upon each other.</p> <h3 id="what-are-you-most-proud-of-doing-as-part-of-the-jhu-data-science-specialization-3"><strong>What are you most proud of doing as part of the JHU Data Science Specialization?</strong></h3> <p>Creating a working text prediction app without any prior NLP knowledge and only minimal assistance from instructors.</p> <h3 id="how-are-you-planning-on-using-your-data-science-specialization-certificate-3"><strong>How are you planning on using your Data Science Specialization Certificate?</strong></h3> <p>Knowledge and experience are the most valuable things gained from the Data Science Specialization. As they can’t be easily shown to future employers, the certificate can be a good indicator for them. 
Unfortunately there is neither an issue date nor a verification link on the certificate; therefore, it will be interesting to see how valuable it really will be.</p> <h2 id="hsing-liu">Hsing Liu</h2> <p> </p> <p style="text-align: left;"> <a href="http://simplystatistics.org/wp-content/uploads/2015/02/Paul_HsingLiu.jpeg"><img class="size-thumbnail wp-image-3878" src="http://simplystatistics.org/wp-content/uploads/2015/02/Paul_HsingLiu-120x90.jpeg" alt="Paul_HsingLiu" width="120" height="90" /></a> </p> <p>I studied in the U.S. for a number of years, and received my M.S. in mathematics from NYU before returning to my home country, Taiwan. I’m most interested in how people think and learn, and education in general. This year I’m starting a new career as an iOS app engineer.</p> <h3 id="why-did-you-take-the-jhu-data-science-specialization-4"><strong>Why did you take the JHU Data Science Specialization?</strong></h3> <p>In my brief past job as an instructional designer, I read a lot about the new wave of online education, and was especially intrigued by how Khan Academy’s data science division is using data to help students learn. It occurred to me that to leverage my math background and make a bigger impact in education (or otherwise), data science could be an exciting direction to take.</p> <h3 id="what-are-you-most-proud-of-doing-as-part-of-the-jhu-data-science-specialization-4"><strong>What are you most proud of doing as part of the JHU Data Science Specialization?</strong></h3> <p>It may sound boring, but I’m proud of having done my best for each course in the track, going beyond the bare requirements when I’m able. 
The parts of the Specialization fit into a coherent picture of the discipline, and I’m glad to have put in the effort to connect the dots and gain a new perspective.</p> <h3 id="how-are-you-planning-on-using-your-data-science-specialization-certificate-4"><strong>How are you planning on using your Data Science Specialization Certificate?</strong></h3> <p>I’m listing the certificate on my resume and LinkedIn, and I expect to be applying what I’ve learned once my company’s e-commerce app launches.</p> <h2 id="yichen-liu">Yichen Liu</h2> <p> </p> <p>Yichen Liu is a business analyst at Toyota Western Australia where he is responsible for business intelligence development, data analytics and business improvement. His prior experience includes working as a sessional lecturer and tutor at Curtin University in finance and econometrics units.</p> <h3 id="why-did-you-take-the-jhu-data-science-specialization-5"><strong>Why did you take the JHU Data Science Specialization?</strong></h3> <p>Recognising the trend that the world is more data-driven than before, I felt it was necessary to gain further understanding in data analysis to tackle both current and future challenges at work.</p> <h3 id="what-are-you-most-proud-of-doing-as-part-of-the-jhu-data-science-specialization-5"><strong>What are you most proud of doing as part of the JHU Data Science Specialization?</strong></h3> <p>What I am most proud of from the program is that I have gained some basic knowledge in a totally new area, natural language processing. 
Though its connection with my current working area is limited, I see the future of data analysis as more unstructured-data-driven and am willing to develop more knowledge in this area.</p> <h3 id="how-are-you-planning-on-using-your-data-science-specialization-certificate-5"><strong>How are you planning on using your Data Science Specialization Certificate?</strong></h3> <p>I see the certificate as a stepping stone into the data science world, and would like to conduct more advanced studies in data science, especially for unstructured data analysis.</p> <h2 id="johann-posch">Johann Posch</h2> <p style="text-align: left;"> <a href="http://simplystatistics.org/wp-content/uploads/2015/02/PictureJohannPosch.png"><img class="size-thumbnail wp-image-3879" src="http://simplystatistics.org/wp-content/uploads/2015/02/PictureJohannPosch-120x90.png" alt="PictureJohannPosch" width="120" height="90" /></a> </p> <p>After graduating from Vienna University of Technology with a specialization in Artificial Intelligence, I joined Microsoft. There I worked as a developer on various products but the majority of the time as a Windows OS developer. After venturing into start-ups for a few years I joined GE Research to work on the Predix Big Data Platform and recently I joined the Industrial Data Science team.</p> <h3 id="why-did-you-take-the-jhu-data-science-specialization-6"><strong>Why did you take the JHU Data Science Specialization?</strong></h3> <p>Ever since I wrote my master’s thesis on Neural Networks I have been intrigued with machine learning. I see data science as a field where great advances will happen over the next decade and as an opportunity to positively impact millions of lives. 
I like how JHU structured the course series.</p> <h3 id="what-are-you-most-proud-of-doing-as-part-of-the-jhu-data-science-specialization-6">What are you most proud of doing as part of the JHU Data Science Specialization?</h3> <p>Being able to complete the JHU Data Science Specialization in 6 months and to get a distinction on every one of the courses was a great success. However, the best moment was probably the way my capstone project (next word prediction) turned out: the model could be trained in incremental steps and was able to provide meaningful options in real time.</p> <h3 id="how-are-you-planning-on-using-your-data-science-specialization-certificate-6"><strong>How are you planning on using your Data Science Specialization Certificate?</strong></h3> <p>The course covered the concepts and tools needed to successfully address data science problems. It gave me the confidence and knowledge to apply for a data science position. I am now working in the field at GE Research. I am grateful to all who made this Specialization happen!</p> <h2 id="jason-wilkinson">Jason Wilkinson</h2> <p> </p> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/02/JasonWilkinson.jpg"><img class="size-thumbnail wp-image-3880 alignleft" src="http://simplystatistics.org/wp-content/uploads/2015/02/JasonWilkinson-120x90.jpg" alt="JasonWilkinson" width="120" height="90" /></a></p> <p> </p> <p> </p> <p> </p> <p> </p> <p>Jason Wilkinson is a trader of commodity futures and other financial securities at a small proprietary trading firm in New York City. He and his wife, Katie, and dog, Charlie, can frequently be seen at the Jersey shore. 
And no, it’s nothing like the TV show, aside from the fist pumping.</p> <h3 id="why-did-you-take-the-jhu-data-science-specialization-7"><strong>Why did you take the JHU Data Science Specialization?</strong></h3> <p>The JHU Data Science Specialization helped me to prepare as I begin working on a Master’s in Computer Science specializing in Machine Learning at Georgia Tech and also in researching algorithmic trading ideas. I also hope to find ways of using what I’ve learned in philanthropic endeavors, applying data science for social good.</p> <h3 id="what-are-you-most-proud-of-doing-as-part-of-the-jhu-data-science-specialization-7"><strong>What are you most proud of doing as part of the JHU Data Science Specialization?</strong></h3> <p>I’m most proud of going from knowing zero R code to being able to apply it in the capstone and other projects in such a short amount of time.</p> <h3 id="how-are-you-planning-on-using-your-data-science-specialization-certificate-7"><strong>How are you planning on using your Data Science Specialization Certificate?</strong></h3> <p>The knowledge gained in pursuing the specialization certificate alone was worth the time put into it. A certificate is just a piece of paper. It’s what you can do with the knowledge gained that counts.</p> <h2 id="uli-zellbeck">Uli Zellbeck</h2> <p> </p> <p style="text-align: left;"> <a href="http://simplystatistics.org/wp-content/uploads/2015/02/Uli.jpg"><img class="size-thumbnail wp-image-3881" src="http://simplystatistics.org/wp-content/uploads/2015/02/Uli-120x90.jpg" alt="Uli" width="120" height="90" /></a> </p> <p> </p> <p>I studied economics in Berlin with a focus on econometrics and business informatics. I am currently working as a Business Intelligence / Data Warehouse Developer in an e-commerce company. 
I am interested in recommender systems and machine learning.</p> <h3 id="why-did-you-take-the-jhu-data-science-specialization-8"><strong>Why did you take the JHU Data Science Specialization?</strong></h3> <p>I wanted to learn about Data Science because it provides a different approach to solving business problems with data. I chose the JHU Data Science Specialization on Coursera because it promised a wide range of topics and I like the idea of online courses. Also, I had experience with R and I wanted to deepen my knowledge of the tool.</p> <h3 id="what-are-you-most-proud-of-doing-as-part-of-the-jhu-data-science-specialization-8">What are you most proud of doing as part of the JHU Data Science Specialization?</h3> <p>There are two things: I successfully took all nine courses in four months, and the capstone project was really hard work.</p> <h3 id="how-are-you-planning-on-using-your-data-science-specialization-certificate-8"><strong>How are you planning on using your Data Science Specialization Certificate?</strong></h3> <p>I might get the chance to develop a Data Science department at my company. I would like to use the certificate as a basis for gaining deeper knowledge of the many parts of Data Science.</p> <h2 id="fred-zhengzhenhao">Fred Zheng Zhenhao</h2> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/02/ZHENG-Zhenhao.jpeg"><img class="size-thumbnail wp-image-3882 alignleft" src="http://simplystatistics.org/wp-content/uploads/2015/02/ZHENG-Zhenhao-120x90.jpeg" alt="ZHENG Zhenhao" width="120" height="90" /></a></p> <p>By the time I enrolled in the JHU Data Science Specialization, I was an undergraduate student at The Hong Kong Polytechnic University. Before that, I had read some data mining books and felt excited about the content, but I never got to implement any of the algorithms because I barely had any programming skills.
After taking this series of courses, I am now able to analyze the web content related to my research using R.</p> <h3 id="why-did-you-take-the-jhu-data-science-specialization-9"><strong>Why did you take the JHU Data Science Specialization?</strong></h3> <p>I took this series of courses as a challenge to myself. I wanted to see whether my interest could support me through 9 courses and 1 capstone project. And I did want to learn more in this field. This specialization is different from other data mining or machine learning classes in that it covers the entire process, including Git, R, R Markdown, Shiny, etc., and I think these are necessary skills too.</p> <p><strong>What are you most proud of doing as part of the JHU Data Science Specialization?</strong></p> <p>Getting my word prediction app to respond in 0.05 seconds was already exciting, and one of the reviewers said, “Congratulations, your engine came up with the most correct predictions among those I reviewed: 3 out of 5, including one that stumped everyone else: ‘child might stick her finger or a foreign object into an electrical (outlet)’”. I guess that’s the part I am most proud of.</p> <p><strong>How are you planning on using your Data Science Specialization Certificate?</strong></p> <p>It definitely goes in my CV for future job hunting.</p> Early data on knowledge units - atoms of statistical education 2015-02-05T09:44:49+00:00 http://simplystats.github.io3862 <p>Yesterday I posted <a href="http://simplystatistics.org/2015/02/04/knowledge-units-the-atoms-of-statistical-education/">about atomizing statistical education into knowledge units</a>. You can try out the first knowledge unit here: <a href="https://jtleek.typeform.com/to/jMPZQe">https://jtleek.typeform.com/to/jMPZQe</a>.
The early data is in and it is consistent with many of our hypotheses about the future of online education.</p> <p>Namely:</p> <ol> <li>Completion rates are high when segments are shorter</li> <li>You can learn something about statistics in a short amount of time (2 minutes to complete, many people got all questions right)</li> <li>People will consume educational material on tablets/smartphones more and more.</li> </ol> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/02/Screen-Shot-2015-02-05-at-9.34.51-AM.png"><img class="aligncenter wp-image-3863" src="http://simplystatistics.org/wp-content/uploads/2015/02/Screen-Shot-2015-02-05-at-9.34.51-AM.png" alt="Screen Shot 2015-02-05 at 9.34.51 AM" width="500" height="402" srcset="http://simplystatistics.org/wp-content/uploads/2015/02/Screen-Shot-2015-02-05-at-9.34.51-AM-300x241.png 300w, http://simplystatistics.org/wp-content/uploads/2015/02/Screen-Shot-2015-02-05-at-9.34.51-AM.png 1004w" sizes="(max-width: 500px) 100vw, 500px" /></a></p> Knowledge units - the atoms of statistical education 2015-02-04T16:45:21+00:00 http://simplystats.github.io3858 <p><em>Editor’s note: This idea is <a href="http://www.bcaffo.com/">Brian’s</a> and based on conversations with him and Roger, but I just executed it.</em></p> <p>The length of academic courses has traditionally ranged from a few days for a short course to a few months for a semester-long course. Lectures are typically either 30 minutes or one hour. Term and lecture lengths have been dictated by tradition and the relative inconvenience of coordinating schedules of instructors and students for shorter periods of time. As classes have moved online, the barrier of inconvenience to varying the length of an academic course has been removed. Despite this flexibility, most academic online courses adhere to the traditional semester-long format.
For example, the first massive online open courses were simply semester-long courses directly recorded and offered online.</p> <p>Data collected from massive online open courses suggest that <a href="https://onlinelearninginsights.wordpress.com/2014/04/28/mooc-design-tips-maximizing-the-value-of-video-lectures/">shorter lecture videos</a> and the <a href="https://www.coursera.org/specialization/jhudatascience/1?utm_medium=courseDescripTop">shorter one-month course format</a> lead to higher student retention. These results line up with data on other online activities such as Youtube video watching or form completion, which also show that shorter activities lead to higher completion rates.</p> <p>We have some of the earliest and most highly subscribed massive online open courses through the Coursera platform: Data Analysis, Computing for Data Analysis, and Mathematical Biostatistics Bootcamp.
Our original courses were translated from courses we offered locally and were therefore closer to semester-long, with longer lectures ranging from 15 to 30 minutes. Based on feedback from our students and the data we observed about completion rates, we made the decision to break our courses down into smaller, one-month courses with no more than two hours of lecture material per week. Since then, we have enrolled more than a million students in our MOOCs.</p> <p>The data suggest that the shorter you can make an academic unit online, the higher the completion percentage. The question then becomes “How short can you make an online course?” To answer this question requires a definition of a course. For our purposes we will define a course as an educational unit consisting of the following three components:</p> <ul> <li> <p><strong>Knowledge delivery</strong> - the distribution of educational material through lectures, audiovisual materials, and course notes.</p> </li> <li> <p><strong>Knowledge evaluation</strong> - the evaluation of how much of the knowledge delivered to a student is retained.</p> </li> <li> <p><strong>Knowledge certification</strong> - an independent claim or representation that a student has learned some set of knowledge.</p> </li> </ul> <p>A typical university class delivers 36 hours = 12 weeks x 3 hours/week of content knowledge, evaluates that knowledge with on the order of 10 homework assignments and 2 tests, and results in a certification equivalent to 3 university credits. With this definition, what is the smallest possible unit that satisfies all three definitions of a course? We will call this smallest possible unit one knowledge unit.
The smallest knowledge unit that satisfies all three definitions is a course that:</p> <ul> <li> <p><strong>Delivers a single unit of content</strong> - We will define a single unit of content as a text, image, or video describing a single concept.</p> </li> <li> <p><strong>Evaluates that single unit of content</strong> - The smallest unit of evaluation possible is a single question to evaluate a student’s knowledge.</p> </li> <li> <p><strong>Certifies knowledge</strong> - Provides the student with a statement of successful evaluation of the knowledge in the knowledge unit.</p> </li> </ul> <p>An example of a knowledge unit appears here: <a href="https://jtleek.typeform.com/to/jMPZQe">https://jtleek.typeform.com/to/jMPZQe</a>. The knowledge unit consists of a short (less than 2 minute) video and 3 quiz questions. When completed, the unit sends the completer an email verifying that the quiz has been completed. Just as an atom is the smallest unit of mass that defines a chemical element, the knowledge unit is the smallest unit of education that defines a course.</p> <p>Shrinking the units down to this scale opens up some ideas about how you can connect them together into courses and credentials. I’ll leave that for a future post.</p> Precision medicine may never be very precise - but it may be good for public health 2015-01-30T14:24:17+00:00 http://simplystats.github.io3848 <p><em>Editor’s note: This post was originally titled: <a href="http://simplystatistics.org/2013/06/12/personalized-medicine-is-primarily-a-population-health-intervention/">Personalized medicine is primarily a population health intervention</a>.
It has been updated with the graph of odds ratios/betas from GWAS studies.</em></p> <p>There has been a lot of discussion of <a href="http://en.wikipedia.org/wiki/Personalized_medicine">personalized medicine</a>, <a href="http://web.jhu.edu/administration/provost/initiatives/ihi/">individualized health</a>, and <a href="http://www.ucsf.edu/welcome-to-ome">precision medicine</a> in the news and in the medical research community and President Obama just <a href="http://www.whitehouse.gov/the-press-office/2015/01/30/fact-sheet-president-obama-s-precision-medicine-initiative">announced a brand new initiative in precision medicine</a> . Despite this recent attention, it is clear that healthcare has always been personalized to some extent. For example, men are rarely pregnant and heart attacks occur more often among older patients. In these cases, easily collected variables such as sex and age, can be used to predict health outcomes and therefore used to “personalize” healthcare for those individuals.</p> <p>So why the recent excitement around personalized medicine? The reason is that it is increasingly cheap and easy to collect more precise measurements about patients that might be able to predict their health outcomes. An example that <a href="http://www.nytimes.com/2013/05/14/opinion/my-medical-choice.html?_r=0">has recently been in the news</a> is the measurement of mutations in the BRCA genes. Angelina Jolie made the decision to undergo a prophylactic double mastectomy based on her family history of breast cancer and measurements of mutations in her BRCA genes. Based on these measurements, previous studies had suggested she might have a lifetime risk as high as 80% of developing breast cancer.</p> <p>This kind of scenario will become increasingly common as newer and more accurate genomic screening and predictive tests are used in medical practice. 
When I read these stories there are two points I think of that sometimes get obscured by the obviously fraught emotional, physical, and economic considerations involved with making decisions on the basis of new measurement technologies:</p> <ol> <li><strong>In individualized health/personalized medicine the “treatment” is information about risk</strong>. In <a href="http://en.wikipedia.org/wiki/Gleevec">some cases</a> treatment will be personalized based on assays. But in many other cases, we still do not (and likely will not) have perfect predictors of therapeutic response. In those cases, the healthcare will be “personalized” in the sense that the patient will get more precise estimates of their likelihood of survival, recurrence, etc. This means that patients and physicians will increasingly need to think about, make decisions with, and act on information about risks. But communicating and acting on risk is a notoriously challenging problem; personalized medicine will dramatically raise the importance of <a href="http://understandinguncertainty.org/">understanding uncertainty</a>.</li> <li><strong>Individualized health/personalized medicine is a population-level treatment.</strong> Assuming that the 80% lifetime risk estimate was correct for Angelina Jolie, it still means there is a 1 in 5 chance she was never going to develop breast cancer. If that had been the case, then the surgery was unnecessary. So while her decision was based on personal information, there is still uncertainty in that decision for her. So the “personal” decision may not always be the “best” decision for any specific individual. It may, however, be the best thing to do for everyone in a population with the same characteristics.</li> </ol> <p>The first point bears serious consideration in light of President Obama’s new proposal. We have already collected a massive amount of genetic data about a large number of common diseases.
In almost all cases, the amount of predictive information that we can glean from genetic studies is modest. One paper pointed this issue out in a rather snarky way by comparing two approaches to predicting people’s heights: (1) averaging their parents’ heights - an approach from the Victorian era - and (2) combining the latest information on the best genetic markers at the time. It turns out that all the genetic information we gathered isn’t as good as <a href="http://www.nature.com/ejhg/journal/v17/n8/full/ejhg20095a.html">averaging parents’ heights</a>. Another way to see this is to download data on all genetic variants associated with disease from the <a href="http://www.genome.gov/gwastudies/">GWAS catalog</a> that have a P-value less than 1 x 10<sup>-8</sup>. If you do that and look at the distribution of effect sizes, you see that 95% have an odds ratio or beta coefficient less than about 4. Here is a histogram of the effect sizes:</p> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/01/gwas-overall.png"><img class="aligncenter size-full wp-image-3852" src="http://simplystatistics.org/wp-content/uploads/2015/01/gwas-overall.png" alt="gwas-overall" width="480" height="480" srcset="http://simplystatistics.org/wp-content/uploads/2015/01/gwas-overall-300x300.png 300w, http://simplystatistics.org/wp-content/uploads/2015/01/gwas-overall-200x200.png 200w, http://simplystatistics.org/wp-content/uploads/2015/01/gwas-overall.png 480w" sizes="(max-width: 480px) 100vw, 480px" /></a></p> <p>This means that nearly all identified genetic effects are small. The ones that are really large (effect size greater than 100) are not for common disease outcomes; they are for <a href="http://en.wikipedia.org/wiki/Birdshot_chorioretinopathy">Birdshot chorioretinopathy</a> and hippocampal volume.
You can really see this if you look at the bulk of the distribution of effect sizes, which are mostly less than 2, by zooming the plot on the x-axis:</p> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/01/gwas-zoomed.png"><img class="aligncenter size-full wp-image-3853" src="http://simplystatistics.org/wp-content/uploads/2015/01/gwas-zoomed.png" alt="gwas-zoomed" width="480" height="480" srcset="http://simplystatistics.org/wp-content/uploads/2015/01/gwas-zoomed-300x300.png 300w, http://simplystatistics.org/wp-content/uploads/2015/01/gwas-zoomed-200x200.png 200w, http://simplystatistics.org/wp-content/uploads/2015/01/gwas-zoomed.png 480w" sizes="(max-width: 480px) 100vw, 480px" /></a></p> <p>These effect sizes translate into very limited predictive capacity for most identified genetic biomarkers. The implication is that personalized medicine, at least for common diseases, is highly likely to be inaccurate for any individual person. But if we can take advantage of the population-level improvements in health from precision medicine by increasing risk literacy, improving our use of uncertain markers, and understanding that precision medicine isn’t precise for any one person, it could be a really big deal.</p> Reproducible Research Course Companion 2015-01-26T16:22:36+00:00 http://simplystats.github.io3834 <p><a href="https://itunes.apple.com/us/book/reproducible-research/id961495566?ls=1&amp;mt=13" rel="https://itunes.apple.com/us/book/reproducible-research/id961495566?ls=1&amp;mt=13"><img class="alignright wp-image-3838" src="http://simplystatistics.org/wp-content/uploads/2015/01/Screen-Shot-2015-01-26-at-4.14.26-PM-779x1024.png" alt="Screen Shot 2015-01-26 at 4.14.26 PM" width="331" height="435" srcset="http://simplystatistics.org/wp-content/uploads/2015/01/Screen-Shot-2015-01-26-at-4.14.26-PM-228x300.png 228w, http://simplystatistics.org/wp-content/uploads/2015/01/Screen-Shot-2015-01-26-at-4.14.26-PM-779x1024.png 779w,
http://simplystatistics.org/wp-content/uploads/2015/01/Screen-Shot-2015-01-26-at-4.14.26-PM-152x200.png 152w, http://simplystatistics.org/wp-content/uploads/2015/01/Screen-Shot-2015-01-26-at-4.14.26-PM.png 783w" sizes="(max-width: 331px) 100vw, 331px" /></a>I’m happy to announce that you can now get a copy of the <a title="Reproducible Research Course Companion" href="https://itunes.apple.com/us/book/reproducible/id961495566?ls=1&amp;mt=13" target="_blank">Reproducible Research Course Companion</a> from the Apple iBookstore. The purpose of this e-book is pretty simple. The book provides all of the key video lectures from my <a title="JHU/Coursera Reproducible Research Course " href="https://www.coursera.org/course/repdata" target="_blank">Reproducible Research course</a> offered on Coursera, in a simple offline e-book format. The book can be viewed on a Mac, iPad, or iPad mini.</p> <p>If you’re interested in taking my Reproducible Research course on Coursera and would like a flavor of what the course will be like, then you can view the lectures through the book (the free sample contains three lectures). On the other hand, if you already took the course and would like access to the lecture material afterwards, then this might be a useful add-on. If you are currently enrolled in the course, then this could be a handy way for you to take the lectures on the road with you.</p> <p>Please note that all of the lectures are still available for free on YouTube via my <a href="https://www.youtube.com/channel/UCZA0RbbSK1IXeeJysKYRWuQ" target="_blank">YouTube channel</a>. Also, the book provides content only.
If you wish to actually complete the course, you must take it through the Coursera web site.</p> Data as an antidote to aggressive overconfidence 2015-01-21T11:58:07+00:00 http://simplystats.github.io3803 <p>A recent <a href="http://www.nytimes.com/2014/12/07/opinion/sunday/adam-grant-and-sheryl-sandberg-on-discrimination-at-work.html?_r=0">NY Times op-ed</a> reminded us of the many biases faced by women at work. A <a href="http://time.com/3666135/sheryl-sandberg-talking-while-female-manterruptions/">follow-up piece</a> gave specific recommendations for how to conduct ourselves in meetings. In general, I found these very insightful, but I don’t necessarily agree with the recommendation that women should “Practice Assertive Body Language”. Instead, we should make an effort to judge ideas by their content and not be impressed by body language. More generally, it is a problem that many of the characteristics that help advance careers contribute nothing to intellectual output. One of these is what I call <em>aggressive overconfidence</em>.</p> <p>Here is an example (based on a true story). A data scientist finds a major flaw with the data analysis performed by a prominent data-producing scientist’s lab. Both are part of a large collaborative project. A meeting is held among the project leaders to discuss the disagreement. The data producer is very self-confident in defending his approach. The data scientist, who is not nearly as aggressive, is <a href="http://time.com/3666135/sheryl-sandberg-talking-while-female-manterruptions/">interrupted</a> so much that she barely gets her point across. The project leaders decide that this seems to be simply a difference of opinion and, for all practical purposes, ignore the data scientist. I imagine this story sounds familiar to many.
While in many situations this story ends here, when the results are data driven we can actually fact check opinions that are pronounced as fact. In this example, the data is public and anybody with the right expertise can download the data and corroborate the flaw in the analysis. This is typically quite tedious, but it can be done. Because the key flaws are rather complex, the project leaders, lacking expertise in data analysis, can’t make this determination. But eventually, a chorus of fellow data analysts will be too loud to ignore.</p> <p>That aggressive overconfidence is generally rewarded in academia is a problem. And if this trait is <a href="http://scholar.google.com/scholar?hl=en&amp;as_sdt=0,22&amp;q=overconfidence+gender">highly correlated with being male</a>, then a manifestation of this is a worsened gender gap. My experience (including reading internet discussions among scientists on controversial topics) has convinced me that this trait is in fact correlated with gender. But the solution is not to help women become more aggressively overconfident. Instead we should continue to strive to judge work based on content rather than style. I am optimistic that more and more, data, rather than who sounds more sure of themselves, will help us decide who wins a debate.</p> <p> </p> Gorging ourselves on "free" health care: Harvard's dilemma 2015-01-20T09:00:56+00:00 http://simplystats.github.io3811 <p><em>Editor’s note: This is a guest post by <a href="http://www.hcp.med.harvard.edu/faculty/core/laura-hatfield-phd">Laura Hatfield</a>. Laura is an Assistant Professor of Health Care Policy at Harvard Medical School, with a specialty in Biostatistics. Her work focuses on understanding trade-offs and relationships among health outcomes. Dr. Hatfield received her BS in genetics from Iowa State University and her PhD in biostatistics from the University of Minnesota. 
She tweets <a href="https://twitter.com/bioannie">@bioannie</a></em></p> <p>I didn’t imagine when I joined Harvard’s Department of Health Care Policy that the New York Times would be <a href="http://www.nytimes.com/2015/01/06/us/health-care-fixes-backed-by-harvards-experts-now-roil-its-faculty.html">writing about my benefits package</a>. Then a vocal and aggrieved group of faculty <a href="http://www.thecrimson.com/article/2014/11/12/harvards-health-benefits-unfairness/">rebelled against health benefits changes</a> for 2015, and commentators responded by gleefully <a href="http://www.thefiscaltimes.com/2015/01/07/Harvards-Whiny-Profs-Could-Get-Obamacare-Bonus">skewering</a> entitled-sounding Harvard professors. But I’m a statistician, so I want to talk data.</p> <p>Health care spending is tremendously right-skewed. The figure below shows the annual spending distribution among people with any spending (~80% of the total population) in two data sources on people covered by employer-sponsored insurance, such as the Harvard faculty. Notice that the y axis is on the log scale. More than half of people spend $1,000 or less, but a few very unfortunate folks top out near half a million.</p> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/01/spending_distribution.jpg"><img class="alignnone size-full wp-image-3814" src="http://simplystatistics.org/wp-content/uploads/2015/01/spending_distribution.jpg" alt="spending_distribution" width="600" height="400" /></a></p> <p>Source: <a href="https://www.bea.gov/papers/working_papers.htm">Measuring health care costs of individuals with employer-sponsored health insurance in the US: A comparison of survey and claims data</a>. A. Aizcorbe, E. Liebman, S. Pack, D.M. Cutler, M.E. Chernew, A.B. Rosen. BEA working paper. WP2010-06. June 2010.</p> <p>If, instead of contributing to my premiums, Harvard gave me the $1,000/month premium contribution in the form of wages, I would be on the hook for my own health care expenses.
If I stay healthy, I pocket the money, minus income taxes. If I get sick, I have the extra money available to cover the expenses…provided I’m not one of the unlucky 10% of people spending more than $12,000/year. In that case, the additional wages would be insufficient to cover my health care expenses. This “every woman for herself” system lacks the key benefit of insurance: risk pooling. The sickest among us would be bankrupted by health costs. Another good reason for an employer to give me benefits is that I do not pay taxes on this part of my compensation (more on that later).</p> <p>At the opposite end of the spectrum is the Harvard faculty health insurance plan. Last year, the university paid ~$1030/month toward my premium and I put in ~$425 (tax-free). In exchange for this ~$17,000 of premiums, my family got first-dollar insurance coverage with very low co-pays. Faculty contributions to our collective health care expenses were distributed fairly evenly among all of us, with only minimal cost sharing to reflect how much care each person consumed. The sickest among us were in no financial peril. My family didn’t use much care and thus didn’t get our (or Harvard’s) money’s worth for all that coverage, but I’m ok with it. I still prefer risk pooling.</p> <p>Here’s the problem: moral hazard. It’s a word I learned when I started hanging out with health economists. It describes the tendency of people to over-consume goods that feel free, such as health care paid through premiums or desserts at an all-you-can-eat buffet.
Just look at this array—how much cake do <em>you</em> want to eat for $9.99?</p> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/01/buffet.jpg"><img class="alignnone size-large wp-image-3815" src="http://simplystatistics.org/wp-content/uploads/2015/01/buffet-1024x768.jpg" alt="buffet" width="500" height="380" /></a></p> <p>Source: https://www.flickr.com/photos/jimmonk/5687939526/in/photostream/</p> <p>One way to mitigate moral hazard is to expose people to more of their cost of care at the point of service instead of through premiums. You might think twice about that fifth tiny cake if you were paying per morsel. This is what the new Harvard faculty plans do: our premiums actually go down, but now we face a modest deductible of $250 per person or $750 max for a family. This is meant to encourage faculty to use their health care more efficiently, but it still affords good protection against catastrophic costs. The out-of-pocket max remains low at $1,500 per individual or $4,500 per family, with recent announcements to further protect individuals who pay more than 3% of salary in out-of-pocket health costs through a reimbursement program.</p> <p>The allocation of individuals’ contributions between premiums and point-of-service costs is partly a question of how we cross-subsidize each other. If Harvard’s total contribution remains the same and health care costs do not grow faster than wages (ha!), then increased cost sharing decreases the amount by which people who use less care subsidize those who use more. How you feel about the “right” level of cost sharing may depend on whether you’re paying or receiving a subsidy from your fellow employees. And maybe your political leanings.</p> <p>What about the argument that it is better for an employer to “pay” workers by health insurance premium contributions rather than wages because of the tax benefits?
While we might prefer to get our compensation in the form of tax-free health benefits vs taxed wages, the university, like all employers, is looking ahead to the <a href="http://www.forbes.com/sites/sallypipes/2014/12/01/a-cadillac-tax-for-everyone/">Cadillac tax provision of the ACA</a>. So they have to do some re-balancing of our overall compensation. If Harvard reduces its health insurance contributions to avoid the tax, we might reasonably <a href="http://www.washingtonpost.com/blogs/wonkblog/wp/2013/08/30/youre-spending-way-more-on-your-health-benefits-than-you-think/">expect to make up that difference</a> in higher wages. The empirical evidence is <a href="http://www.hks.harvard.edu/fs/achandr/JLE_LaborMktEffectsRisingHealthInsurancePremiums_2006.pdf">complicated</a> and suggests that employers may not immediately return savings on health benefits dollar-for-dollar in the form of wages.</p> <p>As far as I can tell, Harvard is contributing roughly the same amount as last year toward my health benefits, but exact numbers are difficult to find. I switched plan types (into a high-deductible plan, but that’s a topic for another post!), so I can’t find and directly compare Harvard’s contributions in the same plan type this year and last. Peter Ubel <a href="http://www.peterubel.com/health_policy/how-behavioral-economics-could-have-prevented-the-harvard-meltdown-over-healthcare-costs/">argues</a> that if the faculty <em>had</em> seen these figures, we might not have revolted. The actuarial value of our plans remains very high (91%, just a bit better than the expensive Platinum plans on the exchanges) and Harvard’s spending on health care has grown from 8% to 12% of the university’s budget over the past few years. Would these data have been sufficient to quell the insurrection?
Good question.</p> If you were going to write a paper about the false discovery rate you should have done it in 2002 2015-01-16T10:58:04+00:00 http://simplystats.github.io3797 <p>People often talk about academic superstars as people who have written highly cited papers. Some of that has to do with people’s genius, or ability, or whatever. But one factor that I think sometimes gets lost is luck and timing. So I wrote a little script to get the first 30 papers that appear when you search Google Scholar for the terms:</p> <ul> <li>empirical processes</li> <li>proportional hazards model</li> <li>generalized linear model</li> <li>semiparametric</li> <li>generalized estimating equation</li> <li>false discovery rate</li> <li>microarray statistics</li> <li>lasso shrinkage</li> <li>rna-seq statistics</li> </ul> <p>Google Scholar sorts by relevance, but that relevance is driven to a large degree by citations. For example, these are the first 10 papers you get when you search for false discovery rate:</p> <ul> <li>Controlling the false discovery rate: a practical and powerful approach to multiple testing</li> <li>Thresholding of statistical maps in functional neuroimaging using the false discovery rate</li> <li>The control of the false discovery rate in multiple testing under dependency</li> <li>Controlling the false discovery rate in behavior genetics research</li> <li>Identifying differentially expressed genes using false discovery rate controlling procedures</li> <li>The positive false discovery rate: A Bayesian interpretation and the q-value</li> <li>On the adaptive control of the false discovery rate in multiple testing with independent statistics</li> <li>Implementing false discovery rate control: increasing your power</li> <li>Operating characteristics and extensions of the false discovery rate procedure</li> <li>Adaptive linear step-up procedures that control the false discovery rate</li> </ul> <p>People who work in this area will recognize that many of
these papers are the most important/most cited in the field.</p> <p>Now we can make a plot that shows for each term when these 30 highest ranked papers appear. There are some missing values, because of the way the data are scraped, but this plot gives you some idea of when the most cited papers on these topics were published:</p> <p> </p> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/01/citations-boxplot.png"><img class="aligncenter size-full wp-image-3798" src="http://simplystatistics.org/wp-content/uploads/2015/01/citations-boxplot.png" alt="citations-boxplot" width="600" height="400" srcset="http://simplystatistics.org/wp-content/uploads/2015/01/citations-boxplot-300x200.png 300w, http://simplystatistics.org/wp-content/uploads/2015/01/citations-boxplot-260x173.png 260w, http://simplystatistics.org/wp-content/uploads/2015/01/citations-boxplot.png 600w" sizes="(max-width: 600px) 100vw, 600px" /></a></p> <p>You can see from the plot that the median publication year of the top 30 hits for “empirical processes” was 1990 and for “RNA-seq statistics” was 2010. The medians for the other topics were:</p> <ul> <li>Emp. Proc. 1990.241</li> <li>Prop. Haz. 1990.929</li> <li>GLM 1994.433</li> <li>Semi-param. 1994.433</li> <li>GEE 2000.379</li> <li>FDR 2002.760</li> <li>microarray 2003.600</li> <li>lasso 2004.900</li> <li>rna-seq 2010.765</li> </ul> <p>I think this pretty much matches up with the intuition most people have about the relative timing of fields, with a few exceptions (GEE in particular seems a bit late). There are a bunch of reasons this analysis isn’t perfect, but it does suggest that luck and timing in choosing a problem can play a major role in the “success” of academic work as measured by citations. It also suggests that success in science can come from something other than individual brilliance.
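The summary step behind the medians listed above is simple: take the publication years of the top hits for each search term and compute the median. Here is a minimal sketch of that step (the year lists below are invented for illustration and are not the actual scraped results; the real, admittedly hacky, scraping code is in the gist linked in the post):

```python
from statistics import median

# Invented publication years standing in for the top Google Scholar
# hits per search term (the real analysis scraped 30 hits per term).
years_by_term = {
    "empirical processes": [1971, 1984, 1988, 1990, 1991, 1996, 2005],
    "rna-seq statistics": [2008, 2009, 2010, 2010, 2011, 2012, 2014],
}

# Median publication year per term, the quantity plotted and listed above.
for term, years in years_by_term.items():
    print(term, median(years))
```

With the actual scraped years per term, the same loop reproduces the bulleted medians (the fractional values there presumably reflect sub-year precision in the scraped dates).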
Given the potentially negative consequences the <a href="http://www.sciencemag.org/content/347/6219/262.abstract">expectation of brilliance has on certain subgroups</a>, it is important to recognize the importance of timing and luck. The median publication year of the top-cited “false discovery rate” papers was 2002, and almost none of the 30 top hits were published after about 2008.</p> <p><a href="https://gist.github.com/jtleek/c5158965d77c21ade424">The code for my analysis is here</a>. It is super hacky so have mercy.</p> How to find the science paper behind a headline when the link is missing 2015-01-15T13:35:42+00:00 http://simplystats.github.io3785 <p>I just saw a pretty wild statistic on Twitter that less than 60% of university news releases link to the papers they are describing.</p> <p> </p> <blockquote class="twitter-tweet" width="550"> <p> Amazingly, less than 60% of university news releases link to the papers they're describing <a href="http://t.co/daN11xYvKs">http://t.co/daN11xYvKs</a> <a href="http://t.co/QtneZUAeFD">pic.twitter.com/QtneZUAeFD</a> </p> <p> &mdash; Justin Wolfers (@JustinWolfers) <a href="https://twitter.com/JustinWolfers/status/555782983429677056">January 15, 2015</a> </p> </blockquote> <p>Before you believe anything you read about science in the news, you need to go and find the original article. When the article isn’t linked in the press release, sometimes you need to do a bit of sleuthing. Here is an example of how I do it for a news article.
In general the press-release approach is very similar, but you skip the first step I describe below.</p> <p><strong>Here is the news article (<a href="http://www.huffingtonpost.com/2015/01/14/online-avatar-personality_n_6463484.html?utm_hp_ref=science">link</a>):</strong></p> <p> </p> <p><img class="aligncenter wp-image-3787" src="http://simplystatistics.org/wp-content/uploads/2015/01/Screen-Shot-2015-01-15-at-1.11.22-PM.png" alt="Screen Shot 2015-01-15 at 1.11.22 PM" width="300" height="405" /></p> <p> </p> <p> </p> <p><strong>Step 1: Look for a link to the article</strong></p> <p>Usually it will be linked near the top or the bottom of the article. In this case, the article links to the press release about the paper. <em>This is not the original research article</em>. If you don’t get to a scientific journal you aren’t finished. In this case, the press release actually gives the full title of the article, but that will happen less than 60% of the time according to the statistic above.</p> <p> </p> <p><strong>Step 2: Look for names of the authors, scientific key words and journal name if available</strong></p> <p>You are going to use these terms to search in a minute. 
In this case the only two things we have are the journal name:</p> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/01/Untitled-presentation-2.png"><img class="aligncenter size-full wp-image-3791" src="http://simplystatistics.org/wp-content/uploads/2015/01/Untitled-presentation-2.png" alt="Untitled presentation (2)" width="949" height="334" srcset="http://simplystatistics.org/wp-content/uploads/2015/01/Untitled-presentation-2-300x105.png 300w, http://simplystatistics.org/wp-content/uploads/2015/01/Untitled-presentation-2-260x91.png 260w, http://simplystatistics.org/wp-content/uploads/2015/01/Untitled-presentation-2.png 949w" sizes="(max-width: 949px) 100vw, 949px" /></a></p> <p> </p> <p>And some key words:</p> <p> </p> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/01/Untitled-presentation-3.png"><img class="aligncenter size-full wp-image-3792" src="http://simplystatistics.org/wp-content/uploads/2015/01/Untitled-presentation-3.png" alt="Untitled presentation (3)" width="933" height="343" srcset="http://simplystatistics.org/wp-content/uploads/2015/01/Untitled-presentation-3-300x110.png 300w, http://simplystatistics.org/wp-content/uploads/2015/01/Untitled-presentation-3.png 933w" sizes="(max-width: 933px) 100vw, 933px" /></a></p> <p> </p> <p><strong>Step 3: Use Google Scholar</strong></p> <p>You could just google those words and sometimes you get the real paper, but often you just end up back at the press release/news article.
So instead the best way to find the article is to go to <a href="https://scholar.google.com/">Google Scholar</a>, then click on the little triangle next to the search box.</p> <p> </p> <p> </p> <p> </p> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/01/Untitled-presentation-4.png"><img class="aligncenter size-full wp-image-3793" src="http://simplystatistics.org/wp-content/uploads/2015/01/Untitled-presentation-4.png" alt="Untitled presentation (4)" width="960" height="540" srcset="http://simplystatistics.org/wp-content/uploads/2015/01/Untitled-presentation-4-260x146.png 260w, http://simplystatistics.org/wp-content/uploads/2015/01/Untitled-presentation-4.png 960w" sizes="(max-width: 960px) 100vw, 960px" /></a></p> <p>Fill in information where you can. Fill in the same year as the press release, information about the journal, university and key words.</p> <p> </p> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/01/Screen-Shot-2015-01-15-at-1.31.38-PM.png"><img class="aligncenter size-full wp-image-3794" src="http://simplystatistics.org/wp-content/uploads/2015/01/Screen-Shot-2015-01-15-at-1.31.38-PM.png" alt="Screen Shot 2015-01-15 at 1.31.38 PM" width="509" height="368" /></a></p> <p> </p> <p><strong>Step 4: Victory</strong></p> <p>Often this will come up with the article you are looking for:</p> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/01/Screen-Shot-2015-01-15-at-1.32.45-PM.png"><img class="aligncenter size-full wp-image-3795" src="http://simplystatistics.org/wp-content/uploads/2015/01/Screen-Shot-2015-01-15-at-1.32.45-PM.png" alt="Screen Shot 2015-01-15 at 1.32.45 PM" width="813" height="658" srcset="http://simplystatistics.org/wp-content/uploads/2015/01/Screen-Shot-2015-01-15-at-1.32.45-PM-247x200.png 247w, http://simplystatistics.org/wp-content/uploads/2015/01/Screen-Shot-2015-01-15-at-1.32.45-PM.png 813w" sizes="(max-width: 813px) 100vw, 813px" /></a></p> <p> </p> <p>Unfortunately, the article may be
paywalled, so if you don’t work at a university or institute with a subscription, you can always tweet the article name with the hashtag <a href="https://twitter.com/hashtag/icanhazpdf">#icanhazpdf</a> and your contact info. Then you just have to hope that someone will send it to you (they often do).</p> <p> </p> <p> </p> Statistics and R for the Life Sciences: New HarvardX course starts January 19 2015-01-12T10:30:08+00:00 http://simplystats.github.io3769 <p>The first course of our Biomedical Data Science online curriculum starts next week. You can sign up <a href="https://www.edx.org/course/statistics-r-life-sciences-harvardx-ph525-1x">here</a>. Instead of relying on mathematical formulas to teach statistical concepts, students can program along as we show computer code for simulations that illustrate the main ideas of exploratory data analysis and statistical inference (p-values, confidence intervals and power calculations for example). By doing this, students will learn Statistics and R simultaneously and will not be bogged down by having to memorize formulas. We have three types of learning modules: lectures (see picture below), screencasts and assessments. After each video students will have the opportunity to assess their understanding through homeworks involving coding in R. A big improvement over the first version is that we have added dozens of assessments.</p> <p>Note that this course is the first in an <a href="http://simplystatistics.org/2014/03/31/data-analysis-for-genomic-edx-course/">eight part series</a> on Data Analysis for Genomics.
Updates will be provided via twitter <a href="https://twitter.com/rafalab">@rafalab</a>.</p> <p> </p> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/01/edx_screenshot_v2.png"><img class="alignnone size-large wp-image-3773" src="http://simplystatistics.org/wp-content/uploads/2015/01/edx_screenshot_v2-1024x603.png" alt="edx_screenshot_v2" width="495" height="291" srcset="http://simplystatistics.org/wp-content/uploads/2015/01/edx_screenshot_v2-300x176.png 300w, http://simplystatistics.org/wp-content/uploads/2015/01/edx_screenshot_v2-1024x603.png 1024w, http://simplystatistics.org/wp-content/uploads/2015/01/edx_screenshot_v2-260x153.png 260w, http://simplystatistics.org/wp-content/uploads/2015/01/edx_screenshot_v2.png 1298w" sizes="(max-width: 495px) 100vw, 495px" /></a></p> Beast mode parenting as shown by my Fitbit data 2015-01-07T11:22:57+00:00 http://simplystats.github.io3758 <p>This weekend was one of those hardcore parenting weekends that any parent of little kids will understand. We were up and actively taking care of kids for a huge fraction of the weekend. (Un)fortunately I was wearing my Fitbit, so I can quantify exactly how little we were sleeping over the weekend.</p> <p>Here is Saturday:</p> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/01/saturday.png"><img class="aligncenter wp-image-3762 size-full" src="http://simplystatistics.org/wp-content/uploads/2015/01/saturday.png" alt="saturday" width="500" height="500" srcset="http://simplystatistics.org/wp-content/uploads/2015/01/saturday-300x300.png 300w, http://simplystatistics.org/wp-content/uploads/2015/01/saturday-200x200.png 200w, http://simplystatistics.org/wp-content/uploads/2015/01/saturday.png 500w" sizes="(max-width: 500px) 100vw, 500px" /></a></p> <p> </p> <p> </p> <p>There you can see that I rocked about midnight-4am without running around chasing a kid or bouncing one to sleep. 
But Sunday was the real winner:</p> <p> </p> <p><a href="http://simplystatistics.org/wp-content/uploads/2015/01/sunday.png"><img class="aligncenter wp-image-3763 size-full" src="http://simplystatistics.org/wp-content/uploads/2015/01/sunday.png" alt="sunday" width="500" height="500" srcset="http://simplystatistics.org/wp-content/uploads/2015/01/sunday-300x300.png 300w, http://simplystatistics.org/wp-content/uploads/2015/01/sunday-200x200.png 200w, http://simplystatistics.org/wp-content/uploads/2015/01/sunday.png 500w" sizes="(max-width: 500px) 100vw, 500px" /></a></p> <p>Check that out. I was totally asleep from like 4am-6am there. Nice.</p> <p>Stay tuned for much more from my Fitbit data over the next few weeks.</p> <p> </p> <p> </p> Sunday data/statistics link roundup (1/4/15) 2015-01-04T14:45:19+00:00 http://simplystats.github.io3755 <ol> <li>I am digging <a href="http://waitbutwhy.com/2014/05/life-weeks.html">this visualization of your life in weeks</a>. I might have to go so far as to actually make one for myself.</li> <li>I’m very excited about the new podcast <a href="http://www.thetalkingmachines.com/">TalkingMachines</a> and what an awesome name! I wish someone would do that same thing for applied statistics (Roger?)</li> <li>I love that they call Ben Goldacre the <a href="http://www.vox.com/2014/12/27/7423229/ben-goldacre">anti-Dr. Oz in this piece</a>, especially given how often <a href="http://www.bmj.com/content/349/bmj.g7346">Dr. Oz is telling the truth</a>.</li> <li>If you haven’t read it yet, <a href="http://www.economist.com/news/christmas-specials/21636589-how-statisticians-changed-war-and-war-changed-statistics-they-also-served">this piece in the Economist</a> on statisticians during the war is really good.</li> <li>The arXiv <a href="http://www.nature.com/news/the-arxiv-preprint-server-hits-1-million-articles-1.16643">celebrated its 1 millionth paper upload</a>.
It costs less to run than the <a href="https://twitter.com/joe_pickrell/status/549762678160625664">top 2 executives at PLoS make</a>. It is <a href="http://simplystatistics.org/2011/11/03/free-access-publishing-is-awesome-but-expensive-how/">too darn expensive</a> to publish open access right now.</li> </ol> Ugh ... so close to one million page views for 2014 2014-12-31T13:16:14+00:00 http://simplystats.github.io3751 <p>In my <a href="http://simplystatistics.org/2014/12/21/sunday-datastatistics-link-roundup-122114/">last Sunday Links roundup</a> I mentioned we were going to be really close to 1 million page views this year. Chris V. tried to rally the troops:</p> <p> </p> <blockquote class="twitter-tweet" width="550"> <p> Lets get them over the hump // “<a href="https://twitter.com/simplystats">@simplystats</a>: Sunday data/statistics link roundup (12/21/14) <a href="http://t.co/X1WDF9zZc1">http://t.co/X1WDF9zZc1</a> <a href="https://twitter.com/hashtag/simplystats1e6?src=hash">#simplystats1e6</a>” </p> <p> &mdash; Chris Volinsky (@statpumpkin) <a href="https://twitter.com/statpumpkin/status/546872078730010624">December 22, 2014</a> </p> </blockquote> <p> </p> <p>but alas we are probably not going to make it (unless by some miracle one of our posts goes viral in the next 12 hours):</p> <p><a href="http://simplystatistics.org/wp-content/uploads/2014/12/soclose.png"><img class="aligncenter wp-image-3752" src="http://simplystatistics.org/wp-content/uploads/2014/12/soclose-1024x1024.png" alt="soclose" width="400" height="400" srcset="http://simplystatistics.org/wp-content/uploads/2014/12/soclose-300x300.png 300w, http://simplystatistics.org/wp-content/uploads/2014/12/soclose-1024x1024.png 1024w, http://simplystatistics.org/wp-content/uploads/2014/12/soclose-200x200.png 200w, http://simplystatistics.org/wp-content/uploads/2014/12/soclose.png 1050w" sizes="(max-width: 400px) 100vw, 400px" /></a></p> <p> </p> <p>Stay tuned for a bunch of cool new stuff from Simply Stats
in 2015, including a new podcasting idea, more interviews, another unconference, and <a href="https://github.com/jtleek/simplystats">a new plotting theme</a>!</p> On how meetings and conference calls are disruptive to a data scientist 2014-12-22T10:00:51+00:00 http://simplystats.github.io3738 <p><em>Editor’s note: The week of Xmas eve is usually my most productive of the year. This is because there are fewer emails and 0 meetings (I do take a break, but after this great week for work). Here is a repost of one of our first entries explaining how meetings and conference calls are particularly disruptive in data science. </em></p> <p>In <a href="http://www.ted.com/talks/jason_fried_why_work_doesn_t_happen_at_work.html" target="_blank">this</a> TED talk Jason Fried explains why work doesn’t happen at work. He describes the evils of meetings. Meetings are particularly disruptive for applied statisticians, especially for those of us that hack data files, explore data for systematic errors, get inspiration from visual inspection, and thoroughly test our code. Why? Before I become productive I go through a ramp-up/boot-up stage. Scripts need to be found, data loaded into memory, and most importantly, my brain needs to re-familiarize itself with the data and the essence of the problem at hand. I need a similar ramp up for writing as well. It usually takes me between 15 and 60 minutes before I am in full-productivity mode. But once I am in “the zone”, I become very focused and I can stay in this mode for hours. There is nothing worse than interrupting this state of mind to go to a meeting. I lose much more than the hour I spend at the meeting. A short way to explain this is that having 10 separate hours to work is basically nothing, while having 10 hours in the zone is when I get stuff done.</p> <p>Of course not all meetings are a waste of time. Academic leaders and administrators need to consult and get advice before making important decisions.
I find lab meetings very stimulating and, generally, productive: we unstick the stuck and realign the derailed. But before you go and set up a standing meeting consider this calculation: a weekly one hour meeting with 20 people translates into 1 hour x 20 people x 52 weeks/year = 1040 person hours of potentially lost production per year. Assuming 40 hour weeks, that translates into six months. How many grants, papers, and lectures can we produce in six months? And this does not take into account the non-linear effect described above. Jason Fried suggests you cancel your next meeting, notice that nothing bad happens and enjoy the extra hour of work.</p> <p>I know many others that are like me in this regard and for you I have these recommendations: 1- avoid unnecessary meetings, especially if you are already in full-productivity mode. Don’t be afraid to use this as an excuse to cancel. If you are in a soft institution, remember who pays your salary.  2- Try to bunch all the necessary meetings together into one day. 3- Separate at least one day a week to stay home and work for 10 hours straight. Jason Fried also recommends that every work place declare a day in which no one talks. No meetings, no chit-chat, no friendly banter, etc… No talk Thursdays anyone?</p> Sunday data/statistics link roundup (12/21/14) 2014-12-21T22:00:33+00:00 http://simplystats.github.io3729 <p>James Stewart, author of the most popular Calculus textbook in the world, <a href="http://classic.slashdot.org/story/14/12/20/0036210">passed away</a>. In case you wonder if there is any money in textbooks, he had a $32 million house in Toronto. Maybe I should get out of MOOCs and into textbooks.</p> <ol> <li><a href="https://medium.com/the-physics-arxiv-blog/cause-and-effect-the-revolutionary-new-statistical-test-that-can-tease-them-apart-ed84a988e">This post</a> on medium about a new test for causality is making the rounds.
The authors <a href="http://arxiv.org/pdf/1412.3773v1.pdf">of the original paper</a> make clear their assumptions make the results basically unrealistic for any real analysis. For example: “<a href="http://arxiv.org/pdf/1412.3773v1.pdf">We simplify the causal discovery problem by assuming no confounding, selection bias and feedback.</a>” The medium article is too bold and as I replied to an economist who tweeted there was a new test that could distinguish causality: “<a href="https://twitter.com/simplystats/status/545769855564398593">Nope</a>”.</li> <li>I’m excited that Rafa + the ASA have started a section <a href="https://twitter.com/rafalab/status/543115692770607104">on Genomics and Genetics</a>. It is nice to have a place to belong within our community. I hope it can be a place where folks who aren’t into the hype (a lot of those in genomics), but really care about applications, can meet each other and work together.</li> <li><a href="https://medium.com/@hannawallach/big-data-machine-learning-and-the-social-sciences-927a8e20460d">Great essay</a> by Hanna W. about data, machine learning and fairness. I love this quote: “in order to responsibly articulate and address issues relating to bias, fairness, and inclusion, we need to stop thinking of big data sets as being homogeneous, and instead shift our focus to the many diverse data sets nested within these larger collections.” (via Hilary M.)</li> <li>Over at Flowing Data they ran down <a href="http://flowingdata.com/2014/12/19/the-best-data-visualization-projects-of-2014-2/">the best data visualizations</a> of the year.</li> <li><a href="http://dirk.eddelbuettel.com/blog/2014/12/21/#sorry_julia_2014-12">This rant</a> from Dirk E. perfectly encapsulates every annoying thing about the Julia versus R comparisons I see regularly.</li> <li>We are tantalizingly close to 1 million page views for the year for Simply Stats
Help get us over the edge, share your favorite simply stats article with all your friends using the hashtag <a href="https://twitter.com/search?f=realtime&amp;q=%23simplystats1e6&amp;src=typd">#simplystats1e6</a></li> </ol> Interview with Emily Oster 2014-12-19T09:39:38+00:00 http://simplystats.github.io3711 <div> <div class="nD"> <div dir="ltr"> <div> <a href="http://simplystatistics.org/wp-content/uploads/2014/12/Emily_Oster_Photo.jpg"><img class="aligncenter wp-image-3714 " src="http://simplystatistics.org/wp-content/uploads/2014/12/Emily_Oster_Photo-198x300.jpg" alt="Emily Oster" width="121" height="184" /></a> </div> <div> </div> <div> </div> <div> <em><a href="http://en.wikipedia.org/wiki/Emily_Oster">Emily Oster</a> is an Associate Professor of Economics at Brown University. She is a frequent and highly respected <a href="http://fivethirtyeight.com/contributors/emily-oster/">contributor to 538</a> where she brings clarity to areas of interest to parents, pregnant women, and the general public where empirical research is conflicting or difficult to interpret. She is also the author of the popular new book about pregnancy: <a href="http://www.amazon.com/Expecting-Better-Conventional-Pregnancy-Wrong/dp/0143125702">Expecting Better: Why the Conventional Pregnancy Wisdom Is Wrong--and What You Really Need to Know</a>. We interviewed Emily as part of our <a href="http://simplystatistics.org/interviews/">ongoing interview series</a> with exciting empirical data scientists. </em> </div> <div> <em> </em> </div> <div> </div> <div> <b>SS: Do you consider yourself an economist, econometrician, statistician, data scientist or something else?</b> </div> <div> </div> </div> </div> </div> <div> <div class="F3hlO"> <div dir="ltr"> <div> EO: I consider myself an empirical economist. I think my econometrics colleagues would have a hearty laugh at the idea that I'm an econometrician!
The questions I'm most interested in tend to have a very heavy empirical component - I really want to understand what we can learn from data. In this sense, there is a lot of overlap with statistics. But at the end of the day, the motivating questions and the theories of behavior I want to test come straight out of economics. </div> </div> </div> </div> <div> <div class="nD"> <div dir="ltr"> <div> <div> </div> <div> <b>SS: You are a frequent contributor to 538. Many of your pieces are attempts to demystify often conflicting sets of empirical research (about concussions and suicide, or the dangers of water fluoridation). What would you say are the issues that make empirical research about these topics most difficult?</b> </div> <div> <b> </b> </div> </div> </div> </div> </div> <div> <div class="F3hlO"> <div dir="ltr"> <div> <div> EO: In nearly all the cases, I'd summarize the problem as: "The data isn't good enough." Sometimes this is because we only see observational data, not anything randomized. A large share of studies using observational data that I discuss have serious problems with either omitted variables or reverse causality (or both). This means that the results are suggestive, but really not conclusive. A second issue is that even when we do have some randomized data, it's usually on a particular population, or a small group, or in the wrong time period. In the fluoride case, the studies which come closest to being "randomized" are from 50 years ago. How do we know they still apply now? This makes even these studies challenging to interpret. </div> </div> </div> </div> </div> <div> <div class="nD"> <div dir="ltr"> <div> <div> </div> <div> <b>SS: Your recent book "Expecting Better: Why the Conventional Pregnancy Wisdom Is Wrong--and What You Really Need to Know" takes a similar approach to pregnancy. Why do you think there are so many conflicting studies about pregnancy?
Is it because it is so hard to perform randomized studies?</b> </div> <div> <b> </b> </div> </div> </div> </div> </div> <div> <div class="F3hlO"> <div dir="ltr"> <div> <div> EO: I think the inability to run randomized studies is a big part of this, yes. One area of pregnancy where the data is actually quite good is labor and delivery. If you want to know the benefits and consequences of pain medication in labor, for example, it is possible to point you to some reasonably sized randomized trials. For various reasons, there has been more willingness to run randomized studies in this area. When pregnant women want answers to less medical questions (like, "Can I have a cup of coffee?") there is typically no randomized data to rely on. Because the possible benefits of drinking coffee while pregnant are pretty much nil, it is difficult to conceptualize a randomized study of this type of thing. </div> <div> </div> <div> Another big issue I found in writing the book was that even in cases where the data was quite good, data often diverges from practice. This was eye-opening for me and convinced me that in pregnancy (and probably in other areas of health) people really do need to be their own advocates and know the data for themselves. </div> </div> </div> </div> </div> <div> <div class="nD"> <div dir="ltr"> <div> <div> </div> <div> <b>SS: Have you been surprised about the backlash to your book for your discussion of the zero-alcohol policy during pregnancy? </b> </div> <div> <b> </b> </div> </div> </div> </div> </div> <div> <div class="F3hlO"> <div dir="ltr"> <div> <div> EO: A little bit, yes. This backlash has died down a lot as pregnant women actually read the book and use it. As it turns out, the discussion of alcohol makes up a tiny fraction of the book and most pregnant women are more interested in the rest of it! But certainly when the book came out this got a lot of focus. 
I suspected it would be somewhat controversial, although the truth is that every OB I actually talked to told me they thought it was fine. So I was surprised that the reaction was as sharp as it was. I think in the end a number of people felt that even if the data were supportive of this view, it was important not to say it because of the concern that some women would over-react. I am not convinced by this argument. </div> </div> </div> </div> </div> <div> <div class="nD"> <div dir="ltr"> <div> <div> </div> <div> <b>SS: What are the three most important statistical concepts for new mothers to know? </b> </div> <div> <b> </b> </div> </div> </div> </div> </div> <div> <div class="F3hlO"> <div dir="ltr"> <div> <div> EO: I really only have two! </div> <div> </div> <div> I think the biggest thing is to understand the difference between randomized and non-randomized data and to have some sense of the pitfalls of non-randomized data. I reviewed studies of alcohol where the drinkers were twice as likely as non-drinkers to use cocaine. I think people (pregnant or not) should be able to understand why one is going to struggle to draw conclusions about alcohol from these data. </div> <div> </div> <div> A second issue is the concept of probability. It is easy to say, "There is a 10% chance of the following" but do we really understand that? If someone quotes you a 1 in 100 risk from a procedure, it is important to understand the difference between 1 in 100 and 1 in 400. For most of us, those seem basically the same - they are both small. But they are not, and people need to think of ways to structure decision-making that acknowledge these differences. </div> </div> </div> </div> </div> <div> <div class="nD"> <div dir="ltr"> <div> <div> </div> <div> <b>SS: What computer programming language is most commonly taught for data analysis in economics?
</b> </div> <div> <b> </b> </div> </div> </div> </div> </div> <div> <div class="F3hlO"> <div dir="ltr"> <div> <div> EO: So, I think the majority of empirical economists use Stata. I have been seeing more R, as well as a variety of other things, but more commonly among people who work in more computationally heavy fields. </div> </div> </div> </div> </div> <div> <div class="nD"> <div dir="ltr"> <div> <div> </div> <div> <b>SS: Do you have any advice for young economists/statisticians who are interested in empirical research? </b> </div> </div> </div> </div> </div> <div> <div class="F3hlO"> <div dir="ltr"> <div> </div> <div> EO: </div> <div> 1. Work on topics that interest you. As an academic you will ultimately have to motivate yourself to work. If you aren't interested in your topic (at least initially!), you'll never succeed. </div> <div> 2. One project which is 100% done is way better than five projects at 80%. You need to actually finish things, something which many of us struggle with. </div> <div> 3. Presentation matters. Yes, the substance is the most important thing, but don't discount the importance of conveying your ideas well. </div> </div> </div> </div> Repost: Statistical illiteracy may lead to parents panicking about Autism 2014-12-18T12:09:24+00:00 http://simplystats.github.io3721 <p><em>Editor’s Note: This is a repost of a <a href="http://simplystatistics.org/2012/11/30/statistical-illiteracy-may-lead-to-parents-panicking-about-autism/">previous post on our blog from 2012</a>. The repost is inspired by similar issues with statistical illiteracy that are coming up in <a href="http://skybrudeconsulting.com/blog/2014/12/12/diagnostic-testing.html">allergy screening</a> and <a href="http://www.bostonglobe.com/metro/2014/12/14/oversold-and-unregulated-flawed-prenatal-tests-leading-abortions-healthy-fetuses/aKFAOCP5N0Kr8S1HirL7EN/story.html">pregnancy screening</a>.
</em></p> <p>I just was doing my morning reading of a few news sources and stumbled across this <a href="http://www.huffingtonpost.com/2012/11/29/autism-risk-babies-cries_n_2211729.html">Huffington Post article</a> talking about research correlating babies’ cries to autism. It suggests that the sound of a baby’s cries may predict their future risk for autism. As the parent of a young son, this obviously caught my attention in a very lizard-brain, caveman sort of way. I couldn’t find a link to the research paper in the article so I did some searching and found out this result is also being covered by <a href="http://healthland.time.com/2012/11/28/can-a-babys-cry-be-a-clue-to-autism/">Time</a>, <a href="http://www.sciencedaily.com/releases/2012/11/121127111352.htm">Science Daily</a>, <a href="http://www.medicaldaily.com/articles/13324/20121129/baby-s-cry-reveal-autism-risk.htm">Medical Daily</a>, and a bunch of other news outlets.</p> <p>Now thoroughly freaked, I looked online and found the pdf of the <a href="https://www.ewi-ssl.pitt.edu/psychology/admin/faculty-publications/201209041019040.Sheinkopf%20in%20press.pdf">original research article</a>. I started looking at the statistics and took a deep breath. Based on the analysis they present in the article there is absolutely no statistical evidence that a baby’s cries can predict autism. Here are the flaws with the study:</p> <ol> <li><strong>Small sample size</strong>. The authors only recruited 21 at-risk infants and 18 healthy infants. Then, because of data processing issues, they only ended up analyzing 7 high autistic risk versus 5 low autistic-risk in one analysis and 10 versus 6 in another. That is nowhere near a representative sample and barely qualifies as a pilot study.</li> <li><strong>Major and unavoidable confounding</strong>. The way the authors determined high autistic risk versus low risk was based on whether an older sibling had autism.
Leaving aside the quality of this metric for measuring risk of autism, there is a major confounding factor: the families of the high risk children all had an older sibling with autism and the families of the low risk children did not! It would not be surprising at all if children with one autistic older sibling might get a different kind of attention and hence cry differently regardless of their potential future risk of autism.</li> <li><strong>No correction for multiple testing</strong>. This is one of the oldest problems in statistical analysis. It is also one that is a consistent culprit of false positives in epidemiology studies. XKCD <a href="http://xkcd.com/882/">even did a cartoon</a> about it! They tested 9 variables measuring the way babies cry and tested each one with a statistical hypothesis test. They did not correct for multiple testing. So I gathered the resulting p-values and did the correction <a href="https://gist.github.com/4177366">for them</a>. It turns out that after adjusting for multiple comparisons, nothing is significant at the usual P &lt; 0.05 level, which would probably have prevented publication.</li> </ol> <p>Taken together, these problems mean that the statistical analysis of these data does not show any connection between crying and autism.</p> <p>The problem here exists on two levels. First, there was a failing in the statistical evaluation of this manuscript at the peer review level. Most statistical referees would have spotted these flaws and pointed them out for such a highly controversial paper. A second problem is that the news agencies that report on this result, despite paying lip service to potential limitations, are not statistically literate enough to point out the major flaws in the analysis that reduce the probability of a true positive.
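<p>The linked gist applies a standard multiple-testing correction in R to the paper's nine p-values. For readers who want to see the mechanics, here is a minimal sketch of the simplest such correction (Bonferroni) in Python; the nine p-values below are hypothetical illustrations, not the values from the autism paper:</p>

```python
# Bonferroni correction: multiply each raw p-value by the number of
# tests performed, capping the result at 1. The p-values here are
# HYPOTHETICAL, for illustration only -- not the autism paper's values.
def bonferroni(pvals):
    m = len(pvals)
    return [min(1.0, p * m) for p in pvals]

raw = [0.012, 0.023, 0.034, 0.041, 0.048, 0.20, 0.35, 0.60, 0.81]
adjusted = bonferroni(raw)

# Five tests look "significant" at the nominal 0.05 level...
print(sum(p < 0.05 for p in raw))
# ...but none survive the correction for nine tests.
print(sum(p < 0.05 for p in adjusted))
```

<p>This is exactly the pattern described above: individually "significant" results that vanish once you account for having run many tests. (In R the same thing is a one-liner, <code>p.adjust(raw, method = "bonferroni")</code>.)</p>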
Should journalists have some minimal training in statistics that allows them to determine whether a result is likely to be a false positive to save us parents a lot of panic?</p> <p> </p> A non-comprehensive list of awesome things other people did in 2014 2014-12-17T13:08:43+00:00 http://simplystats.github.io3696 <p><em>Editor’s Note: Last year <a href="http://simplystatistics.org/2013/12/20/a-non-comprehensive-list-of-awesome-things-other-people-did-this-year/">I made a list</a> off the top of my head of awesome things other people did. I loved doing it so much that I’m doing it again for 2014. Like last year, I have surely missed awesome things people have done. If you know of some, you should make your own list or add it to the comments! The rules remain the same. I have avoided talking about stuff I worked on or that people here at Hopkins are doing because this post is supposed to be about other people’s awesome stuff. I wrote this post because a blog often feels like a place to complain, but we started Simply Stats as a place to be pumped up about the stuff people were doing with data. Update: I missed pipes in R, now added!</em></p> <p> </p> <ol> <li>I’m copying everything about Jenny Bryan’s amazing <a href="http://stat545-ubc.github.io/">Stat 545 class</a> in my data analysis classes. It is one of my absolute favorite open online sets of notes on data analysis.</li> <li>Ben Baumer, Mine Cetinkaya-Rundel, Andrew Bray, Linda Loi, and Nicholas J. Horton wrote <a href="http://arxiv.org/abs/1402.1894">this awesome paper</a> on integrating R markdown into the curriculum.
I love the stuff that Mine and Nick are doing to push data analysis into undergrad stats curricula.</li> <li>Speaking of those folks, the undergrad <a href="file:///Users/jtleek/Downloads/Report%20on%20Undergrad%20Ed_final3.pdf">guidelines for stats programs put out by the ASA</a> do an impressive job of balancing the advantages of statistics and the excitement of modern data analysis.</li> <li>Somebody tell Hector Corrada Bravo to stop writing so many awesome papers. He is making us all look bad. His <a href="http://www.nature.com/nmeth/journal/v11/n9/abs/nmeth.3038.html">epiviz paper is great</a> and you should go start using the <a href="http://www.bioconductor.org/packages/release/bioc/html/epivizr.html">Bioconductor package</a> if you do genomics.</li> <li>Hilary Mason founded <a href="http://www.fastforwardlabs.com/">Fast Forward Labs</a>. I love the business model of translating cutting-edge academic (and otherwise) knowledge to practice. I am really pulling for this model to work.</li> <li>As far as I can tell 2014 was the year that causal inference became the new hotness. One example of that is this awesome paper from the Google folks on trying to <a href="http://google.github.io/CausalImpact/CausalImpact.html">infer causality from related time series</a>. <a href="http://google.github.io/CausalImpact/CausalImpact.html">The R package</a> has some <a href="https://twitter.com/hspter/status/496689866953224192">cool features too</a>. I definitely am excited to see all the new innovation in this area.</li> <li><a href="http://r-pkgs.had.co.nz/">Hadley</a> was <a href="https://github.com/hadley/dplyr">Hadley</a>.</li> <li>Rafa and <a href="http://www.mike-love.net/">Mike</a> taught an awesome class on data analysis for genomics.
They also created a <a href="http://genomicsclass.github.io/book/">book on Github</a> that I think is one of the best introductions to the statistics of genomics that exists so far.</li> <li>Hilary Parker wrote a tutorial on <a href="http://hilaryparker.com/2014/04/29/writing-an-r-package-from-scratch/">writing an R package from scratch</a> that took the twitterverse by storm. It is perfectly written for people who are just at the point of being able to create their own R package. I think it probably generated 100+ R packages just by being so easy to follow.</li> <li>Oh you’re <a href="http://www.statschat.org.nz/2014/12/10/spin-and-manipulation-in-science-reporting/">not reading StatsChat yet</a>?
<a href="http://www.statschat.org.nz/2014/12/13/blaming-mothers-again/">For real</a>?</li> <li>FiveThirtyEight launched. Despite <a href="http://fivethirtyeight.com/features/a-formula-for-decoding-health-news/">some early bumps</a> they have done some really cool stuff. Loved the recent <a href="http://fivethirtyeight.com/tag/beer-mile/">piece on the beer mile</a> and I read every piece that <a href="http://fivethirtyeight.com/contributors/emily-oster/">Emily Oster writes</a>. She does an amazing job of explaining pretty complicated statistical topics to a really broad audience.</li> <li>David Robinson’s <a href="https://github.com/dgrtwo/broom">broom package</a> is one of my absolute favorite R packages that was built this year. One of the most annoying things about R is the variety of outputs different models give and this tidy version makes it really easy to do lots of neat stuff.</li> <li>Chung and Storey <a href="http://bioinformatics.oxfordjournals.org/content/early/2014/10/21/bioinformatics.btu674.full.pdf">introduced the jackstraw</a> which is both a very clever idea and the perfect name for a method that can be used to identify variables associated with principal components in a statistically rigorous way.</li> <li>I rarely dig excel-type replacements, but the <a href="http://www.charted.co/">simplicity of charted.co</a> makes me love it. It does one thing and one thing really well.</li> <li>The <a href="http://kbroman.wordpress.com/2014/05/15/hipster-re-educating-people-who-learned-r-before-it-was-cool/">hipsteR package</a> for teaching old R dogs new tricks is one of the many cool things Karl Broman did this year. I read all of his tutorials and never cease to learn stuff. 
In related news, if I was 1/10th as organized as that dude I’d actually, you know, get stuff done.</li> <li>Whether I agree with them or not that they should be allowed to do unregulated human subjects research, statistics at tech companies, and in particular randomized experiments, have never been hotter. The boldest of the bunch is OKCupid, who writes blog posts with titles like, “<a href="http://blog.okcupid.com/index.php/we-experiment-on-human-beings/">We experiment on human beings</a>!”</li> <li>In related news, I love the <a href="https://facebook.github.io/planout/">PlanOut project</a> by the folks over at Facebook, so cool to see an open source approach to experimentation at web scale.</li> <li>No wonder <a href="http://www.cs.berkeley.edu/~jordan/">Mike Jordan</a> (no, not that <a href="http://en.wikipedia.org/wiki/Michael_Jordan">Mike Jordan</a>) is such a superstar. His <a href="http://www.reddit.com/r/MachineLearning/comments/2fxi6v/ama_michael_i_jordan">reddit AMA</a> raised my respect for him from already super high levels. First, it's awesome that he did it, and second it is amazing how well he articulates the relationship between CS and Stats.</li> <li>I’m trying to figure out a way to get Matthew Stephens to <a href="http://stephens999.github.io/blog/">write more blog posts</a>. He teased us with the <a href="http://stephens999.github.io/blog/2014/11/dscr.html">Dynamic Statistical Comparisons</a> post and then left us hanging. The people demand more Matthew.</li> <li>Di Cook also <a href="http://dicook.github.io/blog.html">started a new blog</a> in 2014. She was also <a href="https://unite.un.org/techevents/eda">part of this cool exploratory data analysis event</a> for the UN.
They have a monster program going over there at Iowa State, producing some amazing research and a bunch of students that are recognizable by one name (Yihui, Hadley, etc.).</li> <li>Love <a href="http://arxiv-web3.library.cornell.edu/pdf/1407.7819v1.pdf">this paper on sure screening of graphical models</a> out of Daniela Witten’s group at UW. It is so cool when a simple idea ends up being really well justified theoretically, it makes the world feel right.</li> <li>I’m sure this actually happened before 2014, but the Bioconductor folks are still the best open source data science project that exists in my opinion. My favorite development I started using in 2014 is the <a href="http://www.bioconductor.org/developers/how-to/git-svn/">git-subversion bridge</a> that lets me update my Bioc packages with pull requests.</li> <li>rOpenSci <a href="https://github.com/ropensci/hackathon">ran an awesome hackathon</a>. The lineup of people they invited was great and I loved the commitment to a diverse group of junior R programmers. I really, really hope they run it again.</li> <li>Dirk Eddelbuettel and Carl Boettiger continue to make bigtime contributions to R. This time it is <a href="http://dirk.eddelbuettel.com/blog/2014/10/23/">Rocker</a>, with Docker containers for R. I think this could be a reproducibility/teaching gamechanger.</li> <li>Regina Nuzzo <a href="http://www.nature.com/news/scientific-method-statistical-errors-1.14700">brought the p-value debate to the masses</a>. She is also incredible at communicating pretty complicated statistical ideas to a broad audience and I’m looking forward to more stats pieces by her in the top journals.</li> <li>Barbara Engelhardt keeps <a href="http://arxiv.org/abs/1411.2698">rocking out great papers</a>. But she is also one of the best AE’s I have ever had handle a paper for me at PeerJ. Super efficient, super fair, and super demanding. 
People don’t get enough credit for being amazing in the peer review process and she deserves it.</li> <li>Ben Goldacre and Hans Rosling continue to be two of the best advocates for statistics and the statistical discipline - I’m not sure either claims the title of statistician but they do a great job anyway. <a href="http://news.sciencemag.org/africa/2014/12/star-statistician-hans-rosling-takes-ebola?rss=1&amp;utm_source=dlvr.it&amp;utm_medium=twitter">This piece</a> about Professor Rosling in Science gives some idea about the impact a statistician can have on the most current problems in public health. Meanwhile, I think Dr. Goldacre <a href="http://www.bmj.com/content/348/bmj.g3306/rr/759401">does a great job</a> of explaining how personalized medicine is an information science in this piece on statins in the BMJ.</li> <li>Michael Lopez’s <a href="http://statsbylopez.com/2014/07/23/so-you-want-a-graduate-degree-in-statistics/">series of posts</a> on graduate school in statistics should be 100% required reading for anyone considering graduate school in statistics. He really nails it.</li> <li> Trey Causey has an equally awesome <a href="http://treycausey.com/getting_started.html">Getting Started in Data Science</a> post that I read about 10 times.</li> <li>Drop everything and <a href="http://www.pgbovine.net/writings.htm">go read all of Philip Guo’s posts</a>. 
Especially <a href="http://www.pgbovine.net/academia-industry-junior-employee.htm">this one</a> about industry versus academia or this one on <a href="http://www.pgbovine.net/practical-reason-to-pursue-PhD.htm">the practical reason to do a PhD</a>.</li> <li>The top new Twitter feed of 2014 has to be <a href="https://twitter.com/ResearchMark">@ResearchMark</a> (incidentally I’m still mourning the disappearance of <a href="https://twitter.com/STATSHULK">@STATSHULK</a>).</li> <li>Stephanie Hicks’ blog <a href="http://statisticalrecipes.blogspot.com/">combines recipes for delicious treats and statistics</a>, also I thought she had <a href="http://statisticalrecipes.blogspot.com/2014/05/inaugural-women-in-statistics-2014.html">a great summary</a> of the Women in Stats (<a href="https://twitter.com/search?q=%23WiS2014%20&amp;src=typd">#WiS2014</a>) conference.</li> <li>Emma Pierson is a Rhodes Scholar who wrote for 538, 23andMe, and a bunch of other major outlets as an undergrad. Her blog, <a href="http://obsessionwithregression.blogspot.com/">obsessionwithregression.blogspot.com</a> is another must read. <a href="http://qz.com/302616/see-how-red-tweeters-and-blue-tweeters-ignore-each-other-on-ferguson/">Here is an example</a> of her awesome work on how different communities ignored each other on Twitter during the Ferguson protests.</li> <li>The Rstudio crowd continues to be on fire. I think they are a huge part of the reason that R is gaining momentum. 
It wouldn’t be possible to list all their contributions (or it would be an Rstudio exclusive list) but I really like <a href="http://blog.rstudio.org/2014/07/22/announcing-packrat-v0-4/">Packrat</a> and <a href="http://blog.rstudio.org/2014/06/18/r-markdown-v2/">R markdown v2</a>.</li> <li>Another huge reason for the movement with R has been the outreach and development efforts of the <a href="http://www.revolutionanalytics.com/">Revolution Analytics folks.</a> The <a href="http://blog.revolutionanalytics.com/">Revolutions blog</a> has been a must read this year.</li> <li>Julian Wolfson and Joe Koopmeiners at University of Minnesota are straight up gamers. <a href="http://sph.umn.edu/site/docs/biostats/OpenHouseFlyer2014.pdf">They live streamed their recruiting event</a> this year. One way I judge good ideas is by how mad I am I didn’t think of it and this one had me seeing bright red.</li> <li>This is <a href="http://jmlr.org/papers/volume15/delgado14a/delgado14a.pdf">just an awesome paper</a> comparing lots of machine learning algorithms on lots of data sets. Random forests wins and this is a nice update of one of my favorite papers of all time: <a href="http://arxiv.org/pdf/math/0606441.pdf">Classifier technology and the illusion of progress</a>.</li> <li><a href="http://www.r-statistics.com/2014/08/simpler-r-coding-with-pipes-the-present-and-future-of-the-magrittr-package/">Pipes in R</a>! This stuff is for real. 
The piping functionality created by Stefan Milton and Hadley is one of the few inventions over the last several years that immediately changed whole workflows for me.</li> </ol> <p> </p> <p>I’ll let <a href="https://twitter.com/ResearchMark">@ResearchMark</a> take us out:</p> <p><a href="https://pbs.twimg.com/media/B2NC5c7IYAAt_j-.jpg"><img class="aligncenter" src="https://pbs.twimg.com/media/B2NC5c7IYAAt_j-.jpg" alt="" width="308" height="308" /></a></p> Sunday data/statistics link roundup (12/14/14) 2014-12-14T12:54:50+00:00 http://simplystats.github.io3687 <ol> <li><a href="http://www.motherjones.com/kevin-drum/2014/12/economists-are-almost-inhumanly-impartial">This piece</a> suggests that economists are impartial when it comes to their liberal/conservative views. That being said, I’m not sure the regression line says what they think it does, particularly if you pay attention to the variance around the line (via Rafa).</li> <li>I am digging the simplicity of <a href="http://www.charted.co/">charted.co</a> from the folks at Medium. But I worry about spurious correlations everywhere. I guess I should just let that ship sail.</li> <li>FiveThirtyEight <a href="http://fivethirtyeight.com/features/beer-mile-chug-run-repeat/">does a rundown of the beer mile</a>. If they set up a data crunchers' beer mile, we are in.</li> <li>I love it when Thomas Lumley interviews himself about silly research studies and particularly their associated press releases. I can actually hear his voice in my head when I read them. This time the <a href="http://www.statschat.org.nz/2014/12/13/blaming-mothers-again/">lipstick/IQ silliness gets Lumleyed</a>.</li> <li><a href="http://fivethirtyeight.com/datalab/michael-jordan-kobe-bryant/">Jordan was better than Kobe</a>. Surprise.
Plus <a href="http://simplystatistics.org/2014/12/12/kobe-data-says-stop-blaming-your-teammates/">Rafa always takes the Kobe bait</a>.</li> <li><a href="http://mathesaurus.sourceforge.net/matlab-python-xref.pdf">Matlab/Python/R translation cheat sheet</a> (via Stephanie H.).</li> <li>If I’ve said it once, I’ve said it a million times, statistical thinking is now as important as reading and writing. <a href="http://www.bostonglobe.com/metro/2014/12/14/oversold-and-unregulated-flawed-prenatal-tests-leading-abortions-healthy-fetuses/aKFAOCP5N0Kr8S1HirL7EN/story.html">The latest example</a> is that parents not understanding the difference between sensitivity and the predictive value of a positive test may be leading to unnecessary abortions (via Dan M./Rafa).</li> </ol> Kobe, data says stop blaming your teammates 2014-12-12T10:00:20+00:00 http://simplystats.github.io3663 <p>This year, Kobe leads the league in missed shots (<a href="http://ftw.usatoday.com/2014/11/kobe-bryant-lakers-shot-stats">by a lot</a>), has an abysmal FG% of 39 and his team plays better <a href="http://bleacherreport.com/articles/2292515-how-much-blame-does-kobe-bryant-deserve-for-los-angeles-lakers-pathetic-start">when he is on the bench</a>. Yet he <a href="http://espn.go.com/los-angeles/nba/story/_/id/12016979/los-angeles-lakers-star-kobe-bryant-critical-teammates-heated-scrimmage">blames his teammates</a> for the Lakers’ 6-16 record. Below is a plot showing that 2014 is not the first time the Lakers are mediocre during Kobe’s tenure. It shows the percentage points above .500 per season with the Shaq and twin towers eras highlighted.
I include the same plot for Lebron as a control.</p> <p><a href="http://simplystatistics.org/wp-content/uploads/2014/12/Rplot.png"><img class="alignnone size-large wp-image-3679" src="http://simplystatistics.org/wp-content/uploads/2014/12/Rplot-1024x511.png" alt="Rplot" width="525" srcset="http://simplystatistics.org/wp-content/uploads/2014/12/Rplot-1024x511.png 1024w, http://simplystatistics.org/wp-content/uploads/2014/12/Rplot.png 1106w" sizes="(max-width: 1024px) 100vw, 1024px" /></a></p> <p>So stop blaming your teammates!</p> <p>And here is my <a href="http://rafalab.jhsph.edu/simplystats/kobe2014.R">hastily written code</a> (don’t judge me!).</p> <p> </p> <p> </p> <pre></pre> Genéticamente, no hay tal cosa como la raza puertorriqueña 2014-12-08T09:09:59+00:00 http://simplystats.github.io3633 <p><em>Editor’s note: Last week the Latin American media picked up a blog post with the eye-catching title “<a href="http://liorpachter.wordpress.com/2014/12/02/the-perfect-human-is-puerto-rican/">The perfect human is Puerto Rican</a>”. More attention appears to have been given to the title than the post itself. The coverage and comments on social media have demonstrated the need for scientific education on the topic of genetics and race. Here I will try to explain, in layman’s terms, why the interpretations I read in the main Puerto Rican paper is scientifically incorrect and somewhat concerning. The post is in Spanish.</em></p> <p>En un artículo reciente titulado “<a href="[http://www.elnuevodia.com/serhumanoperfectoseriapuertorriqueno-1903858.html">Ser humano perfecto sería puertorriqueño</a>”, El Nuevo Día resumió una entrada en el blog (erróneamente llamado un estudio) del matemático Lior Pachter. El autor del blog, intentando ridiculizar comentarios racistas que escuchó decir a James Watson, describe un experimento mental en el cual encuentra que el humano “perfecto” (las comilla son importantes), de existir, pertenecería a un grupo genéticamente mezclado. 
De las personas estudiadas, la más genéticamente cercana a su humano “perfecto” resultó ser una mujer puertorriqueña. La motivación de este ejercicio era ridiculizar la idea de que una raza puede ser superior a otra. El Nuevo Día parece no captar este punto y nos dice que “el experto concluyó que en todo caso no es de sorprenderse que la persona más cercana a tal perfección sería una puertorriqueña, debido a la combinación de buenos genes que tiene la raza puertorriqueña.” Aquí describo por qué esta interpretación es científicamente errada.</p> <p><strong>¿Qué es el genoma?</strong></p> <p>El genoma humano codifica (en moléculas de <a href="http://es.wikipedia.org/wiki/%C3%81cido_desoxirribonucleico">ADN</a>) la información genética necesaria para nuestro desarrollo biológico. Podemos pensar en el genoma como dos series de 3,000,000,000 letras (A, T, C o G) concatenadas. Una la recibimos de nuestro padre y la otra de nuestra madre. Distintos pedazos (los genes) codifican proteínas necesarias para las miles de funciones que cumplen nuestras células y que conllevan a algunas de nuestras características físicas. Con unas pocas excepciones, todas las células en nuestro cuerpo contienen una copia exacta de estas dos series de letras. El esperma y el huevo tienen sólo una serie de letras, una mezcla de las otras dos. Cuando se unen el esperma y el huevo, la nueva célula, el cigoto, une las dos series y es así que heredamos características de cada progenitor.</p> <p><strong>¿Qué es la variación genética?</strong></p> <p>Si todos venimos del primer humano, ¿cómo entonces es que somos diferentes? Aunque es muy raro, estas letras a veces mutan aleatoriamente. Por ejemplo, una C puede cambiar a una T. A través de cientos de miles de años, suficientes mutaciones han ocurrido para crear variación entre los humanos.
La teoría de selección natural nos dice que si esta mutación confiere una ventaja para la supervivencia, el que la posee tiene más probabilidad de pasarla a sus descendientes. Por ejemplo, en Europa la piel clara es más ventajosa, por su habilidad de absorber vitamina D cuando hay poco sol, que en África Occidental donde la melanina en la piel oscura protege del sol intenso. Se estima que las diferencias entre los humanos se pueden encontrar en por lo menos 10 millones de las 3 mil millones de letras (noten que es menos de 1%).</p> <p><strong>Genéticamente, ¿qué es una “raza”?</strong></p> <p>Esta es una pregunta controversial. Lo que no es controversial es que si comparamos la serie de letras de los europeos del norte con los africanos occidentales o con los indígenas de las Américas, encontramos pedazos del código que son únicos a cada región. Si estudiamos las partes del código que cambian entre humanos, fácilmente podemos distinguir los tres grupos. Esto no nos debe sorprender dado que, por ejemplo, la diferencia en el color de ojos y la pigmentación de la piel se codifica con distintas letras en los genes asociados con estas características. En este sentido podríamos crear una definición genética de “raza” basada en las letras que distinguen a estos grupos. Ahora bien, ¿podemos hacer lo mismo para distinguir un puertorriqueño de un dominicano? ¿Podemos crear una definición genética que incluya a Carlos Delgado y a Mónica Puig, pero no a Robinson Canó y Juan Luis Guerra?
La literatura científica nos dice que no.</p> <p><a href="http://simplystatistics.org/wp-content/uploads/2014/12/PCAfinal.png"><img class="alignnone wp-image-3636 size-large" src="http://simplystatistics.org/wp-content/uploads/2014/12/PCAfinal-914x1024.png" alt="PCAfinal" width="411" height="461" srcset="http://simplystatistics.org/wp-content/uploads/2014/12/PCAfinal-267x300.png 267w, http://simplystatistics.org/wp-content/uploads/2014/12/PCAfinal-914x1024.png 914w, http://simplystatistics.org/wp-content/uploads/2014/12/PCAfinal-178x200.png 178w" sizes="(max-width: 411px) 100vw, 411px" /></a></p> <p>En una <a href="http://www.plosgenetics.org/article/info%3Adoi%2F10.1371%2Fjournal.pgen.1003925">serie</a> de <a href="http://www.pnas.org/content/107/Supplement_2/8954">artículos</a>, el genetista Carlos Bustamante y sus colegas han estudiado los genomas de personas de varios grupos étnicos. Ellos definen una distancia genética que resumen con dos dimensiones en la gráfica arriba. Cada punto es una persona y el color representa su grupo. Noten los tres extremos de la gráfica con muchos puntos del mismo color amontonados. Estos son los europeos blancos (puntos rojos), africanos occidentales (verde) e indígenas americanos (azul). Los puntos más regados en el medio son las poblaciones mezcladas. Entre los europeos y los indígenas vemos a los mexicanos y entre los europeos y africanos a los afroamericanos. Los puertorriqueños son los puntos anaranjados. He resaltado con números a tres de ellos. El <strong>1</strong> está cerca del supuesto humano “perfecto”. El <strong>2</strong> es indistinguible de un europeo y el <strong>3</strong> es indistinguible de un afroamericano. Los demás cubrimos un espectro amplio. También resalto con el número <strong>4</strong> a un dominicano que está tan cerca a la “perfección” como la puertorriqueña. La observación principal es que hay mucha variación genética entre los puertorriqueños.
En los que Bustamante estudió, la ascendencia africana varía de 5-60%, la europea de 35-95% y la taína de 0-20%. ¿Cómo entonces podemos hablar de una “raza” puertorriqueña cuando nuestros genomas abarcan un espacio tan grande que puede incluir, entre otros, europeos, afroamericanos y dominicanos?</p> <p><strong>¿Qué son los genes “buenos”?</strong></p> <p>Algunas mutaciones son letales. Otras resultan en cambios a proteínas que causan enfermedades como la <a href="http://es.wikipedia.org/wiki/Fibrosis_qu%C3%ADstica">fibrosis quística</a> y requieren que ambos padres tengan la mutación. Por lo tanto, la mezcla de genomas diferentes disminuye las probabilidades de estas enfermedades. Recientemente una serie de estudios ha encontrado ventajas de algunas combinaciones de letras relacionadas a enfermedades comunes como la hipertensión. Una mezcla genética que evita tener dos copias de estos genes con más riesgo puede ser ventajosa. Pero las supuestas ventajas son pequeñísimas y específicas a enfermedades, no a otras características que asociamos con la “perfección”. El concepto de “genes buenos” es un vestigio de la <a href="http://en.wikipedia.org/wiki/Eugenics">eugenesia</a>.</p> <p>A pesar de nuestros problemas sociales y económicos actuales, Puerto Rico tiene mucho de lo cual estar orgulloso. En particular, producimos buenísimos ingenieros, atletas y músicos. Atribuir su éxito a “genes buenos” de nuestra “raza” no sólo es un disparate científico, sino una falta de respeto a estos individuos que a través del trabajo duro, la disciplina y el esmero han logrado lo que han logrado. Si quieren saber si Puerto Rico tuvo algo que ver con el éxito de estos individuos, pregúntenle a un historiador, un antropólogo o un sociólogo y no a un genetista.
Ahora, si quieren aprender del potencial de estudiar genomas para mejorar tratamientos médicos y la importancia de estudiar una diversidad de individuos, un genetista tendrá mucho que compartir.</p> Sunday data/statistics link roundup (12/7/14) 2014-12-07T10:00:43+00:00 http://simplystats.github.io3631 <ol> <li><a href="http://www.apa.org/news/press/releases/2014/11/airport-security.aspx">A randomized controlled trial</a> shows that using conversation to detect suspicious behavior is much more effective than just monitoring body language (via Ann L. on Twitter). This comes as a crushing blow to those of us who enjoyed the now-cancelled <a href="http://en.wikipedia.org/wiki/Lie_to_Me">Lie to Me</a> and assumed it was all real.</li> <li>Check out this awesome <a href="http://map.ipviking.com/">real-time visualization</a> of different types of network attacks. Rafa says if you watch long enough you will almost certainly observe a “storm” of attacks. A cool student project would be modeling the distribution of these attacks if you could collect the data (via David S.).</li> <li><a href="http://goodstrat.com/2014/12/03/consider-this-did-big-data-kill-the-statistician/">Consider this: Did Big Data Kill the Statistician?</a> I understand the sentiment, that statistical thinking and applied statistics have been around a long time and have <a href="http://simplystatistics.org/2014/05/22/10-things-statistics-taught-us-about-big-data-analysis/">produced some good ideas</a>. On the other hand, there is definitely a large group of statisticians who aren’t willing to expand their thinking beyond a really narrow set of ideas (via Rafa).</li> <li><a href="http://www.huffingtonpost.com/2014/12/03/gangnam-style-youtube_n_6261332.html">Gangnam Style viewership creates integers too big for YouTube</a> (via Rafa)</li> <li>A couple of interviews worth reading: <a href="http://simplystatistics.org/2014/12/05/interview-with-cole-trapnell-of-uw-genome-sciences/">our interview with Cole Trapnell</a> and <a href="http://samsiatrtp.wordpress.com/2014/11/18/samsi-postdoctoral-profile-jyotishka-datta/">SAMSI’s with Jyotishka Datta</a> (via Jamie N.)</li> <li> <a href="http://www.theguardian.com/technology/2014/dec/05/when-data-gets-creepy-secrets-were-giving-away">A piece on the secrets we don’t know we are giving away</a> through giving our data to [companies/the government/the internet].</li> </ol> Interview with Cole Trapnell of UW Genome Sciences 2014-12-05T12:06:57+00:00 http://simplystats.github.io3623 <div id="mO" class=""> <div class="tNsA5e-nUpftc nUpftc ja xpv2f"> <div class="pf"> <div class="nXx3q"> <div class="cA"> <div class="cl ac"> <div class="yDSKFc viy5Tb"> <div class="rt"> <div class="DsPmj"> <div class="scroll-list-section-body scroll-list-section-body-0"> <div class="scroll-list-item top-level-item scroll-list-item-open scroll-list-item-highlighted" tabindex="0" data-item-id="Bs#gmail:thread-f:1463549268702220125" data-item-id-qs="qsBs-gmail-thread-f-1463549268702220125-0"> <div class="ah V T qX V-M"> <div class="af qX af-M"> <div class="fB qX"> <div class="ag qX" tabindex="0" data-msg-id="Bs#msg-f:1463577765776057801" data-msg-id-qs="qsBs-msg-f-1463577765776057801"> <div class="nI qX"> <div class="gm qX"> <div class="bK xJNT8d"> <div> <div class="nD"> <blockquote> <div dir="ltr"> <div> <a href="http://simplystatistics.org/wp-content/uploads/2014/12/cole_cropped.jpg"><img class="aligncenter wp-image-3624"
src="http://simplystatistics.org/wp-content/uploads/2014/12/cole_cropped-278x300.jpg" alt="cole_cropped" width="186" height="200" /></a> </div> </div> </blockquote> <div dir="ltr"> </div> <div dir="ltr"> <div style="text-align: left;"> <em><a href="http://cole-trapnell-lab.github.io/">Cole Trapnell</a> is an Assistant Professor of Genome Sciences at the University of Washington. He is the developer of several widely used genomics tools, including TopHat, Cufflinks, and Monocle. His lab at UW studies cell differentiation, reprogramming, and other transitions between stable or metastable cellular states using a combination of computational and experimental techniques. We talked to Cole as part of our <a href="http://simplystatistics.org/interviews/">ongoing interview series</a> with exciting junior data scientists. </em> </div> <div style="text-align: left;"> </div> <div style="text-align: left;"> </div> <div style="text-align: left;"> <strong>SS: Do you consider yourself a computer scientist, a statistician, a computational biologist, or something else?</strong> </div> </div> </div> </div> <div> <div class="F3hlO"> <div> <p> CT: The questions that get me up and out of bed in the morning the fastest are biology questions. I work on cell differentiation - I want to know how to define the state of a cell and how to predict transitions between states. That said, my approach to these questions so far has been to use new technologies to look at previously hard-to-access aspects of gene regulation. For example, I’ve used RNA-Seq to look beyond gene expression into finer layers of regulation like splicing. Analyzing sequencing experiments often involves some pretty non-trivial math, computer science, and statistics. These data sets are huge, so you need fast algorithms to even look at them.
They all involve transforming reads into a useful readout of biology, and the technical and biological variability in that transformation needs to be understood and controlled for, so you see cool mathematical and statistical problems all the time. So I guess you could say that I’m a biologist, both experimental and computational. I have to do some computer science and statistics in order to do biology. </p> <div> </div> </div> </div> </div> <div> <div class="nD"> <div> <div> <div> <div> <div> <div dir="ltr"> <div> <strong>SS: You got a Ph.D. in computer science but have spent the last several years in a wet lab learning to be a bench biologist - why did you make that choice?</strong> </div> </div> </div> </div> </div> </div> </div> </div> </div> <div> <div class="F3hlO"> <div> <div> <p> CT: Three reasons, mainly: </p> <p> 1) I thought learning to do bench work would make me a better overall scientist. It has, in many ways, I think. It’s fundamentally changed the way I approach the questions I work on, but it’s also made me more effective in lots of tiny ways. I remember when I first got to John Rinn’s lab, we needed some way to track lots of libraries and other material. I came up with some scheme where each library would get an 8-digit alphanumeric code generated by a hash function or something like that (we’d never have to worry about collisions!). My lab mate handed me a marker and said, “OK, write that on the side of these 12 microcentrifuge tubes”. I threw out my scheme and came up with something like “JR_1”, “JR_2”, etc. That’s a silly example, but I mention it because it reminds me of how completely clueless I was about where biological data really comes from. </p> <p> 2) I wanted to establish an independent, long-term research program investigating differentiation, and I didn’t want to have to rely on collaborators to generate data.
I knew at the end of grad school that I wanted to have my own wet lab, and I doubted that anyone would trust me with that kind of investment without doing some formal training. Despite the now-common recognition by experimental biologists that analysis is incredibly important, there’s still a perception out there that computational biologists aren’t “real biologists”, and that computational folks are useful tools, but not the drivers of the intellectual agenda. That’s of course not true, but I didn’t want to fight the stigma. </p> <p> 3) It sounded fun. I had one or two friends who had followed the “dry to wet” training trajectory, and they were having a blast. Seeing a result live under the microscope is satisfying in a way that I’ve rarely experienced looking at a computer screen. </p> <div> </div> </div> </div> </div> </div> <div> <strong>SS: Do you plan to have both a wet lab and a dry lab when you start your new group? </strong> </div> <div> <div class="F3hlO"> <div> <div> <div> <p> CT: Yes. I’m going to be starting my lab at the University of Washington in the Department of Genome Sciences this summer, and it’s going to be a roughly 50/50 operation, I hope. Many of the labs there are set up that way, and there’s a real culture of valuing both sides. As a postdoc, I’ve been extremely fortunate to collaborate with grad students and postdocs who were trained as cell or molecular biologists but wanted to learn sequencing analysis. We’d train each other, often at great cost in terms of time spent solving “somebody else’s problem”. I’m going to do my best to create an environment like that, the way John did for me and my lab mates. </p> <div> </div> <div> <strong>SS: You are frequently on the forefront of new genomic technologies. As data sets get larger and more complicated, how do we ensure reproducibility and replicability of computational results? 
</strong> </div> </div> </div> </div> </div> </div> <div> <div class="F3hlO"> <div> <div> <div> <div> <div> <p> CT: That’s a good question, and I don’t really have a good answer. You’ve talked a lot on this blog about the importance of making science more reproducible and how journals could change to make it so. I agree wholeheartedly with a lot of what you’ve said. I like the idea of “papers as packages”, but I don’t see it happening soon, because it’s a huge amount of extra work and there’s not a big incentive to do so. Doing so might make it easier to be attacked, so there could even be a disincentive! Scientists do well when they publish papers and those papers are cited widely. We have lots of ways to quantify “impact” - h-index, total citation count, how many times your paper is shared via twitter on a given day, etc. (Say what you want about whether these are meaningful measures). </p> <p> We don’t have a good way to track who’s right and who’s wrong, or whose results are reproducible and whose aren’t, short of full-blown paper retraction. Most papers aren’t even checked in a serious way. Worse, the papers that are checked are the ones that a lot of people see - few people spend precious time following up on tangential observations in low-circulation journals. So there’s actually an incentive to publish “controversial” results in highly visible journals because at least you’re getting attention. </p> <p> Maybe we need a Yelp for papers and data sets? One where in order to dispute the reproducibility of the analysis, you’d have to provide the code *you* ran to generate a contradictory result? There needs to be a genuine and tangible *reward* (read: funding and career advancement) for putting up an analysis that others can dive into, verify, extend, and learn from. 
</p> <p> In any case, I think it’s worth noting that reproducibility is not a problem unique to computation - experimentalists have a hard time reproducing results they got last week, much less results that came from some other lab! There are all kinds of harmless reasons for that. Experiments are hard. Reagents come in bad lots. You had too much coffee that morning and can’t steady your pipet hand to save your life. But I worry a bit that we could spend a lot of effort making our analysis totally automated and perfectly reproducible and still be faced with the same problem. </p> <div> </div> <div> <strong>SS: What are the interesting statistical challenges in single-cell RNA-sequencing? </strong> </div> </div> </div> </div> </div> </div> </div> </div> <div> <div class="F3hlO"> <div> <div> <div> <div> <div> <div> <p> CT: </p> <p> Oh man, there are many. Here are a few: </p> <p> 1) There are some very interesting questions about variability in expression across cells, or within one cell across time. There’s clearly a lot of variability in the expression level of a given gene across cells. But there’s really no way right now to take “replicate” measurements of a single cell. What would that mean? With current technology, to make an RNA-Seq library from a cell, you have to lyse it. So that’s it for that cell. Even if you had a non-invasive way to measure the whole transcriptome, the cell is a living machine that’s always changing in ways large and small, even in culture. Would you consider repeated measurements “replicates”? Furthermore, how can you say that two different cells are “replicate” measurements of a single, defined cell state? Do such states even really exist? </p> <p> For that matter, we don’t have a good way of assessing how much variability stems from technical sources as opposed to biological sources. 
One common way of assessing technical variability is to spike some alien transcripts at known concentrations into purified RNA before making the library, so you can see how variable your endpoint measurements are for those alien transcripts. But to do that for single-cell RNA-Seq, we’d have to actually spike transcripts *into* the nucleus of a cell before we lyse it and put it through the library prep process. Just doping it into the lysate’s not good enough, because the lysis itself might (and likely does) destroy a substantial fraction of the endogenous RNA in the cell. So there are some real barriers to overcome in order to get a handle on how much variability is really biological. </p> <p> 2) A second challenge is writing down what a biological process looks like at single-cell resolution. I mean we want to write down a model that predicts the expression levels of each gene in a cell as it goes through some biological process. We want to be able to say this gene comes on first, then this one, then these genes, and so on. In genomics up until now, we’ve been in the situation where we are measuring many variables (P) from few measurements (N). That is, N &lt;&lt; P, typically, which has made this problem extremely difficult. With single-cell RNA-Seq, that may no longer be the case. We can already easily capture hundreds of cells, and thousands of cells per capture is just around the corner, so soon, N will be close to P, and maybe someday greater. </p> <p> Assume for the moment that we are capturing cells that are either resting at or transiting between well-defined states. You can think of each cell as a point in a high-dimensional geometric space, where each gene is a different dimension. We’d like to find those equilibrium states and figure out which genes are correlated with which other genes. Even better, we’d like to study the transitions between states and identify the genes that drive them. 
The curse of dimensionality is always going to be a problem (we’re not likely to capture millions or billions of cells anytime soon), but maybe we have enough data to make some progress. There’s interesting literature out there for tackling problems at this scale, but to my knowledge these methods haven’t yet been widely applied in biology. I guess you can think of cell differentiation viewed at whole-transcriptome, single-cell resolution as one giant manifold learning problem. Same goes for oncogenesis, tissue homeostasis, reprogramming, and on and on. It’s going to be very exciting to see the convergence of large scale statistical machine learning and cell biology. </p> <p> <strong>SS: If you could do it again would you do computational training then wet lab training or the other way around? </strong> </p> </div> </div> </div> </div> </div> </div> </div> </div> <div> <div class="F3hlO"> <p> CT: I’m happy with how I did things, but I’ve seen folks go the other direction very successfully. My labmates Loyal Goff and Dave Hendrickson started out as molecular biologists, but they’re wizards at the command line now. </p> <div> </div> </div> </div> <div> <div class="nD"> <div> <div> <div> <div> <div> <div> <div> <div> <div dir="ltr"> <div> <strong>SS: What is your programming language of choice? </strong> </div> </div> </div> </div> </div> </div> </div> </div> </div> </div> </div> </div> <div> <div class="F3hlO"> <div> <div> <div> <div> <div> <p> CT: Oh, I’d say I hate them all equally 😉 </p> <p> Just kidding. I’ll always love C++. I work in R a lot these days, as my work has veered away from developing tools for other people towards analyzing data I’ve generated. I still find lots of things about R to be very painful, but ggplot2, plyr, and a handful of other godsend packages make the juice worth the squeeze. 
</p> </div> </div> </div> </div> </div> </div> </div> </div> </div> </div> </div> </div> </div> </div> </div> </div> </div> </div> </div> </div> </div> </div> </div> </div> </div> Repost: A deterministic statistical machine 2014-12-04T13:13:45+00:00 http://simplystats.github.io3619 <p><em>Editor’s note: This is a repost of our previous post about deterministic statistical machines. It is inspired by the <a href="https://gigaom.com/2014/12/02/google-is-funding-an-artificial-intelligence-for-data-science/">recent announcement</a> that the <a href="http://www.automaticstatistician.com/">Automatic Statistician </a>received funding from Google. In 2012 we also applied to Google for a small research award to study this same problem, but didn’t get it. In the interest of extreme openness like Titus Brown or Ethan White, <a href="https://docs.google.com/document/d/1ERL40_LYt4U_vYx2rUxPvIhCrxnpld3dcrtEiCeWn8U/edit">here is our application</a> we submitted to Google. I showed this to a friend who told me the reason we didn’t get it is because our proposal was missing two words: “artificial”, “intelligence”. </em></p> <p>As Roger pointed out the most recent batch of Y Combinator startups included a bunch of <a href="http://simplystatistics.org/post/29964925728/data-startups-from-y-combinator-demo-day" target="_blank">data-focused</a> companies. One of these companies, <a href="https://www.statwing.com/" target="_blank">StatWing</a>, is a web-based tool for data analysis that looks like an improvement on SPSS with more plain text, more visualization, and a lot of the technical statistical details “under the hood”. 
I first read about StatWing on TechCrunch, under the title <a href="http://techcrunch.com/2012/08/16/how-statwing-makes-it-easier-to-ask-questions-about-data-so-you-dont-have-to-hire-a-statistical-wizard/" target="_blank">“How Statwing Makes It Easier To Ask Questions About Data So You Don’t Have To Hire a Statistical Wizard”</a>.</p> <p>StatWing looks super user-friendly and the idea of democratizing statistical analysis so more people can access these ideas is something that appeals to me. But, as one of the aforementioned statistical wizards, this had me freaked out for a minute. Once I looked at the software though, I realized it suffers from the same problem that most “user-friendly” statistical software suffers from. It makes it really easy to screw up a data analysis. It will tell you when something is significant, and if you don’t like that it isn’t, you can keep slicing and dicing the data until it is. The key issue behind getting insight from data is knowing when you are fooling yourself with confounders, or small effect sizes, or overfitting. StatWing looks like an improvement on the UI experience of data analysis, but it won’t prevent false positives that plague science and cost businesses big money.</p> <p>So I started thinking about what kind of software would prevent these sorts of problems while still being accessible to a big audience. My idea is a “deterministic statistical machine”. Here is how it works: you input a data set and then specify the question you are asking (is variable Y related to variable X? can I predict Z from W?) then, depending on your question, it uses a deterministic set of methods to analyze the data. Say regression for inference, linear discriminant analysis for prediction, etc. But the method is fixed and deterministic for each question. It also performs a pre-specified set of checks for outliers, confounders, missing data, <a href="http://www.nature.com/news/the-data-detective-1.10937" target="_blank">maybe even data fudging</a>. 
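Sketched in R, the fixed question-to-method dispatch might look something like this (the function name, the check set, and the assumption that the outcome column is named `y` are all illustrative, not an existing tool):

```r
# Hypothetical sketch of a deterministic statistical machine dispatcher.
dsm <- function(data, question = c("inference", "prediction")) {
  question <- match.arg(question)
  # Pre-specified checks, always run the same way
  num <- data[sapply(data, is.numeric)]
  checks <- list(
    n_missing  = sum(is.na(data)),
    n_outliers = sum(abs(scale(num)) > 3, na.rm = TRUE)
  )
  # One fixed method per question type; no slicing and dicing allowed
  fit <- switch(question,
    inference  = lm(y ~ ., data = data),        # always regression for inference
    prediction = MASS::lda(y ~ ., data = data)  # always LDA for prediction
  )
  list(checks = checks, fit = fit)
}
```

A real version would append report generation and automatic publication, but the point is the dispatch table: the analyst picks the question, never the method.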
It generates a report with a markdown tool and then immediately publishes the result to <a href="http://figshare.com/" target="_blank">figshare</a>.</p> <p>The advantage is that people can get their data-related questions answered using a standard tool. It does a lot of the “heavy lifting” in checking for potential problems and produces nice reports. But it is a deterministic algorithm for analysis, so overfitting, fudging the analysis, etc. are harder. Publishing all reports to figshare makes it even harder to fudge the data. If you fiddle with the data to try to get a result you want, there will be a “multiple testing paper trail” following you around.</p> <p>The DSM should be a web service that is easy to use. Anybody want to build it? Any suggestions for how to do it better?</p> Thinking Like a Statistician: Social Media and the ‘Spiral of Silence’ 2014-12-02T10:00:39+00:00 http://simplystats.github.io3584 <p>A few months ago the Pew Research Internet Project published a <a href="http://www.pewinternet.org/2014/08/26/social-media-and-the-spiral-of-silence/">paper</a> on social media and the ‘<a href="http://en.wikipedia.org/wiki/Spiral_of_silence">spiral of silence</a>’. Their main finding is that people are less likely to discuss a controversial topic on social media than in person. Unlike others, I did not find this result surprising, perhaps because I think like a statistician.</p> <p>Shares or retweets of published opinions on controversial political topics - religion, abortion rights, gender inequality, immigration, income inequality, race relations, the role of government, foreign policy, education, climate change - are ubiquitous in social media. These are usually accompanied by passionate statements of strong support or outraged disagreement. Because these are posted by people we elect to follow, we generally agree with what we see on our feeds. 
Here is a statistical explanation for why many keep silent when they disagree.</p> <p>We will summarize the <em>political view</em> of an individual as their opinions on the 10 topics listed above. For simplicity I will assume these opinions can be quantified with a left (liberal) to right (conservative) scale. Every individual can therefore be defined by a point in a 10 dimensional space. Once quantified in this way, we can define a political distance between any pair of individuals. In the American landscape there are two clear clusters which I will call the Fox News and MSNBC clusters. As seen in the illustration below, the cluster centers are very far from each other and individuals within the clusters are very close. Each cluster has a very low opinion of the other. A glance through a social media feed will quickly reveal individuals squarely inside one of these clusters. Members of the clusters fearlessly post their opinions on controversial topics as this behavior is rewarded by likes, retweets or supportive comments from others in their cluster. Based on the uniformity of opinion inferred from the comments, one would think that everybody is in one of these two groups. But this is obviously not the case.</p> <p><a href="http://simplystatistics.org/wp-content/uploads/2014/12/plotforpost.png"><img class="aligncenter wp-image-3602 size-large" src="http://simplystatistics.org/wp-content/uploads/2014/12/plotforpost-1024x1007.png" alt="plotforpost" width="396" height="389" srcset="http://simplystatistics.org/wp-content/uploads/2014/12/plotforpost-300x295.png 300w, http://simplystatistics.org/wp-content/uploads/2014/12/plotforpost-1024x1007.png 1024w" sizes="(max-width: 396px) 100vw, 396px" /></a></p> <p>In the illustration above I include an example of an individual (the green dot) that is outside the two clusters. Although not shown, there are many of these <em>independent thinkers</em>. 
In our example, this individual is very close to the MSNBC cluster, but not in it. The controversial topic posts in this person’s feed are mostly posted by those in the cluster of closest proximity, and the spiral of silence is due in part to the fact that independent thinkers are uniformly averse to disagreeing publicly. For the mathematical explanation of why, we introduce the concept of a <a href="http://en.wikipedia.org/wiki/Projection_%28mathematics%29"><em>projection</em></a>.</p> <p>In mathematics, a projection can map a multidimensional point to a smaller, simpler, subset. In our illustration, the independent thinker is very close to the MSNBC cluster on all dimensions except one. To use education as an example, let’s say this person supports <a href="http://www.foxnews.com/opinion/2014/10/10/florida-senator-why-am-fighting-for-school-choice-lifeline-for-poor-kids/">school choice</a>. As seen in the illustration, in the projection to the education dimension, that mostly liberal person is squarely in the Fox News cluster. Now imagine that a friend shares an article on <a href="http://www.huffingtonpost.com/diann-woodard/the-corporate-takeover_b_3397091.html">The Corporate Takeover of Public Education</a> along with a passionate statement of approval. Independent thinkers have a feeling that by voicing their dissent, dozens, perhaps hundreds, of strangers on social media (friends of friends for example) will judge them solely on this projection. To make matters worse, public shaming of the independent thinker, for supposedly being a member of the Fox News cluster, will then be rewarded by increased social standing among the MSNBC cluster as evidenced by retweets, likes and supportive comments. In a worst-case scenario for this person, and a best-case scenario for the critics, this public shaming goes viral.
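The cluster-and-projection picture can be made concrete with a small numeric sketch. The numbers below are purely hypothetical, and Python is used only for illustration: each person is a vector of opinions on the 10 topics, closeness is Euclidean distance, and projecting onto one topic simply reads off that coordinate.

```python
import math

# Hypothetical opinion vectors on 10 topics, scaled -1 (left) to +1 (right).
msnbc_center = [-0.8] * 10
fox_center = [0.8] * 10
# An "independent thinker": matches the MSNBC cluster on 9 topics,
# but sits on the right for topic 10 (say, education/school choice).
independent = [-0.8] * 9 + [0.7]

def distance(u, v):
    """Euclidean distance between two opinion vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def project(v, topic):
    """Projection onto a single topic: just that coordinate."""
    return v[topic]

# In the full 10-dimensional space, the independent thinker is far
# closer to the MSNBC cluster than to the Fox News cluster...
print(distance(independent, msnbc_center))  # 1.5
print(distance(independent, fox_center))    # ~4.80
# ...but the projection onto the education dimension alone lands
# squarely on the Fox News side of the scale.
print(project(independent, 9))  # 0.7
```

The full-space distances say this person belongs with the MSNBC cluster, while the single-coordinate projection tells the opposite story; judging someone by one post is judging the projection, not the point.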
While the short-term rewards for preaching to the echo chamber are clear, there are no apparent incentives for dissent.</p> <p>The superficial and fast-paced nature of social media is not amenable to nuances and subtleties. Disagreement with the groupthink on one specific topic can therefore get a person labeled as a “neoliberal corporate shill” by the MSNBC cluster or a “godless liberal” by the Fox News one. The irony is that in social media, those politically closest to you will be the ones attaching the unwanted label.</p> HarvardX Biomedical Data Science Open Online Training Curriculum launches on January 19 2014-11-25T14:01:47+00:00 http://simplystats.github.io3575 <p>We recently received funding from the <a href="http://bd2k.nih.gov/FY14/COE/COE.html#sthash.ESkvsyrj.dpbs">NIH BD2K</a> initiative to develop MOOCs for biomedical data science. Our first offering will be version 2 of my <a href="http://simplystatistics.org/2014/03/31/data-analysis-for-genomic-edx-course/">Data Analysis for Genomics course</a>, which will launch on January 19. In this version, the course will be turned into an 8-course series and you can get a certificate in each one of them. The motivation for doing this is to go more in-depth into the different topics and to provide different entry points for students with different levels of expertise. We provide four courses on concepts and skills and four case-study based courses.
We basically broke the original class into the following eight parts:</p> <ol> <li><a href="https://www.edx.org/course/statistics-with-r-for-life-sciences-harvardx-ph525-1x#.VHTQgmTF86B">Statistics and R for the Life Sciences</a></li> <li><a href="https://www.edx.org/course/introduction-to-linear-models-and-matrix-algebra-harvardx-ph525-2x#.VHTQxGTF86B">Introduction to Linear Models and Matrix Algebra</a></li> <li><a href="https://www.edx.org/course/advanced-statistics-for-the-life-sciences-harvardx-ph525-3x#.VHTQ0GTF86B">Advanced Statistics for the Life Sciences</a></li> <li><a href="https://www.edx.org/course/introduction-to-bioconductor-harvardx-ph525-4x#.VHTQ22TF86B">Introduction to Bioconductor</a></li> <li><a href="https://www.edx.org/course/case-study-rna-seq-data-analysis-harvardx-ph525-5x#.VHTQ5mTF86B">Case study: RNA-seq data analysis</a></li> <li><a href="https://www.edx.org/course/case-study-variant-discovery-and-genotyping-harvardx-ph525-6x#.VHTQ-WTF86B">Case study: Variant Discovery and Genotyping</a></li> <li><a href="https://www.edx.org/course/case-study-chip-seq-data-analysis-harvardx-ph525-7x#.VHTRBWTF86B">Case study: ChIP-seq data analysis</a></li> <li><a href="https://www.edx.org/course/case-study-dna-methylation-data-analysis-harvardx-ph525-8x#.VHTREmTF86B">Case study: DNA methylation data analysis</a></li> </ol> <p>You can follow the links to enroll. 
While not required, some familiarity with R and Rstudio will serve you well so consider taking <a href="https://www.coursera.org/course/rprog">Roger’s R course</a> and Jeff’s <a href="https://www.coursera.org/course/datascitoolbox">Toolbox</a> course before delving into this class.</p> <p>In years 2 and 3 we plan to introduce several other courses covering topics such as python for data analysis, probability, software engineering, and data visualization which will be taught by a collaboration between the departments of Biostatistics, Statistics and Computer Science at Harvard.</p> <p>Announcements will be made here and on twitter: <a href="https://twitter.com/rafalab">@rafalab</a></p> <p> </p> Data Science Students Predict the Midterm Election Results 2014-11-12T13:37:36+00:00 http://simplystats.github.io3552 <p>As explained in an <a href="http://simplystatistics.org/2014/11/04/538-election-forecasts-made-simple/">earlier post</a>, one of the homework assignments of my <a href="http://cs109.github.io/2014/">CS109</a> class was to predict the results of the midterm election. We created a competition in which 49 students entered. The most interesting challenge was to provide intervals for the republican - democrat difference in each of the 35 senate races. Anybody missing more than 2 was eliminated. The average size of the intervals was the tie breaker.</p> <p>The main teaching objective here was to get students thinking about how to evaluate prediction strategies when chance is involved. To a naive observer, a biased strategy that favored democrats and correctly called, say, Virginia may look good in comparison to strategies that called it a toss-up. However, a look at the other 34 states would reveal the weakness of this biased strategy. 
I wanted students to think of procedures that can help distinguish lucky guesses from strategies that universally perform well.</p> <p>One of the concepts we discussed in class was the systematic bias of polls which we modeled as a random effect. One can’t infer this bias from polls until after the election passes. By studying previous elections students were able to estimate the SE of this random effect and incorporate it into the calculation of intervals. The realization of this random effect was <a href="http://fivethirtyeight.com/features/the-polls-were-skewed-toward-democrats/">very large</a> in these elections (about +4 for the democrats) which clearly showed the importance of modeling this source of variability. Strategies that restricted standard error measures to sample estimates from this year’s polls did very poorly. The <a href="http://fivethirtyeight.com/interactives/senate-forecast/">90% credible intervals</a> provided by 538, which I believe does incorporate this, missed 8 of the 35 races (23%). This suggests that they underestimated the variance. 
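The effect of this shared random effect on interval coverage can be seen in a minimal simulation (all numbers are hypothetical, and Python is used only for illustration): when interval width is computed from the poll sampling SE alone, ignoring the between-election bias term, nominal 95% intervals cover far less often than advertised.

```python
import random

random.seed(1)

SAMPLING_SE = 2.0   # SE estimable from this year's polls (hypothetical)
BIAS_SE = 3.0       # SD of the shared election-year bias, estimated
                    # from previous elections (hypothetical)

def coverage(include_bias_term, n_sim=2000):
    """Fraction of simulated elections in which a nominal 95% interval,
    centered at the poll average, covers the true margin in a race."""
    z = 1.96
    se = (SAMPLING_SE**2 + (BIAS_SE**2 if include_bias_term else 0)) ** 0.5
    hits = 0
    for _ in range(n_sim):
        b = random.gauss(0, BIAS_SE)    # shared bias: one draw per election
        poll_error = b + random.gauss(0, SAMPLING_SE)  # total polling error
        hits += abs(poll_error) <= z * se
    return hits / n_sim

print(coverage(include_bias_term=False))  # well below 0.95
print(coverage(include_bias_term=True))   # close to 0.95
```

With these hypothetical SEs, the intervals that ignore the shared term cover only about 70% of the time, which is roughly the kind of undercoverage described above.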
Several of our students compared favorably to 538:</p> <div class="table-responsive"> <table style="width:100%; " class="easy-table easy-table-default " border="0"> <tr> <th> name </th> <th> avg bias </th> <th> MSE </th> <th> avg interval size </th> <th> # missed </th> </tr> <tr> <td> Manuel Andere </td> <td> -3.9 </td> <td> 6.9 </td> <td> 24.1 </td> <td> 3 </td> </tr> <tr> <td> Richard Lopez </td> <td> -5.0 </td> <td> 7.4 </td> <td> 26.9 </td> <td> 3 </td> </tr> <tr> <td> Daniel Sokol </td> <td> -4.5 </td> <td> 6.4 </td> <td> 24.1 </td> <td> 4 </td> </tr> <tr> <td> Isabella Chiu </td> <td> -5.3 </td> <td> 9.6 </td> <td> 26.9 </td> <td> 6 </td> </tr> <tr> <td> Denver Mosigisi Ogaro </td> <td> -3.2 </td> <td> 6.6 </td> <td> 18.9 </td> <td> 7 </td> </tr> <tr> <td> Yu Jiang </td> <td> -5.6 </td> <td> 9.6 </td> <td> 22.6 </td> <td> 7 </td> </tr> <tr> <td> David Dowey </td> <td> -3.5 </td> <td> 6.2 </td> <td> 16.3 </td> <td> 8 </td> </tr> <tr> <td> Nate Silver </td> <td> -4.2 </td> <td> 6.6 </td> <td> 16.4 </td> <td> 8 </td> </tr> <tr> <td> Filip Piasevoli </td> <td> -3.5 </td> <td> 7.4 </td> <td> 22.1 </td> <td> 8 </td> </tr> <tr> <td> Yapeng Lu </td> <td> -6.5 </td> <td> 8.2 </td> <td> 16.5 </td> <td> 10 </td> </tr> <tr> <td> David Jacob Lieb </td> <td> -3.7 </td> <td> 7.2 </td> <td> 17.1 </td> <td> 10 </td> </tr> <tr> <td> Vincent Nguyen </td> <td> -3.8 </td> <td> 5.9 </td> <td> 11.1 </td> <td> 14 </td> </tr> </table> </div> <p>It is important to note that 538 would have probably increased their interval size had they actively participated in a competition requiring 95% of the intervals to cover. But all in all, students did very well. The majority correctly predicted the Republican takeover. The median mean square error across all 49 participants was 8.2, which was not much worse than 538’s 6.6.
Other examples of strategies that I think helped some of these students perform well were the use of creative weighting schemes (based on previous elections) to average polls and the use of splines to estimate trends, which in this particular election were moving in the Republicans’ favor.</p> <p>Here are some plots showing results from two of our top performers:</p> <p><a href="http://simplystatistics.org/wp-content/uploads/2014/11/Rplot.png"><img class="alignnone wp-image-3560" src="http://simplystatistics.org/wp-content/uploads/2014/11/Rplot.png" alt="Rplot" width="714" height="233" srcset="http://simplystatistics.org/wp-content/uploads/2014/11/Rplot-300x98.png 300w, http://simplystatistics.org/wp-content/uploads/2014/11/Rplot-1024x334.png 1024w, http://simplystatistics.org/wp-content/uploads/2014/11/Rplot.png 1674w" sizes="(max-width: 714px) 100vw, 714px" /></a> <a href="http://simplystatistics.org/wp-content/uploads/2014/11/Rplot01.png"><img class="alignnone wp-image-3561" src="http://simplystatistics.org/wp-content/uploads/2014/11/Rplot01.png" alt="Rplot01" width="714" height="233" srcset="http://simplystatistics.org/wp-content/uploads/2014/11/Rplot01-300x98.png 300w, http://simplystatistics.org/wp-content/uploads/2014/11/Rplot01-1024x334.png 1024w, http://simplystatistics.org/wp-content/uploads/2014/11/Rplot01.png 1674w" sizes="(max-width: 714px) 100vw, 714px" /></a></p> <p>I hope this exercise helped students realize that data science can be both fun and useful. I can’t wait to do this again in 2016.</p> Sunday data/statistics link roundup (11/9/14) 2014-11-10T01:30:00+00:00 http://simplystats.github.io3548 <p>So I’m a day late, but you know, I got a new kid and stuff…</p> <ol> <li><a href="http://www.newyorker.com/science/maria-konnikova/moocs-failure-solutions">The New Yorker hating on MOOCs</a>; they mention all the usual stuff.
Including the <a href="http://simplystatistics.org/2013/07/19/the-failure-of-moocs-and-the-ecological-fallacy/">really poorly designed San Jose State experiment</a>. I think this deserves a longer post, but this is definitely a case where people are looking at MOOCs on the <a href="http://en.wikipedia.org/wiki/Hype_cycle">wrong part of the hype curve</a>. MOOCs won’t solve all possible education problems, but they are hugely helpful to many people and writing them off is a little silly (via Rafa).</li> <li>My colleague Dan S. is <a href="http://www.eventzilla.net/web/event?eventid=2139054537">teaching a missing data workshop</a> here at Hopkins next week (via Dan S.)</li> <li>A couple of cool Youtube videos explaining <a href="http://www.youtube.com/watch?v=YmOsDTczOFs">how the normal distribution sounds</a> and the <a href="http://www.youtube.com/watch?v=F-I-BVqMiNI">pareto principle with paperclips</a> (via Presh T., pair with the <a href="http://simplystatistics.org/2014/03/20/the-8020-rule-of-statistical-methods-development/">80/20 rule of statistical methods development</a>)</li> <li>If you aren’t following <a href="https://twitter.com/ResearchMark">Research Wahlberg</a>, you aren’t on academic twitter.</li> <li>I followed <a href="https://twitter.com/hashtag/biodata14?src=hash">#biodata14</a> closely. I think having a meeting on Biological Big Data is a great idea and many of the discussion leaders are people I admire a ton. I also am a big fan of Mike S. I have to say I was pretty bummed that more statisticians weren’t invited (we like to party too!).</li> <li>Our data science specialization generates <a href="http://rpubs.com/hadley/39122">almost 1,000 new R github repos a month</a>! 
Roger and I are in a neck and neck race to be the person who has taught the most people statistics/data science in the history of the world.</li> <li>The Rstudio guys have also put together what looks like a <a href="http://blog.rstudio.org/2014/11/06/introduction-to-data-science-with-r-video-workshop/">great course</a> similar in spirit to our Data Science Specialization. The Rstudio folks have been *super* supportive of the DSS and we assume anything they make will be awesome.</li> <li><a href="http://datacarpentry.github.io/blog/2014/11/05/announce/">Congrats to Data Carpentry</a> and <a href="https://twitter.com/tracykteal">Tracy Teal</a> on their funding from the Moore Foundation!</li> </ol> <blockquote class="twitter-tweet" width="550"> <p> Sup. Party's over. Keep moving.
<a href="http://t.co/R8sTbKzpF8">pic.twitter.com/R8sTbKzpF8</a> </p> <p> &mdash; Research Wahlberg (@ResearchMark) <a href="https://twitter.com/ResearchMark/status/530109209543999489">November 5, 2014</a> </p> </blockquote> Time varying causality in n=1 experiments with applications to newborn care 2014-11-05T13:13:11+00:00 http://simplystats.github.io3543 <p>We just had our second son about a week ago and I’ve been hanging out at home with him and the rest of my family. It has reminded me of a few things from when we had our first son. First, newborns are tiny and super-duper adorable. Second, daylight saving time means gaining an extra hour of sleep for many people, but for people with young children it is more like (via Reddit):</p> <p><a href="http://www.reddit.com/r/funny/comments/2l25vx/gain_an_extra_hour_of_sleep_waityou_have_toddlers/"><img class="aligncenter" src="http://i.imgur.com/1HWQIPa.gif" alt="" width="480" height="270" /></a></p> <p>Third, taking care of a newborn is like performing a series of n=1 experiments where the causal structure of the problem changes every time you perform an experiment.</p> <p>Suppose, hypothetically, that your newborn has just had something to eat and it is 2am (again, just hypothetically). You are hoping he’ll go back down to sleep so you can catch some shut-eye yourself. But your baby just can’t sleep and seems uncomfortable. Here is a partial list of causes for this: (1) dirty diaper, (2) needs to burp, (3) still hungry, (4) not tired, (5) over tired, (6) has gas, (7) just chillin. So you start going down the list and trying to address each of the potential causes of late-night sleeplessness: (1) check diaper, (2) try burping, (3) feed him again, etc. Then, miraculously, one works and the little guy falls asleep.</p> <p>It is interesting how the natural human reaction to this is to reorder the potential causes of sleeplessness and start with the thing that worked next time.
Then we often get frustrated when the same thing doesn’t work the next time. You can’t help it: you did an experiment, you have some data, you want to use it. But the reality is that the next time may have nothing to do with the first.</p> <p>I’m in the process of collecting some very poorly annotated data, gathered exclusively at night, if anyone wants to write a dissertation on this problem.</p> 538 election forecasts made simple 2014-11-04T17:12:16+00:00 http://simplystats.github.io3528 <p>Nate Silver does a <a href="http://fivethirtyeight.com/features/how-the-fivethirtyeight-senate-forecast-model-works/">great job</a> of explaining his forecast model to laypeople. However, as a statistician I’ve always wanted to know more details. After preparing a “<a href="http://cs109.github.io/2014/pages/homework.html">predict the midterm elections</a>” homework for my <a href="http://cs109.github.io/2014">data science class</a> I have a better idea of what is going on.</p> <p><a href="http://rafalab.jhsph.edu/simplystats/midterm2012.html">Here</a> is my best attempt at explaining the ideas of 538 using formulas and data. And <a href="http://rafalab.jhsph.edu/simplystats/midterm2012.Rmd">here</a> is the R markdown.</p> Sunday data/statistics link roundup (11/2/14) 2014-11-02T19:16:22+00:00 http://simplystats.github.io3526 <p>Better late than never! If you have something cool to share, please continue to email it to me with subject line “Sunday links”.</p> <ol> <li><a href="http://www.drivendata.org/">DrivenData</a> is a Kaggle-like site but for social good.
I like the principle of using data for societal benefit, since there are so many ways it seems to be used for nefarious purposes (via Rafa).</li> <li>This article <a href="http://www.nytimes.com/2014/11/02/opinion/sunday/academic-science-isnt-sexist.html?ref=opinion&amp;_r=2">claiming academic science isn’t sexist</a> has been widely panned; Emily Willingham <a href="http://www.emilywillinghamphd.com/2014/11/academic-science-is-sexist-we-do-have.html">pretty much destroys it here</a> (via Sherri R.). The thing that is interesting about this article is the way that it tries to use data to give the appearance of empiricism, while using language to try to skew the results. Is it just me or is this totally bizarre in light of the NYT also <a href="http://www.nytimes.com/2014/11/02/us/handling-of-sexual-harassment-case-poses-larger-questions-at-yale.html?smid=tw-share">publishing this piece</a> about academic sexual harassment at Yale?</li> <li>Noah Smith, an economist, <a href="http://www.bloombergview.com/articles/2014-10-29/bad-data-can-make-us-smarter">tries to summarize</a> the problem with “most research being wrong”. It is an interesting take; I wonder if he read Roger’s piece <a href="http://simplystatistics.org/2014/10/15/dear-laboratory-scientists-welcome-to-my-world/">saying almost exactly the same thing</a> about a week before? He also mentions it is hard to quantify the rate of false discoveries in science; maybe he should <a href="http://biostatistics.oxfordjournals.org/content/early/2013/09/24/biostatistics.kxt007.abstract">read our paper</a>?</li> <li>Nature <a href="http://www.nature.com/news/code-share-1.16232">now requests</a> that code sharing occur “where possible” (via Steven S.)</li> <li>Great <a href="http://imgur.com/gallery/ZpgQz">cartoons</a>, I particularly like the one about replication (via Steven S.).</li> </ol> Why I support statisticians and their resistance to hype 2014-10-28T10:19:01+00:00 http://simplystats.github.io3501 <p>Despite Statistics being the most mature data-related discipline, statisticians <a href="http://simplystatistics.org/2014/05/07/why-big-data-is-in-trouble-they-forgot-about-applied-statistics/">have not fared well</a> in terms of being selected for funding or leadership positions in the new initiatives brought about by the increasing interest in data. Just to give one example (<a href="http://simplystatistics.org/2014/05/07/why-big-data-is-in-trouble-they-forgot-about-applied-statistics/">Jeff</a> and <a href="http://www.chalmers.se/en/areas-of-advance/ict/calendar/Pages/Terry-Speed.aspx">Terry Speed</a> give many more), the <a href="http://www.nitrd.gov/nitrdgroups/index.php?title=White_House_Big_Data_Partners_Workshop">White House Big Data Partners Workshop</a> had 19 members, of which 0 were statisticians. The statistical community is clearly worried about this predicament and there is widespread consensus that we need to be <a href="http://simplystatistics.org/2012/08/14/statistics-statisticians-need-better-marketing/">better at marketing</a>.
Although I agree that only good can come from better communicating what we do, it is also important to continue doing one of the things we do best: resisting the hype and being realistic about data.</p> <p>This week, after reading Mike Jordan’s <a href="http://www.reddit.com/r/MachineLearning/comments/2fxi6v/ama_michael_i_jordan">reddit ask me anything</a>, I was reminded of exactly how much I admire this quality in statisticians. From reading the interview one learns about instances where hype has led to confusion, and how getting past this confusion helps us better understand and consequently appreciate the importance of his field. For the past 30 years, Mike Jordan has been one of the most prolific academics working in the areas that today are receiving increased attention. Yet, you won’t find a hyped-up press release coming out of his lab. In fact, when a <a href="http://spectrum.ieee.org/robotics/artificial-intelligence/machinelearning-maestro-michael-jordan-on-the-delusions-of-big-data-and-other-huge-engineering-efforts">journalist tried to hype up Jordan’s critique of hype</a>, Jordan <a href="https://amplab.cs.berkeley.edu/2014/10/22/big-data-hype-the-media-and-other-provocative-words-to-put-in-a-title/">called out the author</a>.</p> <p>Assessing the current situation with data initiatives, it is hard not to conclude that hype is being rewarded. Many statisticians have come to the sad realization that by being cautious and skeptical, we may be losing out on funding possibilities and leadership roles. However, I remain very much upbeat about our discipline. First, being skeptical and cautious has actually led to many important contributions. An important example is how randomized controlled experiments changed how medical procedures are evaluated. A more recent one is the concept of FDR, which helps control false discoveries in, for example, high-throughput experiments.
Second, many of us continue to work in the interface with real world applications placing us in a good position to make relevant contributions. Third, despite the failures alluded to above, we continue to successfully find ways to fund our work. Although resisting the hype has cost us in the short term, we will continue to produce methods that will be useful in the long term, as we have been doing for decades. Our methods will still be used when today’s hyped up press releases are long forgotten.</p> <p> </p> <p> </p> Return of the sunday links! (10/26/14) 2014-10-26T10:00:31+00:00 http://simplystats.github.io3499 <p>New look for the blog and bringing back the links. If you have something that you’d like included in the Sunday links, email me and let me know. If you use the title of the message “Sunday Links” you’ll be more likely for me to find it when I search my gmail.</p> <ol> <li>Thomas L. does a more technical post on <a href="http://notstatschat.tumblr.com/post/100893932596/semiparametric-efficiency-and-nearly-true-models">semi-parametric efficiency</a>, normally I’m a data n’ applications guy, but I love these in depth posts, especially when the papers remind me of all the things I studied at my <a href="http://www.biostat.washington.edu/">alma mater</a>.</li> <li>I am one of those people who only knows a tiny bit about Docker, but hears about it all the time. That being said, after I read about <a href="http://dirk.eddelbuettel.com/blog/2014/10/23/#introducing_rocker">Rocker</a>, I got pretty excited.</li> <li>Hadley W.’s <a href="https://www.biostars.org/p/115481/">favorite tools</a>, seems like that dude likes R Studio for some reason….(me too)</li> <li><a href="http://priorprobability.com/2014/10/22/chess-piece-survival-rates/">A cool visualization</a> of chess piece survival rates.</li> <li><a href="http://espn.go.com/video/clip?id=11694550">A short movie by 538</a> about statistics and the battle between Deep Blue and Gary Kasparov. 
Where’s the popcorn?</li> <li>Twitter engineering released an R package for <a href="https://blog.twitter.com/2014/breakout-detection-in-the-wild">detecting outbreaks</a>. I wonder how <a href="http://www.bioconductor.org/packages/release/bioc/html/DNAcopy.html">circular binary segmentation</a> would do?</li> </ol> <p> </p> <p> </p> An interactive visualization to teach about the curse of dimensionality 2014-10-24T11:14:43+00:00 http://simplystats.github.io3486 <p>I recently was contacted for an interview about the curse of dimensionality. During the course of the conversation, I realized how hard it is to explain the curse to a general audience. One of the best descriptions I could come up with was trying to describe sampling from a unit line, square, cube, etc. and taking samples with side length fixed. You would capture fewer and fewer points. As I was saying this, I realized it is a pretty bad way to explain the curse of dimensionality in words. But there was potentially a cool data visualization that would illustrate the idea. I went to my student <a href="http://www.biostat.jhsph.edu/~prpatil/">Prasad</a>, our resident interactive viz design expert to see if he could build it for me. He came up with this cool Shiny app where you can simulate a number of points (n) and then fix a side length for 1-D, 2-D, 3-D, and 4-D and see how many points you capture in a cube of that length in that dimension. You can find the <a href="https://prpatil.shinyapps.io/cod_app/">full app here</a> or check it out on the blog here:</p> <p> </p> Vote on simply statistics new logo design 2014-10-22T10:38:10+00:00 http://simplystats.github.io3469 <p>As you can tell, we have given the Simply Stats blog a little style update. It should be more readable on phones or tablets now. We are also about to get a new logo. We are down to the last couple of choices and can’t decide. Since we are statisticians, we thought we’d collect some data. 
<a href="http://99designs.com/logo-design/vote-3datw8">Here is the link</a> to the poll. Let us know.</p> Thinking like a statistician: don't judge a society by its internet comments 2014-10-20T13:59:03+00:00 http://simplystats.github.io3402 <p>In a previous <a href="http://simplystatistics.org/2014/01/17/missing-not-at-random-data-makes-some-facebook-users-feel-sad/">post</a> I explained how thinking like a statistician can help you avoid <a href="http://www.npr.org/2014/01/09/261108836/many-younger-facebook-users-unfriend-the-network">feeling sad after using Facebook</a>. The basic point was that <em>missing not at random</em> (MNAR) data on your friends’ profiles (showing only the best parts of their life) can result in the biased view that your life is boring and uninspiring in comparison. A similar argument can be made to avoid losing faith in humanity after reading internet comments or anonymous tweets, one of the most depressing activities that I have voluntarily engaged in. If you want to see proof that racism, xenophobia, sexism and homophobia are still very much alive, read the unfiltered comments sections of articles related to race, immigration, gender or gay rights. However, as a statistician, I remain optimistic about our society after realizing how extremely biased these particular MNAR data can be.</p> <p>Assume we could summarize an individual’s “righteousness” with a numerical index. I realize this is a gross oversimplification, but bear with me.
Below is my view on the distribution of this index across all members of our society.</p> <p><a href="http://simplystatistics.org/wp-content/uploads/2014/10/IMG_5842.jpg"><img class="aligncenter wp-image-3409" src="http://simplystatistics.org/wp-content/uploads/2014/10/IMG_5842.jpg" alt="IMG_5842" width="442" height="463" srcset="http://simplystatistics.org/wp-content/uploads/2014/10/IMG_5842-286x300.jpg 286w, http://simplystatistics.org/wp-content/uploads/2014/10/IMG_5842-977x1024.jpg 977w, http://simplystatistics.org/wp-content/uploads/2014/10/IMG_5842.jpg 2139w" sizes="(max-width: 442px) 100vw, 442px" /></a></p> <p>Note that the distribution is not bimodal. This means there is no gap between good and evil; instead, we have a continuum. Although there is variability, and we do have some extreme outliers on both sides of the distribution, most of us are much closer to the median than we like to believe. The offending internet commentators represent a very small proportion (the “bad” tail shown in red). But in a large population, such as internet users, this extremely small proportion can still amount to many individuals, which gives us a biased view.</p> <p>There is one more level of variability here that introduces biases. Since internet comments can be anonymous, we get an unprecedentedly large glimpse into people’s opinions and thoughts. We assign a “righteousness” index to our thoughts and opinions and include it in the scatter plot shown in the figure above. Note that this index exhibits variability within individuals: even the best people have the occasional bad thought. The points in red represent thoughts so awful that no one, not even the worst people, would ever express publicly.
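The selection effect being described can be mimicked with a toy simulation: draw a "righteousness" index for a large population, then keep only the extreme bad tail that shows up in comment sections. The normal distribution and the cutoff of -3 below are arbitrary illustrations, not estimates of anything:

```python
import random
import statistics

rng = random.Random(42)

# Toy "righteousness" index for a million people, centered near the median.
population = [rng.gauss(0, 1) for _ in range(1_000_000)]

# Suppose only the extreme bad tail posts the offending comments.
comments = [x for x in population if x < -3]

print(statistics.mean(population))  # close to 0: the typical person
print(len(comments))                # a tiny share, yet over a thousand people
print(statistics.mean(comments))    # far below the population mean
```

Judging the population by `comments` alone is exactly the MNAR trap: a sliver of the distribution, observed precisely because it is extreme.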
The red points give us an overly pessimistic estimate of the individuals who are posting these comments, which exacerbates our already pessimistic view due to a non-representative sample of individuals.</p> <p>I hope that thinking like a statistician will help the media and social networks put in statistical perspective the awful tweets or internet comments that represent the worst of the worst. These actually provide little to no information on humanity’s distribution of righteousness, which I think is moving consistently, albeit slowly, towards the good.</p> Bayes Rule in an animated gif 2014-10-17T10:00:41+00:00 http://simplystats.github.io3390 <p>Say Pr(A)=5% is the prevalence of a disease (% of red dots in the top figure). Each individual is given a test with accuracy Pr(B|A) = Pr(no B|no A) = 90%. The O in the middle turns into an X when the test fails. The rate of Xs is 1-Pr(B|A). We want to know the probability of having the disease if you tested positive: Pr(A|B). Many find it counterintuitive that this probability is much lower than 90%; this animated gif is meant to help.</p> <p><img src="http://rafalab.jhsph.edu/simplystats/bayes.gif" alt="" width="600" /></p> <p>The individual being tested is highlighted with a moving black circle. Pr(B) of these will test positive: we put these in the bottom left and the rest in the bottom right. The proportion of red points that end up in the bottom left is the proportion of red points Pr(A) with a positive test Pr(B|A), thus Pr(B|A) x Pr(A). Pr(A|B), or the proportion of reds in the bottom left, is therefore Pr(B|A) x Pr(A) divided by Pr(B): Pr(A|B) = Pr(B|A) x Pr(A) / Pr(B).</p> <p>ps - Is this a <a href="http://simplystatistics.org/2014/10/13/as-an-applied-statistician-i-find-the-frequentists-versus-bayesians-debate-completely-inconsequential/">frequentist or Bayesian</a> gif?</p> Creating the field of evidence based data analysis - do people know what a p-value looks like? 2014-10-16T15:00:34+00:00 http://simplystats.github.io3382 <p>In the medical sciences, there is a discipline called “<a href="http://en.wikipedia.org/wiki/Evidence-based_medicine">evidence based medicine</a>”. The basic idea is to study the actual practice of medicine using experimental techniques. The reason is that while we may have good experimental evidence about specific medicines or practices, the global behavior and execution of medical practice may also matter. There have been some success stories from this approach and also backlash from physicians who <a href="http://onlinelibrary.wiley.com/doi/10.1111/j.1523-536X.1996.tb00491.x/abstract">don’t like to be told how to practice medicine.</a> However, on the whole it is a valuable and interesting scientific exercise.</p> <p>Roger introduced the idea of <a href="http://simplystatistics.org/2013/08/28/evidence-based-data-analysis-treading-a-new-path-for-reproducible-research-part-2/">evidence based data analysis</a> in a previous post. The basic idea is to study the actual practice and behavior of data analysts to identify how analysts behave.
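Circling back to the Bayes-rule gif above: plugging its numbers into the formula shows just how counterintuitive the answer is. With 5% prevalence and a 90%-accurate test, a positive result implies only about a 32% chance of disease. A quick sanity check:

```python
# Numbers from the animated gif: Pr(A) = 5% prevalence,
# Pr(B|A) = Pr(no B|no A) = 90% test accuracy.
p_a = 0.05
sens = 0.90  # Pr(B|A)
spec = 0.90  # Pr(no B|no A)

p_b = sens * p_a + (1 - spec) * (1 - p_a)  # Pr(B): overall positive rate
p_a_given_b = sens * p_a / p_b             # Bayes rule: Pr(A|B)

print(round(p_b, 3))          # 0.14
print(round(p_a_given_b, 3))  # 0.321
```

Most positives come from the large healthy group, which is why Pr(A|B) lands so far below the test's 90% accuracy.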
There is a strong history of this type of research within the data visualization community <a href="http://www.stat.purdue.edu/~wsc/">starting with Bill Cleveland</a> and extending forward to work by <a href="http://dicook.github.io/cv.html">Dianne Cook</a>, <a href="http://vis.stanford.edu/papers/crowdsourcing-graphical-perception">Jeffrey Heer</a>, and others.</p> <p><a href="https://peerj.com/articles/589/">Today we published</a> a large-scale evidence based data analysis randomized trial. Two of the most common data analysis tasks (for better or worse) are exploratory analysis and the identification of statistically significant results. Di Cook’s group calls this idea <a href="http://stat.wharton.upenn.edu/~buja/PAPERS/Wickham-Cook-Hofmann-Buja-IEEE-TransVizCompGraphics_2010-Graphical%20Inference%20for%20Infovis.pdf">“graphical inference” or “visual significance”</a> and they have studied humans’ ability to detect significance in the context of <a href="http://www.tandfonline.com/doi/abs/10.1080/01621459.2013.808157">plot lineups</a> and how it <a href="http://arxiv.org/abs/1408.1974">associates with demographics and visual characteristics of the plot.</a></p> <p>We performed a randomized study to determine if data analysts with basic training could identify statistically significant relationships.
Or as the first author put it in a tweet:</p> <blockquote class="twitter-tweet" width="550"> <p> First paper just dropped!&#10;Can you tell the difference between these two plots?&#10;<a href="https://t.co/Lng0FWI0XY">https://t.co/Lng0FWI0XY</a> <a href="http://t.co/zFCwwcxaAX">pic.twitter.com/zFCwwcxaAX</a> </p> <p> &mdash; Aaron Fisher (@PrfFarnsworth) <a href="https://twitter.com/PrfFarnsworth/status/522790724774141952">October 16, 2014</a> </p> </blockquote> <p>What we found was that people were pretty bad at detecting statistically significant results, but that over multiple trials they could improve. This is a tentative first step toward understanding how the general practice of data analysis works. If you want to play around and see how good you are at seeing p-values, we also built an interactive Shiny app. If you don’t see the app embedded, you can go to the <a href="http://glimmer.rstudio.com/afisher/EDA/">Shiny app page here.</a></p> Dear Laboratory Scientists: Welcome to My World 2014-10-15T19:42:03+00:00 http://simplystats.github.io3377 <p>Consider the following question: Is there a reproducibility/replication crisis in epidemiology?</p> <p>I think there are only two possible ways to answer that question:</p> <ol> <li>No, there is no replication crisis in epidemiology because no one ever believes the result of an epidemiological study unless it has been replicated a minimum of 1,000 times in every possible population.</li> <li>Yes, there is a replication crisis in epidemiology, and it started in 1854 when <a href="http://www.ph.ucla.edu/epi/snow/snowbook2.html">John Snow</a> inferred, from observational data, that cholera was spread via contaminated water obtained from public pumps.</li> </ol> <p>If you chose (2), then I don’t think you are allowed to call it a “crisis” because I think by definition, a crisis cannot last 160 years.
In that case, it’s more of a chronic disease.</p> <p>I had an interesting conversation last week with a prominent environmental epidemiologist about the replication crisis that has been reported on extensively in the scientific and popular press. In his view, this was less of an issue in epidemiology because epidemiologists never really had the luxury of people (or at least fellow scientists) believing their results, given their general inability to conduct controlled experiments.</p> <p>Given the observational nature of most environmental epidemiological studies, it’s generally accepted in the community that no single study can be considered causal, and that many replications of a finding are needed to establish a causal connection. Even the popular press now knows to include the phrase “correlation does not equal causation” when reporting on an observational study. The work of <a href="http://en.wikipedia.org/wiki/Bradford_Hill_criteria">Sir Austin Bradford Hill</a> essentially codifies the standard of evidence needed to draw causal conclusions from observational studies.</p> <p>So if “correlation does not equal causation”, it raises the question: what <em>does</em> equal causation? Many would argue that a controlled experiment, whether it’s a randomized trial or a laboratory experiment, equals causation. But people who work in this area have long known that while controlled experiments do assign the treatment or exposure, there are still many other elements of the experiment that are <em>not</em> controlled.</p> <p>For example, if subjects drop out of a randomized trial, you now essentially have an observational study (or at least a <a href="http://amstat.tandfonline.com/doi/abs/10.1198/016214503000071#.VD8EqL5DuoY">“broken” randomized trial</a>).
If you are conducting a laboratory experiment and all of the treatment samples are measured with one technology and all of the control samples are measured with a different technology (perhaps because of a lack of blinding), then you still have confounding.</p> <p>The correct statement is not “correlation does not equal causation” but rather “no single study equals causation”, regardless of whether it was an observational study or a controlled experiment. Of course, a very tightly controlled and rigorously conducted experiment will be more valuable than a similarly conducted observational study. But in general, all studies should simply be considered as further evidence for or against a hypothesis. We should not be lulled into thinking that any single study about an important question can truly be definitive.</p> I declare the Bayesian vs. Frequentist debate over for data scientists 2014-10-13T10:45:44+00:00 http://simplystats.github.io3340 <p>In a recent New York Times <a href="http://www.nytimes.com/2014/09/30/science/the-odds-continually-updated.html?_r=1">article</a> the “Frequentists versus Bayesians” debate was brought up once again. I agree with Roger:</p> <blockquote class="twitter-tweet" lang="en"> <p> NYT wants to create a battle b/w Bayesians and Frequentists but it's all crap. Statisticians develop techniques. <a href="http://t.co/736gbqZGuq">http://t.co/736gbqZGuq</a> </p> <p> — Roger D. Peng (@rdpeng) <a href="https://twitter.com/rdpeng/status/516739602024267776">September 30, 2014</a> </p> </blockquote> <p>Because the real story (or non-story) is way too boring to sell newspapers, the author resorted to a sensationalist narrative that went something like this: “Evil and/or stupid frequentists were ready to let a fisherman die; the persecuted Bayesian heroes saved him.” This piece adds to the growing number of writings blaming frequentist statistics for the so-called reproducibility crisis in science.
If there is something Roger, <a href="http://simplystatistics.org/2013/11/26/statistical-zealots/">Jeff</a> and <a href="http://simplystatistics.org/2013/08/01/the-roc-curves-of-science/">I</a> agree on, it is that this debate is <a href="http://noahpinionblog.blogspot.com/2013/01/bayesian-vs-frequentist-is-there-any.html">not constructive</a>. As <a href="http://arxiv.org/pdf/1106.2895v2.pdf">Rob Kass</a> suggests, it's time to move on to pragmatism. Here I follow up Jeff's <a href="http://simplystatistics.org/2014/09/30/you-think-p-values-are-bad-i-say-show-me-the-data/">recent post</a> by sharing related thoughts brought about by two decades of practicing applied statistics, and I hope it helps put this unhelpful debate to rest.</p> <p> Applied statisticians help answer questions with data. How should I design a roulette so my casino makes money? Does this fertilizer increase crop yield? Does streptomycin cure pulmonary tuberculosis? Does smoking cause cancer? What movie would this user enjoy? Which baseball player should the Red Sox give a contract to? Should this patient receive chemotherapy? Our involvement typically means analyzing data and designing experiments. To do this we use a variety of techniques that have been successfully applied in the past and that we have mathematically shown to have desirable properties. Some of these tools are frequentist, some of them are Bayesian, some could be argued to be both, and some don't even use probability. The casino will do just fine with frequentist statistics, while the baseball team might want to apply a Bayesian approach to avoid overpaying for players who have simply been lucky. </p> <p> It is also important to remember that good applied statisticians also <em>think</em>. They don't apply techniques blindly or religiously.
If applied statisticians, regardless of their philosophical bent, are asked if the sun just exploded, they would not design an experiment like the one depicted in this popular XKCD cartoon. </p> <p> <a href="http://xkcd.com/1132/"><img class="aligncenter" src="http://imgs.xkcd.com/comics/frequentists_vs_bayesians.png" alt="" width="234" height="355" /></a> </p> <p> Only someone who does not know how to think like a statistician would act like the frequentists in the cartoon. Unfortunately we do have such people analyzing data. But their choice of technique is not the problem, it's their lack of critical thinking. However, even the most frequentist-appearing applied statistician understands Bayes rule and will adopt the Bayesian approach when appropriate. In the above XKCD example, any self-respecting applied statistician would not even bother examining the data (the dice roll), because they would assign a probability of 0 to the sun exploding (the empirical prior based on the fact that they are alive). However, superficial propositions arguing for wider adoption of Bayesian methods fail to realize that using these techniques in an actual data analysis project is very different from simply thinking like a Bayesian. To do this we have to represent our intuition or prior knowledge (or whatever you want to call it) with mathematical formulae. When theoretical Bayesians pick these priors, they mainly have mathematical/computational considerations in mind. In practice we can't afford this luxury: a bad prior will render the analysis useless regardless of its convenient mathematical properties. </p> <p> Despite these challenges, applied statisticians regularly use Bayesian techniques successfully. In one of the fields I work in, Genomics, empirical Bayes techniques are widely used.
In <a href="http://www.ncbi.nlm.nih.gov/pubmed/16646809">this</a> popular application of empirical Bayes we use data from all genes to improve the precision of estimates obtained for specific genes. However, the most widely used output of the software implementation is not a posterior probability. Instead, an empirical Bayes technique is used to improve the estimate of the standard error used in a good ol' fashioned t-test. This idea has changed the way thousands of Biologists search for differentially expressed genes and is, in my opinion, one of the most important contributions of Statistics to Genomics. Is this approach frequentist? Bayesian? To this applied statistician it doesn't really matter. </p> <p> For those arguing that simply switching to a Bayesian philosophy will improve the current state of affairs, let's consider the smoking and cancer example. Today there is wide agreement that smoking causes lung cancer. Without a clear deductive biochemical/physiological argument and without the possibility of a randomized trial, this connection was established with a series of observational studies. Most, if not all, of the associated data analyses were based on frequentist techniques. None of the reported confidence intervals on their own established the consensus. Instead, as usually happens in science, a long series of studies supporting this conclusion was needed. How exactly would this have been different with a strictly Bayesian approach? Would a single paper have been enough? Would using priors have helped, given the "expert knowledge" at the time (see below)? </p> <p> <img src="http://cdn.saveourbones.com/wp-content/uploads/smoking_doctor.jpg" width="234" height="355" class="aligncenter" alt="" /> </p> <p> And how would the Bayesian analyses performed by tobacco companies shape the debate? Ultimately, I think applied statisticians would have made an equally convincing case against smoking with Bayesian posteriors as opposed to frequentist confidence intervals.
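The standard-error trick mentioned earlier in this post can be sketched abstractly: each gene's noisy variance estimate is shrunk toward a variance pooled across all genes, weighted by degrees of freedom. This conveys only the flavor of the moderated t-test idea; the prior values below are made up for illustration, not fitted empirical Bayes quantities from any package:

```python
def moderated_variance(s2_gene, df_gene, s2_prior, df_prior):
    """Shrink one gene's sample variance toward the pooled (prior)
    variance, weighting each by its degrees of freedom."""
    return (df_prior * s2_prior + df_gene * s2_gene) / (df_prior + df_gene)

# A gene whose tiny sample (3 df) produced an implausibly small variance
# of 0.01 gets pulled most of the way toward the pooled 0.25, stabilizing
# the denominator of its t-statistic.
print(moderated_variance(s2_gene=0.01, df_gene=3, s2_prior=0.25, df_prior=4))
```

The shrunken variance then replaces the raw one in an otherwise ordinary t-test, which is why the final output is a frequentist-looking statistic built on a Bayesian idea.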
Going forward I hope applied statisticians continue to be free to use whatever techniques they see fit, and that critical thinking about data continues to be what distinguishes us. Imposing a Bayesian or frequentist philosophy on us would be a disaster. </p> Data science can't be point and click 2014-10-09T16:16:17+00:00 http://simplystats.github.io3338 <p>As data becomes cheaper and cheaper there are more people who want to be able to analyze and interpret that data. I see more and more people creating tools to accommodate folks who aren’t trained but who still want to look at data <em>right now</em>. While I admire the principle of this approach - we need to democratize access to data - I think it is the most dangerous way to solve the problem.</p> <p>The reason is that, especially with big data, it is very easy to find things like this with point and click tools:</p> <div style="width: 670px" class="wp-caption aligncenter"> <a href="http://www.tylervigen.com/view_correlation?id=1597"><img class="" src="http://www.tylervigen.com/correlation_project/correl