The Johns Hopkins Data Science Lab has been teaching massive open online courses for more than 5 years now. During that time we’ve reached more than 5 million learners who want to break into the number one rated job in America. While we have been incredibly excited about the results of these training programs, we’ve also learned over the last 5+ years that there are still significant barriers to getting into data science.
Rolando Acosta and I recently posted a manuscript on bioRxiv describing the effects of Hurricane María, based on an analysis of mortality data provided by the Demographic Registry. I was also an author on a paper published in May based on a survey of 3,000 households. These are very different datasets. Assuming it is complete, the Demographic Registry dataset provides much more precise quantitative information. However, this dataset was not made publicly available until June 1, 2018, three days after the paper based on the survey data was released.
There are often discussions within the data science community about which tools are best for doing data science. The most recent iteration of this discussion is the so-called “First Notebook War”, which is well-summarized by Yihui Xie in his blog post (it is a great read). One thing that I have found missing from many discussions about tooling in data analysis is an acknowledgment that data analysis tends to advance through different phases and that different tools can be more or less useful in each of those phases.
Hilary Parker and I just released part 2 of our book club discussion of Nigel Cross’s book Design Thinking, which centers on a profile of designer Gordon Murray, who spent his career designing Formula One race cars. One aspect of his job as a designer is taking a “systems approach” to solving problems. Coupled with that approach is his role in balancing the various priorities of the members of his team.
This week Hilary Parker and I started our “Book Club” on Not So Standard Deviations, where we will be discussing Nigel Cross’s book Design Thinking: Understanding How Designers Think and Work. We will be talking about how the work of designers parallels the work of data scientists and how many of the principles developed in design port over so well to data analysis. While data visualization has always taken cues from design, I think much broader aspects of data analysis could benefit from the study of design.
One conversation I’ve had a few times revolves around the question, “What’s the difference between science and data science?” If I were to come up with a simple distinction, I might say that science starts with a question; data science starts with the data. What makes data science so difficult is that it starts in the wrong place. As a result, a certain amount of extra work must be done to understand the context surrounding a dataset before we can do anything useful.
Recently, Apple’s stock price rose to the point where the company’s market valuation was above $1 trillion, the first U.S. company to reach that benchmark. Subsequently, numerous articles were published describing Apple’s journey to this point and why it got there. Most people describe Apple as a technology company. They make technology products: iPhones, iPads, Macs, etc. These are all computing devices. But there is another way to think of Apple and what kind of company they are as well as how they became so successful.
Jenny Bryan recently gave a wonderful talk at the useR! 2018 meeting in Brisbane about “Code Smells and Feels” (I recommend you watch a video of that talk). Her talk covers various ways to detect when your code “smells” and how to fix those smells through refactoring. While there is quite a bit of literature on this with respect to other programming languages, it’s not well-covered with respect to R.
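To make the idea concrete (this example is my own illustration, not taken from the talk, and the data frames `df1` and `df2` are hypothetical), here is one of the most common smells, copy-pasted logic, and a refactoring in R:

```r
# Hypothetical example data
df1 <- data.frame(value = c(1, NA, 3))
df2 <- data.frame(value = c(2, 4, NA))

# The "smell": identical cleaning steps copy-pasted for each dataset
clean1 <- df1[!is.na(df1$value), , drop = FALSE]
clean1$value <- log(clean1$value)
clean2 <- df2[!is.na(df2$value), , drop = FALSE]
clean2$value <- log(clean2$value)

# The refactoring: express the intent once, in a named function
clean_values <- function(df) {
  df <- df[!is.na(df$value), , drop = FALSE]
  df$value <- log(df$value)
  df
}
clean1 <- clean_values(df1)
clean2 <- clean_values(df2)
```

The refactored version behaves identically, but a future change to the cleaning logic now happens in one place instead of being hunted down in every copy.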
One of the fundamental questions that we can ask in any data analysis is, “Why do things vary?” Although I think this is fundamental, I’ve found that it’s not explicitly asked as often as I might expect. The problem with not asking this question is that it can often lead to a lot of pointless and time-consuming work. Taking a moment to ask yourself, “What do I know that can explain why this feature or variable varies?” can head off much of that wasted effort.
Abstract The intentional ambiguity of the R language, inherited from the S language, is one of its defining features. Is it an interactive system for data analysis or is it a sophisticated programming language for software developers? The ability of R to cater to users who do not see themselves as programmers, but then allow them to slide gradually into programming, is an enduring quality of the language and is what has allowed it to gain significance over time.