One conversation I’ve had a few times revolves around the question, “What’s the difference between science and data science?” If I were to come up with a simple distinction, I might say that science starts with a question; data science starts with the data. What makes data science so difficult is that it starts in the wrong place. As a result, a certain amount of extra work must be done to understand the context surrounding a dataset before we can do anything useful.
Recently, Apple’s stock price rose to the point where the company’s market valuation was above $1 trillion, the first U.S. company to reach that benchmark. Subsequently, numerous articles were published describing Apple’s journey to this point and why it got there. Most people describe Apple as a technology company. They make technology products: iPhones, iPads, Macs, etc. These are all computing devices. But there is another way to think about what kind of company Apple is and how it became so successful.
Jenny Bryan recently gave a wonderful talk at the useR! 2018 meeting in Brisbane about “Code Smells and Feels” (I recommend you watch a video of that talk). Her talk covers various ways to detect when your code “smells” and how to fix those smells through refactoring. While there is quite a bit of literature on this with respect to other programming languages, it’s not well-covered with respect to R.
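As an illustrative sketch (my own example, not one taken from the talk), here is a common R smell, a deeply nested if-else chain, followed by one way to refactor it with early returns:

```r
## Smell: deeply nested if-else to bin a value
bmi_category <- function(bmi) {
  if (bmi < 18.5) {
    "underweight"
  } else {
    if (bmi < 25) {
      "normal"
    } else {
      if (bmi < 30) {
        "overweight"
      } else {
        "obese"
      }
    }
  }
}

## Refactor: flatten the nesting so each condition reads on its own line
bmi_category <- function(bmi) {
  if (bmi < 18.5) return("underweight")
  if (bmi < 25)   return("normal")
  if (bmi < 30)   return("overweight")
  "obese"
}
```

Both versions behave identically; the refactored one is easier to scan and to extend with new categories.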
One of the fundamental questions that we can ask in any data analysis is, “Why do things vary?” Although I think this is fundamental, I’ve found that it’s not explicitly asked as often as I might think. The problem with not asking this question is that it can often lead to a lot of pointless and time-consuming work. Taking a moment to ask yourself, “What do I know that can explain why this feature or variable varies?” can save you from much of that wasted effort.
Abstract: The intentional ambiguity of the R language, inherited from the S language, is one of its defining features. Is it an interactive system for data analysis or is it a sophisticated programming language for software developers? The ability of R to cater to users who do not see themselves as programmers, but then allow them to slide gradually into programming, is an enduring quality of the language and is what has allowed it to gain significance over time.
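That gradual slide into programming might look like this hypothetical session: a few one-off calculations typed at the console (using the built-in `airquality` dataset) are later wrapped into a reusable function:

```r
## Typed interactively at the console, one step at a time
x <- airquality$Ozone
mean(x, na.rm = TRUE)
sd(x, na.rm = TRUE)

## ...later, the same steps become a function the user can reuse --
## at which point they are, perhaps without noticing, programming
summarize_var <- function(x) {
  c(mean = mean(x, na.rm = TRUE), sd = sd(x, na.rm = TRUE))
}
summarize_var(airquality$Ozone)
```

Nothing about the language forces the transition; the same expressions work in both modes.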
I was listening to the podcast The West Wing Weekly recently and Episode 4.17 (“Red Haven’s on Fire”) featured former staff writer Lauren Schmidt Hissrich. In introducing her, the podcast co-hosts mentioned that Hissrich was a writer for the Netflix series Daredevil, based on the Marvel Comics character. She is also the showrunner for a new Netflix series called The Witcher, which is based on a book by Andrzej Sapkowski.
Matthew Panzarino had an interesting article in TechCrunch on Apple’s process for rebuilding their Maps app. While most of the article describes the laborious process of data collection, one part jumped out at me: the team that Panzarino describes as the “Department of Details.” They are responsible for a number of odds and ends regarding how maps are presented, and they are particularly concerned with how maps appear to people in different parts of the world.
I’ve often heard that there is a need for data analysts to be creative in their work. But why? Where and how exactly is that creativity exercised? At one extreme, one could argue that a data analyst should be easily replaced by a machine. For various types of data and for various types of questions, there should be a deterministic approach to analysis that does not change. Presumably, this could be coded up into a computer program and the data could be fed into the program every time, with a result presented at the end.
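A caricature of that fully deterministic view, sketched in R (the function and its arguments are purely illustrative): every dataset of a given type is pushed through the same fixed procedure, with no analyst judgment in the loop.

```r
## A fixed, push-button "analysis" that never changes:
## fit the same linear model and report the same coefficient table,
## whatever dataset is fed in
run_fixed_analysis <- function(data, outcome, predictor) {
  fit <- lm(reformulate(predictor, outcome), data = data)
  summary(fit)$coefficients
}

## Feed in any dataset with a numeric outcome and predictor, e.g.:
run_fixed_analysis(mtcars, "mpg", "wt")
```

The question the post raises is precisely what this caricature leaves out: all the judgment about whether this model, these variables, and this summary are the right ones for the problem at hand.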
When learning about data analysis in school, you don’t hear much about the role that resources—time, money, and technology—play in the development of an analysis. This is a conversation that is often had “in the hallway” when talking to senior faculty or mentors. But the available resources do play a significant role in determining what can be done with a given question and dataset. It’s tempting to think that the situation is binary—either you have sufficient resources to do the “right” analysis, or you simply don’t do the analysis.
In my post about relationships in data analysis, I got a little pushback on whether human relationships would ever cease to be important in data analysis, and whether that has anything to do with the “maturity” of the field. I believe human beings will always play a role in data analysis, but it’s possible that over time they will play different roles. In this post, I want to discuss what I meant by “institutions” and “institutional knowledge” in the context of data analysis, and when the specific person who does the analysis is critical to how the analysis is done.