
12 Aug

Sunday data/statistics link roundup (8/12/12)

  1. An interesting blog post about the top N reasons to do a Ph.D. in bioinformatics or computational biology. A couple of things that I find interesting, and that could just as easily be said of any biostatistics program, are that computing is the key skill of the 21st century and that computational skills are highly transferable. Via Andrew J. 
  2. Here is an interesting auto-complete map of the United States, where the prompt was, “Why is [state] so”. It seems like the Google auto-complete function can lead to all sorts of humorous data; xkcd has used it as a data source a couple of times in the past. By the way, the person(s) who think Idaho is boring haven’t been to the right parts of Idaho. (via Rafa). 
  3. One of my all-time favorite statistics quotes appears in this column by David Brooks: “…what God hath woven together, even multiple regression analysis cannot tear asunder.” It seems like the perfect quote for any study that attempts to build a predictive model for a complicated phenomenon where only limited knowledge of the underlying mechanisms is available. 
  4. I’ve been reading up a lot on how to summarize and communicate risk. At the moment I’m following David Spiegelhalter’s work, and I really liked this 30,000-foot-view summary.
  5. It is interesting how often you see R popping up in random places these days. Here is a blog post that appeared on Business Insider about predicting the stock market, with some clearly R-created plots. 
  6. Roger and I had a post on MOOCs this week from the perspective of the faculty teaching the courses. For a more departmental/administrative-level view, be sure to re-read Rafa’s post on the future of graduate education.
06 Feb

An R script for estimating future inflation via the Treasury market

One factor that is critical for any financial planning is estimating what future inflation will be. For example, if you’re saving money in an instrument that gains 3% per year, and inflation is estimated to be 4% per year, well then you’re losing money in real terms.
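As a quick sanity check of that arithmetic, here is the standard real-return calculation in R, using the illustrative 3% and 4% figures from above:

```r
# Illustrative figures from the example above
nominal_return <- 0.03  # annual return on the savings instrument
inflation_est  <- 0.04  # estimated annual inflation

# Exact real return: grow by the nominal rate, deflate by inflation
real_return <- (1 + nominal_return) / (1 + inflation_est) - 1
round(100 * real_return, 2)  # about -0.96% per year: losing money in real terms
```

The common shortcut of just subtracting inflation from the nominal return (3% − 4% = −1%) is a close approximation when rates are small.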

There are a variety of ways to estimate the rate of future inflation. You could, for example, use past rates as an estimate of future rates. However, the Treasury market provides an estimate of what the market thinks annual inflation will be over the next 5, 10, 20, and 30 years.

Basically, the Treasury issues two types of securities: nominal securities that pay a nominal interest rate (a fixed percentage of your principal), and inflation-indexed securities (TIPS) that pay an interest rate applied to a principal that is adjusted by the consumer price index (CPI). As the CPI goes up and down, the payments for inflation-indexed securities go up and down (although the adjusted principal can’t fall below its original value at maturity, so you always get your principal back). As these securities trade throughout the day, their respective market-based interest rates go up and down continuously. The difference between the nominal interest rate and the real interest rate over a fixed period of time (5, 10, 20, or 30 years), often called the breakeven inflation rate, can be used as a rough estimate of annual inflation over that time period.
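As a toy illustration of that breakeven calculation (the yields here are made-up numbers, not actual Treasury quotes):

```r
# Hypothetical 10-year yields, in percent (made up for illustration)
nominal_yield <- 3.5  # 10-year nominal Treasury yield
real_yield    <- 1.4  # 10-year TIPS (real) yield

# Rough market estimate of average annual inflation over the next 10 years
breakeven <- nominal_yield - real_yield  # 2.1 percentage points
breakeven
```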

The Treasury publishes yield data for its auctions every day, for both nominal and inflation-indexed securities. There is an XML feed for nominal yields and one for real yields. Using the XML R package, I wrote an R script to scrape the data and calculate the inflation estimate.
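The script itself isn’t reproduced in this post, but a minimal sketch of the approach with the XML package might look like the following. The feed URLs, XPath expressions, and element names below are placeholders invented for illustration; the real Treasury feeds have their own addresses and structure.

```r
library(XML)

# NOTE: these URLs and node paths are placeholders for illustration;
# substitute the actual Treasury XML feed addresses and element names.
nominal_url <- "https://example.gov/feed/daily_nominal_yields.xml"  # hypothetical
real_url    <- "https://example.gov/feed/daily_real_yields.xml"     # hypothetical

# Pull the most recent yield (in percent) for a given maturity from a feed
get_yield <- function(url, maturity = c("5", "10", "30")) {
  maturity <- match.arg(maturity)
  doc <- xmlParse(url)
  # Hypothetical node path: one <yield> element per maturity in the latest entry
  node <- getNodeSet(doc, sprintf("//entry[last()]/yield[@maturity='%s']", maturity))
  as.numeric(xmlValue(node[[1]]))
}

# Breakeven inflation estimate: nominal yield minus real (TIPS) yield
inflation <- function() {
  for (m in c("5", "10", "30")) {
    est <- get_yield(nominal_url, m) - get_yield(real_url, m)
    cat(sprintf("%s-year Inflation: %.2f%%\n", m, est))
  }
}
```

Called with no arguments, a function along these lines prints one line per maturity, which is the format of the estimates below.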

As of today, the market’s estimate of annual inflation is:

5-year Inflation: 1.88%
10-year Inflation: 2.18%
30-year Inflation: 2.38%

Basically, you just call the ‘inflation()’ function with no arguments and it produces the printout above.

30 Nov

Selling the Power of Statistics

A few weeks ago we learned that Warren Buffett is a big IBM fan (a $10 billion fan, that is). Having heard that, I went over to the IBM web site to see what they’re doing these days. For starters, they’re not selling computers anymore! At least not the kind that I would use. One of the big things they do now is “Business Analytics and Optimization” (i.e. statistics), which is one of the reasons they bought SPSS and, later, Algorithmics.

Roaming around the IBM web site, I found this little video on how IBM is involved with tennis matches like the US Open. It’s the usual promo video: a bit cheesy, but pretty interesting too. For example, they provide all the players an automatically generated post-game “match analysis DVD” that has summaries of all the data from their match with corresponding video.

It occurred to me that one of the challenges that a company like IBM faces is selling the “power of analytics” to other companies. They need to make these promo videos because, I guess, some companies are not convinced they need this whole analytics thing (or at least not from IBM). They probably need to do methods and software development too, but getting the deal in the first place is at least as important.

In contrast, here at Johns Hopkins, my experience has been that we don’t really need to sell the “power of statistics” to anyone. For the most part, researchers around here seem to be already “sold”. They understand that they are collecting a ton of data and they’re going to need statisticians to help them understand it. Maybe Hopkins is the exception, but I doubt it.

Good for us, I suppose, for now. But there is a danger that we take this kind of monopoly position for granted. Companies like IBM hire the same people we do (including one grad school classmate) and there’s no reason why they couldn’t become direct competitors. We need to continuously show that we can make sense of data in novel ways.