Jeff and I talk about the recent Reinhart-Rogoff reproducibility kerfuffle and how it turns out that data analysis is really hard no matter how big the dataset.
Good analysis by Jeffrey & Robert on data analysis and how the Reinhart-Rogoff paper was used by other economists and politicians to push the austerity agenda. It's clear the supporters of austerity surfaced only after they had the statistics to back it up; had they proposed austerity in the absence of "proof," they would have been dismissed as lunatics by empiricists.
As Jeffrey mentions, speaking up about this kind of manipulation of statistics is important in the big picture, because it can go undetected. Similarly, data reporting and transparency are increasingly important for furthering the discussion and improving methods. I'm glad other economists and statisticians tried to replicate the findings using the same dataset - it just seems like there was a hesitation to nullify the Reinhart-Rogoff findings for some reason.
I think the takeaway message is: "be aware of the potential implications of reporting results, no matter what the method of analysis is."
I'm a very recent follower of your blog, and just ran across the podcasts, which I very much enjoyed. If I can weigh in with an opinion, in addition to a discussion of assumptions and choices made during the data analysis, it would be nice for the authors to show different scenarios as Jeff suggests - perhaps not a full sensitivity analysis, but at least showing some of the major scenarios that can result from different decisions. The final choice can, of course, be discussed and justified by the authors as they deem appropriate.
I agree with Roger that, given the nature of the analysis and the political nature of the research, showing alternate analysis choices and uncertainty is unlikely to influence the way in which the results would be used by those in favor of austerity. However, for other domains where an analysis will actually be used as the basis for a decision (and not a post-hoc justification), it is important to be transparent about decision choices and uncertainty, and find ways to deal with these issues as risks during the decision-making process.
For this reason I think it's important for the original authors to really err on the side of transparency with regard to these issues, even if it makes the results less straightforward and clear. Making the raw data available for others to re-analyze is great, but not all of the individuals who are interested in, or stakeholders to, downstream decisions will have the time or technical knowledge to reproduce the analysis.
Even if people formed their political opinions before the paper was published, such a paper gives a lot of legitimacy to whoever supports the paper's thesis, so I don't think that the influence of the paper was minor at all.