A really nice example of epidemiological studies later being confirmed by a randomized trial. From a statistician’s point of view, this is the idealized way science would work. First, relatively cheap data (observational/retrospective studies) are used to identify potential associations of interest. After a number of these studies show a similar effect, a randomized study is performed to confirm what we suspected from the cheaper studies.
Significance magazine has a writing contest. If you are a grad student in statistics/biostatistics, this is an awesome way to (a) practice explaining your discipline to people who are not experts (a hugely important skill) and (b) get your name out there, which will help when it comes time to look for jobs, apply for awards, etc.
A great post from David Spiegelhalter about the UK court’s interpretation of probability. It reminds me of the Supreme Court’s recent decision that also hinged on a statistical interpretation. This post raises two issues I think are worth a more in-depth discussion. The first is that it is pretty clear many court decisions are going to hinge on statistical arguments. This suggests (among other things) that statistical training should be mandatory in legal education. The second is a minor disagreement I have with Spiegelhalter’s characterization that only Bayesians use epistemic uncertainty. I frequently discuss this type of uncertainty in my classes, even though I take a primarily frequentist/classical approach to teaching them.