Bayes' rule appears to be a straightforward, one-line theorem: by updating our initial beliefs with objective new information, we get a new and improved belief. To its adherents, it is an elegant statement about learning from experience. To its opponents, it is subjectivity run amok.
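The one-line update described above is simple enough to sketch concretely. This is a minimal illustration with assumed numbers (a hypothetical diagnostic test), not an example taken from the book:

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E).
# The prior belief is combined with new evidence to give a posterior belief.
def bayes_update(prior, likelihood, false_positive_rate):
    """Posterior probability of hypothesis H after positive evidence E."""
    # P(E) totals the two ways a positive result can occur.
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# Illustrative numbers: a test that is 99% sensitive with a 5% false-positive
# rate, applied to a condition with a 1% prior, yields only a ~17% posterior.
posterior = bayes_update(prior=0.01, likelihood=0.99, false_positive_rate=0.05)
```

The counterintuitive result (a "99% accurate" test leaving the hypothesis still improbable) is exactly the kind of belief revision the rule formalizes.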
In the first-ever account of Bayes' rule for general readers and listeners, Sharon Bertsch McGrayne explores this controversial theorem and the human obsessions surrounding it. She traces its discovery by an amateur mathematician in the 1740s and its development into roughly its modern form by the French scientist Pierre-Simon Laplace. She reveals why respected statisticians rendered it professionally taboo for 150 years, even as practitioners relied on it to solve crises involving great uncertainty and scanty information, including breaking Germany's Enigma code during World War II, and explains how the advent of off-the-shelf computing in the 1980s proved to be a game-changer. Today, Bayes' rule is used everywhere from DNA decoding to Homeland Security.
Drawing on primary source material and interviews with statisticians and other scientists, The Theory That Would Not Die is the riveting account of how a seemingly simple theorem ignited one of the greatest controversies of all time.
©2011 Sharon Bertsch McGrayne (P)2012 Tantor
"If you are not thinking like a Bayesian, perhaps you should be." (New York Times Book Review)
Sharon McGrayne tackles Bayes' rule in her volume The Theory That Would Not Die. Along the way she shows how the 'rule' has repeatedly gone under only to reappear at different times, be used in different places, and gather influence under varied circumstances. I found the narrative engaging and the history she presents informative. I wish, however, that she had included an early chapter discussing what Bayes' rule is, how it works, and what it means to users. Bayes' rule is well within reach of anyone with basic math ability, and it seems the book would have had a wider audience had she made this allowance. So, if you are familiar with Bayes' theorem, pick up the book and turn some pages. If you are not familiar with the theorem, read up on it a little and then turn those pages. There are unexpected insights in every chapter. The narration by Laural Merlington is good.
The author did a good job of constructing a story about a particular statistical technique, but she overdoes it. Bayes' theorem is not the same as the story of Seabiscuit.
At first I thought the narrator was a computer-generated voice. Her cadence was odd, adding syllables at random, and many names were mispronounced.
I've taken two statistics classes in my life, and I remember being confused by Bayes in both classes. So I was hoping that this book would clarify matters for me. Sadly, it didn't. I fully realize that the fault might be my own -- maybe I just don't have a mind for statistics.
The book did have some interesting stories in it, such as the one about the massive search for a missing atomic bomb that fell into the ocean. However, I never did understand why Bayes' Rule was so controversial (if it works so well in practice, what's not to like about it?), and I'm just as confused as ever about the nuts & bolts of the theorem. I'm almost tempted to crack my old statistics textbooks. Almost.
Incidentally, the reader mispronounced a lot of names.
If you are looking to hear the math behind how Bayes' rule did the things it did, you'll come away disappointed. However, this is an entertaining history of how this theorem formed the basis for much of the success of applied mathematics and statistics over the last 100 years. Learning many of the details of the personal lives and personalities of some of the founding fathers of modern statistics (Fisher, Pearson, et al.) and their battles over the use of this rule was one highlight of this book.
Ms. Merlington's performance was solid... very listenable, especially given a relatively dry topic.
Yes, very definitely.
The book would totally baffle me if I didn't do statistics for a living, because McGrayne doesn't even give an example of how Bayes' rule works until about halfway through the book (using the cigarette study as an example). She merely tells us that frequentists don't like it but doesn't explain the underlying differences between the two approaches. And even with all that assumed knowledge, she doesn't talk about any of the underlying math.
Thus the book assumes too much knowledge for the uninitiated but doesn't give enough information for the initiated. Who is the intended audience? I can't even tell.
Would you like to hear about warring academic camps, or would you rather understand in detail how Bayes was applied to solve the problems mentioned in the title? If it's the latter, you'll be disappointed.
The history of Bayesian statistics is fascinating, and this book ably tells the story of its twists and turns. I can understand why the author wants to insulate the reader from the mathematics, but I would have preferred a little more technical detail, especially as it applies to numerical methods. You'll come away from this book understanding how useful Bayesian inference is, but you probably won't learn very much about how it works.
I had no trouble understanding the narrator, but this is the first audiobook I've listened to in which some proper names (especially French names) were horribly mispronounced.
This work should be tagged clearly as primarily a historical treatment of the concepts attributed to Bayes and their evolution. I was hoping for more exposition of the technical details involved in the many controversies the author documents, rather than highlights of the most outrageous position statements from each party. Her treatment of "classified" research, and of the role government secrecy played in impeding progress and allowing extreme doctrinaire positions to be held for long periods among the academics involved, is yet another case for free and unimpeded scientific discourse.
For me, the treatment of the ways in which application of Bayes' rule crossed so many disciplinary boundaries was enlightening. I had little idea how widespread so-called Bayesian approaches had become in science outside the fields I'd chosen to study, sociology and anthropology. So-called "simple random sample" or SRS designs were the orthodoxy of the day when I was a student. Some challenged this orthodoxy with so-called "purposive" sample designs, which proved to be much more efficient in a wide variety of cases.
Back when I was a graduate student and Senior Research Associate at the University of Michigan, I was asked to help facilitate "brute force" repeated replications of the process of sampling from some large datasets we had obtained from the auto industry. We used multiply replicated samples to produce empirical assessments of five theoretically proposed measures of efficiency (standard error of estimate) for a variety of sample designs used to perform multivariate regression analyses on the dataset. I implemented and optimized the Fortran code used to draw the samples and tabulate the resulting theoretical and actual measures of efficiency for each sample. The resulting PhD dissertation "sold" over a thousand copies before it had been available for six months! The tables we printed were apparently extremely useful to a variety of practitioners who knew that the underlying distributions of the phenomena they had under study were not "normal."
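The "brute force" replication idea described above can be sketched in a few lines: draw many samples from a dataset and measure the empirical spread of an estimator, rather than trusting a theoretical formula that assumes normality. All names and numbers here are illustrative assumptions, not details from the dissertation:

```python
import random
import statistics

def empirical_se(population, sample_size, n_replications, estimator, seed=0):
    """Standard deviation of an estimator across repeated random samples."""
    rng = random.Random(seed)
    estimates = [estimator(rng.sample(population, sample_size))
                 for _ in range(n_replications)]
    return statistics.stdev(estimates)

# Example: a skewed (non-normal) population; compare the empirical standard
# error of the sample mean with the classical sigma / sqrt(n) approximation.
population = [x ** 2 for x in range(1000)]
se = empirical_se(population, sample_size=50, n_replications=500,
                  estimator=statistics.mean)
theoretical = statistics.pstdev(population) / 50 ** 0.5
```

For a well-behaved estimator the two values roughly agree, but the empirical approach also works for designs and distributions where no clean formula exists, which is presumably why the tabulated results proved so useful to practitioners.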
Or one of Laplace's most potent contributions to science!
I found the narrative engaging and at times gripping. Repetitive examples sometimes litter the historical story line, but the story itself is illuminating.
I've never heard as much info on RenTech, which was fascinating, but wish more time had been spent on Mercer.
great listen, great read
"New developments in statistics breathlessly told"
It's a pretty good history of Bayesian statistic, giving a good overview of the reasons why people are excited about it. Perhaps overly enthusiastic, both exaggerating the differences to other types of statistical reasoning and never making it entirely clear what distinguishes Bayesian from frequentist approaches, nor indeed what statistical reasoning is about to begin with.
The narrator is not the worst I have heard, and generally did a reasonable job of making an understandably modulated aural text. But as is often the case with scientific topics, no thought was given to finding a reader who is actually familiar with the vocabulary or the people. Thus, "theorist" consistently became "theororist", John von Neumann became "Newman", and Jerzy Neyman became "Neiman". Among others. It was still eminently listenable, but irritating.