
Bernoulli's Fallacy
Statistical Illogic and the Crisis of Modern Science
Narrated by: Tim H. Dixon
By: Aubrey Clayton
There is a logical flaw in the statistical methods used across experimental science. This fault is not a minor academic quibble: It underlies a reproducibility crisis now threatening entire disciplines. In an increasingly statistics-reliant society, this same deeply rooted error shapes decisions in medicine, law, and public policy, with profound consequences. The foundation of the problem is a misunderstanding of probability and its role in making inferences from observations.
Aubrey Clayton traces the history of how statistics went astray, beginning with the groundbreaking work of the 17th-century mathematician Jacob Bernoulli and winding through gambling, astronomy, and genetics. Clayton recounts the feuds among rival schools of statistics, exploring the surprisingly human problems that gave rise to the discipline and the all-too-human shortcomings that derailed it. He highlights how influential 19th- and 20th-century figures developed a statistical methodology they claimed was purely objective in order to silence critics of their political agendas, including eugenics.
Clayton provides a clear account of the mathematics and logic of probability, conveying complex concepts accessibly for listeners interested in the statistical methods that frame our understanding of the world. He contends that we need to take a Bayesian approach - that is, to incorporate prior knowledge when reasoning with incomplete information - in order to resolve the crisis. Ranging across math, philosophy, and culture, Bernoulli’s Fallacy explains why something has gone wrong with how we use data - and how to fix it.
PLEASE NOTE: When you purchase this title, the accompanying PDF will be available in your Audible Library along with the audio.
©2021 Aubrey Clayton (P)2021 Audible, Inc.
My intellectual voyage through this domain was profoundly enriched by Clayton's work, which bestowed upon me the essential historical context of the Bayesian versus frequentist discourse, underscoring Jaynes' work as a pivotal intellectual achievement.
Entitled "Bernoulli’s Fallacy," the book adeptly traces the trajectory of statistical thought, journeying from Bernoulli's pioneering efforts to the unsettling application of statistics in the pursuit of eugenic agendas. It also confronts the contemporary "crisis of replication" afflicting various research fields, a crisis stemming from an excessive dependence on statistical significance and p-values in hypothesis evaluation.
In its initial chapters, the book articulates its core concepts, which, though not revolutionary, remain critical and frequently misunderstood in modern discussions. These concepts pivot around the idea of probability as a subjective belief informed by available knowledge, the imperative of articulating assumptions in probability statements, and the transformation of prior probabilities into posterior probabilities via observation. The book underscores that data alone cannot yield inferences; rather, it reshapes our existing narratives based on their plausibility.
A pivotal insight from the book is the acknowledgment that improbable events do indeed transpire. This realization challenges the practice of deducing the veracity or fallacy of hypotheses solely based on the likelihood of observations. Instead, it advocates for adjusting our subjective belief in the plausibility of a hypothesis in relation to other competing hypotheses.
Moreover, the book elucidates a critical distinction: Bayesian and frequentist methods are not merely two different perspectives but rather, the Bayesian approach forms the bedrock of probability understanding, with the frequentist method emerging as a historical aberration, a specific instance within the expansive Bayesian paradigm.
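The prior-to-posterior updating the review describes can be sketched in a few lines of Python. This is a minimal illustration with a hypothetical biased-coin scenario and invented numbers, not an example from the book:

```python
# Bayesian updating sketch: two competing hypotheses about a coin.
# H_fair: P(heads) = 0.5; H_biased: P(heads) = 0.8 (hypothetical values).

def posterior_fair(prior_fair, heads, tails):
    # Likelihood of the observed sequence under each hypothesis.
    like_fair = 0.5 ** heads * 0.5 ** tails
    like_biased = 0.8 ** heads * 0.2 ** tails
    # Bayes' rule: posterior is proportional to prior times likelihood.
    num = prior_fair * like_fair
    den = num + (1 - prior_fair) * like_biased
    return num / den

# Start with a strong prior that the coin is fair, then observe
# 8 heads in 10 flips. The data shifts belief toward "biased",
# but the strong prior keeps "fair" ahead -- for now.
p = posterior_fair(0.95, heads=8, tails=2)  # ≈ 0.73
```

Note that the data never yields a verdict on its own: the same 8-of-10 observation moves a skeptical prior only a little, while a neutral prior of 0.5 would already tip well toward the biased hypothesis.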
It was particularly enlightening to learn how a small cadre of British mathematics professors, namely Galton, Fisher, and Pearson, engineered an entire statistical school of thought. This school, founded on flawed and convenient principles, served to justify and rationalize their eugenic and racist viewpoints, reinforcing the Victorian-era racial supremacy of the British upper class through a veneer of mathematical rationalization. The book offers a fascinating glimpse into a quasi-scientific method employed by researchers who, standing on shaky ground, resorted to limited group sampling and mathematical subterfuge to lend false precision and authority to their biased models and probability findings.
A Statistical Method Built on Racist Justification
Eye-Opening
Unhinged and thought-provoking.
Rigorously Bayesian
Bernoulli’s Fallacy provides this context, starting with Bernoulli’s contributions to the field, working through the development and use (or rather, perversion) of statistics in service of the eugenics agenda, and ending with the present-day “crisis of replication” plaguing research across a variety of fields due to their reliance on statistical significance and p-values as measures for evaluating hypotheses.
As such, this book, in its initial chapters, presents its core set of ideas. These are not novel ideas, but they are nevertheless poorly understood by the community today, and this book does a great job explaining them in depth. I would summarize these ideas as follows:
- Probability represents a subjective belief in a hypothesis based on information / knowledge that you possess; it is not an objective fact. Any statement that the probability of an event IS some number is incomplete; you must always state your assumptions (knowledge that you possess). All probability is conditional on these assumptions. (Jaynes does a good job of making this explicit via notation.)
- You cannot draw inferences from data alone. What you CAN do is convert prior probabilities (existing degrees of belief) to posterior probabilities through the act of observation (incorporating new data). Data doesn’t ever tell you the whole story; it can only alter the story you already have in terms of its plausibility.
- Unlikely events happen. You cannot infer the truth or falsity of a hypothesis based on the likelihood of an observation. Rather, you can only use an observation to alter your subjective belief in the plausibility of a hypothesis, and that too, relative to OTHER hypotheses that support the same observation. Again, unlikely events do occur (e.g., someone always wins the lottery), and so it’s really the relative likelihood of different hypotheses that you adjust as you learn more (by making more observations). Of particular importance here is the idea that it is up to YOU (not the data) to exhaustively formulate the relevant hypotheses, and assign suitable priors. As Pierre-Simon Laplace supposedly put it (paraphrasing), “extraordinary claims merit extraordinary evidence”, and so new data should alter your belief one way or the other toward a hypothesis based on the RELATIVE priors associated with all potential hypotheses. The more you believe in a hypothesis relative to others, the harder it should be to displace.
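The third point, that it is the relative likelihood of competing hypotheses that matters rather than the raw improbability of an observation, can be illustrated with a toy lottery calculation. All numbers here are invented for illustration:

```python
# "This particular person won the lottery" is astronomically unlikely
# under H_honest (a fair draw), but improbability alone doesn't settle
# anything -- we must compare it against an alternative hypothesis.

prior_honest = 0.999    # we strongly expect the lottery is fair
prior_rigged = 0.001    # tiny prior that it was rigged for this winner

# Probability of this specific winner under each hypothesis
# (hypothetical numbers):
like_honest = 1e-7      # one fair ticket among ten million
like_rigged = 1e-3      # a rig favoring this insider, say

# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
posterior_odds_honest = (prior_honest / prior_rigged) * (like_honest / like_rigged)
p_honest = posterior_odds_honest / (1 + posterior_odds_honest)  # ≈ 0.09
```

Even though someone always wins a fair lottery, an observation that is ten thousand times more likely under the rigged hypothesis can overturn a 999-to-1 prior in its favor; the work is done entirely by the likelihood ratio, not by the smallness of any single probability.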
One idea this book clarifies is that Bayesian and frequentist are not two “equally valid” schools of thought, but that the Bayesian method underpins the whole idea of probability, whereas the frequentist approach is simply a special case (a sort of unhappy accident of history).
Overall, a well-argued, interesting, and balanced book, despite the seemingly extraordinary conclusion. The evidence is extraordinary and well-presented, though occasionally repetitive and dense.
Excellent Intro to the Meaning of Probability
In the preface, Clayton describes the Bayesian vs Frequentist schism as a "dispute about the nature and origins of probability: whether it comes from 'outside us' in the form of uncontrollable random noise in observations, or 'inside us' as our uncertainty given limited information on the state of the world." Like Clayton, I am a fan of E.T. Jaynes's "Probability Theory: The Logic of Science", which presents the argument (proof really) that probability is a number representing a proposition's plausibility based on background information -- a number which can be updated based on new observations. So, I am a member of the choir to which Clayton is preaching.
And he is preaching. This is one long argument against classical frequentist statistics. But Clayton never implies that frequentists dispute the validity of the formula universally known as "Bayes's Rule". (By the way, Bayes never wrote the actual formula.) Disputing the validity of Bayes's Rule would be like disputing the quadratic formula or the Pythagorean Theorem. Some of the objections to Bayes/Price/Laplace are focused on "equal priors", a term which Clayton never uses. Instead, he says "uniform priors", "principle of insufficient reason", or (from J.M.Keynes) "principle of indifference".
I appreciate that it is available in audio. The narrator is fine, but I find that I need the print version too.
As someone already interested in probability theory and statistics, I highly recommend this book. I can't say how less interested individuals would like it.
Explanation of Bayesian (Jaynesian) statistics
Noah = Mathematics
on his barge
an Elephant = Statistics
and
a Penguin = Computer Science
Noah is pointing to their offspring, a creature with the body of a penguin [C.S.] and, attached to it, an elephant head [Statistics].
Noah [Mathematics]: "What the hell is this?!..."
E.g. Lifting oneself by one's own hair is unlikely to come down to horsepower.
[.... as Artur Avila pointed out (2014), for which he won the Fields Medal - hands down, to everyone's maximum satisfaction - putting in The Last Word on entire fields of Mathematics!]
A well-marked path that cuts to the chase
Topic is very important and interesting
A strong case for Bayes
No punches pulled!