Automating Inequality

How High-Tech Tools Profile, Police, and Punish the Poor
Narrated by: Teri Schnaubelt
Length: 7 hrs and 47 mins
4.4 out of 5 stars (77 ratings)

Publisher's Summary

The State of Indiana denies one million applications for healthcare, food stamps, and cash benefits in three years because a new computer system interprets any mistake as "failure to cooperate." In Los Angeles, an algorithm calculates the comparative vulnerability of tens of thousands of homeless people in order to prioritize them for an inadequate pool of housing resources. In Pittsburgh, a child welfare agency uses a statistical model to try to predict which children might be future victims of abuse or neglect.

Since the dawn of the digital age, decision-making in finance, employment, politics, health, and human services has undergone revolutionary change. Today, automated systems - rather than humans - control which neighborhoods get policed, which families attain needed resources, and who is investigated for fraud. While we all live under this new regime of data, the most invasive and punitive systems are aimed at the poor. 

In Automating Inequality, Virginia Eubanks systematically investigates the impacts of data mining, policy algorithms, and predictive risk models on poor and working-class people in America. The audiobook is full of heart-wrenching and eye-opening stories, from a woman in Indiana whose benefits are literally cut off as she lies dying, to a family in Pennsylvania in daily fear of losing their daughter because they fit a certain statistical profile. 

The US has always used its most cutting-edge science and technology to contain, investigate, discipline, and punish the destitute. Like the county poorhouse and scientific charity before them, digital tracking and automated decision-making hide poverty from the middle-class public and give the nation the ethical distance it needs to make inhumane choices: which families get food and which starve, who has housing and who remains homeless, and which families are broken up by the state. In the process, they weaken democracy and betray our most cherished national values. This deeply researched and passionate audiobook could not be more timely.

©2017 Virginia Eubanks (P)2018 Tantor

What listeners say about Automating Inequality

Average Customer Ratings
Overall: 4.5 out of 5 stars
  • 5 stars: 49
  • 4 stars: 14
  • 3 stars: 10
  • 2 stars: 2
  • 1 star: 2
Performance: 4 out of 5 stars
  • 5 stars: 37
  • 4 stars: 13
  • 3 stars: 10
  • 2 stars: 3
  • 1 star: 3
Story: 4.5 out of 5 stars
  • 5 stars: 41
  • 4 stars: 15
  • 3 stars: 6
  • 2 stars: 5
  • 1 star: 1

Reviews
  • Overall: 4 out of 5 stars
  • Performance: 2 out of 5 stars
  • Story: 5 out of 5 stars

Great book, distracting narration

I read this book for a book club. Great data, lots of interesting talking points, and legitimate research. The narration, however, was weird to the point of distraction: the narrator puts on a forced voice when quoting anyone, and it sounds wacky and sarcastic.

  • Overall: 3 out of 5 stars

Excellent research, sprinkled with personal bias

First off, the author did a great job outlining three case studies of how bad data, algorithmic bias, and a lack of ethical standards affect social services. It was very well written and easily consumable. The portions I could have done without were the injections of a very personal bias, which at times overshadowed her research. For example, the first 30 minutes is a personal story about how she believes an algorithm at her health insurer unfairly targeted her, though she acknowledged she could not confirm that her issue stemmed from an algorithm. She made her personal beliefs clear from the beginning, which I believe muddied what was otherwise good research.