muckrAIkers Podcast by Jacob Haimes and Igor Krawczuk

muckrAIkers


By: Jacob Haimes and Igor Krawczuk
Join us as we dig a tiny bit deeper into the hype surrounding "AI" press releases, research papers, and more. Each episode, we'll highlight ongoing research and investigations, providing some much-needed contextualization, constructive critique, and even a smidge of occasional good-natured teasing, trying to find the meaning under all of this muck.

© Kairos.fm | Science, Mathematics
Episodes
  • AI Skeptic PWNED by Facts and Logic
    Jan 12 2026

    Igor shares a significant shift in his perspective on AI coding tools after trying the latest Claude Code release. While he has been the stronger AI skeptic of the two of us, recent releases have shown genuine utility for specific coding tasks, though this neither validates the hype nor changes our fundamental critiques.

    We discuss what "rote tasks" are and why they're now automatable given enough investment, the difference between genuine utility and AGI claims, and how this update impacts our bubble analysis. Massive investment has finally produced something useful in a narrow domain, but that doesn't mean the technology is generalizable or that AGI is real.

    Chapters

    • (00:00) - | Introduction
    • (05:07) - | What Changed Igor’s Mind
    • (18:27) - | Rote Tasks Explained
    • (23:31) - | How Does This Impact our Bubble Analysis?
    • (30:48) - | AGI Is Still BS
    • (34:07) - | Externalities Remain Unchanged
    • (37:49) - | Final Thoughts & Outro

    Links
    • Related muckrAIkers episode - Tech Bros Love AI Waifus

    Bubble Talk

    • OfficeChai article - OpenAI Hasn’t Completed A Successful Full-Scale Pretraining Run Since GPT-4o In May 2024, Says SemiAnalysis
    • Vechron report - Anthropic Prepares for Potential 2026 IPO in Bid to Rival OpenAI
    • YCombinator Forum post on AI crash
    • YCombinator Forum post on OpenAI adopting Anthropic's "skills"
    • YCombinator Forum post on OpenAI rumors
    • YCombinator Forum post on OpenAI ad suggestions

    Other Sources

    • LinkedIn post discussing an agentic coding vibe shift
    • Executive Order - Ensuring a National Policy Framework for Artificial Intelligence
    • Inside Tech Law blogpost - Germany delivers landmark copyright ruling against OpenAI: What it means for AI and IP
    • NeurIPS 2025 paper - Ascent Fails to Forget
    • NBER working paper - Large Language Models, Small Labor Market Effects
    • Dwarkesh Podcast blogpost - RL is even more information inefficient than you thought
    39 m
  • Tech Bros Love AI Waifus
    Dec 15 2025
    OpenAI is pivoting to porn while public sentiment turns decisively against AI. Pew Research shows that Americans who are concerned now outnumber those who are excited by a 2:1 margin. We trace how we got here: broken promises of cancer cures replaced by addiction mechanics and expensive APIs. Meanwhile, data centers are hiding a near-recession, straining power grids, and literally breaking your household appliances. Drawing parallels to the 1970s AI winter, we argue the bubble is shaking and needs to pop now, before it becomes another 2008. The good news? Grassroots resistance works. Protests have already blocked $64 billion in data center projects.

    NOTE: The project that we cite for the $64 billion blockage is actually a pro-data-center campaign. The numbers still seem ok, but it's worth being aware of.

    Chapters

    • (00:00) - Introduction
    • (06:45) - The Addiction Business Model
    • (10:15) - Public Sentiment Data
    • (22:45) - Data Centers and Infrastructure Problems
    • (36:30) - The Bubble Discussion
    • (44:36) - Closing Thoughts & Outro

    Links

    Public Sentiment on AI

    • Pew Research report - How People Around the World View AI
    • Pew Research report - How the U.S. Public and AI Experts View Artificial Intelligence
    • Pew Research report - How Americans View AI and Its Impact on People and Society
    • University of Toronto report - Trust, attitudes and use of artificial intelligence: A global study 2025
    • Melbourne Business School report - Key findings on public attitudes towards AI
    • The Washington Post article - Americans have become more pessimistic about AI. Why?
    • The New York Times article - From Mexico to Ireland, Fury Mounts Over a Global A.I. Frenzy
    • The Guardian article - ‘It shows such a laziness’: why I refuse to date someone who uses ChatGPT
    • The Register article - OpenAI's ChatGPT is so popular that almost no one will pay for it

    AI and Claims of Curing Cancer

    • Rachel Thomas, PhD blogpost - “AI will cure cancer” misunderstands both AI and medicine
    • The Atlantic article - OpenAI Wants to Cure Cancer. So Why Did It Make a Web Browser?
    • Independent article - ChatGPT boss predicts when AI could cure cancer
    • The Atlantic article - AI Executives Promise Cancer Cures. Here’s the Reality

    AI Porn and the Addiction Economy

    • Forbes article - ChatGPT Will Allow ‘Erotica’ After Easing Mental Health Restrictions, Sam Altman Says
    • The Addiction Economy website
    • PPC article - OpenAI is staffing up to turn ChatGPT into an ad platform
    • Tom Nicholas video - Vape-o-nomics: Why Everything is Addictive Now

    AI Bubble

    • Fast Company article - AI isn’t replacing jobs. AI spending is
    • Pivot to AI article - The finance press finally starts talking about the ‘AI bubble’
    • Fortune article - Without data centers, GDP growth was 0.1% in the first half of 2025, Harvard economist says
    • The Atlantic article - Just How Bad Would an AI Bubble Be?
    • The New York Times article - Debt Has Entered the A.I. Boom
    • Will Lockett's Newsletter article - AI Pullback Has Officially Started
    • Reuters article - Michael Burry of 'Big Short' fame is closing his hedge fund
    • Business Insider article - The guy who shorted Enron has a warning about the AI boom

    Datacenters

    • Bloomberg article - AI Needs So Much Power, It’s Making Yours Worse
    • Data Center Watch report - $64 billion of data center projects have been blocked or delayed amid local opposition
    • More Perfect Union video - We Found the Hidden Cost of Data Centers. It's in Your Electric Bill
    • DataCenter Knowledge article - Why Communities Are Protesting Data Centers – And How the Industry Can Respond

    Fighting Back

    • Knight First Amendment Institute essay - AI as Normal Technology
    • Pranksters vs. Autocrats chapter - Laughtivism: The Secret Ingredient
    • SPSP article - Playing with Power: Humor as Everyday Resistance
    • Blood in the Machine article - The Luddite Renaissance is in full swing
    46 m
  • AI Safety for Who?
    Oct 13 2025
    Jacob and Igor argue that AI safety is hurting users, not helping them. The techniques used to make chatbots "safe" and "aligned," such as instruction tuning and RLHF, anthropomorphize AI systems such that they take advantage of our instincts as social beings. At the same time, Big Tech companies push these systems for "wellness" while dodging healthcare liability, causing real harms today. We discuss what actual safety would look like, drawing on self-driving car regulations.

    Chapters

    • (00:00) - Introduction & AI Investment Insanity
    • (01:43) - The Problem with AI Safety
    • (08:16) - Anthropomorphizing AI & Its Dangers
    • (26:55) - Mental Health, Wellness, and AI
    • (39:15) - Censorship, Bias, and Dual Use
    • (44:42) - Solutions, Community Action & Final Thoughts

    Links

    AI Ethics & Philosophy

    • Foreign Affairs article - The Cost of the AGI Delusion
    • Nature article - Principles alone cannot guarantee ethical AI
    • Xeiaso blog post - Who Do Assistants Serve?
    • Argmin article - The Banal Evil of AI Safety
    • AI Panic News article - The Rationality Trap

    AI Model Bias, Failures, and Impacts

    • BBC news article - AI Image Generation Issues
    • The New York Times article - Google Gemini German Uniforms Controversy
    • The Verge article - Google Gemini's Embarrassing AI Pictures
    • NPR article - Grok, Elon Musk, and Antisemitic/Racist Content
    • AccelerAId blog post - How AI Nudges are Transforming Up- and Cross-Selling
    • AI Took My Job website

    AI Mental Health & Safety Concerns

    • Euronews article - AI Chatbot Tragedy
    • Popular Mechanics article - OpenAI and Psychosis
    • Psychology Today article - The Emerging Problem of AI Psychosis
    • Rolling Stone article - AI Spiritual Delusions Destroying Human Relationships
    • The New York Times article - AI Chatbots and Delusions

    Guidelines, Governance, and Censorship

    • Preprint - R1dacted: Investigating Local Censorship in DeepSeek's R1 Language Model
    • Minds & Machines article - The Ethics of AI Ethics: An Evaluation of Guidelines
    • SSRN paper - Instrument Choice in AI Governance
    • Anthropic announcement - Claude Gov Models for U.S. National Security Customers
    • Anthropic documentation - Claude's Constitution
    • Reuters investigation - Meta AI Chatbot Guidelines
    • Swiss Federal Council consultation - Swiss AI Consultation Procedures
    • Grok Prompts Github Repo
    • Simon Willison blog post - Grok 4 Heavy
    50 m