AI Goes to College

By: Craig Van Slyke

Generative artificial intelligence (GAI) has taken higher education by storm, and higher ed professionals need ways to understand and keep up with developments in GAI. AI Goes to College helps higher ed professionals learn about the latest developments in GAI, how these might affect higher ed, and what they can do in response. Each episode offers insights on how to leverage GAI and on the promise and perils of recent advances. The hosts, Dr. Craig Van Slyke and Dr. Robert E. Crossler, are experts in the adoption and use of GAI and in understanding its impacts on various fields, including higher ed.
Episodes
  • Accessibility Hacks, 81,000 Interviews, and the Choppy Waters of Academic AI
    Mar 31 2026
    AI Goes to College, Episode 33: Accessibility Hacks, 81,000 Interviews, and the Choppy Waters of Academic AI

    Higher education is drowning in accessibility deadlines, grappling with what 81,000 AI interviews reveal about how people actually use these tools, and watching the academic publishing system creak under new pressures. In this episode, Craig and Rob dig into all three, with practical advice, a few uncomfortable truths, and their usual mix of optimism and healthy skepticism.

    The Accessibility Crunch Is Here (and AI Can Help)

    The episode opens with a problem that's top of mind for faculty everywhere: the April 24 federal deadline requiring public-facing digital content to meet WCAG accessibility guidelines. Universities have been scrambling, and many of the contracted tools designed to help have been, as Craig diplomatically puts it, hit and miss.

    Craig shares a concrete example from his own workflow. He took three image-heavy slide decks from his Principles of Information Systems course and handed them to Claude Cowork with a simple instruction: add alt text for all the images. Within about 30 minutes, the job was done. The accuracy? Roughly 75 to 80 percent. A handful of images needed corrections, but instead of writing alt text for 40 or 50 images from scratch, he only had to fix six or eight. Rob tried something similar with Microsoft Copilot on a keynote presentation he gave at the SAIS conference in Asheville; two images, 30 seconds, done.

    Rob makes the important point that accessibility isn't just a PowerPoint problem. It extends to whiteboard files, videos, and essentially everything faculty communicate digitally. The burden is real, and it lands on faculty who are already overwhelmed by the changes AI is bringing to their professional lives. Craig adds a note of personal sensitivity here; his wife has a profound hearing disability, which makes these issues more than abstract compliance for him.

    The larger takeaway? When you hit one of these friction points in your work, try AI. It won't always solve the problem, but it often will, and the time savings can be substantial.

    What 81,000 Interviews Tell Us About How People Actually Use AI

    Link: https://www.anthropic.com/features/81k-interviews
    Craig's article: https://open.substack.com/pub/aigoestocollege/p/what-81000-people-told-anthropic

    The conversation shifts to Anthropic's large-scale qualitative study, where Claude was used to conduct and analyze 81,000 interviews about how people use AI tools. Rob, who has spent considerable time doing qualitative research the traditional way (36 interview transcripts with families, a labor-intensive process), finds the scale almost hard to believe. Craig wrote a separate article about this study for the AI Goes to College newsletter.

    The phrase that catches both hosts' attention is one from the report: "the light and the shade are tangled together." It captures the tension between excitement about AI's possibilities and anxiety about what those possibilities mean for how people work, learn, and think. Craig connects this to a concept from technology studies: this is not technological determinism. The outcomes aren't dictated by the tools themselves. They emerge from the sociotechnical space where human choices and technological capabilities intersect.

    Rob observes that most current AI use cases still amount to doing what we've always done, just faster. The real transformation will come when people start imagining entirely new approaches (he draws an analogy to cloud computing, which started as a backup solution and eventually reshaped how people interact with technology in ways nobody initially anticipated).

    One quote from the Anthropic study lands hard. A freelance software engineer in Pakistan says: "I want to learn skills, but learning deeply is of no use. Ultimately I can just use AI." Craig points out that if a working professional thinks this way, the implications for students who may not yet appreciate the long-term value of deep learning are sobering. Rob agrees but pushes back slightly: people who lean too far into this mindset will eventually hit a wall where they lack the critical thinking skills to know when or why AI has gotten something wrong.

    The hosts converge on what's becoming a running theme for the podcast: higher education's central task is helping students understand the long-term value of cognitive engagement, because without that understanding, the default will always be to let AI handle it.

    Academics Need to Wake Up: 10 Theses on a Shifting Landscape

    Link: https://substack.com/home/post/p-189705626

    The second major discussion centers on Alexander Kustoff's Substack article, "Academics Need to Wake Up on AI: 10 Theses for Folks Who Haven't Noticed the Ground Shifting Under Their Feet." Rob sees it as a useful prompt for conversations the research community needs to have. Craig appreciates the ambition but pushes back on some of the claims. Take thesis number one: AI can already do social science ...
    47 m
  • We're On Our Own: Academic Integrity through AI Resilience
    Mar 3 2026
    Craig and Rob kick off this episode with a deep dive into Claude's Constitution, the 84-page document Anthropic released to explain how Claude is governed. The document lays out a four-part hierarchy of priorities: be broadly safe, be broadly ethical, follow Anthropic's guidelines, and be genuinely helpful, in that order. Craig walks through the key language, and both hosts zero in on the uncomfortable questions it raises. Who gets to define "broadly ethical"? Whose values count? Craig points out that collectivist and individualist cultures would answer those questions very differently, and Rob raises the example of how privacy has historically carried different social weight in China versus the United States.

    They give Anthropic credit for the transparency. Rob notes that he has no idea what governs ChatGPT by comparison, and Craig argues the openness could become a real differentiator for universities evaluating which AI tools to bring in-house. But the Constitution also includes some curious language. The phrase "during the current phase of development" gives Anthropic significant room to evolve these guardrails over time, and a section on emotional support states that Claude should "show that it cares," which both hosts flag as a strikingly anthropomorphic choice of words.

    Craig shares a fun aside: he used Claude Code to build a clone of the classic Colossal Cave Adventure game, reframed around understanding large language models, using just a few sentences as a prompt. The game was up and running in about an hour. That kind of capability would have been unthinkable a couple of years ago, and it underscores why the Constitution's language about the "current phase" matters so much.

    The big takeaway from the Constitution discussion lands hard: higher ed is on its own when it comes to academic integrity. Anthropic, arguably the most transparent of the major AI companies, has no interest in blocking students from misusing its tools. Rob mentions a new product called Einstein that will watch your Canvas videos, write your discussion posts, reply to classmates, and complete your assignments. All you have to do is hand over your login credentials.

    That sets up the episode's second major topic: AI resilience. Rob explains the concept as designing learning outcomes that hold up regardless of what AI can do. If a major portion of a student's grade depends on writing an essay that AI could produce in seconds, that assignment has very little resilience. The shift Rob advocates moves evaluation toward process: asking students for the prompts they used, reflections on how they refined their approach, and demonstrations that they understand what was produced. He shares the example of a colleague whose programming class now requires students to record videos explaining their code rather than just submitting it.

    Craig raises the scaling problem. He regularly teaches 90 to 100 undergraduates. Rob suggests that AI itself can help with formative feedback on scaffolding assignments, freeing faculty to focus their grading energy on fewer, higher-stakes assessments. Craig uses an analogy from music: scaffolding assignments are like playing scales. You do them to build toward performance, and they don't need to carry grade weight. Both hosts agree this represents a move away from the grade economy, where students rationally minimize effort because every small assignment is a transaction.

    Craig pushes the conversation further by proposing live client projects, or AI-simulated client projects, as a way to create the messiness and ambiguity that real work demands. Rob's initial reaction is skepticism (live client projects are logistically brutal), but he warms to the idea of using AI to simulate clients with realistic fuzziness and scope creep. The broader point: AI could be the lever higher ed needs to fix problems that have been accumulating for decades.

    The episode wraps with an update on NotebookLM. Craig walks through the recent changes: more user control over reports, slide decks, flashcards, quizzes, and other outputs in the Studio panel. You can now specify the structure and focus of custom reports rather than relying solely on canned formats. Slide decks can be exported (though editing remains clunky since each slide is essentially an image). Craig's recommendation: if you have a Google account and you work with knowledge in any form, you should be using NotebookLM. Rob notes that Microsoft Copilot has added a similar notebook feature worth exploring, and they float the idea of a future head-to-head comparison episode.

    Links referenced in this episode: NotebookLM, Anthropic, Claude, Google, Canvas, Einstein
    Mentioned in this episode: AI Goes to College Newsletter
    48 m
  • Students Are Confused About AI and It's Our Fault (with Dr. Bette Ludwig)
    Feb 16 2026
    Dr. Bette Ludwig spent 20 years in higher ed working directly with students before leaving to build something different: a Substack (AI Can Do That), a consulting practice, and most recently, the Socratic AI Method, an AI literacy program that teaches students how to think critically alongside AI while keeping their own voice intact.

    That last part is the hard part.

    Craig opens with the question that drives the whole episode: Socratic dialogue requires you to already know enough to ask good questions. So what happens when a student doesn’t know enough to push back on what AI is telling them? Bette’s answer is both practical and unsettling. Younger students literally don’t know what they don’t know, and that gap is where the real danger lives.

    The conversation moves into dependency territory when Craig shares a moment from his own morning: Claude froze while he was editing a manuscript, and he felt a flash of genuine panic. Two seconds later, he remembered he could just… write. But he names the uncomfortable truth: his students won’t have that fallback. Bette compares it to the panic we feel when the wifi drops, which is both funny and a little alarming when you sit with it.

    From there, the three dig into the policy mess: teachers across the hall from each other running opposite AI rules, students confused about what’s allowed, and educational systems moving at what Bette calls “a glacial pace” while the technology sprints ahead. Craig shares his own college’s approach: you have to have a policy, and it has to be clear, but how restrictive or permissive it is remains your call. The non-negotiable? You can’t leave students in the dark.

    The episode’s most surprising thread might be Bette’s observations about how students actually use AI. It’s not just homework. They’re using it for companionship, personal problems, cooking questions, and building apps, in ways that don’t even register as “AI use” to most faculty. Her closing point lands hard: students have never used technology the way adults assume they should, and they’re going to do the same thing with AI.

    Key Takeaways

    1. The Socratic method has an AI prerequisite problem. You need existing knowledge to know what questions to ask, which means younger students are especially vulnerable to accepting AI output uncritically. Bette and Craig agree that junior/senior year of high school is roughly where the cognitive capacity for meaningful pushback begins.

    2. AI dependency is already happening to experienced users. Craig describes a two-second panic when Claude froze mid-edit. He recovered by remembering he could just write the way he always has. His concern: students who grew up with AI won’t have that muscle memory to fall back on.

    3. The “helpful by default” design is a subtle problem. Craig raises the point that AI systems are programmed to be agreeable, which means they can lock students into a single mode of thinking without anyone noticing. The hallucinations get all the attention, but the quiet steering might be worse.

    4. Policy chaos is the norm, not the exception. Teachers in the same hallway can have opposite AI rules. Bette recommends clarity above all: whatever your policy is, make it explicit. In K–12, she argues for uniform policies. In higher ed, where faculty governance complicates things, Craig’s approach works: require a policy, let faculty own the specifics.

    5. Grace matters more than enforcement right now. Both Craig and Bette push back on the “AI cop” mentality. Students sometimes cross lines they didn’t know existed, just as past generations plagiarized without understanding citation rules. Teaching moments beat punitive responses, especially when the rules themselves are still being written.

    6. Students use AI in ways faculty don’t expect. Companionship, personal problems, everyday questions, building apps. Bette’s observation: students are as likely to use AI for roommate conflicts as for essay writing. Faculty who don’t use AI themselves can’t begin to understand these patterns.

    7. Education isn’t moving fast enough. New York launched an AI bachelor’s program in fall 2025, which Bette calls “Mach speed for higher ed.” Most institutions are still in the resistance-or-denial phase. The shared worry: AI across the curriculum could become another empty checkbox, like ethics across the curriculum before it.

    Links

    Dr. Ludwig's website: https://www.betteludwig.com/
    AI Can Do That Substack: https://betteconnects.substack.com/
    AI Goes to College: https://www.aigoestocollege.com/
    Craig's AI Goes to College Substack: https://aigoestocollege.substack.com/

    Mentioned in this episode: AI Goes to College Newsletter
    40 m