Episodes

  • Starting Slow to Go Fast with Shelby Danks
    Sep 30 2025

    There is a prevailing pressure in education to deliver quick, measurable results. But do results from 1-2 years of implementation reflect the true potential of meaningful change? In this episode of Evidence in the Wild, I’m joined by Shelby Danks, founder and principal advisor at Arken Research. We discuss how real impact often takes years, not months. By emphasizing the value of detailed planning, logic models, shared vision, and upfront agreement on goals, we can avoid the cycle of “try, discard, repeat.” Schools are complex ecosystems where conditions vary widely, making one-size-fits-all studies inadequate. By celebrating small milestones and prioritizing long-term learning over immediate test gains, educators and researchers can foster sustainable improvement and avoid the trap of continuous, reactionary change.

    1 h 5 m
  • Locally Led R&D at Summit Public Schools: Lessons in Innovation
    Aug 26 2025

    In this episode of Evidence in the Wild, I talk with Greg Ponikvar and Dan Effland from Summit Public Schools about their journey with locally led research and development (R&D). We explore how Summit designs, pilots, and scales new efforts, what’s changed in their approach over time, and the lessons they’ve learned along the way. We also dive into some of the simple but powerful steps that support the process, like developing an R&D plan, creating protected time, and regularly reflecting on what went well and what could be improved. We look at how these practices show up in daily activities, too. Whether you’re a school or district leader, or simply curious about how R&D can drive meaningful change in education, this conversation offers practical insights into building sustainable, locally driven innovation.

    44 m
  • Can AI Limit Our Potential for Innovation in Education Research and Development?
    Aug 19 2025

    AI in education research and development can certainly help us push the boundaries of what is possible, but what happens when AI draws on best practices from the past? Can it limit our potential? Can it encourage us to implement approaches that didn’t work? In this episode, I discuss the ups and downs of AI in education research.

    24 m
  • Testing and Accountability?! Is That Still Going to Be a Thing?
    Aug 12 2025

    In this episode of Evidence in the Wild, I explore the current uncertainty around accountability and standardized testing in U.S. education policy. With no changes so far in federal guidance on accountability, many are wondering… will testing mandates disappear? Will states be left to decide? Or will things stay the same… for now? As someone who’s long been skeptical of standardized testing, I find myself in an unexpected position of making the case for why some form of consistent measurement still matters. Without it, how do we track progress in literacy, math readiness, or intervention effectiveness?

    That said, I fully recognize the downsides. Our students are tested more than ever, and I’ve experienced firsthand how these systems can fail the very students they’re meant to help. Join me in this honest reflection about what’s at stake, what we might lose, and where we go from here.

    18 m
  • Evidence to Impact: A Conversation with Eric Mason
    Aug 5 2025

    In this episode of Evidence in the Wild, I sit down with Eric Mason, a seasoned education leader with experience spanning the classroom, district-level assessment, higher ed, and federal policy. Most recently, Eric served at the Institute of Education Sciences within the U.S. Department of Education.

    Together, we explore the evolving landscape of education research, from teacher apprenticeship models to the complexities of root cause analysis in policymaking. Eric offers insight into where we’ve been, where we’re heading, and what it takes to move from hypothesis to impact.

    1 h 25 m
  • The What Works Clearinghouse and Effective Programs for Students with Dyslexia and Struggling Readers
    Jul 28 2025

    In this episode of Evidence in the Wild, I explore two research-backed reading interventions that have shown strong results for students with dyslexia and those struggling with literacy: Pennsylvania’s Dyslexia Screening and Early Literacy Intervention Pilot Program and Reading Recovery.

    I also reflect on how programs like these could have supported my own learning journey and why tools like the What Works Clearinghouse are essential for helping educators and leaders identify and elevate practices that truly make a difference.

    Resources mentioned:

    What Works Clearinghouse: https://ies.ed.gov/ncee/wwc/

    Pennsylvania Dyslexia Screening and Early Literacy Intervention Pilot Program: https://ies.ed.gov/ncee/WWC/Study/86099

    Reading Recovery: https://ies.ed.gov/ncee/WWC/Docs/InterventionReports/WWC_RR_IR-brief.pdf

    17 m
  • Correlation vs. Causation: Why Getting It Wrong Can Derail Good Decisions
    Jul 21 2025

    In this solo episode of Evidence in the Wild, I explore one of the most common pitfalls in interpreting data: confusing correlation with causation. Whether it's linking ice cream consumption to shark attacks, or assuming a program "works" based on surface-level trends, failing to account for confounding variables can lead to deeply flawed conclusions. I share a vivid education example from a well-known randomized controlled trial of charter schools and explain how rigorous methods help us move from hunches to evidence. We’ll also touch on how these issues show up in everyday conversations, policymaking, and the research-to-practice gap.

    Read the full study referenced in this episode: https://www.aeaweb.org/articles?id=10.1257/app.5.4.1
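
    As a quick illustration of the confounding pitfall described above (a minimal sketch, not material from the episode), here is a Python simulation in which temperature drives both ice cream sales and shark attacks: the two look strongly correlated until the shared cause is controlled for. All numbers are invented for demonstration.

        import numpy as np

        # Simulate a year of daily data where temperature is a shared cause.
        rng = np.random.default_rng(0)
        n = 365
        temp = rng.normal(20, 8, n)                       # confounder: daily temperature
        ice_cream = 50 + 3 * temp + rng.normal(0, 10, n)  # sales rise with temperature
        sharks = 2 + 0.1 * temp + rng.normal(0, 1, n)     # attacks also rise with temperature

        # The naive correlation between sales and attacks looks impressive...
        print(np.corrcoef(ice_cream, sharks)[0, 1])

        # ...but after removing what temperature explains from each variable
        # (a simple partial correlation via regression residuals), it collapses toward zero.
        def residuals(y, x):
            slope, intercept = np.polyfit(x, y, 1)
            return y - (slope * x + intercept)

        print(np.corrcoef(residuals(ice_cream, temp), residuals(sharks, temp))[0, 1])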

    14 m
  • From Meh to Meaningful: Logic Models
    Jul 14 2025

    In this solo episode of Evidence in the Wild, I dig into the real purpose of logic models, and why they’re more than just a compliance checkbox.

    I explore how logic models can serve as practical, actionable tools to help us define measurable milestones, monitor our progress, and stay focused on outcomes that are actually within our control. Whether you’re leading a program, managing a grant, or trying to make sense of your goals, logic models can provide clarity on what’s working, and what needs to shift.

    🧰 Check out the IES Program Evaluation Toolkit: https://ies.ed.gov/use-work/resource-library/resource/tooltoolkit/program-evaluation-toolkit

    Josh Stewart, Ph.D.
    Founder and Principal Researcher
    Rocky Mountain Research & Strategy
    🌐 rockymountain-research.org

    20 m