Friday SLO Talks

By: Jarek Janio

Friday SLO Talks: Rethinking Student Learning Outcomes

Welcome to Friday SLO Talks, the podcast that redefines student success in higher education by focusing on learning as skill and competency development, not just course completion or diploma attainment.

Presented by the California Outcomes Assessment Coordinators' Hub (COACHES), each episode explores effective teaching practices and assessment strategies that emphasize meaningful, measurable growth. Through in-depth conversations with educators, program leaders, and academic innovators, we bring you practical insights and tools to enhance student learning in ways that matter.

If you’re a higher education professional dedicated to cultivating real-world skills and competencies in your students, join us for inspiring discussions and a community committed to reshaping the future of student-centered education.

Episodes
  • Beyond a Checklist: Rethinking Rubrics to Honor the Process of Learning
    Mar 17 2026

    In this episode of Friday SLO Talks, a team from the University of California, Berkeley Center for Teaching and Learning discusses how rubrics can be used to clarify expectations, support student learning, and improve the consistency of assessment in higher education classrooms.

    The presenters begin by explaining that rubrics are often misunderstood as simple grading tools. In reality, well-designed rubrics can serve a much broader instructional purpose. When used thoughtfully, rubrics communicate what quality work looks like, help students understand performance expectations, and guide instructors in providing more consistent and transparent feedback.

    The Berkeley team describes how rubrics function as a bridge between learning outcomes, assignments, and evaluation. By clearly defining the criteria for performance and describing levels of achievement, instructors make expectations visible to students. This transparency can help students better prepare their work and understand how their performance will be evaluated.

    A key theme of the presentation is that rubrics are most effective when they are integrated into the learning process rather than used only at the end of an assignment. The presenters encourage instructors to share rubrics with students early, discuss the criteria in class, and use them as tools for reflection, peer review, and revision. In this way, rubrics can support formative feedback and help students develop stronger work over time.

    The discussion also addresses common challenges faculty encounter when creating rubrics. Designing clear criteria and meaningful performance levels requires careful thought about what instructors truly value in student work. The presenters emphasize that effective rubrics focus on observable aspects of performance rather than vague qualities such as “good understanding” or “effort.”

    Another important issue raised in the talk is consistency in evaluation. When multiple instructors or teaching assistants assess student work, rubrics can help align expectations and reduce variability in grading. Calibration conversations among instructors can further improve reliability and ensure that evaluators interpret rubric criteria in similar ways.

    The presenters also highlight the importance of flexibility. Rubrics should not be seen as rigid scoring instruments but as evolving tools that instructors refine over time. By reviewing how rubrics function in practice and gathering feedback from students and colleagues, instructors can continually improve how they define and evaluate learning.

    Throughout the conversation, the Berkeley team emphasizes that rubrics ultimately support a larger goal: helping students understand what successful performance looks like and how they can improve their work. When used effectively, rubrics promote clearer communication between instructors and students and strengthen the connection between assignments and course learning outcomes.

    Although the session focuses on practices developed at UC Berkeley, the ideas discussed apply broadly across disciplines and institutions. The presentation offers practical insights for instructors, assessment coordinators, and educational leaders seeking to design assessment approaches that are transparent, meaningful, and supportive of student learning.

    22 m
  • Connecting Programmatic Learning Objectives with Practice: Insights from an Analysis of Workforce-Based Assessments
    Mar 17 2026

    In this episode of Friday SLO Talks, we dive deep into the complexities of evaluating student performance in real-world clinical settings. Guests John Moore and Phil Reeves from the National Board of Medical Examiners (NBME) join us to share insights from an extensive research project involving five medical schools and over two million lines of assessment data.

    The Challenge of Standardization

    The study highlights a massive divide in how medical schools design their Workplace-Based Assessments (WBAs). From two-point "pass/fail" scales to complex ten-point rubrics, the lack of standardization across institutions—and even between departments within the same school—makes comparing student competency a significant hurdle.

    Key Research Findings

    Despite the structural differences in how schools grade, the data revealed a remarkably consistent (and concerning) trend:

    • The "Ceiling Effect": Over 92% of all ratings were positive, with more than 60% hitting the highest possible score.
    • Personality vs. Performance: Qualitative feedback often drifted away from clinical skills (like reasoning or diagnosis) toward personality traits, praising students for being "friendly" or "punctual" rather than offering actionable medical critiques.
    • Administrative Friction: The "time tax" on supervising clinicians often turns evaluations into a "check-the-box" exercise rather than a meaningful coaching moment.

    "The assessment process sometimes becomes a procedural requirement rather than a meaningful learning tool."

    Why This Matters Beyond Medicine

    While the data comes from hospitals and clinics, the implications reach into any field involving hands-on performance—from the arts to career technical education. Moore and Reeves challenge educators to look at their own data and ask:

    1. Are we measuring meaningful growth or just generating reassuring numbers?
    2. How do we reduce the cognitive load for the evaluators?
    3. Are we distinguishing between minimum competency and true excellence?

    Tune in to learn how we can move beyond "uninformative data" to create assessment systems that actually help students improve.

    22 m
  • Buggy Whips, Rocket Ships, or Total Eclipse? Assessing Higher Education in the Age of AI, with J.D. Mosley-Matchett, Ph.D.
    Feb 27 2026

    In this Friday SLO Talk, J.D. Mosley-Matchett, Senior Assessment Developer at Western Governors University, examines how higher education is responding to artificial intelligence and the broader technological changes affecting teaching and learning. Drawing on more than three decades of experience in higher education as a professor, dean, and administrator, Mosley-Matchett frames the current moment through three competing narratives about the future of universities: “buggy whips,” “rocket ships,” and “total eclipse.”

    The “buggy whip” narrative reflects the fear that traditional academic practices may become obsolete as knowledge becomes instantly accessible through AI and digital technologies. However, Mosley-Matchett argues that institutions rarely disappear; instead, they adapt and redefine their roles.

    The “rocket ship” narrative views higher education as a pathway to economic mobility, but this model faces growing pressure as the cost of college rises and questions emerge about grade inflation, credential value, and whether degrees reliably signal competence to employers.

    The “total eclipse” narrative suggests that AI could replace universities entirely. Mosley-Matchett rejects this view, emphasizing that colleges serve broader purposes beyond information delivery, including collaboration, social learning, and professional networking.

    Throughout the discussion, participants explore how AI should be incorporated into teaching rather than resisted. Mosley-Matchett argues that institutions have a responsibility to train faculty to use AI effectively and to move away from assessments that reward merely producing the “right answer.” Instead, education should focus on skills, competencies, and the ability to search for and evaluate information.

    The conversation concludes with reflections on curiosity, student agency, competency-based education, and the evolving role of educators in an AI-rich environment. Rather than replacing higher education, AI is likely to force institutions to reconsider how learning is defined, assessed, and supported.

    19 m