Episodes

  • Beyond a Checklist: Rethinking Rubrics to Honor the Process of Learning
    Mar 17 2026

    In this episode of Friday SLO Talks, a team from the University of California, Berkeley Center for Teaching and Learning discusses how rubrics can be used to clarify expectations, support student learning, and improve the consistency of assessment in higher education classrooms.

    The presenters begin by explaining that rubrics are often misunderstood as simple grading tools. In reality, well-designed rubrics can serve a much broader instructional purpose. When used thoughtfully, rubrics communicate what quality work looks like, help students understand performance expectations, and guide instructors in providing more consistent and transparent feedback.

    The Berkeley team describes how rubrics function as a bridge between learning outcomes, assignments, and evaluation. By clearly defining the criteria for performance and describing levels of achievement, instructors make expectations visible to students. This transparency can help students better prepare their work and understand how their performance will be evaluated.

    A key theme of the presentation is that rubrics are most effective when they are integrated into the learning process rather than used only at the end of an assignment. The presenters encourage instructors to share rubrics with students early, discuss the criteria in class, and use them as tools for reflection, peer review, and revision. In this way, rubrics can support formative feedback and help students develop stronger work over time.

    The discussion also addresses common challenges faculty encounter when creating rubrics. Designing clear criteria and meaningful performance levels requires careful thought about what instructors truly value in student work. The presenters emphasize that effective rubrics focus on observable aspects of performance rather than vague qualities such as “good understanding” or “effort.”

    Another important issue raised in the talk is consistency in evaluation. When multiple instructors or teaching assistants assess student work, rubrics can help align expectations and reduce variability in grading. Calibration conversations among instructors can further improve reliability and ensure that evaluators interpret rubric criteria in similar ways.

    The presenters also highlight the importance of flexibility. Rubrics should not be seen as rigid scoring instruments but as evolving tools that instructors refine over time. By reviewing how rubrics function in practice and gathering feedback from students and colleagues, instructors can continually improve how they define and evaluate learning.

    Throughout the conversation, the Berkeley team emphasizes that rubrics ultimately support a larger goal: helping students understand what successful performance looks like and how they can improve their work. When used effectively, rubrics promote clearer communication between instructors and students and strengthen the connection between assignments and course learning outcomes.

    Although the session focuses on practices developed at UC Berkeley, the ideas discussed apply broadly across disciplines and institutions. The presentation offers practical insights for instructors, assessment coordinators, and educational leaders seeking to design assessment approaches that are transparent, meaningful, and supportive of student learning.

    22 m
  • Connecting Programmatic Learning Objectives with Practice: Insights from an Analysis of Workforce-Based Assessments
    Mar 17 2026

    In this episode of Friday SLO Talk, we dive deep into the complexities of evaluating student performance in real-world clinical settings. Guests John Moore and Phil Reeves from the National Board of Medical Examiners (NBME) join us to share insights from an extensive research project involving five medical schools and over two million lines of assessment data.

    The Challenge of Standardization

    The study highlights a massive divide in how medical schools design their Workplace-Based Assessments (WBAs). From two-point "pass/fail" scales to complex ten-point rubrics, the lack of standardization across institutions—and even between departments within the same school—makes comparing student competency a significant hurdle.

    Key Research Findings

    Despite the structural differences in how schools grade, the data revealed a remarkably consistent (and concerning) trend:

    • The "Ceiling Effect": Over 92% of all ratings were positive, with more than 60% hitting the highest possible score.
    • Personality vs. Performance: Qualitative feedback often drifted away from clinical skills (like reasoning or diagnosis) toward personality traits, praising students for being "friendly" or "punctual" rather than offering actionable medical critiques.
    • Administrative Friction: The "time tax" on supervising clinicians often turns evaluations into a "check-the-box" exercise rather than a meaningful coaching moment.

    "The assessment process sometimes becomes a procedural requirement rather than a meaningful learning tool."

    Why This Matters Beyond Medicine

    While the data comes from hospitals and clinics, the implications reach into any field involving hands-on performance—from the arts to career technical education. Moore and Reeves challenge educators to look at their own data and ask:

    1. Are we measuring meaningful growth or just generating reassuring numbers?
    2. How do we reduce the cognitive load for the evaluators?
    3. Are we distinguishing between minimum competency and true excellence?

    Tune in to learn how we can move beyond "uninformative data" to create assessment systems that actually help students improve.

    22 m
  • Buggy Whips, Rocket Ships, or Total Eclipse? Assessing Higher Education in the Age of AI, with J.D. Mosley-Matchett, Ph.D.
    Feb 27 2026

    In this Friday SLO Talk, J.D. Mosley-Matchett, Senior Assessment Developer at Western Governors University, examines how higher education is responding to artificial intelligence and the broader technological changes affecting teaching and learning. Drawing on more than three decades of experience in higher education as a professor, dean, and administrator, Mosley-Matchett frames the current moment through three competing narratives about the future of universities: “buggy whips,” “rocket ships,” and “total eclipse.”

    The “buggy whip” narrative reflects the fear that traditional academic practices may become obsolete as knowledge becomes instantly accessible through AI and digital technologies. However, Mosley-Matchett argues that institutions rarely disappear; instead, they adapt and redefine their roles.

    The “rocket ship” narrative views higher education as a pathway to economic mobility, but this model faces growing pressure as the cost of college rises and questions emerge about grade inflation, credential value, and whether degrees reliably signal competence to employers.

    The “total eclipse” narrative suggests that AI could replace universities entirely. Mosley-Matchett rejects this view, emphasizing that colleges serve broader purposes beyond information delivery, including collaboration, social learning, and professional networking.

    Throughout the discussion, participants explore how AI should be incorporated into teaching rather than resisted. Mosley-Matchett argues that institutions have a responsibility to train faculty to use AI effectively and to move away from assessments that reward merely producing the “right answer.” Instead, education should focus on skills, competencies, and the ability to search for and evaluate information.

    The conversation concludes with reflections on curiosity, student agency, competency-based education, and the evolving role of educators in an AI-rich environment. Rather than replacing higher education, AI is likely to force institutions to reconsider how learning is defined, assessed, and supported.

    19 m
  • Behaviorism Myths and Misconceptions with Ronald C. Martella
    Nov 8 2025

    This Friday SLO Talk with Drs. Ronald and Nancy Martella challenges common misconceptions about behaviorism and reintroduces it as a precise, ethical, and evidence-based framework for understanding learning and motivation. The discussion emphasizes a simple but powerful truth: behavior is shaped by the environment, not by invisible mental states. When teachers design conditions that promote success, learning becomes predictable, measurable, and replicable.

    Behaviorism views learners as active participants whose actions are selected by consequences. The three-term contingency—Stimulus → Response → Stimulus (S-R-S)—explains how antecedent cues prompt behavior and how reinforcing outcomes make that behavior more likely to reoccur. Reinforcement increases behavior; punishment decreases it. “Positive” means adding a stimulus; “negative” means removing one. What matters is not intent but effect on future behavior.

    The Martellas dismantled persistent myths:

    • Myth 1: Behaviorism is simplistic “S-R psychology.” Operant conditioning studies voluntary behavior shaped by consequences, far beyond reflexive responses.
    • Myth 2: Behaviorism relies on punishment. Ethical practice requires reinforcement first and limits punishment to rare, last-resort use. Skinner’s work sought humane alternatives.
    • Myth 3: Skinner’s “daughter in a box.” False. The “air-crib” was a safe, climate-controlled baby bed; his daughter later described a happy childhood.
    • Myth 4: Behaviorism ignores motivation. Motivation is explained through environmental variables such as the Premack Principle, Response Deprivation, and Motivating Operations, which alter the value of reinforcers and the probability of behavior.
    • Myth 5: Behaviorism only applies to animals. In reality, it underlies effective human interventions—from autism therapy and literacy instruction to Positive Behavior Supports and MTSS.
    • Myth 6: Behaviorism dismisses the mind. Internal events are acknowledged as behaviors influenced by environment, not mysterious causes. This perspective leads to concrete instructional fixes instead of speculation.

    The Martellas contrasted this model with cognitive and developmental theories that often rely on labels and circular logic (“She can’t read because she has a reading disability—she has a reading disability because she can’t read”). Behaviorism avoids such traps by identifying functional relationships between environment and action and then changing those conditions to improve performance.

    They noted three forces shaping behavior—physiology, culture, and environment—and stressed that only the immediate environment is under a teacher’s control. Thus, effective education depends on designing reinforcement systems and clear contingencies that support desired academic and social behaviors.

    Finally, they linked behaviorism to Skinner’s concept of selection by consequences, analogous to natural selection. Just as adaptive traits survive through reinforcement, effective behaviors are “selected” and strengthened. When reinforcement is consistent, students build repertoires of successful behavior; when it is absent or inconsistent, learning stalls.

    The message for educators is clear: learning is observable change, not an internal mystery. By focusing on measurable performance, continuous feedback, and well-designed environments, teachers can replace blame with accountability and speculation with evidence. Behaviorism, properly understood, is not mechanical—it is humane, pragmatic, and relentlessly focused on helping every student succeed.

    18 m
  • AI, Education, and the Scientific Method with Lizelena Iglesias and Vi Hawes
    Oct 18 2025

    In this episode, educators and innovators explore how generative AI can transform classrooms into continuous experiments in growth. Drawing from the October 10, 2025 session Empowering Inquiry: AI and the Scientific Method in Practice, the discussion reveals how AI can support every stage of scientific inquiry—from background research and hypothesis formation to data analysis and communication.

    Adult educators Lizelena Iglesias, Vi Hawes, Dr. Jarek Janio, and Enrique Jauregui unpack how AI tools can boost problem-solving, data interpretation, and critical thinking while helping teachers model evidence-based teaching. The conversation reframes prompt engineering as a new literacy—an experimental skill that mirrors how scientists refine hypotheses.

    Yet, the panel also wrestles with a vital question: how do we preserve independent thought in the age of intelligent tools? Their answer—treat AI as a partner, not a crutch. Listeners will hear practical classroom strategies for balancing AI assistance with authentic student inquiry and learn how the scientific method can guide not only student learning but also teaching itself.

    13 m
  • Beyond the Transcript: Measuring What Physical Therapy Students Truly Learn
    Oct 4 2025

    In this session, Dr. Pavithra Suresh and Dr. Sabrina Altema of Howard University share how the Doctor of Physical Therapy (DPT) program has built a student-centered, equity-driven model of assessment that goes far beyond the traditional transcript. Their approach focuses on preparing graduates who are not only clinically competent but also culturally sensitive and deeply committed to serving under-resourced communities.

    The presentation highlights Howard’s university-wide framework, the Howard Annual Assessment Process (HAP), and its six guiding pillars: centering students and equity, honoring community expertise, prioritizing quality over compliance, fostering collaboration, ensuring transparency, and cultivating lifelong learning.

    The DPT program illustrates these principles in action. With a 94% licensure pass rate in 2023–24, the program emphasizes training underrepresented physical therapists and embedding community service into the student experience. Assessment is designed “with the end in mind,” developing confident and competent practitioners through scaffolded practical exams, formative feedback, Bloom’s Taxonomy made transparent to students, and authentic clinical experiences supported by standardized evaluation tools and trained preceptors.

    Evaluation is holistic, capturing cognitive knowledge, psychomotor skills, and affective growth, while also addressing the “hidden curriculum” of professional norms and communication. Faculty monitor progress at both individual and program levels through weekly meetings, developmental teams, comprehensive exams, and curricular mapping aligned with evolving accreditation standards.

    Key takeaways include the importance of faculty and student buy-in, the value of empowering learners with self-assessment tools, and the role of transparency in deepening engagement. The Howard DPT program demonstrates how assessment can drive both student success and continuous program improvement, ensuring graduates leave with the competence, confidence, and commitment to serve where they are most needed.

    13 m
  • HyFlex Learning Assessment Using Generative AI
    Sep 30 2025

    In this session, Dr. Brian Beatty from San Francisco State University discusses "Addressing the Challenges of Assessment in HyFlex Courses Using Custom GPTs." Dr. Beatty, a professor of instructional design, introduces the concept of HyFlex learning, which blends face-to-face, synchronous, and asynchronous online instruction to offer students flexible participation choices; the model was initially developed to address enrollment issues.

    A significant portion of the discussion focuses on how he leverages generative AI—specifically custom GPTs built on platforms like ChatGPT—to support both student self-assessment and faculty course design in these complex, multi-modal environments. He details various student-facing GPT tools he created for engagement and formative assessment, such as "QuizMe" and "Breakout for 3," and explains the simple process of building these custom tools.

    The presentation also addresses assessment challenges in HyFlex, emphasizing the need for equivalent learning outcomes across all modes and the importance of authentic, flexible, and self-assessment strategies.

    19 m
  • From Theory to Clinical Competency: A Case Study in Authentic, Performance-Based Nursing Education with Dr. Stacy Greathouse and the Team from University of Texas at Arlington
    Apr 27 2025

    In this Friday SLO Talk, Jarek Janio from Santa Ana College and Enrique Jauregui from Fresno City College host a dynamic session highlighting an innovative nursing course development project from the University of Texas at Arlington (UTA). Dr. Leslie Jennings, Missina Minter, Megan Zara, and Dr. Stacy Greathouse—an interdisciplinary team nicknamed the "Motley Crew"—share their collaboration model for building a high-impact, competency-based perioperative nursing course.

    The panel describes how the course originated in response to the national nursing shortage, aiming to prepare students for perioperative roles in operating rooms. Dr. Jennings, a perioperative nurse with over 30 years of experience, led the design effort with the support of an instructional designer, librarians, and an OER specialist. Together, they developed an accelerated "Maymester" course, compressing a full semester’s clinical and didactic content into an intensive two-and-a-half-week experience.

    Stacy Greathouse introduces the "Dark Classroom" and "Motley Crew" frameworks, emphasizing collaboration, transparency, and the breakdown of traditional course design silos. Each member of the team fulfilled clearly defined roles—content specialist, learning architect, instructional technologist, accessibility specialist, and OER librarian—to ensure smooth workflow, full accessibility, and rigorous alignment with student learning outcomes.

    A key focus was meticulous alignment: the team employed a master course map, “Holy Hail” language (ensuring consistent terminology across outcomes, content, and assessments), and Quality Matters (QM) standards. They also integrated Open Educational Resources (OER) and accessibility reviews, while using Rice University’s Workload Estimator to help minimize hidden time constraints for students.

    Jennings and Greathouse explain how an international travel theme unified the course design. Students "traveled" through different stages of perioperative care, earning "passport stamps" as they progressed, with every assignment explicitly tied to course outcomes.

    The discussion highlights the importance of formative assessments and structured "pause points" to help students reflect and maintain mental wellness during the compressed course. Flipped classroom strategies shifted the responsibility for preparation onto students, promoting agency and ownership of learning.

    Attendees then engaged in breakout rooms with the "Motley Crew" members to dive deeper into key practices:

    • OER sourcing and licensing
    • Accessibility auditing and integration
    • Instructional design strategies for accelerated learning
    • Building student-centered activities with tools like H5P

    The session concluded with a reflection on the cultural changes needed in higher education to support collaborative course development and the critical role of advocating for institutional resources to make sustainable, high-quality programs possible. Attendees left inspired by the team's commitment to authentic assessment, transparency, and student-centered design.

    Key Takeaways:

    • High-stakes, accelerated nursing education requires intentional design, transparency, and teamwork.
    • True accessibility and OER integration enhance quality and equitable access for all students.
    • A "Motley Crew" team structure—built on mutual accountability and clear roles—produces stronger, more resilient learning environments.
    • Flipped classrooms and competency-based education support student agency while reducing faculty workload.

    Special Thanks: Thank you to Dr. Leslie Jennings, Missina Minter, Megan Zara, and Stacy Greathouse for sharing their remarkable journey, and to the Friday SLO Talks community for supporting these groundbreaking conversations on student learning outcomes.

    19 m