Connecting Programmatic Learning Objectives with Practice: Insights from an Analysis of Workforce-Based Assessments


In this episode of Friday SLO Talk, we dive deep into the complexities of evaluating student performance in real-world clinical settings. Guests John Moore and Phil Reeves from the National Board of Medical Examiners (NBME) join us to share insights from an extensive research project involving five medical schools and over two million lines of assessment data.

The Challenge of Standardization

The study reveals wide variation in how medical schools design their Workplace-Based Assessments (WBAs). From two-point "pass/fail" scales to complex ten-point rubrics, the lack of standardization across institutions—and even between departments within the same school—makes comparing student competency a significant hurdle.

Key Research Findings

Despite the structural differences in how schools grade, the data revealed a remarkably consistent (and concerning) trend:

  • The "Ceiling Effect": Over 92% of all ratings were positive, with more than 60% hitting the highest possible score.
  • Personality vs. Performance: Qualitative feedback often drifted away from clinical skills (such as reasoning or diagnosis) toward personality traits, praising students for being "friendly" or "punctual" rather than offering actionable clinical critiques.
  • Administrative Friction: The "time tax" on supervising clinicians often turns evaluations into a "check-the-box" exercise rather than a meaningful coaching moment.

"The assessment process sometimes becomes a procedural requirement rather than a meaningful learning tool."

Why This Matters Beyond Medicine

While the data comes from hospitals and clinics, the implications reach into any field involving hands-on performance—from the arts to career technical education. Moore and Reeves challenge educators to look at their own data and ask:

  1. Are we measuring meaningful growth or just generating reassuring numbers?
  2. How do we reduce the cognitive load for the evaluators?
  3. Are we distinguishing between minimum competency and true excellence?

Tune in to learn how we can move beyond "uninformative data" to create assessment systems that actually help students improve.
