I was recently listening to The Grading Podcast when Marc Aronson, Dean of Academics at Cheshire Academy in Connecticut, described the school’s practice of having students undertake Final Demonstration of Learning (FDoL) activities.
Listening to this, I couldn’t help but question: When, in a university course, do students truly demonstrate the course learning outcomes that define their degree?
In universities, we’re proud of our alignment frameworks. Unit Learning Outcomes (ULOs) are neatly mapped to Course Learning Outcomes (CLOs), which, in turn, align with institutional graduate attributes. Theoretically, this structure ensures coherence. In practice, though, it creates an illusion of assurance.
Each unit has its specific outcomes and assessments that can be achieved within a short teaching period. These tasks often evaluate discrete knowledge or skills relevant to that context but are somewhat disconnected from the broader developmental arc of the course in which the unit sits.
A student can demonstrate technical proficiency in one unit and critical reflection in another, yet never be asked to connect the two. When every unit operates in isolation, the course itself becomes fragmented. We end up graduating students who have completed a set of parts without ever showing how those parts work together.
The assumption is that if a student passes each unit, they must have achieved the CLOs. But there’s no direct evidence that this is true. The connection between a student’s unit grades and the institution’s claim that they have achieved the course’s intended capabilities is largely inferred, not demonstrated.
Universities may present a learning progression using a familiar three-tier model: Foundational, Developing, and Assured. In theory, this reflects the student’s journey towards achievement. However, in reality, the final “Assured” stage is typically assumed rather than explicitly assessed. We rarely create assessments that clearly demonstrate it. The idea that simply completing the course equals assurance is comforting and easy to manage, but it’s pedagogically hollow.
Yes, curriculum mapping is a powerful planning tool, but it doesn’t verify learning. It shows where learning outcomes are addressed, not whether they’ve been achieved. We tick boxes showing that every CLO appears somewhere in the curriculum, but those ticks don’t prove attainment.
This confusion between coverage and attainment is widespread. Mapping is an administrative exercise, while assessment is an evidentiary one. By conflating the two, we create the impression of coherence without the evidence of learning.
The rise of shared units further weakens our ability to assess CLOs. A single unit that is used across multiple courses is often mapped to different learning outcomes within various programs. For one degree, it might be considered foundational; for another, developing. This makes it nearly impossible to use such a unit as reliable evidence of course-specific skills. What a grade signifies in one context doesn’t necessarily apply in another.
For those designing courses, and for the quality and leadership teams reviewing them, the mapping can become confusing: a jumble of ticks and crosses that obscures what a unit is for, how it connects to the CLOs, and at what tier it sits. From a course perspective, it creates a tangled web of relationships that is difficult to trace or demonstrate.
This approach disrupts the natural link between unit and course outcomes, prioritising performative compliance over genuine purpose. It shifts the focus from pedagogical clarity to administrative box-ticking. While the alignment may look neat on a spreadsheet for academic governance, it feels fragmented and confusing from a student’s and a pedagogical point of view. So, when do we know?
Well, that takes us back to Aronson’s reflection and the awkward question it raises:
When do we really know they’ve learned and can demonstrate the course learning outcomes?
If we don’t assess CLOs directly, our confidence in claiming that a graduate has achieved them rests on an assumption rather than evidence. The act of graduation is meant to certify capability, yet our systems rarely collect or evaluate proof of that capability at the course level.
Many institutions would argue that capstone units provide the evidence we otherwise lack. Positioned at the end of a degree, the capstone is meant to integrate and assess all the CLOs. However, expecting a single unit to bear the entire responsibility for course-level assurance is unrealistic.
If the capstone is the first time students are asked to synthesise and apply their learning, then it’s already too late. We’re auditing the final outcome, not building the process step by step. By treating capstones as our only proof of course-level achievement, we turn assurance into a single snapshot of high-stakes assessment, rather than a continuous narrative.
To move forward, we need to design ways to see CLOs in action, not just map them. This could mean developmental portfolios, annual demonstrations of learning, or reflective synthesis tasks that span multiple units. These approaches shift assessment from fragmented performance toward integrated demonstration.
The goal isn’t to add more assessment, but to make existing learning coherent and visible. When students can trace how their work across units builds toward course outcomes, and when educators can evaluate that progression, assurance becomes evidence-based rather than assumed.
Until then, we’ll continue to graduate students with neat transcripts and tangled learning stories, leaving us confident in our mapping but uncertain in our measurement.
Reflective Questions
1. How do we currently verify that students have achieved course learning outcomes beyond passing individual units?
2. What would it look like if our curriculum maps became living documents of learning evidence, not just administrative artifacts?
3. How might regular, developmental demonstrations of learning change the way students perceive their own progress through a degree?
4. Are capstone assessments enough to provide genuine course-level assurance, or are we mistaking finality for synthesis?
5. If graduation signifies capability, what authentic evidence should we be collecting along the way to support that claim?
Podcast
This episode was generated using AI narration via Google Notebook LM. It is based on and produced from the article above.
Video Explainer
This video was generated using AI narration via Google Notebook LM. It is based on and produced from the article above.
Past Posts
- What If Failure Isn’t Failure? Rethinking Learning in Higher Education
- Content Can Be Delivered Anywhere. Teaching Cannot.
- Seeing the Thinking: What Creative Arts Assessment Practices Can Offer Other Disciplines
- Creativity Belongs to Every Discipline: Why Access Matters
- Closing the Gap Between Learning Outcomes and Student Learning in Higher Education

