Band-Aids Don’t Fix Bullet Holes: Academic Integrity and the Future of Assessment

The rapid advancement of digital tools has transformed higher education, offering students unprecedented access to information and new ways to “learn”. Yet alongside these benefits, institutions face a pressing challenge: maintaining academic integrity in an era where shortcuts and cognitive offloading are easier than ever to access. Concerns about plagiarism, rote memorisation, and contract cheating have troubled universities for years, but the arrival of generative AI has magnified the problem. Quick fixes won’t solve the underlying issue: the cracks in our assessment practices predate AI; generative AI has simply made them impossible to ignore. As Taylor Swift reminds us, “band-aids don’t fix bullet holes”. At the heart of the matter is a question educators have long wrestled with: are we truly assessing what we claim to assess, or have our practices become disconnected from meaningful measures of learning?

Traditional assessment methods such as essays, quizzes, and standardised exams once formed the backbone of higher education. They offered structure, comparability, and a sense of rigour. However, these approaches have always been vulnerable to misuse and have often rewarded recall rather than understanding. Generative AI has only exacerbated these vulnerabilities. When students can generate text instantly or outsource tasks to contract-cheating services, this not only highlights the ease of dishonesty but, more importantly, exposes the limitations of assessments that fail to measure deeper skills, critical thinking, and authentic application.

The response should go beyond mere detection and punishment, which reflect a cops-and-robbers mentality. Instead, universities need to rethink their assessment designs so that integrity is built into the process rather than enforced from the outside. This means shifting towards authentic, process-oriented, and student-centred approaches. When students find tasks meaningful for their personal growth, the temptation to cheat decreases. Project-based assignments that require creating case studies, media artefacts, or policy proposals demand originality and creativity. Reflective portfolios enable students to track and analyse their development over time. Meanwhile, problem-solving activities that push students to apply theory in unpredictable real-world situations make memorisation and copy-paste solutions irrelevant.

Equally important is a stronger emphasis on process rather than just product. Large projects broken into staged submissions, with opportunities for feedback, make the journey of learning visible and discourage last-minute outsourcing. Oral defences and presentations require students to articulate their understanding in their own voice, providing a richer demonstration of knowledge. Peer review and collaborative projects build in accountability and transparency, reminding students that learning is often a shared, iterative experience. These ideas also connect to approaches such as ungrading, which I have written about previously, where the emphasis shifts from grades to growth, reflection, and feedback as drivers of learning.

Technology, often seen as the problem, can instead be viewed as part of the solution. Instead of banning AI, educators can teach students responsible and appropriate usage, tailored to specific tasks and the learning outcomes being assessed. Tools like Google Docs or GitHub enable visible work evolution, transforming the creation process into proof of integrity. Adaptive, interactive online assessments can dynamically respond to student input, discouraging mere answer-sharing and promoting meaningful engagement.

What is ultimately needed is a cultural shift in how we conceive assessment. Integrity cannot rest on fear of being caught; it must be built into how students understand their tasks. Open conversations about ethics and AI, institutional policies that reflect contemporary challenges, and a stronger framing of assessment as part of learning, rather than as a hurdle to clear, are all crucial steps. Generative AI may have been the catalyst for renewed urgency, but fixing the so-called “AI problem” requires deeper structural change.

The urge to cut corners may always exist, but the future of academic honesty depends on designing assessments that emphasise creativity, critical thinking, and personal development, making integrity a natural result of engaging and meaningful learning.

Reflection Questions

  1. Do your current assessment practices genuinely measure the learning outcomes you claim they do?
  2. Where might your assessments unintentionally reward memorisation or surface-level work rather than deeper thinking?
  3. How could you make the process of learning more visible in your assessments, not just the final product?
  4. In what ways could AI be used ethically in your teaching context, as a support rather than a shortcut?
  5. What cultural or policy changes would your institution need to make to embed integrity rather than enforce it?
