Authentic Feedback at Scale: How We Protect Quality, Wellbeing and Professional Judgement in the Age of AI
Teachers are drowning in feedback demands while students continue to hunger for meaningful guidance that genuinely improves their learning. Decades of research confirm that high-quality feedback is one of the most powerful drivers of progress, yet the reality in classrooms and colleges tells a different story. Educators are spending more time than ever on marking, while learners often describe feedback as late, generic or disconnected from their work.
The challenge is not whether AI should be used in feedback. The real challenge is how it is used, and whether it strengthens or erodes the professional judgement that makes feedback meaningful in the first place.
This is not a technical problem. It is a pedagogical, ethical and human one.
Watch the Deep Dive: Authentic Feedback at Scale
This webinar explores the challenges and opportunities of delivering meaningful feedback at scale, including workload, wellbeing and responsible use of AI.
The feedback workload crisis is real and measurable
Across Further and Higher Education, feedback workload has reached a critical point.
On average, lecturers now spend around 10 hours per week on marking and feedback alone, within an average working week of nearly 47 hours. Almost 70% of lecturers describe feedback workload as excessive, and marking is consistently identified as one of the biggest contributors to stress and poor wellbeing.
This is not simply about time. It is about sustainability.
When feedback demands consume this proportion of an educator’s working life, quality becomes increasingly difficult to maintain. Even the most committed professionals are forced into impossible trade-offs between depth, consistency and timeliness.
And when pressure reaches this level, the human cost becomes visible. Over three-quarters of education staff report high stress levels, with more than a third at risk of probable clinical depression. Feedback workload is not the only cause, but it is a significant one.
High-quality feedback is cognitively and emotionally demanding
One of the most persistent myths about feedback is that it is a mechanical task. In reality, high-quality feedback is among the most cognitively and emotionally demanding aspects of teaching.
For each piece of student work, educators must:
- read and interpret complex ideas
- identify misconceptions and developmental priorities
- craft tone carefully to motivate rather than discourage
- balance honesty with encouragement
- record evidence for accountability and quality assurance
This is skilled professional labour, not a checklist exercise.
At scale, the cognitive load becomes overwhelming. A lecturer reviewing 100 or more submissions is not simply reading. They are diagnosing, evaluating, empathising and planning next steps for every individual learner.
This pressure inevitably raises a question of impact.
Why timing matters as much as quality
Learning science provides a clear answer.
Research on memory and retention, including the well-established forgetting curve, shows that without timely reinforcement, learning decays rapidly. Within an hour, a significant proportion of learning is lost. Within 24 hours, that loss increases further.
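For readers who want the underlying model, the forgetting curve is often summarised in a simplified exponential form (the exact parameters differ between studies, so this is illustrative rather than definitive):

R(t) = e^{-t/S}

where R(t) is the proportion of material retained after time t and S is the stability of the memory. Timely reinforcement, including acting on feedback, increases S and slows the rate of decay.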
Feedback does not just need to be good. It needs to arrive at the right time.
When feedback is delayed, even if it is detailed and thoughtful, its impact is reduced. Learners struggle to reconnect feedback to their original thinking, motivation drops and learning momentum is lost. This is the hidden cost of delay.
The maths of feedback at scale simply does not add up
Consider the reality for a typical FE/HE lecturer.
Five classes of around 25 students equates to approximately 125 learners. If meaningful feedback takes just 10 minutes per student, a single assignment requires over 20 hours of marking.
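Spelled out, the calculation is simple:

5 classes × 25 students = 125 learners
125 learners × 10 minutes = 1,250 minutes ≈ 20.8 hours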
Across a month or academic year, this quickly escalates into hundreds of hours dedicated to feedback alone.
This makes timely, high-quality feedback structurally unsustainable without support.
The ethical risk: when AI gets feedback wrong
AI can process text quickly, but it can also lack context, nuance and cultural understanding.
In subjective areas such as reflective writing, creative work or personal narratives, poorly designed AI feedback can be technically accurate but pedagogically unhelpful or even demotivating.
There is also a risk of invisibility. When AI operates as a “black box”, educators and learners cannot see how judgements are being made or challenge them effectively. This undermines trust and professional accountability.
Our position: human-in-the-loop, always
At TeacherMatic, our response to these challenges is clear.
AI should augment professional judgement, not replace it.
This is why our feedback tools are built on a human-in-the-loop design. AI acts as a drafting assistant, not a decision-maker. It supports the mechanical and repetitive aspects of feedback, while educators retain full control over interpretation, tone and final decisions.
We often describe this as an 80–20 model. AI supports the first 80% by structuring feedback, identifying patterns and highlighting areas for attention. The educator applies the final 20% by reviewing, refining, contextualising and approving the feedback.
Professional judgement always remains with the teacher.
The opportunity: reclaiming time without sacrificing quality
When AI is used in this way, the impact is significant.
Research cited during the webinar suggests that tools designed to reduce mechanical workload can reclaim around 30% of feedback time, equating to several hours per week for many educators.
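As a rough illustration, using the average of around 10 hours per week on marking and feedback cited earlier (individual workloads will of course vary):

0.30 × 10 hours ≈ 3 hours reclaimed per week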
This reclaimed time is not about doing less. It is about doing what matters most:
- supporting learners
- improving teaching
- strengthening relationships
- protecting professional wellbeing
What’s coming next: Advanced Feedback in TeacherMatic
Building on our existing Feedback Generators, we are introducing a new Advanced Feedback Generator, currently in an assessment pilot and planned for release in March.
It has been designed specifically for cohort-scale marking with human oversight, and includes:
- bulk upload of student submissions
- AI-supported analysis of learning and misconceptions
- inline PDF annotation for contextualised comments
- clear visual distinction between AI-generated and teacher-authored feedback
- full review, refinement and approval by the educator
This is not automation for its own sake. It is a deliberate evolution of our feedback capability, built from real assessment practice and informed by educator feedback.
Why this matters for institutions
For institutional leaders, AI-supported feedback is not simply a productivity tool. It is a strategic decision.
When implemented responsibly, it supports:
- staff wellbeing and retention
- consistent professional practice
- transparent and defensible assessment processes
- ethical AI governance
- sustainable quality at scale
Perhaps most importantly, it signals to staff that their time, expertise and professional judgement are valued.
Final thought: protecting what matters most
AI feedback should always supplement, not replace, human feedback.
The goal is not to automate education. It is to reduce the mechanical burden of feedback while protecting what matters most: professional judgement, meaningful relationships and the human connection that makes learning transformative.
That is the future of feedback we believe in.
Next steps
- New to TeacherMatic? Start with our Getting Started with TeacherMatic course, designed to help educators use the platform confidently and responsibly.
- Planning institution-wide adoption? Explore the TeacherMatic Rollout Guide, which supports structured, ethical and sustainable implementation.
- Look out for Advanced Feedback, launching in March as part of our continued work on human-led, scalable assessment.
- Institutions: get in touch to discuss pilots, rollout planning and responsible AI adoption.