Now that GenAI is frequently used in education by both teachers and students, its accessibility raises an important question for schools, colleges and universities:
How do we protect assessment integrity in a time when AI support is readily available?
To explore how trust, authenticity and professional judgement are being upheld when AI is used in assessments, we surveyed 109 educators from further education (FE) and higher education (HE) settings with questions like:
How often are learners using generative AI for assessed work? How much of a challenge is unauthorised AI use for fair and accurate assessment? Would a scalable solution to support assessment integrity be valuable?
The answers from this questionnaire do not reveal resistance to AI in the sector, but they do highlight the challenges FE and HE face in keeping pace, gaining clarity and achieving a sustainable approach.
What the Questionnaire Revealed
Here is a sample of the findings from all questionnaire respondents, highlighting the differences between FE and HE perspectives:
There is a strong presence of AI in students’ work, which is quickly being recognised and flagged during marking and assessments:
The majority of educators report encountering student work that is largely AI-generated and unacknowledged. At the same time, a minority of educators, approximately 21.5% across both FE and HE, believe this rarely occurs.
47.2% overall report that unauthorised AI use has increased moderately or dramatically in the last 12 months. 19.4% of FE respondents report dramatic growth, compared with 25% of HE respondents.
Written assessments, such as homework, coursework and essays, are far more susceptible to AI influence than exams.
Confidence Levels Are Down, Challenge Levels Are Up
When challenge levels outpace confidence, educators risk responding to AI integrity issues reactively rather than strategically. This is especially true when staff lack access to the right AI tools or training for responsible GenAI use.
- 15.6% of respondents describe themselves as confident in their ability to identify inappropriate AI use.
- 49.5% describe themselves as moderately confident.
There is a clear confidence gap, and it really matters.
When asked, ‘What methods do you currently use to address this issue?’:
- 26.6% are redesigning assessment tasks.
- 57.79% use oral questioning or vivas.
- 29.35% have introduced supervised assessments.
- 40.36% use AI detection tools.
- 16.51% say there is no effective method currently in place.
This adaptive shift is significant: it shows FE and HE institutions pivoting from a ‘policing’ approach towards assessment ‘redesign’.
Key Insight: A Scalable Solution Is Essential
Among the respondents who rated the issue as severe, 58.9% described a scalable solution as important. Overall:
- 14% say a scalable tool is essential.
- 44.9% say it is very valuable.
- 27.1% say it is moderately valuable.
This response was expected: as a concern or challenge grows, so does the demand for scalable tools to detect and resolve it.
What Educators Are Really Saying
Beyond the statistics, the open comments revealed a sector thinking deeply not just about detection but about adaptation.
1. Professional judgement still matters
While AI detection tools are in use (40.36% according to the data), many educators emphasised that knowing their learners remains their strongest safeguard:
‘Getting to know how your students speak and write initially helps me identify AI use for future assessments.’
‘If I know my learners, I don’t need a detection algorithm to know what’s their work and what isn’t. Teaching must be about connection.’
AI may be evolving rapidly, but educators consistently referenced relational knowledge, assessment-for-learning practices and oral questioning as powerful tools in protecting assessment integrity.
2. Detection is getting harder
There is clear awareness that this is not a static challenge. Several respondents described a growing race between detection and increasingly sophisticated student use:
‘It is reasonably easy to spot something that is not the student’s own work. However, as AI becomes more sophisticated and students become more and more familiar with its use, it will become increasingly difficult.’
‘I believe it’s a growing challenge to detect where unauthorised AI work has been used. I would welcome anything that would help the process of detecting and managing unauthorised AI use.’
Some educators also pointed to practical complications:
- AI-generated summaries now appear directly in Google search results.
- Tools like Grammarly can interfere with AI detection systems.
- Students are using AI in uncontrolled environments, including Snapchat’s My AI.
One respondent captured the concern clearly:
‘Students are using AI whenever they are not in a controlled environment, diminishing their research skills.’
This reinforces the confidence gap identified earlier in the questionnaire findings.
3. Assessment redesign is already underway
Encouragingly, many educators are not simply trying to ‘catch’ AI use. They are actively adapting with strategies such as:
- Designing assessment scenarios not available online.
- Moving to paper-based or supervised methods.
- Increasing oral questioning and practical demonstrations.
- Requiring referencing of limited AI use.
As one educator explained:
‘While identifying AI use is clearly a short-term priority, redesigning assessment practices to be fit for purpose in an AI-enabled world is a better long-term solution.’
Another added:
‘Using varied assessment methods helps to combat the use of AI. It is also a valid part of learning and needs to be implemented, not feared.’
This aligns strongly with the questionnaire data, which show a shift toward assessment redesign and oral questioning.
4. A clear call for practical support
Across the educator responses, one consistent theme emerged: educators want clarity, training and usable tools.
‘It would also be useful to know what to look out for and be updated on this as the AI grows and changes.’
‘If we could have a usable application like Turnitin that we can use as and when to copy a piece of text into it, and it gives you a score of how much is AI, that would be very useful.’
There is a strong demand not just for policing tools, but for:
- Clear guidance
- Ongoing updates
- AI literacy education for students
- Assessment models that reflect real-world AI use
What Are We Doing With These Insights?
What we find notable is that very few educators who took part in the questionnaire suggest eliminating AI entirely from education. Instead, what they are asking for is clarity, fairness and systems that offer responsible solutions.
With AI implementation, we believe:
When confidence grows, integrity strengthens → When clarity increases, trust follows → When trust is protected, progress becomes possible.
TeacherMatic is now exploring ways to help teachers address the challenges highlighted in the questionnaire.
The Alternative Assessment generator enables you to consider assessment methods that help tackle these challenges, and we will also implement an option in our Advanced Feedback generator to produce a set of questions for each piece of learner work. Teachers might then discuss these points with learners to gain further insight into the authenticity and originality of the work.
If you have other ideas you would like us to consider, please get in touch before the end of March.