As part of the TeacherMatic Webinar Series, we explored what it means to use AI cautiously, transparently and responsibly in practice, specifically across higher education.
Hosted by Peter Kilcoyne from TeacherMatic, the session featured Laura Milne and Jules Barnes from the University of Chester, and Hannah Lawrence from Jisc.
Together, they discussed their approach to AI at the university, their philosophy and how they’re implementing it. They also shared practical examples of institutional AI leadership and updates on TeacherMatic’s latest AI developments for educators.
Here are some key highlights from the conversation.
A Framework for Responsible AI: The University of Chester’s Approach
Laura Milne quickly established the University of Chester’s position on AI integration:
‘The University of Chester intends to cautiously, responsibly and transparently embrace Gen AI.’
‘We really want to have that open dialogue with students and staff about how and where AI is being used, so that transparency comes through. That means that we state openly where we’ve used AI and how we’ve used AI.’
This is embedded into how the university trains staff, supports students and builds their internal practices, as Laura explained:
‘Within the AI and education team, we run quite a lot of training for staff. We’ve designed training for students on ethical and safe AI use. Safety is quite important.’
‘Part of our AI and education group activities was deriving the University of Chester’s generative AI principles. We’ve got 10 guiding principles: the way we work, guiding our actions.’
Laura and her team’s approach blends practical AI implementation with human reflection, captured in a memorable metaphor:
‘Catherine Welsh coined the term “The Human AI Sandwich.” So, you’ve got to have human input, an AI process and then a human evaluation, iteration and development of that process.’
By keeping humans firmly in the loop, the university preserves its academic judgement, which is a critical ingredient for responsible AI adoption.
Building AI Tools that Fit
The philosophy of human-centred AI design is also central to the TeacherMatic brand, as Peter explained:
‘Since day one, everything that’s built on TeacherMatic is designed in partnership with practitioners.’
Rather than creating tools in isolation with only IT and development teams, TeacherMatic’s development is rooted in open, ongoing collaboration with educators at schools, colleges and universities:
‘The ideas come from the universities and the colleges that are using TeacherMatic, and we build the tools in partnership, working with current practitioners to make sure those tools are doing exactly what people need.’
This shared development model helps ensure that AI supports real teaching practice, not just theoretical use cases. The aim is for every tool to bring genuine value to educators, rather than becoming another new, overwhelming tool that sits dormant on their dashboard.
Ethical Use Starts with Secure Data
One of the most urgent concerns around AI in education is the handling of sensitive data. As AI tools become more embedded in daily workflows, the risk of unintentional data exposure rises.
Peter expanded on how TeacherMatic addresses this by creating a closed and secure environment:
‘Anything that you upload into TeacherMatic, unlike with ChatGPT and Copilot, is not going anywhere. It’s not feeding large language models. It’s all within a “walled garden”, so your data is very safe.’
Laura reinforced this from her university’s perspective:
‘We’re not worried about our data leaking out into the world, which is particularly important when you’re thinking about sensitive data.’
At TeacherMatic, we understand that truly responsible AI use is only possible when data protection is embedded from the start and not ‘added on’ later.
‘Hybrid Models’ that Keep Educators in Control
While a core purpose of generative AI is to act as a support system that increases efficiency, the goal isn’t to replace educators. Instead, TeacherMatic has designed its platform and resources to support each educator’s own judgement.
Peter illustrated this by demonstrating the enhanced TeacherMatic Feedback generator, which keeps teachers involved at every stage of the process:
‘You’re getting a real hybrid AI and teacher. It’s saving time, you’re getting the benefits of AI, but you’re not losing the human impact as well.’
This hybridisation mirrors ‘The Human AI Sandwich’ philosophy: AI is part of a larger process, not the whole solution. When AI tools are shaped by educator input, they become not just efficient but also trusted across the institution and the teams that use them.
The Impact of Shaping AI in Education from Jisc
Rounding off the session, Hannah Lawrence from Jisc shared her perspective on TeacherMatic’s alignment with national priorities for AI in education:
‘Through the AI team’s work and the wider engagement, TeacherMatic has proven to be an effective and valued tool, fully aligned with Jisc’s strategic framework for AI.’
Jisc’s recognition of TeacherMatic as a tool that supports safe, ethical and impactful AI use adds weight to the institutional trust already shown by the University of Chester and 350+ schools, colleges and universities currently using TeacherMatic on a daily basis.
TeacherMatic is currently partnering with Jisc on a third pilot study, and Hannah noted the impact that its FE, HE and assessment generators are having with educators involved in the scheme:
‘I think that’s a record. I don’t think there’s been another company that’s had three pilots or three other tools. There’s some excellent feedback on the automatic grading tools that are being tested and trialled within the institutions that are taking part.’
Final Reflections
This webinar reinforced a key concept:
Responsible AI in education isn’t just about compliance or caution. It’s about clarity, collaboration and creating space for human judgement.
By shaping AI use through values rather than hype, the University of Chester and TeacherMatic are showing how technology can support education without undermining it.
As Laura shared, ‘It’s not just supportive in terms of training wheels for teaching people how to prompt well, it’s also training wheels for us as we sort out our university processes so that we’re doing these things really ethically and responsibly.’
Watch the Full Webinar Recording
If your institution is exploring how to adopt AI with clarity, care, and confidence, this session offers a grounded, practical view of how to achieve it.
See How TeacherMatic Works in Your Teaching Context
Here’s how to get started: