Last week, I attended a symposium on AI-assisted feedback at Imperial College London. I was particularly inspired by an opening panel presentation by Professor Naomi Winstone on the care-full craft of feedback in the age of generative AI. One question in particular stayed with me: what if the problem isn’t that we’re doing too little feedback, but that we’re doing too much of the wrong thing?
Reflecting on that question since, I realise that in UK higher education, we have become somewhat obsessed with feedback volume. More comments, we assume, signal more care. Yet the National Student Survey continues to tell a familiar story: feedback remains one of the lowest-rated aspects of the student experience. If our carefully crafted comments aren’t landing – if students aren’t using them to change what they do – then we haven’t really provided feedback at all. We’ve provided inputs.
Efficiency versus utility
Much of the current focus on generative AI in feedback is driven by staff workload. AI promises quicker turnaround, greater consistency, and more manageable marking. Timeliness does matter – but efficiency alone is a staff-facing driver.
As the Manifesto for Feedback in the Age of Generative Artificial Intelligence argues, feedback is not a product but a relational, developmental process; comments only become feedback when students engage with them and something meaningful changes as a result (Winstone et al., 2025). That distinction matters now because generative AI makes it trivially easy to scale feedback production. If we automate poorly designed feedback processes, we risk amplifying – at scale – the very practices students already find least useful.
If we simply use AI to produce the same kinds of comments more quickly, we are doing the same things differently, but not doing different things.
From fixing the work to developing the learner
In my work with Beverley Gibbs on Students as Partners, we found that students conceptualise feedback in a fundamentally different way from academics. They expect feedback to help them adjust course while they are in flight, not simply to explain why they missed the destination once they’ve landed (Wood & Gibbs, 2019). To make sense of this difference (and other gaps between staff and student conceptualisations of the learning experience), we drew on the distinction between single-loop and double-loop learning, originally articulated by Argyris & Schön (1978) and later applied to learning and feedback in higher education contexts (Gibbs & Wood, 2021).
Most feedback students receive operates in what Argyris & Schön describe as single-loop learning. It is a tactical correction designed to help a student fix a specific error in a specific task: improve the structure, strengthen the argument, engage more critically with the literature. Such feedback can improve this piece of work, but it rarely changes how the student approaches learning more fundamentally.
Double-loop learning works at a different level. Rather than simply asking how do I fix this?, it prompts the learner to reflect on why this kind of issue is occurring. In feedback terms, this might involve helping a student recognise that difficulties with critical analysis, academic voice, or synthesis appear across multiple modules – not as isolated mistakes, but as patterns in how they approach their learning.
The challenge is not that this kind of reflection lacks value. It’s that our feedback systems are rarely designed to support it. Feedback is fragmented across modules, markers, and assessment points. Individual tutors are constrained by what they can see, and students are left to do the hard work of synthesis themselves – often without the support, confidence, or feedback literacy to do so effectively.
The “all-knowing” digital tutor
This is where generative AI opens up a genuinely new possibility. Imagine a student-owned feedback repository that captures feedback across modules and across time. A generative AI system could help students identify recurring strengths, persistent challenges, and priorities for development – connections that human tutors, constrained by module and programme boundaries, simply cannot see.
Now imagine that same system sitting alongside a new assessment brief. It could prompt students to ask: What have I previously been told to do less of? Where does this task give me an opportunity to practise something I struggled with before? What examples of good practice from earlier work should I carry forward?
Here, AI is not generating more feedback. It is helping feedback do more work. It enables feedback to function in exactly the way many students already expect it to: as guidance for future action, not post hoc justification.
A provocation for the sector
The COVID-19 pandemic forced us to do things differently, and in doing so we discovered that some forms of disruption led to better practice. AI is a similar inflection point. The risk now is that we use it merely to formalise existing institutional desire lines – automating and entrenching feedback processes that already fail to meet students’ needs – rather than redesigning the landscape around how students actually want to use feedback.

If AI becomes little more than a faster way of producing comments, we will have missed a significant opportunity. The real challenge for the sector is to stop asking how AI can help us do the same things more efficiently, and start exploring how it allows us to design feedback differently – around use, patterns, and future learning.
AI is not just a faster pen. It’s an opportunity to reimagine what feedback could be. Are we brave enough to design for it?
References
Argyris, C. and Schön, D.A. (1978) Organizational learning: A theory of action perspective. Reading, MA: Addison-Wesley.

Gibbs, B. and Wood, G.C. (2021) ‘How can student partnerships stimulate organisational learning in higher education institutions?’, Teaching in Higher Education. https://doi.org/10.1080/13562517.2021.1913722
Winstone, N., Gravett, K., Noble, C., Nicola-Richmond, K., Bearman, M., Jensen, L.X., Jones, A., Corbin, T., de Kleijn, R., Gabelica, C., Kainth, R., Poobalan, A. and Reedy, G. (2025) Manifesto for feedback in the age of generative artificial intelligence. Figshare. https://doi.org/10.6084/m9.figshare.30195568
Wood, G.C. and Gibbs, B. (2019) ‘Students as partners in the design and practice of engineering education: Understanding and enabling development of intellectual abilities’, in Malik, M., Andrews, J., Clark, R. and Broadbent, R. (eds) Realising Ambitions: 6th Annual Symposium of the UK & Ireland Engineering Education Research Network. Portsmouth: University of Portsmouth.