How Peer Marking Slashed Resubmission Rates by 44%
Brief Description
Students gaming the resubmission system learned accountability when they started marking each other's work, dramatically reducing poor-quality submissions and teacher workload.
Summary
Computing students had cracked the code - submit minimum effort work, get feedback, then fix it later. Resubmission rates hit 87% as learners exploited the safety net system. But when they started peer-marking anonymous assignments using official criteria, everything changed. Students became "very honest" critics, understanding for the first time "what it must be like for teachers having to mark poor work." Resubmission rates plummeted to 43% in key units, and submitted work quality improved dramatically.
This episode reveals the psychology behind the transformation - how evaluating others' work forced students to internalize quality standards and recognize command verbs they'd previously ignored. Learn practical strategies for implementing peer assessment, breaking rework cycles, and building accountability through transparency.
The research led to systematic changes including mandatory resubmission request forms that require students to articulate their improvement plans.
-
[Speaker 2]
Okay, let's unpack this. You know, think about any job, maybe, or even just a big project you've worked on. That awful cycle of rework.
You put something in, it maybe just, okay, meets the bare minimum, then it gets kicked back, and you end up spending like twice the time just trying to push this thing over the finish line. Well, today we are diving into a really specific study from education that actually tried to break that exact cycle. Yeah, we've got sources here detailing an action research project.
It was done by Justina Tulloch back in 2018, and her big question was pretty straightforward. Can peer-to-peer marking reduce the number of resubmissions and improve the quality of submitted assignment work? Simple question, right?
Like, complex answer.
[Speaker 1]
No, and this wasn't just, you know, an academic exercise. It came from a real pressing need. They were dealing with this huge volume of resubmissions in their BTEC Level 3 Computing and IT courses.
It wasn't just annoying. It was creating this massive marking and re-marking workload for the teachers. The whole system was getting bogged down.
[Speaker 2]
And the sources, they really point to this behavioural issue driving it. Students had basically figured out how to game the system. They'd submit just the absolute minimum quality, you know, just enough to hit that first deadline, because they knew as long as they got something in on time, they were pretty much guaranteed a second chance of resubmission.
Right.
[Speaker 1]
And the kicker was nobody was formally tracking these resubmissions. So it was just happening.
[Speaker 2]
And that lack of tracking, it sounds like it became a real crisis during crunch time.
[Speaker 1]
Oh, absolutely. The sources really highlight the previous term, April to July, 2018. End of the year, total pressure cooker.
Teachers were basically forced to mark work almost instantly, you know, to get the grades finalised, notify the awarding body, meet all these hard deadlines. It sounds like pure panic.
[Speaker 2]
So this project, this peer marking idea, it had two main goals then, help the teachers, help the students.
[Speaker 1]
Exactly. For the teachers, it was pretty practical. First, just reduce that insane marking load, but also finally get a proper system in place to monitor the resubmissions, track them, because that data could show them, you know, which units, which assignments were the real pain points where students consistently struggled.
[Speaker 2]
And for the learners, what was the hope there beyond just fewer resubmissions?
[Speaker 1]
Well, the hope was obviously better quality work right from the start, less stress for them too, because they wouldn't be relying on that frantic resubmission push, and maybe more fundamentally, a better sense of accomplishment, you know, getting it right or closer to right the first time.
[Speaker 2]
Okay. Here's where it gets really interesting. The whole solution rests on this idea of peer assessment.
Now, maybe to someone outside education, that just sounds like, I don't know, kids grading each other's homework. What's really going on?
[Speaker 1]
It is students looking at other students' work, yeah, but it's structured. Topping defined it back in 2010 as learners considering and specifying the quality of work from their peers.
But the key thing highlighted by Strijbos and Sluijsmans is how it gets students reflecting, talking, working together. And it turns out the study found students are often, and I quote, "very honest" about the work they are marking.
[Speaker 2]
Honest how so? Like more critical?
[Speaker 1]
Yeah, because they have to actually use the marking rules, the criteria on someone else's work. It forces them to internalise what good actually looks like, rather than just passively getting feedback. They become active judges.
[Speaker 2]
Okay, and you mentioned teachers were frustrated by students not understanding certain instructions, these command verbs. Can you unpack that a bit? Why was that such a big deal?
[Speaker 1]
Right, the command verbs. It's crucial in vocational stuff especially. It's the difference between say describing a network setup and evaluating its security.
The verb tells you the level of thinking required. If the task is evaluate and the student just describes, they haven't done the job no matter how much they write. So teachers kept writing feedback like "you need to apply the command verb" or "this is incomplete", because the student just hadn't grasped the action required.
[Speaker 2]
Okay, that makes sense. So this peer assessment was designed to force that understanding. The project itself ran for about six weeks, autumn 2018.
But first, they needed that baseline, right? The before picture.
[Speaker 1]
Exactly. They needed data.
[Speaker 2]
Yeah.
[Speaker 1]
So quantitatively, they went to the college's systems, the Oracle VLE, their learning platform, and Turnitin. And they literally counted how many first submissions versus resubmissions there were for specific units in that nightmare term before. Okay.
And qualitatively, teachers listed those common feedback comments, incomplete work, didn't proofread, misunderstood the command verbs, all symptoms of rushing or not getting it.
[Speaker 2]
So the intervention itself, how did they actually run the peer assessment session? You said it was structured.
[Speaker 1]
Very structured. First, learners got the official unit seven specification document, the rule book essentially.
[Speaker 2]
Right.
[Speaker 1]
Then they were given four pieces of anonymous work from previous students. Anonymity was key obviously.
[Speaker 2]
Yeah, avoids awkwardness.
[Speaker 1]
Definitely. Then, working in pairs, they used a specific peer-to-peer marking form. They had to decide pass, resubmission, or fail, based on the spec doc and the command verbs they now hopefully understood better.
[Speaker 2]
Okay, so they're applying the rules to real flawed examples.
[Speaker 1]
Exactly. And here's the clever bit. After they made their judgement, the actual grade awarded to that anonymous piece was revealed.
[Speaker 2]
So they see the gap.
[Speaker 1]
Precisely. It sparked discussion.
[Speaker 2]
Yeah.
[Speaker 1]
Why did we think this was a pass when it was actually a fail or vice versa? It really highlighted the standard.
[Speaker 2]
Makes sense. Now they also knew that just understanding wasn't enough, right? Time management was part of the problem.
What else did they do?
[Speaker 1]
Yeah, they added two tools. One was a new online progress check form. Students had to fill it out at intervals, track their own progress, set dates for themselves, basically mandatory planning.
[Speaker 2]
Okay.
[Speaker 1]
And the second was simpler, just a formal assignment checklist. Like, have you proofread? Have you addressed all parts of the task?
Have you checked the command verbs? Tick, tick, tick before hitting submit.
[Speaker 2]
Right. Trying to tackle those rushing and incompleteness issues head on. So let's get to the results.
Did it work? Did seeing behind the curtain change student behaviour? Let's compare the before and after numbers.
[Speaker 1]
Okay. Let's look at unit one first. That was communications and employability.
Before the intervention, the resubmission rate was, wait for it, 87%.
[Speaker 2]
87. That's almost everyone needing a redo.
[Speaker 1]
Insane. Right. After the intervention, it dropped to 71%.
[Speaker 2]
71. That's a drop, sure, but still more than two-thirds needing resubmission. That unit seems like a tough nut to crack.
Did they say why?
[Speaker 1]
They didn't elaborate much on why unit one was so resistant in the source we have, but yeah, still very high. But let's look elsewhere. The trend is clear.
Unit seven, organisational system security. Before, it was 71% resubmissions.
[Speaker 2]
Okay.
[Speaker 1]
After the peer marking and other changes, it dropped to 43%.
[Speaker 2]
Wow. Okay. 43%.
That's nearly half. That's a huge difference.
[Speaker 1]
Massive difference. Big impact on workload there.
[Speaker 2]
Yeah, definitely.
[Speaker 1]
And unit 10, communication technologies, similar story. It was 57% before, and dropped down to 36% after.
[Speaker 2]
Okay. So unit one aside, the overall picture is pretty positive then.
[Speaker 1]
Very positive. Significant reductions in those key areas. But here's something else the study stressed, maybe even more important than just the numbers dropping.
It's the quality. They concluded the work submitted in that current term post-intervention was far better.
[Speaker 2]
Oh, okay.
[Speaker 1]
So even when a resubmission was needed, like in that stubborn unit one, the amount of fixing required was far less. The gap between the first submission and a pass was much smaller. So less remarking effort for teachers too.
[Speaker 2]
And what did the students think? We have some feedback from them too, right?
[Speaker 1]
We do. They used a peer-to-peer feedback form. 15 out of 18 students answered yes to all the questions about whether the exercise was helpful.
[Speaker 2]
Pretty strong endorsement.
[Speaker 1]
Yeah. But the qualitative comments, the quotes, they really tell the story.
[Speaker 2]
Like what?
[Speaker 1]
Well, you have students saying things like, "Peer marking was helpful as I now know what a good level of work is." Straightforward benefit. But then there's this one.
"I now understand what it must be like for the teachers having to mark poor work."
[Speaker 2]
Oh, wow. That's empathy right there. That's huge.
[Speaker 1]
Isn't it? That alone almost justifies the whole thing, building that understanding of the teacher's burden.
[Speaker 2]
Absolutely.
[Speaker 1]
And another student said, I need to manage my time better. Which shows those planning tools, the checklist, and the progress form also hit the mark. It wasn't just about understanding quality, but also about the process.
[Speaker 2]
So the project was considered a success then. It led to actual changes.
[Speaker 1]
Yeah. Definitely deemed worthwhile. It was effective enough that it wasn't just a one-off experiment.
It led to real lasting changes in how the team handled assessment. It basically helped them move away from that informal crisis mode they were stuck in.
[Speaker 2]
And the big proof of that success is the formalisation, right? The internal verifier, the IV.
[Speaker 1]
Yeah. The IV, who's sort of the quality gatekeeper for the course, agreed that the old way just wasn't sustainable or fair. A proper system was needed.
[Speaker 2]
So what's the new system?
[Speaker 1]
They brought in a mandatory resubmission request form. Now, if a student gets feedback saying they need to resubmit, they can't just upload a new version. They have to fill out this form first.
And crucially, they have to state how they plan to improve the work.
[Speaker 2]
Ah, making them articulate the fix.
[Speaker 1]
Yes. Making them identify the specific actions needed to address the feedback. Maybe it's related to those command verbs or missing content.
[Speaker 2]
And this gets tracked?
[Speaker 1]
Formally tracked. The form data feeds into an Excel spreadsheet. So the IV, the teachers, everyone can see who is resubmitting, why, and what they plan to do.
It brings transparency and accountability, finally solving that initial problem of things just slipping through the cracks. So, to connect this to the bigger picture, what we see here is a really neat example, albeit small scale, of how involving students directly in the assessment process pays off.
Making them use the criteria, not just read them, led directly to better quality work and, importantly, less burnout for the staff.
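Purely as an illustration of that kind of monitoring, here is a minimal sketch of how a resubmission tracking log like the one described could be summarised per unit. The tabular layout, column names, and rows below are invented examples for the sake of the sketch, not data from the study.

import pandas as pd

# Invented example of a resubmission tracking log: one row per first submission,
# flagging whether that submission later needed a resubmission.
log = pd.DataFrame(
    [
        ("Unit 1", "Student A", True),
        ("Unit 1", "Student B", True),
        ("Unit 7", "Student A", False),
        ("Unit 7", "Student B", True),
        ("Unit 10", "Student A", False),
        ("Unit 10", "Student B", False),
    ],
    columns=["unit", "student", "needed_resubmission"],
)

# Resubmission rate per unit: share of first submissions sent back for rework.
rates = log.groupby("unit")["needed_resubmission"].mean().mul(100).round(1)
print(rates.sort_values(ascending=False))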
[Speaker 2]
Yeah. Transparency really driving accountability. And the recommendation was to keep tracking this, right?
To figure out those stubborn spots like unit one.
[Speaker 1]
Exactly. Keep comparing the stats over time. See which specific criteria keep causing problems.
Then you can tweak the teaching for those specific IT topics, you know. Maybe approach them differently if students consistently stumble on evaluation versus description in networking, for instance. Close the loop.
[Speaker 2]
So what does this all mean? We've seen a case where giving students this structured peek behind the curtain of grading really shifted their understanding and their output. It makes you wonder, doesn't it?
If this works for tackling academic rework and improving quality, what other areas, maybe in professional life, maybe in other kinds of learning, could benefit from a similar approach? You know, giving the people doing the work a structured, temporary experience of what it's like to be on the receiving end. Evaluating the quality.
Dealing with the consequences of rushed or incomplete efforts. Something to think about.