After a quiz, most instructors know two things: which students passed and which failed. What they usually do not know is why — which specific concepts broke down, how widespread the confusion is, and what to do about it before the next lesson.
That is the difference between checking for memory and measuring learning. Scores check memory. Gap analysis measures learning — and the gap between them is where most classroom assessment goes wrong.
What a score actually tells you
A score of 72% tells you that a student answered 72% of the questions correctly at the moment they took the quiz. It does not tell you:
- Which 28% they got wrong
- Whether those wrong answers cluster in a specific domain or are scattered randomly
- Whether the correct answers reflect understanding or guessing
- Whether the same student would score 72% on a different version of the same quiz
- What the rest of the class got wrong, and whether it overlaps
Aggregate scores are even less informative. A class average of 78% sounds fine until you notice that every student missed the same three questions — questions that cover the exact concept you will build on in next week's lecture.
The three levels of assessment data
Score-level data (what most tools give you)
Total correct, total wrong, class average. Tells you who needs a conversation. Does not tell you what the conversation should be about.
Question-level data (what better tools give you)
Per-question accuracy rates across the class. Tells you which questions most students missed. Now you can see patterns — if questions 7, 11, and 14 all cluster around drug calculations and 60% of your class missed them, that is a diagnosis.
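If your quiz tool exports raw responses, this level is a few lines of scripting away. Here is a minimal Python sketch — the export format and field names are invented for illustration:

```python
from collections import defaultdict

# Hypothetical export: one record per student per question.
responses = [
    {"student": "s01", "question": 7, "correct": False},
    {"student": "s02", "question": 7, "correct": False},
    {"student": "s01", "question": 11, "correct": True},
    {"student": "s02", "question": 11, "correct": True},
]

totals = defaultdict(lambda: [0, 0])  # question -> [n_correct, n_answered]
for r in responses:
    totals[r["question"]][0] += r["correct"]  # True counts as 1
    totals[r["question"]][1] += 1

for q in sorted(totals):
    n_correct, n_answered = totals[q]
    print(f"Q{q}: {n_correct / n_answered:.0%} correct")
```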
Distractor-level data (what you actually need)
Not just which question students missed, but which wrong answer they chose. If 40% of your class chose "B" on question 7 and B represents a specific misconception, you now know the exact misunderstanding to address — not just the topic, but the specific error in thinking.
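Getting from question-level to distractor-level data is one more step: tally which option each student chose instead of only whether they were right. A sketch under the same assumptions as above, with an invented answer key:

```python
from collections import Counter

answer_key = {7: "C", 11: "A"}  # hypothetical

# Hypothetical export: the option each student picked, per question.
picks = [(7, "B"), (7, "B"), (7, "C"), (7, "B"), (11, "A"), (11, "D"), (11, "A")]

# Count only wrong choices, most common first.
wrong = Counter((q, c) for q, c in picks if c != answer_key[q])
for (q, choice), n in wrong.most_common():
    print(f"Q{q}: {n} students chose wrong option {choice}")
```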
Why most assessments stop at level 1
The tools instructors have access to optimize for ease of setup, not depth of output. A quiz that takes 5 minutes to create will give you a spreadsheet of scores. To get question-level data, you need a tool that structures its output that way. To get distractor-level data, you need a tool that was built around that question from the start.
This is not an argument for expensive assessment software. It is an argument for thinking about what question you are actually trying to answer when you run a quiz.
The right question after a quiz
Not: "Who passed and who failed?"
Not: "What was the class average?"
The right question: "Which concepts did this class collectively misunderstand, and what am I going to do about it before the next lesson?"
Four concrete ways to get better data from existing quizzes
Tag questions by concept, not just topic
A question on fluid balance is not just "renal system." It might specifically test the student's understanding of osmotic pressure vs hydrostatic pressure. If you tag questions at that granularity, wrong answers tell you something specific. Topic tags are too broad to generate actionable interventions.
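Concept tags also let you roll per-question accuracy up to the level you actually teach at. A sketch with invented tags and numbers, building on the accuracy calculation above:

```python
from collections import defaultdict

# Hypothetical concept-level tags; a topic tag would just say "renal system".
concept_tags = {
    7: "osmotic vs hydrostatic pressure",
    11: "drug calculations",
    14: "drug calculations",
}
accuracy = {7: 0.43, 11: 0.61, 14: 0.58}  # per-question, from the earlier sketch

by_concept = defaultdict(list)
for q, acc in accuracy.items():
    by_concept[concept_tags[q]].append(acc)

for concept, accs in sorted(by_concept.items()):
    print(f"{concept}: {sum(accs) / len(accs):.0%} mean accuracy")
```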
Write distractors that represent real misconceptions
Random wrong answers give you random data. Wrong answer choices should represent predictable student mistakes — the answers students give when they understand the surface of a concept but have a specific gap underneath. When students choose those answers, you know exactly what the gap is.
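If your authoring format records the misconception behind each distractor, the wrong-answer report can name the gap automatically. A hypothetical structure — stem, options, and labels are all invented:

```python
# Hypothetical authoring format: every distractor names the mistake it traps.
question_7 = {
    "stem": "Which dose is appropriate for a patient with reduced renal clearance?",
    "answer": "C",
    "distractors": {
        "A": "confuses loading dose with maintenance dose",
        "B": "applies the standard adult dose, ignoring renal clearance",
        "D": "adjusts for body weight instead of clearance",
    },
}

picked = "B"  # the modal wrong answer from the class distribution
print(question_7["distractors"].get(picked, "correct answer or untagged option"))
```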
Review answer distributions before moving on
Before you close out a quiz session, look at each question and see which answer students chose most. A question where 25% of the class chose the same wrong answer is a flag that should change what you teach next. This takes 3 minutes and is more valuable than reading the average score.
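That review can even be a threshold check rather than an eyeball pass. A sketch using the 25% flag described above, with an invented answer key and class numbers:

```python
from collections import Counter

CLASS_SIZE = 28
FLAG_SHARE = 0.25               # the 25% threshold from the text
answer_key = {7: "C", 14: "D"}  # hypothetical

# (question, choice) per student; counts invented for illustration.
picks = [(7, "B")] * 11 + [(7, "C")] * 17 + [(14, "A")] * 8 + [(14, "D")] * 20

for (q, choice), n in Counter(picks).items():
    if choice != answer_key[q] and n / CLASS_SIZE >= FLAG_SHARE:
        print(f"Q{q}: {n}/{CLASS_SIZE} chose option {choice} -- reteach before moving on")
```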
Build the follow-up practice before you leave the room
The best time to assign targeted practice is immediately after the quiz that revealed the gap — while the stakes are visible and the context is fresh. If you wait until next week, you will forget which specific concepts to target and students will forget why they need it.
What gap-based teaching looks like in practice
A nursing instructor runs a 10-question battle on cardiovascular medications with 28 students. The battle ends and the results show:
- Questions on mechanism of action: 84% class accuracy
- Questions on drug interactions: 71% class accuracy
- Questions on adverse effects in renally impaired patients: 43% class accuracy
The score average (66%) would have told her to review the whole unit. The gap data tells her exactly what to do: spend 15 minutes on renal dosing adjustments and adverse effect recognition, then assign a 5-question practice set on those specific concepts before the next class.
That is the difference between teaching to the average and teaching to the gap.
LRNRS battles give you question-level gap data
After every battle, LRNRS shows you per-question accuracy across the class and which answer options students chose. Subscribers can generate a targeted remediation battle from the missed concepts in one click — so the follow-up is built before you dismiss the class.
Run a free battle →
For high-stakes exam contexts
In courses where the end goal is a licensure or certification exam — NCLEX, NREMT, FE — gap analysis is not a nice-to-have. Exams like the NCLEX and NREMT are adaptive: the algorithm probes until it finds the edges of a student's knowledge. If you are not using the same kind of analysis in your practice, students will discover their gaps on the real exam instead of in your class.
NCLEX adaptive prep
Domain-level gap detection built for nursing boards prep.
NREMT adaptive prep
Scenario-based practice with gap detection across all NREMT domains.
Related: Why students pass practice quizzes but fail the real exam · What Kahoot doesn't tell you after the quiz