There is a question instructors ask constantly and almost never answer correctly: "Do my students understand this?"
The reason they cannot answer it is that the tools they use — quizzes, exams, end-of-chapter questions — are designed to check memory, not understanding. The difference matters enormously, especially in applied fields where students need to use knowledge in contexts they have not seen before.
What memory looks like on an assessment
A question that checks memory asks students to retrieve something they were told. "What is the normal range for serum potassium?" "Which drug class ends in -pril?" "What does SIADH stand for?" Students who attended class and read the chapter will answer these correctly. Students who did not will not.
This is useful information — knowing that a student was absent or did not read tells you something. But it does not tell you whether the students who got the answers right have the kind of knowledge that transfers to new situations.
What understanding looks like on an assessment
A question that checks understanding asks students to apply, reason, or evaluate — not just retrieve. "A patient taking lisinopril presents with a dry cough and K+ of 5.8. What is the most appropriate nursing action and why?" This question cannot be answered by someone who memorized a list of ACE inhibitor side effects without understanding the mechanism behind them.
Understanding requires being able to:
- Use the concept in a context different from how it was presented
- Explain why something is true, not just that it is true
- Recognize when the concept applies and when it does not
- Generate the correct answer to a novel variation of a familiar problem
The dangerous middle
The most dangerous students are those with strong recall but shallow understanding. They pass quizzes convincingly. They appear engaged and competent. Then they encounter a novel clinical situation and freeze — because their knowledge is indexed by topic, not by the underlying principle that connects topics together.
How to tell which one you are measuring
The simplest test: would a student who understood the concept but had not taken your specific course be able to answer the question? If yes, you are checking understanding. If they would need to know specifically what you taught in your class, you are checking memory of your instruction.
A corollary: could the question be answered by searching the textbook? If yes, it is a memory question. Understanding questions require synthesis that cannot be looked up.
Why this matters more for some subjects than others
High-stakes applied fields
Nursing, medicine, EMT, engineering. In these fields, the "exam" is a real-world situation where wrong answers have consequences. Memory is a floor, not a ceiling. The licensure exams (NCLEX, NREMT, PE) specifically test application and clinical reasoning — not recall. Programs that teach to memory produce graduates who pass coursework and fail boards.
Conceptually-dense prerequisites
Anatomy, physiology, organic chemistry, statistics. These courses feed into later courses that assume the knowledge is usable, not just remembered. A student who memorized the nephron diagram but does not understand how the countercurrent mechanism concentrates urine will struggle with every renal drug, every fluid balance calculation, every acid-base problem they encounter later.
Three ways to shift from memory-checking to understanding-checking
1. Change the question stem from "what" to "why" or "what next"
Instead of "What is the mechanism of action of beta blockers?" — "A patient with heart failure is started on metoprolol. Their heart rate drops from 88 to 54 two days later. Is this expected? What do you do?" The factual content is the same. The requirement shifts from retrieval to reasoning.
2. Use novel contexts, not familiar examples
If you used a hypertension case in lecture, do not put a hypertension case on the quiz. Use a different condition that requires the same underlying concept. Students who memorized your lecture example will fail the unfamiliar application. Students who understood the principle will succeed. That difference is what you want to see before the exam.
3. Look at which students got the same questions wrong, not just how many
If your high-scoring students all missed one specific question, that question probably tests understanding, and the class could not transfer the concept to it. That is your diagnosis. If wrong answers are spread randomly across high and low scorers alike, the quiz is probably measuring memory plus guessing, not understanding.
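The pattern described in step 3 is what psychometricians call item discrimination. As a minimal sketch of how you might compute it from quiz results — the data, student names, and function name here are illustrative, not from any real tool — compare how the top-half scorers did on each question against the bottom-half scorers:

```python
# Hypothetical quiz results: question ID -> {student: answered correctly?}.
# All names and scores below are made up for illustration.
results = {
    "q1": {"ana": True,  "ben": True,  "cara": True,  "dev": False, "eli": False},
    "q2": {"ana": False, "ben": False, "cara": True,  "dev": True,  "eli": False},
    "q3": {"ana": True,  "ben": True,  "cara": False, "dev": False, "eli": False},
}

def discrimination_index(results, question):
    """D = (fraction of top-half scorers who got the question right)
         - (fraction of bottom-half scorers who did).

    D well above 0: the question separates students who understood
    from those who did not. D near 0 (or negative): wrong answers are
    spread across ability levels, suggesting memory or guessing."""
    # Total score per student across all questions
    totals = {}
    for answers in results.values():
        for student, correct in answers.items():
            totals[student] = totals.get(student, 0) + int(correct)
    # Rank students best-to-worst, then split into top and bottom halves
    ranked = sorted(totals, key=totals.get, reverse=True)
    half = len(ranked) // 2
    top, bottom = ranked[:half], ranked[-half:]
    p_top = sum(results[question][s] for s in top) / half
    p_bottom = sum(results[question][s] for s in bottom) / half
    return p_top - p_bottom

for q in results:
    print(q, discrimination_index(results, q))
```

In this toy data, q1 and q3 discriminate strongly (only the top scorers get them right), while q2 discriminates negatively (bottom scorers do better than top scorers), which usually signals a flawed or miskeyed item rather than an understanding question.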
What to do with the data
Once you know which questions revealed genuine understanding gaps — as opposed to memory gaps — the remediation is different:
- Memory gap: students need re-exposure. Go over the fact again, put it in a different context, assign a retrieval practice set.
- Understanding gap: students need a different explanation. The initial framing did not work. Try a different analogy, a worked example, or a discussion of the underlying mechanism rather than the surface rule.
Applying a memory-gap remedy to an understanding gap is one of the most common instructional mistakes — it adds review time without changing what the student actually understands. More exposure to a confused concept does not clear the confusion. A different explanation does.
Understanding-focused questions in LRNRS battles
LRNRS battles work best when questions are written to test application and reasoning. After the battle, the gap report shows which concepts broke down across the class — which helps you distinguish between "they do not know this topic" and "they know the topic but cannot apply it." That distinction changes what you teach next.
Create a free battle →

Related: Why students pass practice quizzes but fail the real exam · How to actually measure whether learning happened · What Kahoot doesn't tell you after the quiz