Assessment Literacy

We have been learning about academic assessment at work, and standardized testing has more in common with game design than you might expect. You want a good test of your knowledge, skills, and abilities.

You want a clear target for what you are trying to evaluate. Well-designed tests and games present a particular, intentional form of challenge. What skills are you trying to challenge or bring into conflict? A strategy game that is primarily decided by clicking speed or a roll of the dice fails as a strategy game. Some games test visual acuity or memorization far more than their intended primary mechanic. Tests have a similar structure: “construct validity” is the degree to which a test measures what it claims to measure. Bad tests have confusing wording or rely on knowledge not relevant to the construct.

You know that bit where your first-person shooter has a required vehicle section? Where your strategy RPG puts a reward behind casino games? Where any F2P game devolves into a cash shop grab? That is the same sort of thing as a test with questions worded like, “Which of the following isn’t incorrect?” or math questions that assume you know the rules of a sport or of soybean futures trading. Badly designed games and tests both fail you for no reason you could reasonably anticipate, or let you pass by no merit of your own.

Good testing systems need clear and timely feedback. If your result is a single number or an opaque wall of words, it does not help you grow. If you get the results long after the test, you have forgotten the details, or they have become moot. This is also the difference between formative and summative evaluation: a final test of what you have learned can be more of a thumbs up or down, but there should be many evaluation points along the way that provide guidance on whether you are on the right path.

“Formative assessment” is a concept some games lack. They jump straight to The Test. You learn by failing and trying again. That can work, depending on the scale of “fail”; dying is not always losing or failing in video games. This is my recurring theme that a game should be at least theoretically beatable on a first playthrough. Your first encounter with something should not always result in failure; it should be forgiving enough to provide a chance to learn, recover, and succeed. A “gotcha” that is impossible without foreknowledge and trivial with it is not good design. A more insidious version is an early cost or bit of damage that guarantees a later failure; you survived that particular challenge, but you are a dead man walking.

Final bosses and final tests should have new elements. Novelty creates interesting and meaningful challenges, not just rehashes of what you already did. But those elements should be extensions of (and culminations of) the learning process, not something wildly unrelated. If you have ever walked into a final exam and felt ambushed, that was probably bad design, either of the test itself or of the materials leading up to it. That does not mean you should always pass the test, but you should have a chance to realize you are struggling before failing. There should be a clear connection between what you learned and what you are being tested on. Platformers that lead to non-platforming puzzle bosses are throwing in a new minigame as a final exam.

: Zubon

One thought on “Assessment Literacy”

  1. I’m of the opinion that we’re in the middle part of the pendulum swing from “no one can fail” to “let’s see how people cope with failure”. Testing people’s ability to apply knowledge from one domain to another, knowing that they are likely to fail, is a great way forward. Kobayashi Maru, anyone?
