The ARRT item writing manual is extensive, with guidance for everything from terminology to item format and standardized language. Today, I want to talk about cognitive load, one of the key psychological phenomena we consider when writing the manual.
Cognitive load is the amount of burden a task puts on a candidate's mental resources. For example, running dose calculations in your head carries a higher cognitive load than recalling simple facts about a procedure. This is normal, but cognitive load becomes a problem for us when it doesn't relate to the knowledge an exam is trying to assess. Unnecessary load increases candidate fatigue as well as the chance of answering incorrectly despite knowing the answer. We won't use items if errors like this become too frequent because they don't measure candidates fairly. With that in mind, let's look at a couple of examples from the item writing manual.
First, ARRT no longer accepts "k-type" items. These are items where the answer options use number codes to reference statements in the stem and look something like this:
A. 1 and 2
B. 2 and 3
C. 1, 2, and 4
D. 1, 3, and 4
These k-type items require the candidate to keep the information associated with each number in their limited working memory to answer correctly. A candidate at the end of a long exam is likely to run out of capacity, mix up the codes, and give the wrong answer by accident. With modern testing software, we can replace these items with multiselect versions that reduce load by simply allowing the candidate to check a box next to every correct answer.
Similarly, we strive to write items in active voice whenever possible. Consider the following passive-voice sentence: "The patient was counseled by the technologist prior to the exam." The reader must keep a placeholder in memory until the end of the sentence, where we finally reveal that the actor was the technologist all along. The active-voice sentence, "The technologist counseled the patient prior to the exam," puts the information where the reader needs it, leaving more resources free to think about content relevant to the exam.
Next time you sit down to write new items or review items as part of an exam committee, remember these examples and consider whether the item puts unnecessary cognitive load on the candidate. We want to measure a candidate's clinically relevant knowledge and skills rather than their ability to rearrange sentences under pressure. Sometimes, restructuring a question or changing the item type can make all the difference.