Have you ever wondered how ARRT picks which 200 items will make it onto an exam form? Maybe a dartboard or bingo tumbler? The first draft of an ARRT exam is actually created through a process known as Automated Form Construction. Think of a keyring with many keys: you try each key until one opens the door. Automated Form Construction follows the same idea, except a psychometrician uses software to assemble many candidate forms within a set of rules. The rules set exactly how many scored items must come from each section and subsection of the content specification, and how difficult the whole test form should be at the passing standard. Forms that hit the section counts exactly and land closest to the target difficulty rise to the top. From there, the psychometrician selects the form that best matches both goals and confirms that each section and subsection contains the required number of items.
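To make the idea concrete, here is a minimal sketch in Python of how that kind of rule-based assembly can work. The item pool, section labels, blueprint counts, and target difficulty below are made-up placeholders, not ARRT's actual data or software; the sketch only illustrates the two rules described above: exact section counts and closeness to a target difficulty.

```python
import random

random.seed(42)

# Hypothetical pool of 600 items spread evenly across three sections ("A", "B", "C");
# each item carries a made-up difficulty value.
pool = [{"id": i,
         "section": "ABC"[i % 3],
         "difficulty": random.uniform(-2.0, 2.0)}
        for i in range(600)]

blueprint = {"A": 80, "B": 70, "C": 50}   # required scored items per section (illustrative)
target_difficulty = 0.0                    # target average difficulty at the passing standard

def build_candidate_form():
    """Draw items so every section count in the blueprint is met exactly."""
    form = []
    for section, count in blueprint.items():
        section_items = [item for item in pool if item["section"] == section]
        form.extend(random.sample(section_items, count))
    return form

def distance_from_target(form):
    """How far a form's average difficulty sits from the target."""
    mean_difficulty = sum(item["difficulty"] for item in form) / len(form)
    return abs(mean_difficulty - target_difficulty)

# Assemble many candidate forms, then surface the ones closest to the target.
candidates = [build_candidate_form() for _ in range(1000)]
candidates.sort(key=distance_from_target)
best = candidates[0]
print(f"Best candidate: {len(best)} items, "
      f"{distance_from_target(best):.3f} away from the target difficulty")
```

In this toy version, every candidate form already satisfies the section counts by construction, so the ranking step only has to sort forms by how close they come to the target difficulty before a reviewer ever looks at them.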
Behind the scenes, nothing is left to guesswork. The tool follows a rule-based search to assemble valid forms and a simple prediction step to check difficulty at the pass line. Different search strategies can be used, but the practical outcome is the same: only forms that match the content specification and align with the target difficulty are considered for use. This results in a fair, consistent exam that is built the same way, for the same purpose, every time.
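One way to picture that prediction step is to ask what score a candidate sitting exactly at the pass line would be expected to earn on each form. The short sketch below assumes a simple Rasch-style item model with hypothetical difficulty values and a hypothetical pass-line ability; it illustrates the idea of checking difficulty at the pass line, not the actual model or values ARRT uses.

```python
import math

def expected_score_at_pass_line(item_difficulties, passing_ability):
    """Predicted raw score for a candidate whose ability sits exactly at the pass line."""
    return sum(1.0 / (1.0 + math.exp(-(passing_ability - b)))
               for b in item_difficulties)

# Two small candidate forms with made-up item difficulties.
form_one = [-0.4, 0.1, 0.3, -0.2, 0.5]
form_two = [-0.1, 0.0, 0.2, 0.1, -0.3]
passing_ability = 0.0  # hypothetical pass-line ability on the same scale

for name, form in [("Form one", form_one), ("Form two", form_two)]:
    score = expected_score_at_pass_line(form, passing_ability)
    print(f"{name}: expected score {score:.2f} out of {len(form)}")
```

If two forms produce very similar expected scores at the pass line, a borderline candidate should fare about the same on either one, which is exactly the alignment the construction rules are checking for.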
Automated Form Construction is valuable because it keeps two promises at the same time. First, it respects the content specification exactly, so each form contains the required number of scored items in every section and subsection defined by ARRT. Second, it keeps each form's overall difficulty close to the target at the passing standard, so different forms feel similarly difficult. In practice, this means two different forms are expected to treat borderline candidates the same way, supporting fair pass/fail decisions every time.
Nov 21, 2025 | by Assessments Department