Content Validity Starts with a Good Practice Analysis
A test must have content validity: the topics and tasks included on the test must be relevant to the purpose for which the test is used. One way to understand content validity is to picture two ovals: a large oval representing all the relevant tasks performed in a job, and a smaller oval representing the content covered on a test. Content validity is the degree to which the smaller oval of test content fits within the larger oval of job tasks. When the content on an exam is not nested within the tasks performed on the job, the consequences can be serious.
One example of what can happen when exam content is not nested within job-related topics is Griggs v. Duke Power Co., 401 U.S. 424 (1971). Black employees were denied employment in the highest-paying departments of Duke Power Co. on the basis of tests that were not relevant to the tasks performed on the job. The tests were designed by professionals, but because they did not reflect what was actually performed on the job, they lacked content validity. Practice analysis is how we begin the process of making sure the content of a test is connected to what is performed in a job.
Practice analysis results flow into every other aspect of exam development, ensuring content validity at each step. A practice analysis informs the Task Inventory, a list of the tasks most often performed by technologists. The Task Inventory informs the Content Outline, the topics we have identified as relevant to entry-level technologist jobs. The knowledge and skills identified in the Content Outline then determine which items are selected for our examinations.
Thank you to all our volunteers who serve on our Practice Analysis committees: your work is what allows us to develop exams with content validity. If you would like to learn more about joining a practice analysis committee for one of your disciplines, please visit the ARRT Volunteer Portal.