CCSSO, Center for Assessment Publish Supplement to Report on Score Comparability across Computerized Assessment Delivery Devices
Any body of research evolves over time. Previous understandings become more nuanced, ideas are supported or refuted, and, eventually, we arrive at a clearer view of the issue. The research on score comparability across computerized devices is no exception. CCSSO and the Center for Assessment have published an update to a previous report on score comparability across computerized devices. The updated report supplements the prior report with research that has since been published or otherwise made available. This new research adds nuance to the findings of the previous report but does not change the main takeaway: although differences in performance across devices are small on average and generally do not follow strong systematic trends, certain features of assessments and devices have been linked to differential performance across devices and may therefore present barriers to comparability. Virtually all studies on this topic compare computers—both laptops and desktops—to tablets.
Collectively, the current body of research suggests that students generally perform similarly across computerized devices. However, the research identified some exceptions to this rule—exceptions that can often be traced back to student familiarity with a device or to specific types of items. As the body of research has begun to focus more closely on the specific aspects of devices or assessments that cause differences in performance, the methods have followed suit, becoming more sensitive and focused on detecting differences beyond the average overall scale score. Both the original report and the update provide recommendations for states on addressing potential barriers to score comparability across computerized devices.