Implementing computerized adaptive testing

Dario Cecilio-Fernandes


Traditionally, the assessment of knowledge consists of a fixed set of items that all students answer at the same time, such as a test on a specific subject. Such a test may be perceived as too easy or too difficult by an individual student. In either case, the test is likely to bore the student and may provide little information about the student's knowledge level. One way of solving this problem is to create a tailored test for each student, in which the next item is selected based on the student's performance on previous items. This type of test is known as a computerized adaptive test. Computerized adaptive testing offers both educational and psychometric advantages over traditional paper-and-pencil testing. It requires fewer items than a traditional test, which in turn decreases student fatigue and optimizes learning. Furthermore, a computerized adaptive test is tailored to each student, taking the difficulty of each item into account. This makes the test more engaging and authentic, since the items are always aligned with the student's level of knowledge. Because computerized adaptive testing depends on both item difficulty and student ability, it requires Item Response Theory, which establishes a relation between item difficulty, student ability, and the probability of answering an item correctly. Although implementing a computerized adaptive test is complex, it meets a higher standard both psychometrically and in its alignment with modern theories of learning. Because of this complexity, computerized adaptive testing has mostly been implemented in high-stakes, large-scale assessment. However, given the new educational paradigm that calls for tailor-made education respecting the pace of each student, computerized adaptive testing is likely to become more widely used over time.
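The adaptive loop described above can be sketched in a few lines of Python. This is a minimal illustration, not an operational implementation: it assumes a Rasch (one-parameter logistic) model, a hypothetical `item_bank` given as a plain list of item difficulties, and a deliberately crude fixed-step ability update in place of the maximum-likelihood or Bayesian estimation used in real CAT systems.

```python
import math

def rasch_probability(theta, b):
    """Rasch (1PL) model: probability that a student with ability theta
    answers an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def next_item(theta, item_bank, administered):
    """Select the not-yet-administered item whose difficulty is closest
    to the current ability estimate (where the Rasch model gives the
    most information about the student)."""
    remaining = [b for b in item_bank if b not in administered]
    return min(remaining, key=lambda b: abs(b - theta))

def update_ability(theta, correct, step=0.5):
    """Crude illustrative update: move the ability estimate up after a
    correct answer and down after an incorrect one."""
    return theta + step if correct else theta - step

# Simulate a short adaptive test (hypothetical bank of difficulties).
theta_hat, administered = 0.0, []
bank = [-2.0, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0]
for _ in range(4):
    b = next_item(theta_hat, bank, administered)
    administered.append(b)
    correct = rasch_probability(1.0, b) > 0.5  # student's true ability: 1.0
    theta_hat = update_ability(theta_hat, correct)
```

Each iteration picks the item best matched to the current estimate, so the difficulty of the administered items converges toward the student's ability, which is why a CAT needs fewer items than a fixed test.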


Keywords: medical education; educational assessment; computerized adaptive testing.





Copyright: © 2006-2019 EDIPUCRS