Academic Journals Database
Disseminating quality controlled scientific knowledge

Examining item-position effects in large-scale assessment using the Linear Logistic Test Model

Author(s): CHRISTINE HOHENSINN | KLAUS D. KUBINGER | MANUEL REIF | STEFANA HOLOCHER-ERTL | LALE KHORRAMDEL | MARTINA FREBORT

Journal: Psychology Science Quarterly
ISSN 1866-6140

Volume: 50;
Issue: 3;
Start page: 391;
Date: 2008;

Keywords: Rasch model | LLTM | large-scale assessment | item-position effects | item calibration

ABSTRACT
When administering large-scale assessments, item-position effects are of particular importance because the applied test designs very often comprise several test booklets that present the same items at different positions. Establishing such position effects would be most critical: it would mean that the estimated item parameters depend not only on the items' content-based difficulties but also on their presentation positions. As a consequence, item calibration would be biased. Item-position effects can be tested by means of the linear logistic test model (LLTM). This paper first presents the results of a simulation study demonstrating that the LLTM is indeed able to detect certain position effects in the framework of a large-scale assessment. Second, empirical item-position effects of a specific large-scale competence assessment in mathematics (4th-grade students) are analyzed using the LLTM. The results indicate that a small fatigue effect seems to take place. The most important consequence of this paper is that it is advisable to run pertinent simulation studies before analyzing empirical data: for the given example, the suggested likelihood-ratio test neither holds the nominal type-I risk nor qualifies as "robust", and furthermore occasionally shows very low power.
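The mechanism the abstract describes can be illustrated with a small simulation sketch. Under the Rasch model, the probability of a correct response is P(X=1 | θ, β) = 1 / (1 + exp(−(θ − β))); the LLTM decomposes each item difficulty β into weighted basic parameters, and a position effect corresponds to one such basic parameter acting linearly on presentation position. The following is a minimal, hypothetical sketch (the sample sizes, the fatigue coefficient `eta_pos`, and all parameter values are assumptions for illustration, not values from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 1000 examinees, 20 items.
n_persons, n_items = 1000, 20
theta = rng.normal(0.0, 1.0, n_persons)       # person ability parameters
beta_content = rng.normal(0.0, 1.0, n_items)  # content-based item difficulties

# LLTM-style decomposition: total difficulty = content difficulty
# plus a linear position (fatigue) effect, beta_i = beta_content_i + eta * pos_i.
eta_pos = 0.03                                # assumed fatigue: +0.03 logits per position
positions = np.arange(n_items)                # item i presented at position i
beta = beta_content + eta_pos * positions

# Rasch response probabilities and simulated dichotomous responses.
p = 1.0 / (1.0 + np.exp(-(theta[:, None] - beta[None, :])))
responses = (rng.random((n_persons, n_items)) < p).astype(int)

# With a fatigue effect present, items become systematically harder
# at later positions than their content alone would predict.
print(responses.mean(axis=0))
```

In a full analysis one would fit both the Rasch model and the LLTM (e.g. by conditional maximum likelihood) to such data and compare them with a likelihood-ratio test; the paper's point is precisely that the behavior of that test (type-I risk, power) should be checked by simulation before trusting it on empirical data.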