Sigfrid, Karl
Publications (4 of 4)
Sigfrid, K. (2024). IRT for voting advice applications: a multi-dimensional test that is adaptive and interpretable. Quality and Quantity, 58(5), 4137-4156.
IRT for voting advice applications: a multi-dimensional test that is adaptive and interpretable
2024 (English). In: Quality and Quantity, ISSN 0033-5177, E-ISSN 1573-7845, Vol. 58, no. 5, p. 4137-4156. Article in journal (Refereed). Published
Abstract [en]

Voting advice applications rely on user input to match user preferences to political parties or candidates. Providing the input can be time-consuming, which may have a negative effect on participation. For individuals who are under time constraints or affected by survey fatigue, the participation threshold may be lowered if there is an option to conclude the test without answering all question items. The test result should ideally be close to the result that the participant would have obtained after answering the full battery of questions. We propose a method that allows respondents to conclude a VAA early and still get results with sufficient accuracy.

The method proposed here extends the Graded Response Model and the Maximum Information Criterion, used in Item Response Theory. The aim of the method is to let the user control the length of the test. Furthermore, we want a simpler interpretation of multidimensional parameter estimates than traditional MIRT provides. To achieve this, we propose an algorithm for adaptive IRT capable of selecting from a pool of items that belong to separate unidimensional scales.

Using both simulated data and response data from a voting advice application project, we evaluate the accuracy of shorter tests implemented with our adaptive method. When only a few test items are answered, the proposed method outperforms a static-order IRT test of equal length in identifying the best match. We expect that implementing the method can increase participation and engagement in voting advice applications.
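The Maximum Information Criterion that the abstract builds on can be illustrated with a minimal sketch. This uses a unidimensional 2PL model rather than the Graded Response Model of the paper, and the item pool, parameter values, and helper names (`fisher_info_2pl`, `next_item`) are all hypothetical:

```python
import math

def fisher_info_2pl(theta, a, b):
    """Fisher information of a 2PL item at ability theta:
    I(theta) = a^2 * p * (1 - p), where p is the probability
    of a positive response."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def next_item(theta_hat, items, answered):
    """Maximum Information Criterion: among unanswered items, pick
    the one with the largest Fisher information at the current
    ability estimate."""
    candidates = [i for i in range(len(items)) if i not in answered]
    return max(candidates, key=lambda i: fisher_info_2pl(theta_hat, *items[i]))

# Items as (discrimination a, difficulty b) pairs.
items = [(1.0, -1.0), (2.0, 0.0), (0.5, 1.0), (1.5, 0.5)]
print(next_item(0.0, items, answered={1}))  # → 3
```

Each answered item updates the ability estimate, which in turn changes which remaining item is most informative, so the test can stop early with the best accuracy available for its length.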

Keywords
Computerized adaptive tests, Content-based recommender system, Item response theory, Voting advice application
National Category
Probability Theory and Statistics
Identifiers
urn:nbn:se:su:diva-235549 (URN); 10.1007/s11135-024-01845-6 (DOI); 2-s2.0-85186581145 (Scopus ID)
Available from: 2024-11-25 Created: 2024-11-25 Last updated: 2024-11-25. Bibliographically approved
Sigfrid, K., Fackle-Fornius, E. & Miller, F. Elo estimated abilities with unknown difficulties.
Elo estimated abilities with unknown difficulties
(English). Manuscript (preprint) (Other academic)
Abstract [en]

Accurate item difficulty estimates are essential to track student ability growth in intelligent tutoring systems. These estimates can be obtained through separate pre-calibration processes, which may be expensive and time-consuming. The Elo rating method offers an efficient alternative by simultaneously updating both ability estimates and difficulty estimates during actual use. However, the Elo system has a critical limitation. When the mean true ability increases over time – as can be expected for a group of students – the standard Elo algorithm increasingly underestimates item difficulties, causing difficulty deflation.

We address this limitation by anchoring the item difficulties with assumptions about the distribution of initial student abilities. Through simulations, we demonstrate how difficulty deflation can be tracked and corrected. Our approach results in good approximations of the true constant item difficulties, allowing accurate tracking of student progress without the need for a separate pre-calibration step. This reduces the implementation barriers for effective adaptive learning systems.
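The difficulty-deflation mechanism described above comes from the symmetric Elo update, sketched here under a simple logistic (Rasch-like) response model. The step size `k` and the example values are illustrative, not taken from the manuscript:

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def elo_update(theta, b, correct, k=0.3):
    """One standard Elo step: the ability estimate and the item
    difficulty estimate both move by K times the prediction error."""
    p = logistic(theta - b)        # predicted probability of a correct answer
    theta_new = theta + k * (correct - p)
    b_new = b - k * (correct - p)  # symmetric move: when true abilities
                                   # grow over time, difficulties drift down
    return theta_new, b_new

theta, b = 0.0, 0.0
theta, b = elo_update(theta, b, correct=1)
print(round(theta, 3), round(b, 3))  # → 0.15 -0.15
```

Because every correct answer pushes the difficulty estimate down by the same amount it pushes the ability estimate up, a population whose true abilities rise over time drags the difficulty scale with it; the manuscript's anchoring idea counters this by tying the scale to an assumed distribution of initial student abilities.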

Keywords
Ability tracking, Dynamic ability growth, Elo algorithm, Growth model, Longitudinal Item response theory, Adaptive learning
National Category
Probability Theory and Statistics
Research subject
Statistics
Identifiers
urn:nbn:se:su:diva-249768 (URN)
Available from: 2025-11-19 Created: 2025-11-19 Last updated: 2025-11-19
Sigfrid, K., Fackle-Fornius, E. & Miller, F. Estimating Abilities with an Elo-Informed Growth Model.
Estimating Abilities with an Elo-Informed Growth Model
(English). Manuscript (preprint) (Other academic)
Abstract [en]

An intelligent tutoring system aims to provide instruction and exercises tailored to the student's current level, and therefore needs to track the student's ability. Ability estimates can be updated based on the outcomes of practice exercises that are part of the learning process. We propose a new method for tracking abilities, based on the assumption that the abilities of a group of respondents who are all at the same stage of the learning process follow a distribution that can be estimated. Comparisons using both simulated data and real learning data show that the proposed method outperforms the standard Elo algorithm in a scenario with rapid ability growth. The method can lower the threshold for implementing robust intelligent tutoring systems.
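One plausible way to let a stage-level ability distribution inform an individual estimate, as the abstract describes, is precision weighting of the individual estimate against the group distribution (the normal-normal posterior mean). This is only a sketch of the general idea, not the manuscript's algorithm; all names and values are hypothetical:

```python
def prior_informed_estimate(theta_hat, theta_var, stage_mean, stage_var):
    """Blend an individual ability estimate with the estimated ability
    distribution of students at the same learning stage, weighting each
    by its precision (1 / variance)."""
    w = (1.0 / theta_var) / (1.0 / theta_var + 1.0 / stage_var)
    return w * theta_hat + (1.0 - w) * stage_mean

# Equal precision: the blended estimate sits halfway between the
# individual estimate (1.0) and the stage mean (0.0).
print(prior_informed_estimate(1.0, 0.5, 0.0, 0.5))  # → 0.5
```

As the stage mean rises over the course of instruction, the blended estimate rises with it even for students with few recent responses, which is what a pure Elo update struggles with under rapid ability growth.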

Keywords
Ability tracking, Dynamic ability growth, Elo algorithm, Growth model, Longitudinal Item response theory
National Category
Statistics in Social Sciences
Research subject
Statistics
Identifiers
urn:nbn:se:su:diva-249767 (URN)
Available from: 2025-11-19 Created: 2025-11-19 Last updated: 2025-11-19
Sigfrid, K., Fackle-Fornius, E. & Miller, F. Optimized questionnaire item selection for tracking the progression of motor symptoms in Parkinson's disease.
Optimized questionnaire item selection for tracking the progression of motor symptoms in Parkinson's disease
(English). Manuscript (preprint) (Other academic)
Abstract [en]

Long questionnaires increase the response burden for patients and healthcare workers. In the treatment of Parkinson's disease, the MDS-UPDRS questionnaire to track disease progression may be underutilized due to time requirements. While reduced item sets have been studied using Fisher information from Item Response Theory (IRT) models, optimal selection methods remain unclear.

We compared three methods for selecting an optimal subset of items, with the aim of minimizing the uncertainty in the estimates of disease severity: ranking by Fisher information; coordinate descent local search to directly minimize estimate uncertainty; and adaptive selection based on prior estimates.

Whereas item ranking based on the expected Fisher information outperformed a random choice of items, we saw further gains with the coordinate descent algorithm, which directly minimizes the uncertainty of the disease severity estimate. An adaptive algorithm that selects items based on a previous estimate gave a slight additional gain over the coordinate descent method. For a 5-item subset, the ranked Fisher information method reduced the expected standard deviation by 14 percent compared to random item selection. The corresponding reductions for coordinate descent and adaptive selection were 26 percent and 34 percent, respectively.

More sophisticated selection methods substantially improved estimate accuracy for small item sets, with diminishing returns for larger subsets. The choice of method entails a trade-off between methodological complexity and precision, where coordinate descent optimization offers a practical balance between simplicity and accuracy for real-world implementation.
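The coordinate descent local search named above can be sketched as repeated single-item swaps that improve the objective. This toy version maximizes total Fisher information of a 2PL subset at a single ability point, which is a simplification of the manuscript's objective (minimizing severity-estimate uncertainty); items, parameters, and function names are all hypothetical:

```python
import math

def info_2pl(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def subset_info(subset, items, theta=0.0):
    return sum(info_2pl(theta, *items[i]) for i in subset)

def coordinate_descent(items, k, theta=0.0):
    """Coordinate descent over a k-item subset: for each selected
    position, try swapping in every unselected item and keep any swap
    that increases total information; repeat until no swap helps."""
    subset = list(range(k))  # arbitrary starting subset
    improved = True
    while improved:
        improved = False
        for pos in range(k):
            for j in range(len(items)):
                if j in subset:
                    continue
                trial = subset[:pos] + [j] + subset[pos + 1:]
                if subset_info(trial, items, theta) > subset_info(subset, items, theta):
                    subset = trial
                    improved = True
        subset.sort()
    return subset

# Items as (discrimination a, difficulty b); the two most
# discriminating items carry the most information at theta = 0.
items = [(0.5, 0.0), (1.0, 0.0), (2.0, 0.0), (1.5, 0.0)]
print(coordinate_descent(items, 2))  # → [2, 3]
```

Unlike simple ranking, the swap search evaluates items jointly as a set, which is why it can beat Fisher-information ranking while remaining much simpler than a fully adaptive scheme.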

Keywords
MDS-UPDRS, Parkinson's disease, Longitudinal Item Response Theory, Item selection, Test efficiency, Adaptive testing
National Category
Medical Biostatistics
Research subject
Statistics
Identifiers
urn:nbn:se:su:diva-249765 (URN)
Available from: 2025-11-19 Created: 2025-11-19 Last updated: 2025-11-19