Optimizing Calibration Designs with Uncertainty in Abilities
Bjermo, Jonas; Fackle-Fornius, Ellinor; Miller, Frank
Stockholm University, Faculty of Social Sciences, Department of Statistics. ORCID iD: 0000-0001-7552-8983
Stockholm University, Faculty of Social Sciences, Department of Statistics. ORCID iD: 0000-0003-0528-0083
Stockholm University, Faculty of Social Sciences, Department of Statistics. ORCID iD: 0000-0003-4161-7851
(English) Manuscript (preprint) (Other academic)
Abstract [en]

In computerized adaptive tests, newly developed items are often added for pretesting. During this pretesting, the item characteristics are estimated, a process called calibration. Allocating calibration items to examinees according to their abilities is promising, and methods from optimal experimental design have been used for this purpose. However, the examinees' abilities have usually been assumed to be known for this allocation; in practice, they are estimates based on a limited number of operational items. We develop the theory for properly handling the uncertainty in abilities and show how an optimal calibration design can be derived in this situation. The method has been implemented in an R package. The derived optimal calibration designs are more robust when this uncertainty in abilities is acknowledged.
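As a rough, hedged illustration of the core idea (a sketch only, not the authors' R package; the 2PL item model, the item parameter values, and the normal approximation to the ability estimate's sampling distribution are all assumptions made here), the Fisher information that a calibration item gains from a group of examinees can be computed either by plugging in their point ability estimates or by averaging over the distribution of each estimate before applying a design criterion such as D-optimality:

import numpy as np

def item_info_2pl(theta, a, b):
    # Fisher information matrix for the parameters (a, b) of a 2PL item,
    # contributed by one examinee with ability theta.
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    g = np.array([theta - b, -a])                  # gradient of the logit w.r.t. (a, b)
    return p * (1.0 - p) * np.outer(g, g)

def averaged_info(theta_hat, se, a, b, n_nodes=41):
    # Average the information over a normal sampling distribution of the ability
    # estimate instead of plugging in the point estimate.
    nodes = np.linspace(theta_hat - 4 * se, theta_hat + 4 * se, n_nodes)
    w = np.exp(-0.5 * ((nodes - theta_hat) / se) ** 2)
    w /= w.sum()
    return sum(wi * item_info_2pl(t, a, b) for t, wi in zip(nodes, w))

a, b = 1.2, 0.5                                    # assumed calibration-item parameters
theta_hats = np.array([-1.0, 0.0, 0.6, 1.5])       # ability estimates of assigned examinees
ses = np.array([0.45, 0.35, 0.35, 0.50])           # standard errors from the operational items

info_plugin = sum(item_info_2pl(t, a, b) for t in theta_hats)
info_averaged = sum(averaged_info(t, s, a, b) for t, s in zip(theta_hats, ses))

# D-optimality compares the log-determinants of the two information matrices.
print(np.linalg.slogdet(info_plugin)[1], np.linalg.slogdet(info_averaged)[1])

Averaging over each estimate's distribution in this way is one simple route to calibration designs that are less sensitive to errors in the estimated abilities.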

Keywords [en]
Ability, Computerized Adaptive Tests, Item Calibration, Optimal Experimental Design
National Category
Probability Theory and Statistics
Research subject
Statistics
Identifiers
URN: urn:nbn:se:su:diva-198065
OAI: oai:DiVA.org:su-198065
DiVA, id: diva2:1605908
Funder
Swedish Research Council, 2019-02706
Available from: 2021-10-26 Created: 2021-10-26 Last updated: 2022-02-25 Bibliographically approved
In thesis
1. Test Design for Mean Ability Growth and Optimal Item Calibration for Achievement Tests
2021 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

In this thesis, we examine two topics in the area of educational measurement. The first topic studies how to best design two achievement tests with common items so that a population's mean ability growth is measured as precisely as possible. The second examines how to calibrate newly developed test items optimally. These are two optimal design problems in achievement testing. Paper I consists of a simulation study in which different item difficulty allocations are compared with respect to the precision of mean ability growth, controlling for estimation method and item difficulty span. In Paper II, we take a more theoretical approach to allocating the item difficulties: we use particle swarm optimization on a multi-objective weighted sum to determine an exact design of the two tests with common items. The approach relies on asymptotic results for the test information function. The general conclusion of both papers is that the common items should be allocated in the middle of the difficulty span, with the items unique to each test on either side. When the difference in mean ability between the groups decreases, the ranges of the common and test-specific items overlap more.
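Since the criteria of Paper II are only described at a high level here, the following toy sketch illustrates just the mechanics of scalarizing two design criteria into a weighted sum and minimizing it with a bare-bones particle swarm; the two criterion functions, the problem dimension, and the swarm constants are placeholders, not the test-information-based objectives used in the paper:

import numpy as np

rng = np.random.default_rng(1)

def criterion_test1(d):                    # placeholder criterion for the first test
    return np.sum((d - 0.5) ** 2)

def criterion_test2(d):                    # placeholder criterion for the second test
    return np.sum((d + 0.5) ** 2)

def weighted_sum(d, w=0.5):                # scalarize the two objectives
    return w * criterion_test1(d) + (1 - w) * criterion_test2(d)

dim, n_particles, n_iter = 6, 30, 200      # six item difficulties to place (illustrative)
x = rng.uniform(-2, 2, (n_particles, dim)) # particle positions
v = np.zeros_like(x)                       # particle velocities
pbest = x.copy()
pbest_val = np.array([weighted_sum(p) for p in x])
gbest = pbest[np.argmin(pbest_val)]

for _ in range(n_iter):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = x + v
    vals = np.array([weighted_sum(p) for p in x])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)]

print(gbest, weighted_sum(gbest))          # best difficulty allocation found

Varying the weight w traces out different compromises between the two criteria, which is how a weighted-sum scalarization typically handles multiple objectives.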

In the second part, we examine how to apply an existing optimal calibration method and algorithm using data from the Swedish Scholastic Aptitude Test (SweSAT), and we develop it further to account for uncertainty in the examinees' ability estimates. Paper III compares the optimal calibration method with random allocation of items to examinees in a simulation study using different measures. In most cases, the optimal design method estimates the calibration items more efficiently. We can also identify the kinds of items for which the method works less well.

The method applied in Paper III assumes that the estimated abilities are the true ones. In Paper IV, we further develop the method to handle uncertainty in the ability estimates, which are based on an operational test. We examine the asymptotic result and compare it to the case of known abilities. As the information from the operational test increases, the optimal design based on estimates approaches the optimal design assuming true abilities.
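In generic optimal-design notation (a hedged sketch, not necessarily the exact formulation of Paper IV): let M(\xi, \theta) denote the information matrix of a calibration design \xi when the abilities \theta are known, and let f(t \mid \theta) be the density of the ability estimate obtained from the operational test. A criterion based on estimated abilities can then be built on the averaged matrix

\bar{M}(\xi, \theta) = \int M(\xi, t)\, f(t \mid \theta)\, dt .

As the information from the operational test increases, f(\cdot \mid \theta) concentrates around \theta, so \bar{M}(\xi, \theta) \to M(\xi, \theta); this is the sense in which the optimal design using estimates approaches the optimal design assuming true abilities.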

Place, publisher, year, edition, pages
Stockholm: Department of Statistics, Stockholm University, 2021. p. 42
Keywords
test design, item response theory, optimal experimental design, SweSAT, item calibration, vertical scaling, ability growth, computerized adaptive tests
National Category
Probability Theory and Statistics; Educational Sciences
Research subject
Statistics
Identifiers
urn:nbn:se:su:diva-197928 (URN)
978-91-7911-674-3 (ISBN)
978-91-7911-675-0 (ISBN)
Public defence
2021-12-10, lecture hall 4, building 2, Albanovägen 12, Stockholm, 10:00 (English)
Available from: 2021-11-17 Created: 2021-10-26 Last updated: 2022-02-25 Bibliographically approved

Open Access in DiVA

No full text in DiVA

Authority records

Bjermo, Jonas; Fackle-Fornius, Ellinor; Miller, Frank
