Test Design for Mean Ability Growth and Optimal Item Calibration for Achievement Tests
Stockholm University, Faculty of Social Sciences, Department of Statistics. ORCID iD: 0000-0001-7552-8983
2021 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

In this thesis, we examine two topics in the area of educational measurement. The first topic concerns how best to design two achievement tests with common items so that a population's mean-ability growth is measured as precisely as possible. The second concerns how to calibrate newly developed test items optimally. Both are optimal design problems in achievement testing. Paper I consists of a simulation study in which different item-difficulty allocations are compared with respect to the precision of the estimated mean-ability growth, controlling for estimation method and item-difficulty span. In Paper II, we take a more theoretical approach to allocating the item difficulties: we use particle swarm optimization on a multi-objective weighted sum to determine an exact design of the two tests with common items. The outcome relies on asymptotic results for the test information function. The general conclusion of both papers is that the common items should be allocated in the middle of the difficulty span, with the items unique to each test on either side. As the difference in mean ability between the groups decreases, the ranges of the common and unique items coincide more.

In the second part, we examine how to apply an existing optimal calibration method and algorithm using data from the Swedish Scholastic Aptitude Test (SweSAT), and we further develop it to account for uncertainty in the examinees' ability estimates. Paper III compares the optimal calibration method with random allocation of items to examinees in a simulation study using several measures. In most cases, the optimal design method estimates the calibration items more efficiently. We can also identify the kinds of items for which the method works less well.

The method applied in Paper III assumes that the estimated abilities are the true ones. In Paper IV, we further develop the method to handle uncertainty in the ability estimates, which are based on an operational test. We examine the asymptotic result and compare it to the case of known abilities. As the information from the operational test increases, the optimal design based on estimates approaches the optimal design that assumes true abilities.

Place, publisher, year, edition, pages
Stockholm: Department of Statistics, Stockholm University, 2021. 42 p.
Keywords [en]
test design, item response theory, optimal experimental design, SweSAT, item calibration, vertical scaling, ability growth, computerized adaptive tests
National Category
Probability Theory and Statistics; Educational Sciences
Research subject
Statistics
Identifiers
URN: urn:nbn:se:su:diva-197928
ISBN: 978-91-7911-674-3 (print)
ISBN: 978-91-7911-675-0 (electronic)
OAI: oai:DiVA.org:su-197928
DiVA, id: diva2:1606225
Public defence
2021-12-10, hörsal 4, hus 2, Albanovägen 12, Stockholm, 10:00 (English)
Opponent
Supervisors
Available from: 2021-11-17 Created: 2021-10-26 Last updated: 2022-02-25. Bibliographically approved.
List of papers
1. Efficient Estimation of Mean Ability Growth Using Vertical Scaling
2021 (English). In: Applied Measurement in Education, ISSN 0895-7347, E-ISSN 1532-4818, vol. 34, no. 3, p. 163-178. Article in journal (Refereed). Published.
Abstract [en]

In recent years, interest in measuring growth in student ability in various subjects between school grades has increased, so good precision in the estimated growth is important. This paper compares estimation methods and test designs with respect to the precision and bias of the estimated growth in mean ability between two groups of students that differ substantially, using a simulation study. One- and two-parameter item response models are assumed, and the estimated abilities are vertically scaled using the non-equivalent anchor test design, with the abilities estimated in a single run (so-called concurrent calibration). The connection between the test design and the Fisher information is also discussed. The results indicate that the expected a posteriori estimation method is preferred when estimating differences in mean ability between groups. They also indicate that a test design with common items of medium difficulty gives better precision, which coincides with previous results from horizontal equating.
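The expected a posteriori (EAP) estimator favored in the paper is the posterior mean of ability given the item responses. A simplified one-dimensional quadrature sketch under a 2PL model with a standard-normal prior (all function names are illustrative, not the paper's implementation):

```python
import math

def eap_ability(responses, a, b, grid=None):
    """EAP ability estimate: the mean of the posterior over theta,
    computed on a quadrature grid with a standard-normal prior.
    responses, a, b are parallel lists (0/1 answers, discriminations,
    difficulties of the administered items)."""
    if grid is None:
        grid = [i / 10.0 for i in range(-40, 41)]  # theta from -4 to 4
    posts = []
    for theta in grid:
        prior = math.exp(-0.5 * theta * theta)
        like = 1.0
        for u, ai, bi in zip(responses, a, b):
            p = 1.0 / (1.0 + math.exp(-ai * (theta - bi)))
            like *= p if u == 1 else (1.0 - p)
        posts.append(prior * like)
    total = sum(posts)
    return sum(t * w for t, w in zip(grid, posts)) / total
```

For example, one correct and one incorrect answer on two identical medium-difficulty items yields an estimate at the prior mean, by symmetry.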

National Category
Educational Sciences; Mathematics
Identifiers
URN: urn:nbn:se:su:diva-195839, DOI: 10.1080/08957347.2021.1933981, ISI: 000661773400001
Available from: 2021-08-26 Created: 2021-08-26 Last updated: 2022-02-25. Bibliographically approved.
2. Optimal Test Design for Estimation of Mean Ability Growth
(English). Manuscript (preprint) (Other academic).
Abstract [en]

The design of an achievement test is important for many reasons. This paper focuses on the mean-ability growth of a population from one school grade to another. By test design, we mean how the test items are allocated with respect to their difficulties; the objective is to estimate the mean-ability growth as efficiently as possible. We use the asymptotic expression for the mean-ability growth in terms of the test information. With that expression as the optimization criterion, we use particle swarm optimization to find the optimal design. The criterion depends on the examinees' abilities, and therefore on the value of the unknown mean-ability growth; hence, we also use an optimum-in-average design. The conclusion is that the common items should be allocated in the middle of the difficulty span, with the items unique to each test on either side. As the difference in mean ability between the groups decreases, the ranges of the common and unique items coincide more.
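The core of particle swarm optimization is a swarm of candidate designs that move through the search space, pulled toward each particle's own best point and the swarm's best point. A minimal one-dimensional sketch (not the multi-objective weighted-sum version used in the paper; parameter values are conventional defaults):

```python
import random

def pso_minimize(f, lo, hi, n_particles=20, n_iter=200, seed=1):
    """Minimal 1-D particle swarm optimizer: inertia 0.7, cognitive
    and social weights 1.5, positions clipped to [lo, hi]."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]                      # each particle's best position
    pval = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g], pval[g]    # swarm's best position
    for _ in range(n_iter):
        for i in range(n_particles):
            vs[i] = (0.7 * vs[i]
                     + 1.5 * rng.random() * (pbest[i] - xs[i])
                     + 1.5 * rng.random() * (gbest - xs[i]))
            xs[i] = min(max(xs[i] + vs[i], lo), hi)
            v = f(xs[i])
            if v < pval[i]:
                pbest[i], pval[i] = xs[i], v
                if v < gval:
                    gbest, gval = xs[i], v
    return gbest

# Toy criterion with its minimum at difficulty 0.5, standing in for
# a negative test-information criterion at mean ability 0.5:
best_b = pso_minimize(lambda b: -1.0 / (2.0 + 2.0 * abs(b - 0.5)), -3.0, 3.0)
```

In the paper's setting the decision variables are the difficulties of all items in the two tests, and the criterion is a weighted sum built from the asymptotic variance expression.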

Keywords
test design, item response theory, ability growth, particle swarm optimization, optimal design, optimal in average
National Category
Probability Theory and Statistics; Educational Sciences
Research subject
Statistics
Identifiers
urn:nbn:se:su:diva-198066 (URN)
Available from: 2021-10-26 Created: 2021-10-26 Last updated: 2022-02-25
3. Optimal Item Calibration in the Context of the Swedish Scholastic Aptitude Test
(English). Article in journal (Other academic). Submitted.
Abstract [en]

Large-scale achievement tests require item banks with items for use in future tests. Before an item is included in the bank, its characteristics need to be estimated; this process is called item calibration. For the quality of future achievement tests, it is important to perform this calibration well, and it is desirable to estimate the item characteristics as efficiently as possible. Methods of optimal design have been developed to allocate calibration items to the examinees whose abilities suit them best. Theoretical evidence shows advantages of ability-dependent allocation of calibration items. However, it is not clear whether these theoretical results also hold in a real testing situation. In this paper, we investigate the performance of an optimal ability-dependent allocation in the context of the Swedish Scholastic Aptitude Test (SweSAT) and quantify the gain from using the optimal allocation. On average over all items, we see improved calibration precision. While this average improvement is moderate, we are able to identify for which kinds of items the method works well, which enables targeting specific item types for optimal calibration. We also discuss possible improvements of the method.
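The idea behind ability-dependent allocation can be conveyed with a deliberately simplified greedy rule: give each examinee the calibration item whose provisional difficulty lies closest to the examinee's ability estimate, since that is where the item's information peaks. The paper's method optimizes a proper design criterion; this sketch only illustrates the matching idea:

```python
def allocate_items(abilities, difficulties):
    """Illustrative ability-matched allocation: for each ability
    estimate, return the index of the calibration item whose
    provisional difficulty is closest to it."""
    return [min(range(len(difficulties)),
                key=lambda j: abs(theta - difficulties[j]))
            for theta in abilities]

# Three examinees, three candidate calibration items:
assignment = allocate_items([-1.2, 0.1, 1.5], [-1.0, 0.0, 1.0])  # [0, 1, 2]
```

A real design also balances exposure, so each calibration item receives enough responses; the greedy rule above ignores that constraint.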

Keywords
Item Response Theory, Optimal Design, 3PL model, Simulation Study, SweSAT
National Category
Probability Theory and Statistics; Educational Sciences
Research subject
Statistics
Identifiers
urn:nbn:se:su:diva-198063 (URN)
Funder
Swedish Research Council, 2019-02706
Available from: 2021-10-26 Created: 2021-10-26 Last updated: 2022-02-25
4. Optimizing Calibration Designs with Uncertainty in Abilities
(English). Manuscript (preprint) (Other academic).
Abstract [en]

In computerized adaptive tests, newly developed items are often added for pretesting purposes. In this pretesting, the item characteristics are estimated, a process called calibration. It is promising to allocate calibration items to examinees based on their abilities, and methods from optimal experimental design have been used for this purpose. However, the examinees' abilities have usually been assumed to be known for this allocation. In practice, the abilities are estimates based on a limited number of operational items. We develop the theory for properly handling the uncertainty in the abilities and show how an optimal calibration design can be derived in this situation. The method has been implemented in an R package. The derived optimal calibration designs are more robust when this uncertainty in the abilities is acknowledged.
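One way to picture "handling the uncertainty in abilities" is to average the item information over an approximate ability posterior instead of plugging in the point estimate. The authors' implementation is an R package; this Python sketch, using a normal approximation with mean equal to the ability estimate and standard deviation equal to its standard error, only illustrates the idea:

```python
import math

def info_2pl(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def expected_info(theta_hat, se, a, b, grid_half=4.0, n=81):
    """Item information averaged over a normal approximation to the
    ability posterior (mean theta_hat, sd se), via a quadrature grid."""
    step = 2.0 * grid_half / (n - 1)
    num = den = 0.0
    for k in range(n):
        t = theta_hat - grid_half + k * step
        w = math.exp(-0.5 * ((t - theta_hat) / se) ** 2)
        num += w * info_2pl(t, a, b)
        den += w
    return num / den
```

As the operational test grows more informative, the standard error shrinks and the averaged criterion approaches the plug-in criterion, mirroring the asymptotic comparison in the paper.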

Keywords
Ability, Computerized Adaptive Tests, Item Calibration, Optimal Experimental Design
National Category
Probability Theory and Statistics
Research subject
Statistics
Identifiers
urn:nbn:se:su:diva-198065 (URN)
Funder
Swedish Research Council, 2019-02706
Available from: 2021-10-26 Created: 2021-10-26 Last updated: 2022-02-25. Bibliographically approved.

Open Access in DiVA

Test Design for Mean Ability Growth and Optimal Item Calibration for Achievement Tests
File: FULLTEXT01.pdf (832 kB, application/pdf)

Authority records

Bjermo, Jonas

