Discrimination with Unidimensional and Multidimensional Item Response Theory Models for Educational Data
Ul Hassan, Mahmood; Miller, Frank (Stockholm University, Faculty of Social Sciences, Department of Statistics)
(English) Manuscript (preprint) (Other academic)
Abstract [en]

Achievement tests are used to characterize the proficiency of higher-education students. Item response theory (IRT) models are applied to these tests to estimate student ability, which enters the model as a latent variable. For the IRT parameters, and especially the ability parameters, to be estimated well, it is important that the appropriate number of dimensions is identified. Through a case study, based on a statistics exam for students in higher education, we show how the number of dimensions and other model parameters can be chosen in a real situation. Our model choice rests both on empirical evidence and on background knowledge of the test. We investigate whether dimensionality influences the estimates of the item parameters, especially the discrimination parameter, which provides information about the quality of an item. We perform a simulation study to generalize our conclusions. Both the simulation study and the case study show that multidimensional models discriminate better between examinees.
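To make the models under comparison concrete, here is a minimal sketch (not the authors' code) of the two item response functions the abstract contrasts: the unidimensional 2PL model and its compensatory multidimensional generalization. Parameter names (a, b, d, theta) follow standard IRT notation; the example values are illustrative.

```python
import numpy as np

def p_2pl(theta, a, b):
    """Unidimensional 2PL: probability of a correct response given
    ability theta, discrimination a, and difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def p_m2pl(theta, a, d):
    """Compensatory multidimensional 2PL: theta and a are vectors over
    the ability dimensions; d is the scalar intercept."""
    theta = np.asarray(theta, dtype=float)
    a = np.asarray(a, dtype=float)
    return 1.0 / (1.0 + np.exp(-(a @ theta + d)))

# Example: the same item viewed as one-dimensional, then as loading
# on two ability dimensions with discriminations 1.2 and 0.8.
print(p_2pl(theta=0.5, a=1.2, b=0.0))
print(p_m2pl(theta=[0.5, -0.3], a=[1.2, 0.8], d=0.1))
```

In the multidimensional form, each component of the vector a acts as a per-dimension discrimination, which is why dimensionality can change what the estimated discrimination parameters say about item quality.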

National Category
Probability Theory and Statistics
Research subject
Statistics
Identifiers
URN: urn:nbn:se:su:diva-174074
OAI: oai:DiVA.org:su-174074
DiVA, id: diva2:1357019
Available from: 2019-10-02. Created: 2019-10-02. Last updated: 2019-10-04. Bibliographically approved.
In thesis
1. Achievement tests and optimal design for pretesting of questions
2019 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Achievement tests are used to measure students' proficiency in a particular subject. Computerized achievement tests (e.g. the GRE and SAT) usually draw on questions available in an item bank to measure the proficiency of students. An item bank is a large collection of items with known characteristics (e.g. difficulty). Item banks are continuously updated and revised, with new items replacing obsolete, overexposed, or flawed items over time. This thesis is devoted to updating and maintaining the item bank with high-quality questions and to better estimation of item parameters (item calibration).

The thesis contains four manuscripts. One paper investigates the impact of the dimensionality of student ability on the estimated parameters; the other three deal with item calibration.

In the first paper, we investigate how the dimensionality of ability influences the estimates of the item parameters. Through a case study and a simulation study, we find that a multidimensional model discriminates better among the students.

The second paper describes a method for optimal item calibration that efficiently selects examinees based on their ability levels. We develop an algorithm which selects intervals of student ability levels for optimal calibration of the items, and we derive an equivalence theorem for item calibration to verify the optimality of a design.
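The following hedged sketch illustrates why examinee selection matters for calibration: it computes the Fisher information that one examinee with ability theta contributes to the parameters (a, b) of a 2PL item, and a D-optimality criterion built from such matrices. This is only the standard 2PL information matrix, not the paper's design algorithm or its equivalence theorem.

```python
import numpy as np

def fisher_info_2pl(theta, a, b):
    """Per-response Fisher information matrix for (a, b) at ability theta
    under the 2PL model with logit a * (theta - b)."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    g = np.array([theta - b, -a])        # gradient of the logit wrt (a, b)
    return p * (1.0 - p) * np.outer(g, g)

def d_criterion(thetas, a, b):
    """log-determinant of the total information over a set of examinees
    (the D-optimality criterion: larger is better)."""
    total = sum(fisher_info_2pl(t, a, b) for t in thetas)
    return np.linalg.slogdet(total)[1]

# Compare the criterion for two illustrative pools of 200 examinees.
rng = np.random.default_rng(0)
print(d_criterion(rng.normal(0.0, 1.0, 200), a=1.5, b=0.5))
print(d_criterion(rng.uniform(-0.5, 1.5, 200), a=1.5, b=0.5))
```

Selecting which examinees see which item then amounts to choosing ability values (or intervals of them) that maximize a criterion of this kind.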

The algorithm developed in Paper II becomes computationally cumbersome as the number of items to calibrate grows. In Paper III, we therefore develop a new exchange algorithm based on the equivalence theorem from Paper II.
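As a generic illustration of the exchange idea, in the spirit of (but not identical to) the algorithm in Paper III, the sketch below repeatedly swaps a design point for the candidate ability value that most improves the D-criterion. It reuses fisher_info_2pl from the previous sketch; the grid and design size are illustrative.

```python
import numpy as np

def exchange_design(candidates, n_points, a, b, n_iter=50):
    """Greedy exchange: start from a random design of ability points and
    swap points while the log-det of the total information improves."""
    rng = np.random.default_rng(1)
    design = list(rng.choice(candidates, n_points, replace=True))
    infos = lambda pts: sum(fisher_info_2pl(t, a, b) for t in pts)
    best = np.linalg.slogdet(infos(design))[1]
    for _ in range(n_iter):
        improved = False
        for i in range(n_points):
            for c in candidates:
                trial = design[:i] + [c] + design[i + 1:]
                val = np.linalg.slogdet(infos(trial))[1]
                if val > best + 1e-10:
                    design, best = trial, val
                    improved = True
        if not improved:          # no swap helps: a local optimum
            break
    return design, best

grid = np.linspace(-3, 3, 25)
design, crit = exchange_design(grid, n_points=8, a=1.5, b=0.5)
print(sorted(np.round(design, 2)), round(crit, 3))
```

Each iteration costs one criterion evaluation per (design point, candidate) pair, which is what keeps exchange-type methods tractable when the number of items grows.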

Finally, the fourth paper generalizes the exchange algorithm of Paper III by assuming that students need multidimensional abilities to answer the questions.
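With the same caveats as above, the information matrix extends directly to the multidimensional 2PL: with logit a'theta + d, the gradient with respect to (a_1, ..., a_k, d) is (theta_1, ..., theta_k, 1), so the criterion-based machinery carries over with larger matrices.

```python
import numpy as np

def fisher_info_m2pl(theta, a, d):
    """Per-response Fisher information for (a, d) at ability vector theta
    under the compensatory multidimensional 2PL."""
    theta = np.asarray(theta, dtype=float)
    p = 1.0 / (1.0 + np.exp(-(np.asarray(a, dtype=float) @ theta + d)))
    g = np.append(theta, 1.0)            # gradient of the logit wrt (a, d)
    return p * (1.0 - p) * np.outer(g, g)

# A (k+1) x (k+1) matrix for a two-dimensional item:
print(fisher_info_m2pl(theta=[0.5, -0.3], a=[1.2, 0.8], d=0.1))
```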

Place, publisher, year, edition, pages
Department of Statistics, Stockholm University, 2019. p. 26
Keywords
Achievement test, Equivalence theorem, Exchange algorithm, Item calibration, Item response theory model, Optimal experimental design
National Category
Probability Theory and Statistics
Research subject
Statistics
Identifiers
URN: urn:nbn:se:su:diva-174079
ISBN: 978-91-7797-879-4
ISBN: 978-91-7797-880-0
Public defence
2019-11-15, William-Olssonsalen, Geovetenskapens hus, Svante Arrhenius väg 14, floor 1, Stockholm, 10:00 (English)
Note

At the time of the doctoral defense, the following papers were unpublished and had a status as follows: Paper 1: Manuscript. Paper 3: Manuscript. Paper 4: Manuscript.

Available from: 2019-10-23. Created: 2019-10-02. Last updated: 2019-10-16. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Search in DiVA

By author/editor
Ul Hassan, Mahmood; Miller, Frank
By organisation
Department of Statistics
Probability Theory and Statistics
