Assessing Performance Competence in Training Games
2011 (English). In: Affective Computing and Intelligent Interaction: Proceedings, Part II / [ed] Sidney D’Mello, Arthur Graesser, Björn Schuller, Jean-Claude Martin, Springer Berlin/Heidelberg, 2011, pp. 518–527. Conference paper (Refereed)
In-process assessment of trainee learners in game-based simulators is a challenging activity. It typically demands human instructor time and cost, and does not scale to the one-tutor-per-learner vision of computer-based learning. Moreover, evaluation by a human instructor is often subjective, making comparisons between learners unreliable. Therefore, in this paper, we propose an automated, formula-driven quantitative evaluation method for assessing performance competence in serious training games. The proposed method has been empirically validated in a game-based driving simulator with 7 subjects and 13 sessions, achieving accuracy of up to 90.25% when compared to an existing qualitative method. We believe that by incorporating quantitative evaluation methods like these, future training games could be enriched with more meaningful feedback and adaptive game-play, so as to better monitor and support player motivation, engagement and learning performance.
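The abstract reports agreement between the automated, formula-driven scores and an existing qualitative method. A minimal sketch of how such an agreement figure can be computed, assuming (this is an illustration, not the paper's actual formula) that automated per-session scores are binned into the same pass/fail categories an instructor would assign:

```python
# Hypothetical illustration: all names, thresholds and data below are
# assumptions for demonstration, not taken from the paper.

def to_grade(score, threshold=70.0):
    """Bin a 0-100 automated score into an assumed pass/fail scale."""
    return "pass" if score >= threshold else "fail"

def agreement(auto_scores, instructor_grades, threshold=70.0):
    """Percentage of sessions where the automated grade matches the instructor's."""
    matches = sum(
        to_grade(s, threshold) == g
        for s, g in zip(auto_scores, instructor_grades)
    )
    return 100.0 * matches / len(auto_scores)

# Made-up data for 4 sessions:
scores = [82.5, 64.0, 91.0, 73.5]
grades = ["pass", "fail", "pass", "fail"]
print(agreement(scores, grades))  # 75.0
```

An accuracy figure such as the 90.25% cited above would be the analogous percentage computed over the study's 13 sessions against the existing qualitative ratings.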
Place, publisher, year, edition, pages
Springer Berlin/Heidelberg, 2011. pp. 518–527.
Series: Lecture Notes in Computer Science, ISSN 0302-9743 ; 6975
Keywords: Serious games, Performance, Evaluation, Motivation, Driver training
Research subject: Computer and Systems Sciences
Identifiers
URN: urn:nbn:se:su:diva-65043
DOI: 10.1007/978-3-642-24571-8_65
ISBN: 978-3-642-24570-1
ISBN: 978-3-642-24571-8
OAI: oai:DiVA.org:su-65043
DiVA: diva2:460719
Conference: Fourth International Conference, ACII 2011, Memphis, TN, USA, October 9–12, 2011