Exploring the Educational Utility of Pretrained Language Models
2024 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]
The emergence of pretrained language models has profoundly reshaped natural language processing, serving as foundation models for a wide range of tasks. Over the past decade, pretrained language models have evolved significantly, leading to the development of different types of models and approaches for utilising them. This progression spans from static to contextual models and from smaller models to more powerful, generative large language models. The increasing capabilities of these models have, in turn, led to growing interest in new use cases and applications across various domains, including education, where digitalisation and the abundance of text data have created opportunities for AI applications that leverage pretrained language models.
This thesis explores the educational utility of pretrained language models, specifically by investigating how different paradigms of these models can be applied to address tasks in education. These paradigms include various methodologies for leveraging the knowledge embedded in pretrained language models, such as embeddings, fine-tuning, prompt-based learning, and in-context learning. For collaborative learning group formation, a clustering approach based on pretrained embeddings is proposed, enabling the creation of either homogeneous or heterogeneous groups depending on the specific learning situation. For automated essay scoring, a pretrained language model is fine-tuned using both the essay instructions and the essay text as input; the proposed method also highlights key topical sentences that contribute to the predicted essay score. For educational question generation, a method based on prompt-based learning is introduced and shown to be more data-efficient than existing methods. Finally, for educational question answering, certain limitations of the in-context learning (or prompting) paradigm, such as a tendency of large language models to hallucinate or miscalculate, are addressed. Specifically, workflows and prompting strategies based on retrieval-augmented generation and tool-augmented generation are proposed, allowing large language models to ground answers in specific learning materials and to leverage external tools, such as calculators and knowledge bases, within chain-of-thought reasoning processes. These strategies are shown to produce more reliable and transparent answers to complex questions.
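The group-formation idea described above can be illustrated with a minimal, hypothetical sketch: learners are represented by pretrained embeddings and clustered, after which homogeneous groups are read off directly from the clusters while heterogeneous groups draw one member from each cluster. The random vectors, the choice of k-means (via scikit-learn), and the grouping procedure are illustrative assumptions, not the thesis's exact method; a real application would obtain the embeddings from a pretrained language model.

```python
# Hypothetical sketch: forming learner groups by clustering pretrained embeddings.
# Random vectors stand in for real model embeddings so the example is self-contained.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(12, 8))  # one embedding vector per learner

k = 3  # number of clusters (assumed grouping parameter)
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(embeddings)

# Homogeneous groups: learners in the same cluster form one group.
homogeneous = {c: [i for i, lab in enumerate(labels) if lab == c] for c in range(k)}

# Heterogeneous groups: each group draws at most one learner from every cluster.
max_size = max(len(members) for members in homogeneous.values())
heterogeneous = [
    [members[g] for members in homogeneous.values() if g < len(members)]
    for g in range(max_size)
]

print(homogeneous)
print(heterogeneous)
```

Whether homogeneous or heterogeneous groups are preferable would then depend on the learning situation, as the abstract notes; the sketch only shows that both can be derived from one clustering.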
Through five empirical studies, methodological innovations within each paradigm of pretrained language models are proposed and evaluated for specific educational use cases. In addition to contributing methodologically to natural language processing, the results demonstrate the potential utility of pretrained language models in educational AI applications, thereby advancing the field of technology enhanced learning. The proposed methods not only improve predictive performance on specific tasks but also aim to enhance the transparency of pretrained language models, which is essential for building reliable and trustworthy educational AI applications.
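The tool-augmented answering strategy mentioned above can be illustrated with a small, hypothetical sketch: a model's chain-of-thought emits tool-call markers that are resolved by an external calculator before the final answer is shown, so arithmetic is not left to the language model. The `CALC[...]` marker syntax, the mock reasoning trace, and the `ast`-based evaluator are all illustrative assumptions, not the thesis's actual workflow.

```python
# Hypothetical sketch of tool-augmented generation: tool-call markers in a
# reasoning trace (here CALC[...]) are resolved by an external calculator.
import ast
import operator
import re

# Supported binary operators for the toy calculator tool.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_eval(expr: str):
    """Evaluate a plain arithmetic expression without using eval()."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval"))

def resolve_tool_calls(trace: str) -> str:
    """Replace every CALC[...] marker in a reasoning trace with its computed result."""
    return re.sub(r"CALC\[([^\]]+)\]", lambda m: str(safe_eval(m.group(1))), trace)

# A mock chain-of-thought step, as a large language model might produce it:
trace = "The course has 3 groups of 14 students, so CALC[3 * 14] students in total."
print(resolve_tool_calls(trace))
# → The course has 3 groups of 14 students, so 42 students in total.
```

Delegating the calculation to a deterministic tool is what makes the final answer both more reliable (no miscalculation) and more transparent (the tool call is visible in the trace), in line with the strategies the abstract describes.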
Place, publisher, year, edition, pages
Stockholm: Department of Computer and Systems Sciences, Stockholm University, 2024.
Series
Report Series / Department of Computer & Systems Sciences, ISSN 1101-8526 ; 24-017
Keywords [en]
Natural Language Processing, Technology Enhanced Learning, Pretrained Language Models, Large Language Models, Generative AI, Collaborative Learning, Automated Essay Scoring, Educational Question Generation, Educational Question Answering
National Category
Computer Sciences; Language Technology (Computational Linguistics)
Research subject
Computer and Systems Sciences
Identifiers
URN: urn:nbn:se:su:diva-235084
ISBN: 978-91-8107-014-9 (print)
ISBN: 978-91-8107-015-6 (electronic)
OAI: oai:DiVA.org:su-235084
DiVA, id: diva2:1909361
Public defence
2024-12-16, 09:00, L30, Nodhuset, Borgarfjordsgatan 12, Kista (English)
Available from: 2024-11-21 Created: 2024-10-30 Last updated: 2024-11-11 Bibliographically approved
List of papers