Publications (10 of 44)
Cerratto-Pargman, T., McGrath, C. & Milrad, M. (2025). Towards responsible AI in education: Challenges and implications for research and practice. Computers and Education: Artificial Intelligence, Article ID 100345.
2025 (English). In: Computers and Education: Artificial Intelligence, E-ISSN 2666-920X, article id 100345. Article in journal, Editorial material (Refereed). Epub ahead of print.
Abstract [en]

The latest technical innovations in computing technologies and big data analytics have led the way for integrating Artificial Intelligence in Education (AIED), enabling the development and deployment of unprecedented tools and applications. Today, AI has firmly entered the public discourse on education, positioning itself as a transformative force poised to play an increasingly pivotal role in shaping the use of educational services across both K-12 and higher education contexts. AIED technologies in the form of chatbots, intelligent tutoring systems, automated grading systems, and other algorithmically facilitated decision support systems are expected to provide personalized guidance, support, and feedback to students and assist teachers and policymakers in decision-making in a wide range of formal educational contexts (Hwang et al., 2020). However, emerging research suggests that while the use of AI in educational contexts has the potential to support teaching and learning as well as improve human performance, to date, there seems to be little empirical work to support these claims (McGrath et al., 2024). At the same time, the misuse of AI due to algorithmic bias and a lack of governance constitutes a risk, potentially inhibiting human rights and solidifying existing inequalities (Prinsloo, 2020; Yang et al., 2021). In this context, research on responsible AI in education underscores the need to integrate ethical considerations into AI literacy programs, equipping educators to evaluate tools based on fairness, accountability, and transparency principles, as well as frameworks that advocate for explainable and socially responsible AI (Floridi & Cowls, 2019), enabling educators to serve as ethical guardians in their adoption and use. However, responsibility regarding the use of AI in education also extends to the development of AIED systems, which may or may not align with fundamental human principles and values to safeguard human flourishing and well-being (Dignum, 2019). In this vein, reflecting on the meaning of responsible AI in education and its challenges and implications for education research and practice prompts critical questions about autonomy, agency, and academic freedom in the age of AI and its role in contributing to equity in society (Cerratto-Pargman & McGrath, 2021; Macgilchrist, 2019; Nguyen et al., 2023; Prinsloo, 2020; Slade & Prinsloo, 2013; Velander et al., 2021; Williamson & Eynon, 2020).

National Category
Other Computer and Information Science
Identifiers
urn:nbn:se:su:diva-241423 (URN), 10.1016/j.caeai.2024.100345 (DOI), 2-s2.0-85212935640 (Scopus ID)
Available from: 2025-03-31 Created: 2025-03-31 Last updated: 2025-04-23
Liljedahl, M., Palmgren, P. J. & McGrath, C. (2025). Twelve tips on finding a research orientation: A practical guide for the novice researcher. Medical Teacher, 1-5
2025 (English). In: Medical Teacher, ISSN 0142-159X, E-ISSN 1466-187X, p. 1-5. Article in journal (Refereed). Published.
Abstract [en]

Health professions education (HPE) research has developed into a robust scientific field, embracing a variety of methodological approaches. Although not always explicitly stated, research inquiries are underpinned by a research orientation; philosophical assumptions that guide the way in which research is designed, conducted, reported and evaluated. In a field as diverse as ours, novice researchers may find it challenging to orient themselves in epistemological and ontological considerations, that is, knowing which questions to ask. In the HPE research field, it is also not unusual for researchers to move between research traditions, that is, being a paradigmatic crossover. This article therefore provides twelve practical tips for novice researchers on finding a research orientation for their study. In summary, we suggest that novice researchers engage in a reflective and exploratory journey around why and how they attempt to address a certain research problem. We further urge that every research inquiry be conducted intentionally and coherently, and suggest that paradigmatic conversations and reflections continue throughout the research career. In conclusion, we hope that the twelve tips provided herein will assist not only novice researchers but also more experienced researchers engaging in paradigmatic crossings.

National Category
Educational Sciences
Identifiers
urn:nbn:se:su:diva-241096 (URN), 10.1080/0142159x.2025.2473607 (DOI), 001435613200001 (), 2-s2.0-86000235050 (Scopus ID)
Available from: 2025-03-21 Created: 2025-03-21 Last updated: 2025-04-24. Bibliographically approved.
Cerratto-Pargman, T., Sporrong, E., Farazouli, A. & McGrath, C. (2024). Beyond the Hype: Towards a Critical Debate About AI Chatbots in Swedish Higher Education. Högre Utbildning, 14(1), 74-81
2024 (English). In: Högre Utbildning, E-ISSN 2000-7558, Vol. 14, no 1, p. 74-81. Article in journal (Refereed). Published.
Abstract [en]

Interested in emerging technologies in higher education, we look at AI chatbots through the lens of human–technology mediations. We argue for shifting the focus from what higher education can do with AI chatbots to why AI chatbots are compelling for higher education’s raison-d’être. We call for a critical debate examining the power of AI chatbots in configuring students as civic actors in an increasingly complex and digitalized society. We welcome a continuous and rigorous examination of generative AI chatbots and their impact on teaching practices and student learning in higher education.

Keywords
ChatGPT, criticality, higher education practices, student writing, technological mediations
National Category
Didactics
Identifiers
urn:nbn:se:su:diva-236054 (URN), 10.23865/hu.v14.6243 (DOI), 2-s2.0-85196730145 (Scopus ID)
Available from: 2025-01-14 Created: 2025-01-14 Last updated: 2025-05-13. Bibliographically approved.
Macassa, G. & McGrath, C. (2024). Common Problems! and Common Solutions? — Teaching at the Intersection Between Public Health and Criminology: A Public Health Perspective. Annals of Global Health, 90(1), Article ID 12.
2024 (English). In: Annals of Global Health, E-ISSN 2214-9996, Vol. 90, no 1, article id 12. Article in journal (Refereed). Published.
Abstract [en]

Public health and criminology share similar current and future challenges, mostly related to crime and health causation, prevention, and sustainable development. Interdisciplinary and transdisciplinary approaches to education at the intersection of public health and criminology can be an integral part of future training in areas of mutual interest. Based on reflections on teaching criminology students, this viewpoint discusses the main interconnections between public health and criminology teaching through the public health lens. The paper discusses potential challenges associated with interdisciplinarity and transdisciplinarity. Among these challenges is communicating across the different fields and their perspectives in order to achieve the desired complementarity at the intersection of the two disciplines.

Keywords
causation, intersection public health-criminology, prevention, social determinants, sustainable development
National Category
Public Health, Global Health and Social Medicine; Other Legal Research; Criminology
Identifiers
urn:nbn:se:su:diva-232509 (URN), 10.5334/aogh.4375 (DOI), 001226250700006 (), 38370862 (PubMedID), 2-s2.0-85185614775 (Scopus ID)
Available from: 2024-08-21 Created: 2024-08-21 Last updated: 2025-02-20. Bibliographically approved.
McGrath, C., Farazouli, A. & Cerratto-Pargman, T. (2024). Generative AI chatbots in higher education: a review of an emerging research area. Higher Education
2024 (English). In: Higher Education, ISSN 0018-1560, E-ISSN 1573-174X. Article in journal (Refereed). Epub ahead of print.
Abstract [en]

Artificial intelligence (AI) chatbots trained on large language models are an example of generative AI which brings promises and threats to the higher education sector. In this study, we examine the emerging research area of AI chatbots in higher education (HE), focusing specifically on empirical studies conducted since the release of ChatGPT. Our review includes 23 research articles published between December 2022 and December 2023 exploring the use of AI chatbots in HE settings. We take a three-pronged approach to the empirical data. We first examine the state of the emerging field of AI chatbots in HE. Second, we identify the theories of learning used in the empirical studies on AI chatbots in HE. Third, we scrutinise the discourses of AI in HE framing the latest empirical work on AI chatbots. Our findings contribute to a better understanding of the eclectic state of the nascent research area of AI chatbots in HE, the lack of common conceptual groundings about human learning, and the presence of both dystopian and utopian discourses about the future role of AI chatbots in HE.

Keywords
AI chatbots, Generative AI, Large language models, Discourses, Theories of learning
National Category
Information Systems, Social aspects
Research subject
Information Society
Identifiers
urn:nbn:se:su:diva-238205 (URN), 10.1007/s10734-024-01288-w (DOI)
Available from: 2025-01-17 Created: 2025-01-17 Last updated: 2025-04-01
Farazouli, A., Cerratto-Pargman, T., Bolander Laksov, K. & McGrath, C. (2024). Hello GPT! Goodbye home examination? An exploratory study of AI chatbots' impact on university teachers' assessment practices. Assessment & Evaluation in Higher Education, 49(3), 363-375
2024 (English). In: Assessment & Evaluation in Higher Education, ISSN 0260-2938, E-ISSN 1469-297X, Vol. 49, no 3, p. 363-375. Article in journal (Refereed). Published.
Abstract [en]

AI chatbots have recently fuelled debate regarding education practices in higher education institutions worldwide. Focusing on Generative AI and ChatGPT in particular, our study examines how AI chatbots impact university teachers' assessment practices, exploring teachers' perceptions about how ChatGPT performs in response to home examination prompts in undergraduate contexts. University teachers (n = 24) from four different departments in humanities and social sciences participated in Turing Test-inspired experiments, where they blindly assessed student and ChatGPT-written responses to home examination questions. Additionally, we conducted semi-structured interviews in focus groups with the same teachers examining their reflections about the quality of the texts they assessed. Regarding chatbot-generated texts, we found a passing rate range across the cohort (37.5 - 85.7%) and a chatbot-written suspicion range (14-23%). Regarding the student-written texts, we identified patterns of downgrading, suggesting that teachers were more critical when grading student-written texts. Drawing on post-phenomenology and mediation theory, we discuss AI chatbots as a potentially disruptive technology in higher education practices.

Keywords
AI-chatbots, assessment, higher education, home examination, Turing test
National Category
Pedagogy
Identifiers
urn:nbn:se:su:diva-220860 (URN), 10.1080/02602938.2023.2241676 (DOI), 001040685700001 (), 2-s2.0-85166200345 (Scopus ID)
Available from: 2023-09-12 Created: 2023-09-12 Last updated: 2024-09-16. Bibliographically approved.
Sperling, K., Stenberg, C.-J., McGrath, C., Åkerfeldt, A., Heintz, F. & Stenliden, L. (2024). In search of artificial intelligence (AI) literacy in teacher education: A scoping review. Computers and Education Open, 6, Article ID 100169.
2024 (English). In: Computers and Education Open, ISSN 2666-5573, Vol. 6, article id 100169. Article in journal (Refereed). Published.
Abstract [en]

Artificial intelligence (AI) literacy has recently emerged on the educational agenda raising expectations on teachers’ and teacher educators’ professional knowledge. This scoping review examines how the scientific literature conceptualises AI literacy in relation to teachers’ different forms of professional knowledge relevant for Teacher Education (TE). The search strategy included papers and proceedings from 2000 to 2023 related to AI literacy and TE as well as the intersection of AI and teaching. Thirty-four papers were included in the analysis. The Aristotelian concepts episteme (theoretical-scientific knowledge), techne (practical-productive knowledge), and phronesis (professional judgement) were used as a lens to capture implicit and explicit dimensions of teachers’ professional knowledge. Results indicate that AI literacy is a globally emerging research topic in education but almost absent in the context of TE. The literature covers many different topics and draws on different methodological approaches. Computer science and exploratory teaching approaches influence the type of epistemic, practical, and ethical knowledge. Currently, teachers’ professional knowledge is not broadly addressed or captured in the research. Questions of ethics are predominantly addressed as a matter of understanding technical configurations of data-driven AI technologies. Teachers’ practical knowledge tends to translate into the adoption of digital resources for teaching about AI or the integration of AI EdTech into teaching. By identifying several research gaps, particularly concerning teachers' practical and ethical knowledge, this paper adds to a more comprehensive understanding of AI literacy in teaching and can contribute to a more well-informed AI literacy education in TE as well as laying the ground for future research related to teachers’ professional knowledge.

Keywords
AI education, Professional development, Teacher training, Aristoteles, AI readiness, Pre-service teachers
National Category
Pedagogy
Research subject
Education
Identifiers
urn:nbn:se:su:diva-228350 (URN), 10.1016/j.caeo.2024.100169 (DOI), 001224342800001 (), 2-s2.0-85209462293 (Scopus ID)
Available from: 2024-04-12 Created: 2024-04-12 Last updated: 2025-02-25. Bibliographically approved.
Sporrong, E., Cerratto-Pargman, T. & McGrath, C. (2024). Situating AI in assessment – an exploration of university teachers’ valuing practices. AI and Ethics
2024 (English). In: AI and Ethics, ISSN 2730-5953, E-ISSN 2730-5961. Article in journal (Refereed). Epub ahead of print.
Abstract [en]

Emerging AI technologies are changing teachers’ assessment practices and confronting higher education institutions with novel ethical dilemmas. While frameworks and guidelines promise to align technology with moral and human values, the question of how AI may impact existing valuing practices is often overlooked. To examine this gap, we conducted an interview study with university teachers from different disciplines at a university in Sweden. Following a semi-structured study design, we explored university teachers’ anticipations of AI in assessment and examined how emerging AI technologies may reconfigure the fit between values, challenges, and activities situated in everyday assessment contexts. Our findings suggest that anticipated AI, including automation and AI-mediated communication and grading, may both amplify and reduce teachers’ possibilities to align activities with professional, pedagogical, and relational values and to solve current challenges. In light of the study’s findings, the paper discusses potential ethical issues in the anticipated shifts from human to automated assessment and possible new and reinforced challenges brought by AI for education.

National Category
Human Computer Interaction
Research subject
Computer and Systems Sciences
Identifiers
urn:nbn:se:su:diva-238213 (URN), 10.1007/s43681-024-00558-8 (DOI)
Available from: 2025-01-17 Created: 2025-01-17 Last updated: 2025-04-01
Storr, C. & McGrath, C. (2023). In search of the evidence: digital learning in legal education, a scoping review. The Law Teacher, 57(2), 119-134
2023 (English). In: The Law Teacher, ISSN 0306-9400, Vol. 57, no 2, p. 119-134. Article, review/survey (Refereed). Published.
Abstract [en]

There is a lack of consolidated knowledge that identifies best practices when using digital learning tools, technologies and interventions in legal education. This paper seeks to illustrate the scope and nature of the current evidence that supports digital learning in legal education. The paper provides a scoping review of 10 years of empirical research in digital learning in legal education. Moreover, the paper discusses different forms of evidence in an effort to understand the kind of evidence legal scholars invoke when presenting what works in digital learning in legal education and why. In the paper, we present a picture of the empirical field of digital learning in legal education, including where these studies are being done, and the types of studies conducted. Moreover, we thematise the main findings across the studies: improved student learning, student satisfaction with digital tools, and drivers of engagement. We conclude by identifying some potential knowledge gaps. 

Keywords
Legal education, digital interventions, educational evidence, scoping review
National Category
Pedagogy; Law
Identifiers
urn:nbn:se:su:diva-215149 (URN), 10.1080/03069400.2022.2133212 (DOI), 000922974600001 (), 2-s2.0-85147280147 (Scopus ID)
Available from: 2023-03-02 Created: 2023-03-02 Last updated: 2023-10-10. Bibliographically approved.
Cerratto-Pargman, T., McGrath, C., Viberg, O. & Knight, S. (2023). New Vistas on Responsible Learning Analytics. Journal of Learning Analytics, 10(1), 133-148
2023 (English). In: Journal of Learning Analytics, E-ISSN 1929-7750, Vol. 10, no 1, p. 133-148. Article in journal (Refereed). Published.
Abstract [en]

The focus of ethics in learning analytics (LA) frameworks and guidelines is predominantly on procedural elements of data management and accountability. Another, less represented focus is on the duty to act and LA as a moral practice. Data feminism as a critical theoretical approach to data science practices may offer LA research and practitioners a valuable lens through which to consider LA as a moral practice. This paper examines what data feminism can offer the LA community. It identifies critical questions for further developing and enabling a responsible stance in LA research and practice taking one particular case — algorithmic decision-making — as a point of departure.

Keywords
Data feminism, critical theory, ethical guidelines, learning analytics, responsibility, research paper
National Category
Pedagogy; Ethics
Identifiers
urn:nbn:se:su:diva-216180 (URN), 10.18608/jla.2023.7781 (DOI), 000952896000002 (), 2-s2.0-85150729861 (Scopus ID)
Available from: 2023-04-06 Created: 2023-04-06 Last updated: 2023-04-13. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0002-8215-3646
