Farazouli, Alexandra (ORCID: orcid.org/0000-0001-7601-3850)
Publications (9 of 9)
Premat, C. & Farazouli, A. (2025). Academic Integrity vs. Artificial Intelligence: a tale of two AIs. Práxis Educativa, 20(2025), 1-12
Academic Integrity vs. Artificial Intelligence: a tale of two AIs
2025 (English). In: Práxis Educativa, ISSN 1809-4031, E-ISSN 1809-4309, Vol. 20, no 2025, p. 1-12. Article in journal (Refereed). Published.
Abstract [en]

This paper examines how academic integrity is conceptualized and practiced in Swedish higher education in the context of generative Artificial Intelligence (AI) tools such as ChatGPT. It offers both an institutional and a student-centered perspective, drawing on university guidelines, pedagogical resources, and disciplinary cases, alongside a phenomenographic analysis of student reflections on AI use in academic work. The empirical data, gathered from 42 students in a course on academic writing, reveal a spectrum of attitudes—ranging from full transparency to pragmatic distinctions between substantial and auxiliary uses of AI. These reflections are situated within a broader cultural framework where academic integrity is treated not as a rigid code but as a relational and adaptive practice. The text argues that Sweden’s approach to academic integrity emphasizes trust, pedagogical support, and context-sensitive reasoning, rather than surveillance or prohibition. By analyzing institutional responses, cultural values, and student reasoning together, the article offers insights into how academic ethics are evolving in an era of AI-driven transformation.

Abstract [pt]

Este artigo examina como a integridade acadêmica é conceituada e praticada no Ensino Superior sueco no contexto de ferramentas generativas de Inteligência Artificial (IA), como o ChatGPT. Oferece-se uma perspectiva institucional e centrada no aluno, com base em diretrizes universitárias, recursos pedagógicos e casos disciplinares, juntamente com uma análise fenomenográfica das reflexões dos alunos sobre o uso da IA no trabalho acadêmico. Os dados empíricos, coletados de 42 estudantes em um curso de escrita acadêmica, revelam um espectro de atitudes, que vai da transparência total a distinções pragmáticas entre usos substanciais e auxiliares da IA. Essas reflexões situam-se dentro de uma estrutura cultural mais ampla, na qual a integridade acadêmica é tratada não como um código rígido, mas como uma prática relacional e adaptativa. Argumenta-se que a abordagem sueca à integridade acadêmica enfatiza a confiança, o apoio pedagógico e o raciocínio sensível ao contexto, em vez de vigilância ou proibição. Ao analisar em conjunto as respostas institucionais, os valores culturais e o raciocínio dos alunos, o texto oferece reflexões sobre como a ética acadêmica está evoluindo em uma era de transformações impulsionadas pela IA.

Keywords
Academic integrity, Generative AI, ChatGPT, Swedish universities, Institutional policy, Phenomenography, Ethics in Higher Education, Integridad académica, IA generativa, ChatGPT, Universidades suecas, Política institucional, Fenomenografía, Ética en la Educación Superior
National Category
Pedagogy
Research subject
Education
Identifiers
urn:nbn:se:su:diva-245229 (URN), 10.5212/praxeduc.v.20.24871.016 (DOI), 2-s2.0-105007876285 (Scopus ID)
Available from: 2025-08-01. Created: 2025-08-01. Last updated: 2025-08-19. Bibliographically approved.
McGrath, C., Farazouli, A. & Cerratto-Pargman, T. (2025). Generative AI chatbots in higher education: a review of an emerging research area. Higher Education, 89(6), 1533-1549
Generative AI chatbots in higher education: a review of an emerging research area
2025 (English). In: Higher Education, ISSN 0018-1560, E-ISSN 1573-174X, Vol. 89, no 6, p. 1533-1549. Article in journal (Refereed). Published.
Abstract [en]

Artificial intelligence (AI) chatbots trained on large language models are an example of generative AI which brings promises and threats to the higher education sector. In this study, we examine the emerging research area of AI chatbots in higher education (HE), focusing specifically on empirical studies conducted since the release of ChatGPT. Our review includes 23 research articles published between December 2022 and December 2023 exploring the use of AI chatbots in HE settings. We take a three-pronged approach to the empirical data. We first examine the state of the emerging field of AI chatbots in HE. Second, we identify the theories of learning used in the empirical studies on AI chatbots in HE. Third, we scrutinise the discourses of AI in HE framing the latest empirical work on AI chatbots. Our findings contribute to a better understanding of the eclectic state of the nascent research area of AI chatbots in HE, the lack of common conceptual groundings about human learning, and the presence of both dystopian and utopian discourses about the future role of AI chatbots in HE.

Keywords
AI chatbots, Generative AI, Large language models, Discourses, Theories of learning
National Category
Information Systems, Social aspects; Artificial Intelligence
Research subject
Information Society
Identifiers
urn:nbn:se:su:diva-238205 (URN), 10.1007/s10734-024-01288-w (DOI), 001297006600001 (ISI), 2-s2.0-85201948858 (Scopus ID)
Available from: 2025-01-17. Created: 2025-01-17. Last updated: 2025-09-09. Bibliographically approved.
Farazouli, A., Cerratto Pargman, T., Bolander Laksov, K. & McGrath, C. (2025). Navigating uncertainty: university teachers’ experiences and perceptions of generative artificial intelligence in teaching and learning. Studies in Higher Education
Navigating uncertainty: university teachers’ experiences and perceptions of generative artificial intelligence in teaching and learning
2025 (English). In: Studies in Higher Education, ISSN 0307-5079, E-ISSN 1470-174X. Article in journal (Refereed). Epub ahead of print.
Abstract [en]

The emergence of generative artificial intelligence (GAI) has given rise to diverse narratives about its transformative potential in higher education. Despite widespread speculation about how GAI might change teaching and learning, there is a significant gap in understanding how GAI artefacts are perceived in educational practices, particularly from the perspective of university teachers. This study investigates how GAI mediates teachers’ practices and reconfigures professional roles. Drawing on post-phenomenology and technological mediation theory, we focus on university teachers’ experiences and perceptions of GAI in higher education. Twenty-four university teachers participated in workshops involving assessment exercises with GAI-generated outputs, followed by focus group interviews discussing the challenges and opportunities posed by GAI. Findings reveal that GAI prompts teachers to reassess established practices, particularly in relation to assessment, while confronting ethical concerns regarding fairness, trust, and quality. Teachers described their initial engagement with GAI as transformative yet challenging, as they navigated uncertainties about their roles while prioritising students’ learning and development. By capturing teachers’ voices during this pivotal period, the study contributes to the growing body of research on AI's role in higher education and provides a nuanced understanding of its impact on teaching and learning.

Keywords
Generative artificial intelligence, higher education, postphenomenology, technology mediation, uncertainty, university teachers
National Category
Educational Work; Artificial Intelligence
Identifiers
urn:nbn:se:su:diva-247437 (URN), 10.1080/03075079.2025.2550766 (DOI), 001561705000001 (ISI), 2-s2.0-105014934632 (Scopus ID)
Available from: 2025-09-29. Created: 2025-09-29. Last updated: 2025-09-29.
Farazouli, A. (2024). Automation and Assessment: Exploring Ethical Issues of Automated Grading Systems from a Relational Ethics Approach. In: Anders Buch; Ylva Lindberg; Teresa Cerratto Pargman (Eds.), Framing Futures in Postdigital Education: Critical Concepts for Data-driven Practices (pp. 209-226). Cham: Springer
Automation and Assessment: Exploring Ethical Issues of Automated Grading Systems from a Relational Ethics Approach
2024 (English). In: Framing Futures in Postdigital Education: Critical Concepts for Data-driven Practices / [ed] Anders Buch; Ylva Lindberg; Teresa Cerratto Pargman, Cham: Springer, 2024, p. 209-226. Chapter in book (Refereed).
Abstract [en]

Automation in assessment is a fast-emerging AI research field that raises ethical issues for education. So far, dominant approaches to ethics have led to the development of numerous ethical guidelines to fix issues that the deployment of AI systems might introduce. This chapter critically examines the ethical considerations of AI automation in education by focusing on assessment and Automated Grading Systems (AGS). To this end, a relational approach to ethics is discussed that focuses on AGS’ specificities regarding data, algorithms, and assessment and the context where these systems are used, including situations and purposes, actors and relations, and time and place.

Place, publisher, year, edition, pages
Cham: Springer, 2024
Series
Postdigital Science and Education, ISSN 2662-5326, E-ISSN 2662-5334
National Category
Educational Work; Ethics
Identifiers
urn:nbn:se:su:diva-241590 (URN), 10.1007/978-3-031-58622-4_12 (DOI), 2-s2.0-85210907680 (Scopus ID), 978-3-031-58621-7 (ISBN), 978-3-031-58622-4 (ISBN)
Available from: 2025-04-01. Created: 2025-04-01. Last updated: 2025-04-01. Bibliographically approved.
Cerratto-Pargman, T., Sporrong, E., Farazouli, A. & McGrath, C. (2024). Beyond the Hype: Towards a Critical Debate About AI Chatbots in Swedish Higher Education. Högre Utbildning, 14(1), 74-81
Beyond the Hype: Towards a Critical Debate About AI Chatbots in Swedish Higher Education
2024 (English). In: Högre Utbildning, E-ISSN 2000-7558, Vol. 14, no 1, p. 74-81. Article in journal (Refereed). Published.
Abstract [en]

Interested in emerging technologies in higher education, we look at AI chatbots through the lens of human–technology mediations. We argue for shifting the focus from what higher education can do with AI chatbots to why AI chatbots are compelling for higher education’s raison-d’être. We call for a critical debate examining the power of AI chatbots in configuring students as civic actors in an increasingly complex and digitalized society. We welcome a continuous and rigorous examination of generative AI chatbots and their impact on teaching practices and student learning in higher education.

Keywords
ChatGPT, criticality, higher education practices, student writing, technological mediations
National Category
Didactics
Identifiers
urn:nbn:se:su:diva-236054 (URN), 10.23865/hu.v14.6243 (DOI), 2-s2.0-85196730145 (Scopus ID)
Available from: 2025-01-14. Created: 2025-01-14. Last updated: 2025-05-13. Bibliographically approved.
Farazouli, A., Cerratto-Pargman, T., Bolander Laksov, K. & McGrath, C. (2024). Hello GPT! Goodbye home examination? An exploratory study of AI chatbots impact on university teachers' assessment practices. Assessment & Evaluation in Higher Education, 49(3), 363-375
Hello GPT! Goodbye home examination? An exploratory study of AI chatbots impact on university teachers' assessment practices
2024 (English). In: Assessment & Evaluation in Higher Education, ISSN 0260-2938, E-ISSN 1469-297X, Vol. 49, no 3, p. 363-375. Article in journal (Refereed). Published.
Abstract [en]

AI chatbots have recently fuelled debate regarding education practices in higher education institutions worldwide. Focusing on Generative AI and ChatGPT in particular, our study examines how AI chatbots impact university teachers' assessment practices, exploring teachers' perceptions about how ChatGPT performs in response to home examination prompts in undergraduate contexts. University teachers (n = 24) from four different departments in humanities and social sciences participated in Turing Test-inspired experiments, where they blindly assessed student and ChatGPT-written responses to home examination questions. Additionally, we conducted semi-structured interviews in focus groups with the same teachers examining their reflections about the quality of the texts they assessed. Regarding chatbot-generated texts, we found a passing rate range across the cohort (37.5 - 85.7%) and a chatbot-written suspicion range (14-23%). Regarding the student-written texts, we identified patterns of downgrading, suggesting that teachers were more critical when grading student-written texts. Drawing on post-phenomenology and mediation theory, we discuss AI chatbots as a potentially disruptive technology in higher education practices.

Keywords
AI-chatbots, assessment, higher education, home examination, Turing test
National Category
Pedagogy
Identifiers
urn:nbn:se:su:diva-220860 (URN), 10.1080/02602938.2023.2241676 (DOI), 001040685700001 (ISI), 2-s2.0-85166200345 (Scopus ID)
Available from: 2023-09-12. Created: 2023-09-12. Last updated: 2024-09-16. Bibliographically approved.
Bendixen, C., Gunnerstad, A., Premat, C. & Farazouli, A. (2024). Plagiat — hur kan det undvikas? Handbok för medarbetare vid Stockholms universitet. Stockholm: Centrum för universitetslärarutbildning
Plagiat — hur kan det undvikas? Handbok för medarbetare vid Stockholms universitet
2024 (Swedish). Report (Other academic).
Abstract [sv]

This handbook is intended for teachers at Stockholm University. It is written to support you in actively counteracting plagiarism and cheating. It also contains information about generative AI.

Place, publisher, year, edition, pages
Stockholm: Centrum för universitetslärarutbildning, 2024. p. 70
Series
Rapporter om undervisning och lärande i högre utbildning, ISSN 2003-1688
Keywords
Plagiarism, academic integrity, handbook, recommendations, group work, examination formats
National Category
Pedagogy
Research subject
Education
Identifiers
urn:nbn:se:su:diva-235381 (URN), 10.17045/sthlmuni.27612915.v1 (DOI)
Available from: 2024-11-08. Created: 2024-11-08. Last updated: 2024-11-13. Bibliographically approved.
Bendixen, C., Premat, C., Gunnerstad, A. & Farazouli, A. (2024). Preventing plagiarism: Handbook for Stockholm University Staff (Second Edition). Stockholm University
Preventing plagiarism: Handbook for Stockholm University Staff (Second Edition)
2024 (English). Report (Other academic).
Abstract [en]

This handbook, Preventing Plagiarism: A Guide for Stockholm University Staff (2024, 2nd Edition), provides a comprehensive framework to combat plagiarism and academic dishonesty in higher education. It addresses challenges arising from the increasing use of digital tools, including generative AI, and explores how these technologies impact student behavior and academic integrity. The guide emphasizes preventive strategies through effective teaching practices, transparent communication of expectations, and the design of assessments. It also includes legal insights, detection methods, and procedures for handling suspected plagiarism cases. By integrating practical exercises, tips for academic writing, and insights into ethical AI usage, this resource equips educators with tools to foster integrity and enhance learning outcomes.

Place, publisher, year, edition, pages
Stockholm University, 2024. p. 68
Keywords
plagiarism, academic integrity, generative AI, academic dishonesty, higher education
National Category
Pedagogy
Identifiers
urn:nbn:se:su:diva-235826 (URN), 10.17045/sthlmuni.27826344.v1 (DOI)
Available from: 2024-11-25. Created: 2024-11-25. Last updated: 2024-11-28. Bibliographically approved.
Törnqvist, M., Mahamud, M., Guzman, E. M. & Farazouli, A. (2023). ExASAG: Explainable Framework for Automatic Short Answer Grading. In: Ekaterina Kochmar; Jill Burstein; Andrea Horbach; Ronja Laarmann-Quante; Nitin Madnani; Anaïs Tack; Victoria Yaneva; Zheng Yuan; Torsten Zesch (Eds.), Proceedings of the 18th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2023). Paper presented at the 18th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2023), Toronto, Canada, July 13, 2023 (pp. 361-371). Stroudsburg: Association for Computational Linguistics
ExASAG: Explainable Framework for Automatic Short Answer Grading
2023 (English). In: Proceedings of the 18th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2023) / [ed] Ekaterina Kochmar; Jill Burstein; Andrea Horbach; Ronja Laarmann-Quante; Nitin Madnani; Anaïs Tack; Victoria Yaneva; Zheng Yuan; Torsten Zesch, Stroudsburg: Association for Computational Linguistics, 2023, p. 361-371. Conference paper, Published paper (Refereed).
Abstract [en]

As in other NLP tasks, Automatic Short Answer Grading (ASAG) systems have evolved from using rule-based and interpretable machine learning models to utilizing deep learning architectures to boost accuracy. Since proper feedback is critical to student assessment, explainability will be crucial for deploying ASAG in real-world applications. This paper proposes a framework to generate explainable outcomes for assessing question-answer pairs of a Data Mining course in a binary manner. Our framework utilizes a fine-tuned Transformer-based classifier and an explainability module using SHAP or Integrated Gradients to generate language explanations for each prediction. We assess the outcome of our framework by calculating accuracy-based metrics for classification performance. Furthermore, we evaluate the quality of the explanations by measuring their agreement with human-annotated justifications using Intersection-Over-Union at a token level to derive a plausibility score. Despite the relatively limited sample, results show that our framework derives explanations that are, to some degree, aligned with domain-expert judgment. Furthermore, both explainability methods perform similarly in their agreement with human-annotated explanations. A natural progression of our work is to analyze the use of our explainable ASAG framework on a larger sample to determine the feasibility of implementing a pilot study in a real-world setting.

Place, publisher, year, edition, pages
Stroudsburg: Association for Computational Linguistics, 2023
National Category
Computer Sciences
Identifiers
urn:nbn:se:su:diva-235287 (URN), 10.18653/v1/2023.bea-1.29 (DOI), 2-s2.0-85174493627 (Scopus ID), 978-1-959429-80-7 (ISBN)
Conference
18th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2023), Toronto, Canada, July 13, 2023
Available from: 2024-11-08. Created: 2024-11-08. Last updated: 2024-11-08. Bibliographically approved.