Publications (10 of 41)
Colonna, L. (2025). Artificial Intelligence in Education (AIED): Towards More Effective Regulation. European Journal of Risk Regulation, 1-21
Artificial Intelligence in Education (AIED): Towards More Effective Regulation
2025 (English) In: European Journal of Risk Regulation, ISSN 1867-299X, E-ISSN 2190-8249, p. 1-21. Article in journal (Refereed). Epub ahead of print
Abstract [en]

This paper critically assesses the effectiveness of the EU AI Act in regulating artificial intelligence in higher education (AIED), with a focus on how it interacts with existing education regulation. It examines the growing use of high-risk AI systems – such as those used in admissions, assessment, academic progression, and exam proctoring – and identifies key regulatory frictions that arise when AI regulation and education regulation pursue overlapping but potentially conflicting aims. Central to this analysis is the concept of human oversight: while the AI Act frames oversight as a safeguard for accountability and fundamental rights, education regulation emphasises the professional autonomy of teachers and their role in maintaining pedagogical integrity. Yet, the regulatory role of teachers in AI-mediated environments remains unclear. Applying Mousmouti’s effectiveness test, the paper evaluates the AI Act along four dimensions – purpose, coherence, results, and structural integration with the broader legal framework – and argues that legal effectiveness in this context requires a more precise alignment between AI and education regulation.

Keywords
AI Act, AIED, education regulation, effectiveness, GDPR
National Category
Law
Identifiers
urn:nbn:se:su:diva-246349 (URN); 10.1017/err.2025.10039 (DOI); 001560564100001 (ISI); 2-s2.0-105014780936 (Scopus ID)
Available from: 2025-09-01 Created: 2025-09-01 Last updated: 2025-09-09
Colonna, L. (2025). Complex Normativity: Understanding the Relationship between Human Oversight by Design and Standardization in the Context of AI Development and Deployment. In: Eleni Kosta; Dara Hallinan; Paul De Hert; Suzanne Nusselder (Ed.), Data Protection, Privacy and Artificial Intelligence: To Govern or To Be Governed, That Is the Question. Paper presented at 17th International CPDP.ai Conference (CPDP.ai 2024), Brussels, Belgium, 22-24 May, 2024 (pp. 77-113). Oxford: Hart Publishing Ltd
Complex Normativity: Understanding the Relationship between Human Oversight by Design and Standardization in the Context of AI Development and Deployment
2025 (English) In: Data Protection, Privacy and Artificial Intelligence: To Govern or To Be Governed, That Is the Question / [ed] Eleni Kosta; Dara Hallinan; Paul De Hert; Suzanne Nusselder, Oxford: Hart Publishing Ltd, 2025, p. 77-113. Conference paper, Published paper (Refereed)
Abstract [en]

This chapter examines the relationship between Human Oversight by Design (HObD), as outlined in Article 14 of the AI Act, and socio-technical standardisation, aiming to understand their roles as regulatory techniques within AI development and deployment. It argues that the growing influence of socio-technical standards, which aim to protect health and safety as well as fundamental rights, intersects with ‘Legal Protection by Design’ norms like HObD in ways that require critical legal analysis. By exploring these concepts through the lenses of rule setting, monitoring, and enforcement, the chapter highlights how public and private actors, particularly standardisation bodies and technology providers, are increasingly intertwined in regulatory processes. This shift toward co-regulation raises critical challenges related to ensuring accountability and maintaining the legitimacy of regulations influenced by private actors. Ultimately, the chapter demonstrates that the intersection of these hard and soft law approaches to AI regulation creates complex normative issues.

Place, publisher, year, edition, pages
Oxford: Hart Publishing Ltd, 2025
Keywords
AI Act, Human oversight by design, Socio-technical standards, De-centered regulation
National Category
Law
Identifiers
urn:nbn:se:su:diva-242862 (URN); 2-s2.0-105014585319 (Scopus ID); 9781509984015 (ISBN); 9781509983995 (ISBN)
Conference
17th International CPDP.ai Conference (CPDP.ai 2024), Brussels, Belgium, 22-24 May, 2024
Funder
Wallenberg AI, Autonomous Systems and Software Program – Humanity and Society (WASP-HS)
Available from: 2025-05-03 Created: 2025-05-03 Last updated: 2025-11-01. Bibliographically approved
Jevremovic, A., Aleksic, S., Veinovic, M. & Colonna, L. (2025). Data Security in AAL. In: Albert Ali Salah; Liane Colonna; Francisco Florez-Revuelta (Ed.), Privacy-Aware Monitoring for Assisted Living: Ethical, Legal, and Technological Aspects of Audio- and Video-Based AAL Solutions (pp. 99-128). Springer Nature
Data Security in AAL
2025 (English) In: Privacy-Aware Monitoring for Assisted Living: Ethical, Legal, and Technological Aspects of Audio- and Video-Based AAL Solutions / [ed] Albert Ali Salah; Liane Colonna; Francisco Florez-Revuelta, Springer Nature, 2025, p. 99-128. Chapter in book (Refereed)
Abstract [en]

Active assisted living (AAL) environments inherently collect, generate, and use large amounts of data. These environments are also distributed and often rely on external services, so data can be in different states and locations. This data is often very sensitive, and its compromise can endanger the privacy and security of users. Additionally, data unavailability can jeopardize critical environment functions. This chapter presents the relevant aspects of data protection: general principles, risks, approaches, and solutions are analyzed, taking into account the specificities of AAL environments. The first part of the chapter provides an overview of security and data protection, along with a chapter outline. The second part offers a retrospective of the evolution of relevant information technologies, with an emphasis on two critical technologies for AAL: the Internet of Things and cloud computing. The third part outlines the relevant cryptological fundamentals of data protection. In the fourth part, a synthesis of the fundamentals, technologies, risks, and possible solutions is presented. The fifth part expands the view on data protection from the perspective of open source and technological sovereignty. The sixth part reviews the importance of proper management of the entire lifecycle of AAL environments and their components. Finally, the last part of the chapter analyzes relevant and current legal aspects of data protection.

Place, publisher, year, edition, pages
Springer Nature, 2025
National Category
Law
Identifiers
urn:nbn:se:su:diva-245115 (URN); 10.1007/978-3-031-84158-3_4 (DOI); 2-s2.0-105014332593 (Scopus ID); 978-3-031-84157-6 (ISBN); 978-3-031-84158-3 (ISBN)
Available from: 2025-07-26 Created: 2025-07-26 Last updated: 2025-09-09. Bibliographically approved
Colonna, L. & Riva, G. (2025). Smart Mirrors and Data Protection Regulation. In: Albert Ali Salah; Liane Colonna; Francisco Florez-Revuelta (Ed.), Privacy-Aware Monitoring for Assisted Living: Ethical, Legal, and Technological Aspects of Audio- and Video-Based AAL Solutions (pp. 291-311). Springer Nature
Smart Mirrors and Data Protection Regulation
2025 (English) In: Privacy-Aware Monitoring for Assisted Living: Ethical, Legal, and Technological Aspects of Audio- and Video-Based AAL Solutions / [ed] Albert Ali Salah; Liane Colonna; Francisco Florez-Revuelta, Springer Nature, 2025, p. 291-311. Chapter in book (Refereed)
Abstract [en]

Smart mirrors have the potential to significantly enhance the quality of life for older adults by supporting their health, well-being, and social connectivity. However, they also introduce substantial legal and regulatory challenges, particularly in the realm of data protection. This paper aims to examine these challenges and contribute to the interdisciplinary discourse on smart mirrors, facilitating the integration of legal and technological considerations. At the outset, the paper provides an overview of smart mirrors to establish the technological foundation to which the law applies. It then explores various data protection concerns raised by smart mirrors, such as issues related to consent and transparency, security vulnerabilities, and the potential misuse of personal information. Next, the paper presents a taxonomy of key factors that influence how data protection rules apply to smart mirrors, such as the physical location of the smart mirror, the context in which it is used (e.g., private home, public facility, healthcare environment), the status of the user (e.g., adult, minor, patient), the status of the service provider (e.g., manufacturer, third-party service), the involvement of any intermediaries in data processing, and the nature and sensitivity of the data being gathered (e.g., biometric data, personal health information). The paper concludes with an examination of the role of data protection by design (DPbD) in this context, highlighting the importance of incorporating regulatory compliance into smart mirrors from the beginning of their development and deployment.

Place, publisher, year, edition, pages
Springer Nature, 2025
National Category
Law
Identifiers
urn:nbn:se:su:diva-245116 (URN); 10.1007/978-3-031-84158-3_12 (DOI); 2-s2.0-105014478507 (Scopus ID); 978-3-031-84157-6 (ISBN); 978-3-031-84158-3 (ISBN)
Available from: 2025-07-26 Created: 2025-07-26 Last updated: 2025-09-09. Bibliographically approved
Colonna, L. (2025). The end of open source? Regulating open source under the Cyber Resilience Act and the new Product Liability Directive. Computer Law & Security Review, 56, Article ID 106105.
The end of open source? Regulating open source under the Cyber Resilience Act and the new Product Liability Directive
2025 (English) In: Computer Law & Security Review, ISSN 0267-3649, Vol. 56, article id 106105. Article in journal (Refereed). Published
Abstract [en]

Rooted in idealism, the open-source model leverages collaborative intelligence to drive innovation, leading to major benefits for both industry and society. As open-source software (OSS) plays an increasingly central role in driving the digitalization of society, policymakers are examining the interactions between upstream open-source communities and downstream manufacturers. They aim to leverage the benefits of OSS, such as performance enhancements and adaptability across diverse domains, while ensuring software security and accountability. The regulatory landscape is on the brink of a major transformation with the recent adoption of both the Cyber Resilience Act (CRA) and the Product Liability Directive (PLD), raising concerns that these laws could threaten the future of OSS.

This paper investigates how the CRA and the PLD regulate OSS, specifically exploring the scope of the exemptions found in the laws. It further explores how OSS practices might adapt to the evolving regulatory landscape, focusing on the importance of documentation practices to support compliance obligations, thereby ensuring OSS's continued relevance and viability. It concludes that due diligence requirements mandate a thorough assessment of OSS components to ensure their safety for integration into commercial products and services. Documentation practices like security attestations, Software Bills of Materials (SBOMs), data cards, and model cards will play an increasingly important role in the software supply chain, ensuring that downstream entities can meet their obligations under these new legal frameworks.

National Category
Law
Identifiers
urn:nbn:se:su:diva-239444 (URN); 10.1016/j.clsr.2024.106105 (DOI); 001421263800001 (ISI); 2-s2.0-85213216046 (Scopus ID)
Funder
Wallenberg Foundations
Available from: 2025-02-12 Created: 2025-02-12 Last updated: 2025-10-03. Bibliographically approved
Colonna, L. (2024). The AI Act’s Research Exemption: A Mechanism for Regulatory Arbitrage?. In: Andreas Moberg; Eduardo Gill-Pedro (Ed.), The Yearbook of Socio-Economic Constitutions: Law and the Governance of Artificial Intelligence (pp. 51-93). Springer
The AI Act’s Research Exemption: A Mechanism for Regulatory Arbitrage?
2024 (English) In: The Yearbook of Socio-Economic Constitutions: Law and the Governance of Artificial Intelligence / [ed] Andreas Moberg; Eduardo Gill-Pedro, Springer, 2024, p. 51-93. Chapter in book (Refereed)
Abstract [en]

This paper argues that, by failing to acknowledge the complexity of modern research practices, which are shifting from single-discipline work to multi-disciplinary collaborations involving many entities, some public, some private, the proposed AI Act creates mechanisms for regulatory arbitrage. The article begins with a semantic analysis of the concept of research from a legal perspective. It then explains how the proposed AI Act addresses the concept of research by examining the research exemption set forth in the forthcoming law as it currently stands. After providing an overview of the proposed law, the paper explores the research exemption to highlight whether there are any gaps, ambiguities, or contradictions in the law that may be exploited by either public or private actors seeking to use the exemption as a shield to avoid compliance with duties imposed under the law.

To address whether the research exemption reflects a coherent legal rule, it is considered from five different perspectives. The paper begins by examining the extent to which the research exemption applies to private or commercial entities that may not pursue research in a benevolent manner to solve societal problems, but nevertheless contribute to innovation and economic growth within the EU. Next, the paper explores how the exemption applies to research that takes place within academia but is on the path to commercialization. The paper goes on to consider the situation where academic researchers invoke the exemption and then go on to provide the AI they develop to their employing institutions or other public bodies for no cost. Fourth, the paper inspects how the exemption functions when researchers build high-risk or prohibited AI, publish their findings, or share them via an open-source platform, and other actors copy the AI. Finally, the paper considers how the exemption applies to research that takes place “in the wild” or in regulatory sandboxes.

Place, publisher, year, edition, pages
Springer, 2024
Series
YSEC Yearbook of Socio-Economic Constitutions ; 2023
National Category
Law
Identifiers
urn:nbn:se:su:diva-226328 (URN); 10.1007/16495_2023_59 (DOI); 2-s2.0-86000572331 (Scopus ID); 978-3-031-55831-3 (ISBN); 978-3-031-55832-0 (ISBN)
Available from: 2024-02-07 Created: 2024-02-07 Last updated: 2025-06-02. Bibliographically approved
Colonna, L. (2023). Exploring the Relationship Between Article 22 of the General Data Protection Regulation and Article 14 of the Proposed AI Act: Some Preliminary Observations and Critical Reflections. In: Martin Brinnen; Cecilia Magnusson Sjöberg; David Törngren; Daniel Westman; Sören Öman (Ed.), Dataskyddet 50 År – Historia, Aktuella problem och Framtid (pp. 443-465). Visby: Eddy.se
Exploring the Relationship Between Article 22 of the General Data Protection Regulation and Article 14 of the Proposed AI Act: Some Preliminary Observations and Critical Reflections
2023 (English) In: Dataskyddet 50 År – Historia, Aktuella problem och Framtid / [ed] Martin Brinnen; Cecilia Magnusson Sjöberg; David Törngren; Daniel Westman; Sören Öman, Visby: Eddy.se, 2023, p. 443-465. Chapter in book (Other academic)
Place, publisher, year, edition, pages
Visby: Eddy.se, 2023
National Category
Law (excluding Law and Society)
Identifiers
urn:nbn:se:su:diva-226327 (URN); 9789189840027 (ISBN)
Available from: 2024-02-07 Created: 2024-02-07 Last updated: 2024-02-08. Bibliographically approved
Wilkowska, W., Offermann, J., Colonna, L., Florez-Revuelta, F., Climent-Pérez, P., Mihailidis, A., . . . Ziefle, M. (2023). Interdisciplinary perspectives on privacy awareness in lifelogging technology development. Journal of Ambient Intelligence and Humanized Computing, 14(3), 2291-2312
Interdisciplinary perspectives on privacy awareness in lifelogging technology development
2023 (English) In: Journal of Ambient Intelligence and Humanized Computing, ISSN 1868-5137, E-ISSN 1868-5145, Vol. 14, no 3, p. 2291-2312. Article in journal (Refereed). Published
Abstract [en]

Population aging resulting from demographic changes requires some challenging decisions and necessary steps to be taken by different stakeholders to manage current and future demand for assistance and support. The consequences of population aging can be mitigated to some extent by assistive technologies that can support the autonomous living of older individuals and persons in need of care in their private environments for as long as possible. A variety of technical solutions are already available on the market, but privacy protection is a serious, often neglected, issue when using such (assistive) technology. Thus, privacy needs to be thoroughly taken into consideration in this context. In the three-year project PAAL (‘Privacy-Aware and Acceptable Lifelogging Services for Older and Frail People’), researchers from different disciplines, such as law, rehabilitation, human-computer interaction, and computer science, investigated the phenomenon of privacy when using assistive lifelogging technologies. In concrete terms, the concept of Privacy by Design was realized using two exemplary lifelogging applications in private and professional environments. A user-centered empirical approach was applied to the lifelogging technologies, investigating the perceptions and attitudes of (older) users with different health-related and biographical profiles. The knowledge gained through the interdisciplinary collaboration can improve the implementation and optimization of assistive applications. In this paper, partners of the PAAL project present insights gained from their cross-national, interdisciplinary work regarding privacy-aware and acceptable lifelogging technologies.

Keywords
Lifelogging applications, Privacy, Acceptance, Interdisciplinary project
National Category
Other Social Sciences
Identifiers
urn:nbn:se:su:diva-215107 (URN); 10.1007/s12652-022-04486-5 (DOI); 2-s2.0-85143671280 (Scopus ID)
Available from: 2023-02-28 Created: 2023-02-28 Last updated: 2023-03-08. Bibliographically approved
Colonna, L. (2023). Teachers in the loop? An analysis of automatic assessment systems under Article 22 GDPR. International Data Privacy Law, 14(1), 3-18
Teachers in the loop? An analysis of automatic assessment systems under Article 22 GDPR
2023 (English) In: International Data Privacy Law, ISSN 2044-3994, E-ISSN 2044-4001, Vol. 14, no 1, p. 3-18. Article in journal (Refereed). Published
Abstract [en]

Key Points

  • This article argues that while there is great promise in the everyday automation of higher education to create benefits for students, efficiencies for instructors, and cost savings for institutions, it is important to critically consider how AI-based assessment will transform the role of teachers and the relationship between teachers and students.
  • The focus of the work is on exploring whether and to what extent the requirements set forth in Article 22 of the General Data Protection Regulation (GDPR) apply within the context of AI-based automatic assessment systems, in particular the legal obligation to ensure that a teacher remains in the loop, for example, by being capable of overseeing and overriding decisions when necessary.
  • Educational judgments involving automatic assessments frequently occur in a complicated decision-making environment that is framed by institutional processes which are multi-step, hierarchical, and bureaucratic. This complexity makes it challenging to determine whether the output of an AI-based automatic assessment system represents an ‘individual decision’ about a data subject within the meaning of Article 22.
  • It is also unclear whether AI-based assessments involve decisions based ‘solely’ on automatic processing or whether teachers provide decisional support, excluding the application of Article 22. According to recent enforcement decisions, human oversight is entangled with institutional procedures and safeguards as well as system design.
National Category
Law (excluding Law and Society); Computer and Information Sciences
Identifiers
urn:nbn:se:su:diva-225530 (URN); 10.1093/idpl/ipad024 (DOI); 001114797200001 (ISI); 2-s2.0-85188794316 (Scopus ID)
Available from: 2024-01-17 Created: 2024-01-17 Last updated: 2024-11-14. Bibliographically approved
Colonna, L. (2022). Addressing the Responsibility Gap in Data Protection by Design: Towards a More Future-oriented, Relational, and Distributed Approach. Tilburg Law Review, 27(1), 1-21
Addressing the Responsibility Gap in Data Protection by Design: Towards a More Future-oriented, Relational, and Distributed Approach
2022 (English) In: Tilburg Law Review, ISSN 2211-0046, Vol. 27, no 1, p. 1-21. Article in journal (Refereed). Published
Abstract [en]

This paper explores the extent to which technology providers are responsible to end users for embedding data protection rules in the AI systems they design and develop, so as to safeguard the fundamental rights to privacy and data protection. The main argument set forth is that a relational rationale, requiring a broader range of actors in the supply chain to share legal responsibility for Data Protection by Design (DPbD), is better suited to address infringements of these fundamental rights than the current model, which assigns responsibility mainly to the data controller or data processor. Reconceptualizing the law in a more future-oriented, relational, and distributed way would make it possible to adapt legal rules – including those within the GDPR and the continuously evolving EU acquis – to the complex reality of technology development, at least partly addressing the responsibility gap in DPbD.

A future-oriented conception of responsibility would require technology providers to adopt more proactive approaches to DPbD, even where they are unlikely to qualify as a controller. A relational approach to DPbD would require technology providers to bear greater responsibilities to those individuals or groups that are affected by their design choices. A distributed approach to DPbD would allow downstream actors in the supply chain to bear part of the legal responsibility for DPbD by relying on legal requirements applicable to the various actors in the supply chain supporting DPbD, such as those found in contract law, liability law, and the emerging EU acquis governing AI, data, and information security.

Keywords
Data Protection by Design, technology providers, GDPR, AI Act, responsibility
National Category
Law (excluding Law and Society)
Identifiers
urn:nbn:se:su:diva-215106 (URN); 10.5334/tilr.274 (DOI); 001000908400001 (ISI); 2-s2.0-85151292444 (Scopus ID)
Available from: 2023-02-28 Created: 2023-02-28 Last updated: 2024-05-24. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0009-0007-7354-1675
