Publications (10 of 14)
Mochaourab, R., Sinha, S., Greenstein, S. & Papapetrou, P. (2023). Demonstrator on Counterfactual Explanations for Differentially Private Support Vector Machines. In: Massih-Reza Amini; Stéphane Canu; Asja Fischer; Tias Guns; Petra Kralj Novak; Grigorios Tsoumakas (Ed.), Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2022, Grenoble, France, September 19–23, 2022, Proceedings, Part VI. Paper presented at European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD 2022), Grenoble, France, 19-23 September, 2022 (pp. 662-666). Cham: Springer
Demonstrator on Counterfactual Explanations for Differentially Private Support Vector Machines
2023 (English) In: Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2022, Grenoble, France, September 19–23, 2022, Proceedings, Part VI / [ed] Massih-Reza Amini; Stéphane Canu; Asja Fischer; Tias Guns; Petra Kralj Novak; Grigorios Tsoumakas, Cham: Springer, 2023, p. 662-666. Conference paper, Published paper (Refereed)
Abstract [en]

We demonstrate the construction of robust counterfactual explanations for support vector machines (SVM), where the privacy mechanism that publicly releases the classifier guarantees differential privacy. Privacy preservation is essential when dealing with sensitive data, such as in applications within the health domain. In addition, providing explanations for machine learning predictions is an important requirement within so-called high-risk applications, as referred to in the EU AI Act. Thus, the innovative aspects of this work correspond to studying the interaction between three desired aspects: accuracy, privacy, and explainability. The SVM classification accuracy is affected by the privacy mechanism through the perturbations it introduces in the classifier weights. Consequently, we need to consider a trade-off between accuracy and privacy. In addition, counterfactual explanations, which quantify the smallest changes to selected data instances needed to change their classification, may lose credibility when data privacy guarantees are in place. Hence, robustness for counterfactual explanations is needed to create confidence in the credibility of the explanations. Our demonstrator provides an interactive environment to show the interplay between the considered aspects of accuracy, privacy, and explainability.
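As a rough illustration of the two ingredients the demonstrator combines, the sketch below (not the authors' construction) trains a linear SVM, perturbs the released weights with Gaussian noise as a simple stand-in for a differential-privacy mechanism, and computes the closed-form minimal-change counterfactual for a linear classifier. The noise scale, the counterfactual helper and its margin parameter (a crude proxy for robustness) are assumptions made for illustration only.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=2, n_redundant=0, random_state=0)

clf = LinearSVC(C=1.0, max_iter=10000).fit(X, y)
w = clf.coef_.ravel()
b = float(clf.intercept_[0])

# Hypothetical privacy step: perturb the released weights with Gaussian noise
# (illustrative only; not calibrated to any (epsilon, delta) budget).
noise_scale = 0.1
w_priv = w + rng.normal(scale=noise_scale, size=w.shape)
b_priv = b + float(rng.normal(scale=noise_scale))

def counterfactual(x, w, b, margin=0.0):
    # Smallest L2 change moving x across the hyperplane w.x + b = 0; a
    # positive margin pushes it past the boundary (crude robustness proxy).
    f = w @ x + b
    step = (f + np.sign(f) * margin) / (w @ w)
    return x - step * w

x0 = X[0]
x_cf = counterfactual(x0, w_priv, b_priv, margin=0.5)
print("original side of boundary:      ", np.sign(w_priv @ x0 + b_priv))
print("counterfactual side of boundary:", np.sign(w_priv @ x_cf + b_priv))
print("L2 distance of the change:      ", np.linalg.norm(x_cf - x0))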

Place, publisher, year, edition, pages
Cham: Springer, 2023
Series
Lecture Notes in Artificial Intelligence, ISSN 0302-9743, E-ISSN 1611-3349 ; 13718
Keywords
Counterfactual explanations, Support vector machines, Differential privacy
National Category
Law and Society; Computer and Information Sciences
Identifiers
urn:nbn:se:su:diva-215639 (URN); 10.1007/978-3-031-26422-1_52 (DOI); 978-3-031-26421-4 (ISBN); 978-3-031-26422-1 (ISBN)
Conference
European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD 2022), Grenoble, France, 19-23 September, 2022
Available from: 2023-03-22 Created: 2023-03-22 Last updated: 2023-03-27. Bibliographically approved
Greenstein, S., Papapetrou, P. & Mochaourab, R. (2022). Embedding Human Values into Artificial Intelligence (AI) (1:1 ed.). In: Katja De Vries; Mattias Dahlberg (Ed.), Law, AI and Digitalisation (pp. 91-116). Uppsala: Iustus förlag
Embedding Human Values into Artificial Intelligence (AI)
2022 (English) In: Law, AI and Digitalisation / [ed] Katja De Vries; Mattias Dahlberg, Uppsala: Iustus förlag, 2022, 1:1, p. 91-116. Chapter in book (Other academic)
Place, publisher, year, edition, pages
Uppsala: Iustus förlag, 2022. Edition: 1:1
Series
De lege, ISSN 1102-3317 ; 2021
National Category
Law and Society
Identifiers
urn:nbn:se:su:diva-215640 (URN); 978-91-7737-167-0 (ISBN)
Available from: 2023-03-22 Created: 2023-03-22 Last updated: 2023-05-08. Bibliographically approved
Colonna, L. & Greenstein, S. (2022). Nordic Yearbook of Law and Informatics 2020–2021: Law in the Era of Artificial Intelligence. Stockholm: The Swedish Law and Informatics Research Institute
Nordic Yearbook of Law and Informatics 2020–2021: Law in the Era of Artificial Intelligence
2022 (English) Book (Refereed)
Place, publisher, year, edition, pages
Stockholm: The Swedish Law and Informatics Research Institute, 2022. p. 358
National Category
Law
Identifiers
urn:nbn:se:su:diva-204599 (URN); 978-91-8892-964-8 (ISBN)
Available from: 2022-05-12 Created: 2022-05-12 Last updated: 2023-03-22. Bibliographically approved
Greenstein, S. (2022). Preserving the rule of law in the era of artificial intelligence (AI). Artificial Intelligence and Law, 30(3), 291-323
Preserving the rule of law in the era of artificial intelligence (AI)
2022 (English) In: Artificial Intelligence and Law, ISSN 0924-8463, E-ISSN 1572-8382, Vol. 30, no 3, p. 291-323. Article in journal (Refereed). Published
Abstract [en]

The study of law and information technology comes with an inherent contradiction in that, while technology develops rapidly and embraces notions such as internationalization and globalization, traditional law, for the most part, can be slow to react to technological developments and is also predominantly confined to national borders. The notion of the rule of law, however, defies the phenomenon of law being bound to national borders and enjoys global recognition. Yet a serious threat to the rule of law is looming in the form of an assault by technological developments within artificial intelligence (AI). As large strides are made in the academic discipline of AI, this technology is starting to make its way into digital decision-making systems and is in effect replacing human decision-makers. A prime example of this development is the use of AI to assist judges in making judicial decisions. In many circumstances, however, this technology is a ‘black box’, due mainly to its complexity but also because it is protected by law. This lack of transparency, together with the diminished ability to understand the operation of systems increasingly used by the structures of governance, challenges traditional notions underpinning the rule of law. This is especially so in relation to concepts closely associated with the rule of law, such as transparency, fairness and explainability. This article examines the technology of AI in relation to the rule of law, highlighting the rule of law as a mechanism for human flourishing. It investigates the extent to which the rule of law is being diminished as AI becomes entrenched within society and questions the extent to which it can survive in the technocratic society.

Keywords
Artificial Intelligence (AI), Machine Learning (ML), Rule of Law, Judicial Decision-Making, Explainability
National Category
Law and Society
Research subject
Law and Information Technology
Identifiers
urn:nbn:se:su:diva-195381 (URN); 10.1007/s10506-021-09294-4 (DOI); 000674131800001 (); 2-s2.0-85110441813 (Scopus ID)
Available from: 2021-08-16 Created: 2021-08-16 Last updated: 2022-09-29. Bibliographically approved
Greenstein, S. (2021). Confirming Fundamental Rights in the Era of Artificial Intelligence Technologies? Europarättslig tidskrift (2), 311-330
Confirming Fundamental Rights in the Era of Artificial Intelligence Technologies?
2021 (English) In: Europarättslig tidskrift, ISSN 1403-8722, E-ISSN 2002-3561, no 2, p. 311-330. Article in journal (Other academic). Published
Keywords
Artificial Intelligence, Machine Learning, Fundamental Rights
National Category
Law and Society
Research subject
Law and Information Technology
Identifiers
urn:nbn:se:su:diva-195382 (URN)
Available from: 2021-08-16 Created: 2021-08-16 Last updated: 2023-03-28. Bibliographically approved
Greenstein, S. (2021). Elevating Legal Informatics in the Digital Age. In: Sonya Petersson (Ed.), Digital human sciences: new objects - new approaches (pp. 155-180). Stockholm: Stockholm University Press
Elevating Legal Informatics in the Digital Age
2021 (English) In: Digital human sciences: new objects - new approaches / [ed] Sonya Petersson, Stockholm: Stockholm University Press, 2021, p. 155-180. Chapter in book (Refereed)
Place, publisher, year, edition, pages
Stockholm: Stockholm University Press, 2021
Series
Stockholm studies in culture and aesthetics, ISSN 2002-3227 ; 7
Keywords
Artificial Intelligence, Machine Learning, Legal Informatics
National Category
Law and Society
Research subject
Law and Information Technology
Identifiers
urn:nbn:se:su:diva-195383 (URN); 10.16993/bbk.g (DOI); 978-91-7635-147-5 (ISBN); 978-91-7635-145-1 (ISBN); 978-91-7635-144-4 (ISBN)
Available from: 2021-08-16 Created: 2021-08-16 Last updated: 2022-02-25. Bibliographically approved
Greenstein, S. (2019). Predictive Modelling, Scoring and Human Dignity. In: Claes Granmar, Katarina Fast Lappalainen, Christine Storr (Ed.), AI & Fundamental Rights. Stockholm: n/a
Predictive Modelling, Scoring and Human Dignity
2019 (English) In: AI & Fundamental Rights / [ed] Claes Granmar, Katarina Fast Lappalainen, Christine Storr, Stockholm: n/a, 2019. Chapter in book (Other academic)
Place, publisher, year, edition, pages
Stockholm: n/a, 2019
National Category
Law and Society
Research subject
Law and Information Technology
Identifiers
urn:nbn:se:su:diva-179471 (URN); 9789178192083 (ISBN)
Available from: 2020-03-02 Created: 2020-03-02 Last updated: 2022-02-26. Bibliographically approved
Greenstein, S. (2018). At the Mercy of Prediction in the Age of Predictive Models and Scoring. Scandinavian Studies in Law, 65, 197-211
At the Mercy of Prediction in the Age of Predictive Models and Scoring
2018 (English) In: Scandinavian Studies in Law, ISSN 0085-5944, Vol. 65, p. 197-211. Article in journal (Other academic). Published
Keywords
Predictive modelling, scoring, prediction, big data, algorithm
National Category
Law and Society
Research subject
Law and Information Technology
Identifiers
urn:nbn:se:su:diva-160425 (URN)
Available from: 2018-09-24 Created: 2018-09-24 Last updated: 2022-02-26. Bibliographically approved
Greenstein, S. (2017). Our Humanity Exposed: Predictive Modelling in a Legal Context. (Doctoral dissertation). Stockholm: Department of Law, Stockholm University
Our Humanity Exposed: Predictive Modelling in a Legal Context
2017 (English) Doctoral thesis, monograph (Other academic)
Abstract [en]

This thesis examines predictive modelling from the legal perspective. Predictive modelling is a technology based on applied statistics, mathematics, machine learning and artificial intelligence that uses algorithms to analyse big data collections, and identify patterns that are invisible to human beings. The accumulated knowledge is incorporated into computer models, which are then used to identify and predict human activity in new circumstances, allowing for the manipulation of human behaviour.

Predictive models use big data to represent people. Big data is a term used to describe the large amounts of data produced in the digital environment. It is growing rapidly due mainly to the fact that individuals are spending an increasing portion of their lives within the on-line environment, spurred by the internet and social media. As individuals make use of the on-line environment, they part with information about themselves. This information may concern their actions but may also reveal their personality traits.

Predictive modelling is a powerful tool, which private companies are increasingly using to identify business risks and opportunities. Predictive models are incorporated into on-line commercial decision-making systems, determining, among other things, the music people listen to, the news feeds they receive, the content people see and whether they will be granted credit. This results in a number of potential harms to the individual, especially in relation to personal autonomy.

This thesis examines the harms resulting from predictive modelling, some of which are recognized by traditional law. Using the European legal context as a point of departure, this study ascertains to what extent legal regimes address the use of predictive models and the threats to personal autonomy. In particular, it analyses Article 8 of the European Convention on Human Rights (ECHR) and the forthcoming General Data Protection Regulation (GDPR) adopted by the European Union (EU). Considering the shortcomings of traditional legal instruments, a strategy entitled ‘empowerment’ is suggested. It comprises components of a legal and technical nature, aimed at levelling the playing field between companies and individuals in the commercial setting. Is there a way to strengthen humanity as predictive modelling continues to develop?
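As a purely illustrative aside, and not part of the thesis, the following minimal sketch shows the kind of scoring pipeline the abstract describes: a classifier trained on behavioural-style data whose output score is thresholded by an automated decision. All features, the synthetic target and the cut-off are hypothetical.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000
# Hypothetical behavioural features (e.g. time online, purchases, clicks).
X = rng.normal(size=(n, 3))
# Synthetic stand-in for an outcome such as "repaid credit on time".
y = (X @ np.array([1.5, -0.8, 0.4]) + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = LogisticRegression().fit(X, y)
scores = model.predict_proba(X)[:, 1]   # a score per person, in [0, 1]
granted = scores >= 0.7                 # opaque cut-off applied downstream
print("share granted under the hypothetical 0.7 cut-off:", granted.mean())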

Place, publisher, year, edition, pages
Stockholm: Department of Law, Stockholm University, 2017. p. 500
Keywords
predictive modelling, predictive analytics, profiling, big data, algorithm, surveillance, privacy, autonomy, identity, digital identity, data privacy, human rights, data protection, European Convention on Human Rights, Data Protection Directive, General Data Protection Regulation (GDPR), empowerment
National Category
Law
Research subject
Law and Information Technology
Identifiers
urn:nbn:se:su:diva-141657 (URN); 978-91-7649-748-7 (ISBN); 978-91-7649-749-4 (ISBN)
Public defence
2017-06-01, De Geersalen, Geovetenskapens hus, Svante Arrhenius väg 14, Stockholm, 10:00 (English)
Opponent
Supervisors
Available from: 2017-05-09 Created: 2017-04-17 Last updated: 2022-02-28. Bibliographically approved
Svantesson, D. J. & Greenstein, S. (Eds.). (2013). Internationalisation of Law in the Digital Information Society. Köpenhamn: Ex Tuto Publishing
Internationalisation of Law in the Digital Information Society
2013 (English) Collection (editor) (Other academic)
Place, publisher, year, edition, pages
Köpenhamn: Ex Tuto Publishing, 2013. p. 383
Series
Nordic Yearbook of Law and Informatics ; 2010-2012
National Category
Law (excluding Law and Society)
Identifiers
urn:nbn:se:su:diva-97911 (URN); 978-87-92598-22-6 (ISBN)
Available from: 2013-12-19 Created: 2013-12-19 Last updated: 2022-02-24. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0002-0694-768x