1 - 14 of 14
  • 1.
    Colonna, Liane
    et al.
    Stockholm University, Faculty of Law, Department of Law, The Swedish Law and Informatics Research Institute.
    Greenstein, Stanley
    Stockholm University, Faculty of Law, Department of Law, The Swedish Law and Informatics Research Institute.
    Nordic Yearbook of Law and Informatics 2020–2021: Law in the Era of Artificial Intelligence, 2022. Book (Refereed)
    Download full text (pdf)
  • 2.
    Greenstein, Stanley
    Stockholm University, Faculty of Law, Department of Law.
    At the Mercy of Prediction in the Age of Predictive Models and Scoring, 2018. In: Scandinavian Studies in Law, ISSN 0085-5944, Vol. 65, p. 197-211. Article in journal (Other academic)
  • 3.
    Greenstein, Stanley
    Stockholm University, Faculty of Law, Department of Law.
    Confirming Fundamental Rights in the Era of Artificial Intelligence Technologies?, 2021. In: Europarättslig tidskrift, ISSN 1403-8722, E-ISSN 2002-3561, no 2, p. 311-330. Article in journal (Other academic)
  • 4.
    Greenstein, Stanley
    Stockholm University, Faculty of Law, Department of Law.
    Elevating Legal Informatics in the Digital Age, 2021. In: Digital human sciences: new objects - new approaches / [ed] Sonya Petersson, Stockholm: Stockholm University Press, 2021, p. 155-180. Chapter in book (Refereed)
  • 5.
    Greenstein, Stanley
    Stockholm University, Faculty of Law, Department of Law.
    Legal Status of Standardization, 2010. In: Kluwer International Encyclopedia of Cyber Law: Sweden / [ed] Christine Kirchberger, Netherlands: Kluwer, 2010, p. 82-86. Chapter in book (Other academic)
  • 6.
    Greenstein, Stanley
    Stockholm University, Faculty of Law, Department of Law.
    Our Humanity Exposed: Predictive Modelling in a Legal Context, 2017. Doctoral thesis, monograph (Other academic)
    Abstract [en]

    This thesis examines predictive modelling from the legal perspective. Predictive modelling is a technology based on applied statistics, mathematics, machine learning and artificial intelligence that uses algorithms to analyse big data collections, and identify patterns that are invisible to human beings. The accumulated knowledge is incorporated into computer models, which are then used to identify and predict human activity in new circumstances, allowing for the manipulation of human behaviour.

    Predictive models use big data to represent people. Big data is a term used to describe the large amounts of data produced in the digital environment. It is growing rapidly, mainly because individuals are spending an increasing portion of their lives in the on-line environment, spurred by the internet and social media. As individuals make use of the on-line environment, they part with information about themselves. This information may concern their actions but may also reveal their personality traits.

    Predictive modelling is a powerful tool, which private companies are increasingly using to identify business risks and opportunities. Predictive models are incorporated into on-line commercial decision-making systems, determining, among other things, the music people listen to, the news feeds they receive, the content they see and whether they will be granted credit. This results in a number of potential harms to the individual, especially in relation to personal autonomy.

    This thesis examines the harms resulting from predictive modelling, some of which are recognized by traditional law. Using the European legal context as a point of departure, this study ascertains to what extent legal regimes address the use of predictive models and the threats to personal autonomy. In particular, it analyses Article 8 of the European Convention on Human Rights (ECHR) and the forthcoming General Data Protection Regulation (GDPR) adopted by the European Union (EU). Considering the shortcomings of traditional legal instruments, a strategy entitled ‘empowerment’ is suggested. It comprises components of a legal and technical nature, aimed at levelling the playing field between companies and individuals in the commercial setting. Is there a way to strengthen humanity as predictive modelling continues to develop?

    Download full text (pdf): Our Humanity Exposed
    Download cover image (jpg)
  • 7.
    Greenstein, Stanley
    Stockholm University, Faculty of Law, Department of Law.
    Predictive Modelling, Scoring and Human Dignity, 2019. In: AI & Fundamental Rights / [ed] Claes Granmar, Katarina Fast Lappalainen, Christine Storr, Stockholm: n/a, 2019. Chapter in book (Other academic)
  • 8.
    Greenstein, Stanley
    Stockholm University, Faculty of Law, Department of Law.
    Preserving the rule of law in the era of artificial intelligence (AI), 2022. In: Artificial Intelligence and Law, ISSN 0924-8463, E-ISSN 1572-8382, Vol. 30, no 3, p. 291-323. Article in journal (Refereed)
    Abstract [en]

    The study of law and information technology comes with an inherent contradiction in that while technology develops rapidly and embraces notions such as internationalization and globalization, traditional law, for the most part, can be slow to react to technological developments and is also predominantly confined to national borders. The notion of the rule of law, however, defies the phenomenon of law being bound to national borders and enjoys global recognition. Yet a serious threat to the rule of law is looming in the form of an assault by technological developments within artificial intelligence (AI). As large strides are made in the academic discipline of AI, this technology is starting to make its way into digital decision-making systems and is in effect replacing human decision-makers. A prime example of this development is the use of AI to assist judges in making judicial decisions. In many circumstances, however, this technology is a ‘black box’, due mainly to its complexity but also because it is protected by law. This lack of transparency, and the diminished ability to understand the operation of systems increasingly used by the structures of governance, challenge traditional notions underpinning the rule of law, especially concepts closely associated with it such as transparency, fairness and explainability. This article examines the technology of AI in relation to the rule of law, highlighting the rule of law as a mechanism for human flourishing. It investigates the extent to which the rule of law is being diminished as AI becomes entrenched within society and questions the extent to which it can survive in the technocratic society.

  • 9.
    Greenstein, Stanley
    Stockholm University, Faculty of Law, Department of Law.
    Regulation of Cryptography and other Dual-Use Products, 2010. In: Kluwer International Encyclopedia of Cyber Law: Sweden / [ed] Christine Kirchberger, Netherlands: Kluwer, 2010, p. 76-81. Chapter in book (Other academic)
  • 10.
    Greenstein, Stanley
    Stockholm University, Faculty of Law, Department of Law.
    The Utilization of Information Technology Solutions as a Response to Present Challenges, 2011. In: European Journal of Law and Technology, E-ISSN 2042-115X, Vol. 2, no 1, p. 1-6. Article in journal (Other academic)
  • 11.
    Greenstein, Stanley
    Stockholm University, Faculty of Law, Department of Law, The Swedish Law and Informatics Research Institute.
    Vem reglerar informationssamhället?, 2010. Collection (editor) (Other academic)
    Abstract [en]

    The 23rd Nordic Conference on Law and Informatics took place in Stockholm in 2008 under the theme "IT Regulations and Policies: from Theory into Practice". The main purpose of the conference was to formulate a legal agenda for the regulation of IT and policy. Among the questions addressed were how to balance demands for increased legal regulation against the voices calling for fewer rules in the information society, and which legal approaches are most suitable.

    The conference also took up problems that have become relevant as a consequence of society's use of information and communication technology, including:

    • What personal privacy will look like in the future.
    • How intellectual property law is affected in the information society.
    • How personal privacy can be protected while at the same time promoting information security.
    • What significance concepts such as the semantic web and social media have for the development of lawyers' information retrieval.
    • How knowledge management systems will work in the future.

    This edition of the Nordic yearbook of law and informatics (Nordisk årsbok i rättsinformatik) presents contributions from several of the speakers who took part in the conference. In this way, the book highlights both current issues and development trends in the information society.

  • 12.
    Greenstein, Stanley
    et al.
    Stockholm University, Faculty of Law, Department of Law, The Swedish Law and Informatics Research Institute.
    Papapetrou, Panagiotis
    Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences. Aalto University, Finland.
    Mochaourab, Rami
    RISE Research Institutes of Sweden, Sweden.
    Embedding Human Values into Artificial Intelligence (AI), 2022. In: Law, AI and Digitalisation / [ed] Katja De Vries; Mattias Dahlberg, Uppsala: Iustus förlag, 2022, 1:1, p. 91-116. Chapter in book (Other academic)
    Download full text (pdf)
  • 13.
    Mochaourab, Rami
    et al.
    RISE Research Institutes of Sweden, Sweden.
    Sinha, Sugandh
    Greenstein, Stanley
    Stockholm University, Faculty of Law, Department of Law.
    Papapetrou, Panagiotis
    Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences.
    Demonstrator on Counterfactual Explanations for Differentially Private Support Vector Machines, 2023. In: Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2022, Grenoble, France, September 19–23, 2022, Proceedings, Part VI / [ed] Massih-Reza Amini; Stéphane Canu; Asja Fischer; Tias Guns; Petra Kralj Novak; Grigorios Tsoumakas, Cham: Springer, 2023, p. 662-666. Conference paper (Refereed)
    Abstract [en]

    We demonstrate the construction of robust counterfactual explanations for support vector machines (SVM), where the privacy mechanism that publicly releases the classifier guarantees differential privacy. Privacy preservation is essential when dealing with sensitive data, such as in applications within the health domain. In addition, providing explanations for machine learning predictions is an important requirement within so-called high-risk applications, as referred to in the EU AI Act. Thus, the innovative aspects of this work correspond to studying the interaction between three desired aspects: accuracy, privacy, and explainability. The SVM classification accuracy is affected by the privacy mechanism through the perturbations introduced in the classifier weights. Consequently, we need to consider a trade-off between accuracy and privacy. In addition, counterfactual explanations, which quantify the smallest changes to selected data instances needed to change their classification, may no longer be credible once data privacy guarantees are in place. Hence, robustness of the counterfactual explanations is needed in order to create confidence in their credibility. Our demonstrator provides an interactive environment to show the interplay between the considered aspects of accuracy, privacy, and explainability.
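    A minimal illustrative sketch of this counterfactual construction, not taken from the paper, appears after the result list below.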

  • 14.
    Svantesson, Dan Jerker B.
    et al.
    Greenstein, Stanley
    Stockholm University, Faculty of Law, Department of Law, The Swedish Law and Informatics Research Institute. Stockholm University.
    Internationalisation of Law in the Digital Information Society, 2013. Collection (editor) (Other academic)
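Illustrative sketch for item 13 above (not from the paper): for a linear SVM with weights w and bias b, the minimal L2-distance counterfactual for a point x is its projection just across the decision hyperplane w·x + b = 0. The Python code below, using numpy and scikit-learn, shows that construction on toy data. The Gaussian perturbation of the released weights is only a hypothetical stand-in for the calibrated differential-privacy mechanism the authors describe, and all data, names and parameters are illustrative.

    import numpy as np
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)

    # Toy, non-sensitive stand-in data: two Gaussian blobs.
    X = rng.normal(size=(200, 2)) + np.repeat([[0.0, 0.0], [3.0, 3.0]], 100, axis=0)
    y = np.repeat([0, 1], 100)

    # Train a linear SVM and read off the separating hyperplane w.x + b = 0.
    clf = LinearSVC(C=1.0).fit(X, y)
    w, b = clf.coef_[0], clf.intercept_[0]

    # Hypothetical "private release": perturb the published weights with noise.
    # (A real mechanism would calibrate the noise scale to a privacy budget.)
    w_pub = w + rng.normal(scale=0.1, size=w.shape)
    b_pub = b + rng.normal(scale=0.1)

    def counterfactual(x, w, b, overshoot=1.05):
        """Smallest L2 change that moves x just across the hyperplane w.x + b = 0."""
        signed_dist = (x @ w + b) / (w @ w)
        return x - overshoot * signed_dist * w

    x0 = X[0]
    x_cf = counterfactual(x0, w_pub, b_pub)
    print("prediction for x0:            ", int(x0 @ w_pub + b_pub > 0))
    print("prediction for counterfactual:", int(x_cf @ w_pub + b_pub > 0))
    print("size of change (L2):", float(np.linalg.norm(x_cf - x0)))

Robustness in the abstract's sense would additionally require that the counterfactual remain valid under the noise added to w and b, for example by widening the overshoot margin; the demonstrator described in item 13 explores that interplay interactively.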