Attacking and Defending the Privacy of Clinical Language Models
Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences. ORCID iD: 0000-0001-8988-8226
2023 (English). Licentiate thesis, comprehensive summary (Other academic).
Abstract [en]

The state-of-the-art methods in natural language processing (NLP) increasingly rely on large pre-trained transformer models. The strength of the models stems from their large number of parameters and the enormous amounts of data used to train them. The datasets are of a scale that makes it difficult, if not impossible, to audit them manually. When unwieldy amounts of potentially sensitive data are used to train large machine learning models, a difficult problem arises: the unintended memorization of the training data.

All datasets—including those based on publicly available data—can contain sensitive information about individuals. When models unintentionally memorize these sensitive data, they become vulnerable to different types of privacy attacks. Very few datasets for NLP can be guaranteed to be free from sensitive data. Thus, to varying degrees, most NLP models are susceptible to privacy leakage. This susceptibility is especially concerning in clinical NLP, where the data typically consist of electronic health records. Unintentionally leaking publicly available data can be problematic, but leaking data from electronic health records is never acceptable from a privacy perspective. At the same time, clinical NLP has great potential to improve the quality and efficiency of healthcare.

This licentiate thesis investigates how these privacy risks can be mitigated using automatic de-identification. This is done by exploring the privacy risks of pre-training using clinical data and then evaluating the impact on the model accuracy of decreasing these risks. A BERT model pre-trained using clinical data is subjected to a training data extraction attack. The same model is also used to evaluate a membership inference attack that has been proposed to quantify the privacy risks associated with masked language models. Then, the impact of automatic de-identification on the performance of BERT models is evaluated for both pre-training and fine-tuning data.
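
Both attacks described above ultimately probe how strongly the model favors text it was trained on. As a purely illustrative sketch of one such signal (not the thesis's implementation; the public model name below is a placeholder for a clinical BERT model), the pseudo-log-likelihood a masked language model assigns to a sentence can be computed like this:

import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Placeholder model; the clinical BERT model studied in the thesis is not assumed here.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def pseudo_log_likelihood(sentence: str) -> float:
    """Mask one token at a time and sum the log-probability the model assigns
    to the original token. Higher scores mean the sequence looks more familiar
    to the model, which is the kind of signal privacy attacks exploit."""
    input_ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    with torch.no_grad():
        for i in range(1, input_ids.size(0) - 1):  # skip [CLS] and [SEP]
            masked = input_ids.clone()
            masked[i] = tokenizer.mask_token_id
            logits = model(input_ids=masked.unsqueeze(0)).logits[0, i]
            total += torch.log_softmax(logits, dim=-1)[input_ids[i]].item()
    return total

# A naive membership inference attack would flag sentences scoring above some
# calibrated threshold as likely members of the training data.
print(pseudo_log_likelihood("The patient was diagnosed with hypertension."))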

The results show that extracting training data from BERT models is non-trivial and suggest that the risks can be further decreased by automatically de-identifying the training data. Automatic de-identification is found to preserve the utility of the data used for pre-training and fine-tuning BERT models, resulting in no reduction in performance compared to models trained using unaltered data. However, we also find that the current state-of-the-art membership inference attacks are unable to quantify the privacy benefits of automatic de-identification. The results show that automatic de-identification reduces the privacy risks of using sensitive data for NLP without harming the utility of the data, but that these privacy benefits may be difficult to quantify.

Abstract [sv]

Research in natural language processing increasingly relies on large pre-trained transformer models. These powerful language models consist of a large number of parameters that are trained by processing enormous amounts of data. The training data are typically of such a scale that auditing them manually is difficult, if not impossible. When unwieldy amounts of potentially sensitive data are used to train large language models, a hard-to-manage phenomenon arises: unintended memorization.

Very few data sources are entirely free from sensitive personal information. Since large language models have been shown to memorize details of their training data, they are vulnerable to privacy attacks. This vulnerability is especially concerning in clinical natural language processing, where the data typically consist of electronic health records. Disclosing personal information is problematic even when it is publicly available, but leaking information from an individual's health records is an unacceptable violation of privacy. At the same time, clinical NLP has great potential to both improve the quality and increase the efficiency of healthcare.

This licentiate thesis investigates how these privacy risks can be reduced using automatic de-identification. This is done by first exploring the risks of pre-training language models on clinical data and then comparing how the reliability and performance of the models are affected when these risks are reduced. A BERT model pre-trained on clinical data is subjected to an attack aimed at extracting training data. The same model is also used to evaluate a proposed method for quantifying the privacy risks of masked language models, based on their susceptibility to membership inference attacks. Finally, the thesis evaluates how useful automatically de-identified data are for pre-training BERT models and for training them to solve specific NLP tasks.

The results show that extracting training data from language models is non-trivial. The risks that nevertheless remain can be reduced by automatically de-identifying the models' training data. Furthermore, the results show that language models trained on automatically de-identified data perform as well as those trained on sensitive data, both during pre-training and when trained for specific tasks. At the same time, the membership inference experiments show that current methods do not capture the privacy benefits of automatically de-identifying training data. In summary, this thesis shows that automatic de-identification can be used to reduce the privacy risks of using sensitive data while preserving the data's utility. However, established methods for quantifying these privacy gains are still lacking.

Place, publisher, year, edition, pages
Stockholm: Department of Computer and Systems Sciences, Stockholm University, 2023.
Series
Report Series / Department of Computer & Systems Sciences, ISSN 1101-8526 ; 23-004
National Category
Language Technology (Computational Linguistics); Computer Sciences
Research subject
Computer and Systems Sciences
Identifiers
URN: urn:nbn:se:su:diva-216693
OAI: oai:DiVA.org:su-216693
DiVA id: diva2:1752845
Presentation
2023-05-15, M20, Borgarfjordsgatan 12, Kista, 10:00 (English)
Available from: 2023-04-25. Created: 2023-04-24. Last updated: 2023-04-25. Bibliographically approved.
List of papers
1. Are Clinical BERT Models Privacy Preserving? The Difficulty of Extracting Patient-Condition Associations
2021 (English). In: Proceedings of the AAAI 2021 Fall Symposium on Human Partnership with Medical AI: Design, Operationalization, and Ethics (AAAI-HUMAN 2021) / [ed] Thomas E. Doyle; Aisling Kelliher; Reza Samavi; Barbara Barry; Steven Yule; Sarah Parker; Michael Noseworthy; Qian Yang, 2021. Conference paper, Published paper (Refereed).
Abstract [en]

Language models may be trained on data that contain personal information, such as clinical data. Such sensitive data must not leak for privacy reasons. This article explores whether BERT models trained on clinical data are susceptible to training data extraction attacks. Multiple large sets of sentences generated from the model with top-k sampling and nucleus sampling are studied. The sentences are examined to determine the degree to which they contain information associating patients with their conditions. The sentence sets are then compared to determine if there is a correlation between the degree of privacy leaked and the linguistic quality attained by each generation technique. We find that the relationship between linguistic quality and privacy leakage is weak and that the risk of a successful training data extraction attack on a BERT-based model is small.
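
To make the two generation strategies named above concrete, the sketch below shows how top-k and nucleus (top-p) filtering restrict a token distribution before sampling. This is a generic illustration, not the paper's generation pipeline:

import torch

def sample_token(logits: torch.Tensor, top_k: int = 0, top_p: float = 1.0) -> int:
    """Sample one token id from a [vocab_size] logits vector, optionally keeping
    only the k most likely tokens (top-k) or the smallest set of tokens whose
    cumulative probability exceeds p (nucleus sampling)."""
    logits = logits.clone()
    if top_k > 0:
        kth_best = torch.topk(logits, top_k).values[-1]
        logits[logits < kth_best] = float("-inf")
    if top_p < 1.0:
        sorted_logits, sorted_idx = torch.sort(logits, descending=True)
        cumulative = torch.softmax(sorted_logits, dim=-1).cumsum(dim=-1)
        drop = cumulative > top_p
        drop[1:] = drop[:-1].clone()  # shift so the first token over the threshold is kept
        drop[0] = False
        logits[sorted_idx[drop]] = float("-inf")
    probs = torch.softmax(logits, dim=-1)
    return int(torch.multinomial(probs, num_samples=1).item())

# Repeatedly filling [MASK] positions with tokens drawn this way is one way to
# coax free text out of a masked language model for an extraction attack.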

Series
CEUR Workshop Proceedings, E-ISSN 1613-0073 ; 3068
Keywords
nlp, natural language processing, privacy preserving machine learning, language models, transformers, natural language generation
National Category
Language Technology (Computational Linguistics)
Research subject
Computer and Systems Sciences
Identifiers
urn:nbn:se:su:diva-201651 (URN)
Conference
AAAI 2021 Fall Symposium on Human Partnership with Medical AI: Design, Operationalization, and Ethics (AAAI-HUMAN 2021), Virtual Event, November 4-6, 2021
Available from: 2022-01-31. Created: 2022-01-31. Last updated: 2023-04-24. Bibliographically approved.
2. Using Membership Inference Attacks to Evaluate Privacy-Preserving Language Modeling Fails for Pseudonymizing Data
2023 (English). In: 24th Nordic Conference on Computational Linguistics (NoDaLiDa), 2023, p. 318-323. Conference paper, Published paper (Refereed).
Abstract [en]

Large pre-trained language models dominate the current state of the art for many natural language processing applications, including the field of clinical NLP. Several studies have found that these models can be susceptible to privacy attacks that are unacceptable in the clinical domain, where personally identifiable information (PII) must not be exposed.

However, there is no consensus regarding how to quantify the privacy risks of different models. One prominent suggestion is to quantify these risks using membership inference attacks. In this study, we show that a state-of-the-art membership inference attack on a clinical BERT model fails to detect the privacy benefits from pseudonymizing data. This suggests that such attacks may be inadequate for evaluating token-level privacy preservation of PIIs.
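
As a hypothetical illustration of how such an attack is typically scored (the numbers below are synthetic placeholders, not results from the paper), one can compare per-example attack scores for known members and non-members and summarize their separability with ROC AUC:

import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic attack scores (e.g., negative per-example loss) for training members
# and held-out non-members, for two hypothetical models under comparison.
scores = {
    "original model": (rng.normal(1.0, 1.0, 500), rng.normal(0.0, 1.0, 500)),
    "pseudonymized model": (rng.normal(0.9, 1.0, 500), rng.normal(0.0, 1.0, 500)),
}

for name, (members, non_members) in scores.items():
    y_true = np.concatenate([np.ones(len(members)), np.zeros(len(non_members))])
    y_score = np.concatenate([members, non_members])
    print(f"{name}: attack AUC = {roc_auc_score(y_true, y_score):.3f} (0.5 = chance)")

# If pseudonymization helped against this attack, the second AUC should drop
# noticeably; the finding reported above is that it does not.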

Series
Northern European Association for Language Technology (NEALT), ISSN 1736-8197, E-ISSN 1736-6305 ; 52
National Category
Language Technology (Computational Linguistics)
Research subject
Computer and Systems Sciences
Identifiers
urn:nbn:se:su:diva-216681 (URN)
Conference
24th Nordic Conference on Computational Linguistics (NoDaLiDa), 2023
Available from: 2023-04-24. Created: 2023-04-24. Last updated: 2023-10-04. Bibliographically approved.
3. Utility Preservation of Clinical Text After De-Identification
2022 (English). In: Proceedings of the 21st Workshop on Biomedical Language Processing / [ed] Dina Demner-Fushman; Kevin Bretonnel Cohen; Sophia Ananiadou; Junichi Tsujii, Association for Computational Linguistics, 2022, p. 383-388. Conference paper, Published paper (Refereed).
Abstract [en]

Electronic health records contain valuable information about symptoms, diagnosis, treatment and outcomes of the treatments of individual patients. However, the records may also contain information that can reveal the identity of the patients. Removing these identifiers - the Protected Health Information (PHI) - can protect the identity of the patient. Automatic de-identification is a process which employs machine learning techniques to detect and remove PHI. However, automatic techniques are imperfect in their precision and introduce noise into the data. This study examines the impact of this noise on the utility of Swedish de-identified clinical data by using human evaluators and by training and testing BERT models. Our results indicate that de-identification does not harm the utility for clinical NLP and that human evaluators are less sensitive to noise from de-identification than expected.
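
As a minimal, hypothetical sketch of the replacement step (the span format and example are illustrative; the study's actual de-identifier is not shown), detected PHI spans can be swapped for placeholder tags like this:

from typing import List, Tuple

def mask_phi(text: str, spans: List[Tuple[int, int, str]]) -> str:
    """Replace each detected (start, end, label) PHI span with a tag such as [NAME]."""
    out, prev = [], 0
    for start, end, label in sorted(spans):
        out.append(text[prev:start])
        out.append(f"[{label}]")
        prev = end
    out.append(text[prev:])
    return "".join(out)

note = "Anna Svensson, born 1956-03-02, was admitted to Karolinska on Monday."
detected = [(0, 13, "NAME"), (20, 30, "DATE"), (48, 58, "LOCATION")]  # illustrative detector output
print(mask_phi(note, detected))
# -> "[NAME], born [DATE], was admitted to [LOCATION] on Monday."

Because the detector is imperfect, some spans are missed and some non-PHI tokens are masked by mistake; that imprecision is the noise whose effect on utility the paper measures.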

Place, publisher, year, edition, pages
Association for Computational Linguistics, 2022
National Category
Language Technology (Computational Linguistics)
Research subject
Computer and Systems Sciences
Identifiers
urn:nbn:se:su:diva-207402 (URN)
10.18653/v1/2022.bionlp-1.38 (DOI)
978-1-955917-27-8 (ISBN)
Conference
60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland, 22-27 May 2022
Available from: 2022-07-15. Created: 2022-07-15. Last updated: 2023-04-24. Bibliographically approved.
4. Downstream Task Performance of BERT Models Pre-Trained Using Automatically De-Identified Clinical Data
2022 (English). In: Proceedings of the 13th Conference on Language Resources and Evaluation (LREC 2022), European Language Resources Association, 2022, p. 4245-4252. Conference paper, Published paper (Refereed).
Abstract [en]

Automatic de-identification is a cost-effective and straightforward way of removing large amounts of personally identifiable information from large and sensitive corpora. However, these systems also introduce errors into datasets due to their imperfect precision. These corruptions of the data may negatively impact the utility of the de-identified dataset. This paper de-identifies a very large clinical corpus in Swedish either by removing entire sentences containing sensitive data or by replacing sensitive words with realistic surrogates. These two datasets are used to perform domain adaptation of a general Swedish BERT model. The impact of the de-identification techniques is assessed by training and evaluating the models using six clinical downstream tasks. The results are then compared to a similar BERT model domain-adapted using an unaltered version of the clinical corpus. The results show that using an automatically de-identified corpus for domain adaptation does not negatively impact downstream performance. We argue that automatic de-identification is an efficient way of reducing the privacy risks of domain-adapted models and that the models created in this paper should be safe to distribute to other academic researchers.
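
A small, hypothetical sketch of the two corpus variants described above (the detector output and surrogate table are illustrative only): one drops every sentence containing detected PHI, the other swaps detected PHI for realistic surrogates.

from typing import Dict, List, Tuple

Sentence = Tuple[str, List[Tuple[int, int, str]]]  # (text, detected PHI spans)

def remove_sensitive_sentences(corpus: List[Sentence]) -> List[str]:
    """Keep only sentences in which the de-identifier found no PHI."""
    return [text for text, spans in corpus if not spans]

def replace_with_surrogates(corpus: List[Sentence], surrogates: Dict[str, str]) -> List[str]:
    """Replace each detected span with a realistic surrogate for its PHI class."""
    result = []
    for text, spans in corpus:
        out, prev = [], 0
        for start, end, label in sorted(spans):
            out.append(text[prev:start])
            out.append(surrogates.get(label, "[REDACTED]"))
            prev = end
        out.append(text[prev:])
        result.append("".join(out))
    return result

corpus = [
    ("Anna Svensson was discharged on 2019-05-04.", [(0, 13, "NAME"), (32, 42, "DATE")]),
    ("The patient reports chest pain.", []),
]
print(remove_sensitive_sentences(corpus))
print(replace_with_surrogates(corpus, {"NAME": "Karin Lund", "DATE": "2018-11-20"}))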

Place, publisher, year, edition, pages
European Language Resources Association, 2022
Keywords
Privacy-preserving machine learning, pseudonymization, de-identification, Swedish clinical text, pre-trained language models, BERT, downstream tasks, NER, multi-label classification
National Category
Language Technology (Computational Linguistics)
Research subject
Computer and Systems Sciences
Identifiers
urn:nbn:se:su:diva-207395 (URN)
Conference
13th Conference on Language Resources and Evaluation (LREC 2022), Marseille, France, 21-23 June 2022
Available from: 2022-07-15. Created: 2022-07-15. Last updated: 2023-04-24.

Open Access in DiVA

fulltext (FULLTEXT01.pdf, 3493 kB, application/pdf)
Checksum (SHA-512): cfd92de91923c703604c388407e5d0f02a2894f4ad4c319b9c73fe5bae9fbc6c2cd4271fd54508b216d74cd983faf02cad64d9dc5031919982031b726bc560ec

Authority records

Vakili, Thomas
