End-to-End Pseudonymization of Fine-Tuned Clinical BERT Models: Privacy Preservation with Maintained Data Utility
Vakili, Thomas (Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences). ORCID iD: 0000-0001-8988-8226
Henriksson, Aron (Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences). ORCID iD: 0000-0001-9731-1048
Dalianis, Hercules (Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences). ORCID iD: 0000-0003-0165-9926
Number of Authors: 3
2024 (English)
In: BMC Medical Informatics and Decision Making, E-ISSN 1472-6947, article id 162
Article in journal (Refereed), Published
Abstract [en]

Many state-of-the-art results in natural language processing (NLP) rely on large pre-trained language models (PLMs). These models contain large numbers of parameters that are tuned using vast amounts of training data. Together, these factors cause the models to memorize parts of their training data, making them vulnerable to various privacy attacks. This is cause for concern, especially when such models are applied in the clinical domain, where data are highly sensitive.

One privacy-preserving technique that aims to mitigate these problems is training data pseudonymization. This technique automatically identifies and replaces sensitive entities with realistic but non-sensitive surrogates. Pseudonymization has yielded promising results in previous studies. However, no previous study has applied pseudonymization to both the pre-training data of PLMs and the fine-tuning data used to solve clinical NLP tasks.
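
To make the technique concrete, here is a minimal sketch of surrogate replacement, assuming entity spans from some upstream NER step. The category labels, surrogate lists, and example spans are hypothetical placeholders, not the pipeline used in this study.

```python
# Minimal sketch of surrogate-based pseudonymization. Assumes an upstream
# NER step has produced (start, end, label) character spans; the labels and
# surrogate lists below are illustrative, not the study's actual resources.
import random

SURROGATES = {
    "PERSON": ["Anna Lindqvist", "Erik Berg"],
    "LOCATION": ["Uppsala", "Malmö"],
    "DATE": ["2019-03-14", "2021-08-02"],
}

def pseudonymize(text: str, entities: list[tuple[int, int, str]]) -> str:
    """Replace each detected sensitive span with a realistic surrogate of
    the same category, working right-to-left so offsets stay valid."""
    for start, end, label in sorted(entities, reverse=True):
        surrogate = random.choice(SURROGATES.get(label, ["[REDACTED]"]))
        text = text[:start] + surrogate + text[end:]
    return text

# Example spans as a hypothetical NER model might emit them.
note = "Sven Svensson visited the clinic in Stockholm on 2015-06-01."
spans = [(0, 13, "PERSON"), (36, 45, "LOCATION"), (49, 59, "DATE")]
print(pseudonymize(note, spans))
```

Replacing right-to-left keeps earlier character offsets valid even when a surrogate is shorter or longer than the span it replaces.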

This study evaluates the effects of end-to-end pseudonymization of clinical BERT models on predictive performance across five clinical NLP tasks, compared to pre-training and fine-tuning on unaltered sensitive data. A large number of statistical tests reveal minimal harm to performance when using pseudonymized fine-tuning data, and no deterioration from end-to-end pseudonymization of both pre-training and fine-tuning data. These results demonstrate that pseudonymizing training data to reduce privacy risks can be done without harming data utility for training PLMs.
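
The abstract does not name the specific tests, so the following is only a sketch of the general shape of such a comparison: paired task scores from repeated fine-tuning runs on sensitive versus pseudonymized data. The scores and the choice of the Wilcoxon signed-rank test are illustrative assumptions, not the paper's setup or results.

```python
# Sketch of a paired significance test over fine-tuning runs. The F1 scores
# below are made-up placeholders, not results from the paper.
from scipy.stats import wilcoxon

f1_sensitive = [0.891, 0.887, 0.894, 0.889, 0.892]      # runs on unaltered data
f1_pseudonymized = [0.889, 0.885, 0.893, 0.890, 0.891]  # runs on pseudonymized data

stat, p = wilcoxon(f1_sensitive, f1_pseudonymized)
print(f"Wilcoxon signed-rank: statistic={stat:.3f}, p={p:.3f}")
# A large p-value would be consistent with the paper's finding of minimal
# performance harm from pseudonymized fine-tuning data.
```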

Place, publisher, year, edition, pages
2024. article id 162
Keywords [en]
Natural language processing, language models, BERT, electronic health records, clinical text, de-identification, pseudonymization, privacy preservation, Swedish
National Category
Natural Language Processing
Research subject
Computer and Systems Sciences
Identifiers
URN: urn:nbn:se:su:diva-232099
DOI: 10.1186/s12911-024-02546-8
PubMedID: 38915012
Scopus ID: 2-s2.0-85196757461
OAI: oai:DiVA.org:su-232099
DiVA, id: diva2:1885711
Available from: 2024-07-24. Created: 2024-07-24. Last updated: 2025-11-27. Bibliographically approved.
In thesis
1. Preserving the Privacy of Language Models: Experiments in Clinical NLP
2025 (English)
Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

State-of-the-art methods in natural language processing (NLP) increasingly rely on large pre-trained language models. The strength of these models stems from their large number of parameters and the enormous amounts of data used to train them. The datasets are of a scale that makes it difficult, if not impossible, to audit them manually. When unwieldy amounts of potentially sensitive data are used to train large models, an important problem arises: unwelcome memorization of the training data.

All datasets, including those based on publicly available data, can contain personally identifiable information (PII). When models memorize sensitive data, they become vulnerable to privacy attacks. Very few datasets for NLP can be guaranteed to be free of sensitive data. Consequently, most NLP models are susceptible to privacy leakage. This susceptibility is especially concerning in clinical NLP, where the data typically consist of electronic health records (EHRs). Leaking data from EHRs is never acceptable from a privacy perspective. This doctoral thesis investigates the privacy risks of using sensitive data and how these risks can be mitigated while maintaining the data's utility for training.

A BERT model pre-trained using clinical data is subjected to a training data extraction attack. The same model is used to evaluate a membership inference attack that has been proposed to quantify the privacy risks of masked language models. Multiple experiments assess the performance gains from adapting pre-trained models to the clinical domain. Then, the impact of automatic de-identification on the performance of BERT models is evaluated for both pre-training and fine-tuning data. The final experiments of the thesis explore how synthetic training corpora can be generated while limiting the use of sensitive data and working under computational constraints. The quality of these corpora, and the factors affecting their utility, are explored by training and evaluating BERT models.
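
One common membership-inference signal for masked language models is the pseudo-log-likelihood, obtained by masking one token at a time and scoring the model's probability of the original token. The sketch below illustrates that signal; the model name and scoring details are assumptions, not the thesis's exact attack setup.

```python
# Hedged sketch: pseudo-log-likelihood scoring for a masked LM. Texts that
# score above some calibrated threshold would be flagged as likely training
# members. Model name is illustrative, not the clinical model in the thesis.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

def pseudo_log_likelihood(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt")["input_ids"][0]
    total = 0.0
    for i in range(1, len(ids) - 1):  # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total / (len(ids) - 2)  # average over scored tokens

print(pseudo_log_likelihood("The patient was discharged yesterday."))
```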

The results show that domain adaptation leads to significantly better performance on clinical NLP tasks. They also show that extracting training data from BERT models is difficult and suggest that the risks can be further decreased by automatically de-identifying the training data. Automatic de-identification is found to preserve the utility of the data used for pre-training and fine-tuning BERT models. However, we also find that contemporary membership inference attacks are unable to quantify the privacy benefits of this technique. Similarly, high-quality synthetic corpora can be generated using limited resources, but further research is needed to determine the privacy gains from using them. The results show that automatic de-identification and training data synthesis reduce the privacy risks of using sensitive data for NLP while preserving the utility of the data. However, these benefits are difficult to quantify, and there are no rigorous methods for comparing different privacy-preserving techniques.

Abstract [sv]

State-of-the-art research in natural language processing relies heavily on large pre-trained language models. Their strength comes from their large number of parameters and the enormous amounts of data used to train them. Their training datasets are so large that reviewing them manually is difficult, if not impossible. When unwieldy amounts of potentially sensitive data are used to train large models, a troublesome problem arises: unwanted memorization of the training data.

All datasets, including publicly available ones, can contain personally identifiable information (PII). When models memorize such sensitive data, they become vulnerable to various privacy-violating attacks. Very few datasets can be guaranteed to be free of PII, and consequently most NLP models are vulnerable to attack. These vulnerabilities are particularly troubling when NLP is applied in the medical domain, where the training data often consist of patient records, data that must never leak. This doctoral thesis investigates the privacy risks that arise from using sensitive data and how these risks can be addressed without affecting the data's utility.

A BERT model trained on patient records is subjected to a training data extraction attack. The same model and data are subjected to a membership inference attack, which has previously been proposed as a method for assessing the privacy risks of masked language models. Several experiments examine the benefits of adapting models to the clinical domain using medical data. Further experiments then examine whether automatically de-identified data are suitable for pre-training and fine-tuning language models. The final experiments of the thesis explore how the use of sensitive data can be limited when producing synthetic training data. The quality of these data, and the factors that affect their utility, are assessed by training and evaluating BERT models.

The results clearly show that domain adaptation leads to better-performing models for clinical applications. They also show that the risk of training data being extracted from BERT models is small, and that the remaining risks can be limited further by automatically de-identifying the models' training data. Automatic de-identification is also shown to preserve the utility of the datasets when they are used to pre-train and fine-tune BERT models. However, the privacy gains of this method prove difficult to quantify, and membership inference attacks fail to measure the benefit of this privacy-preserving technique. The experiments with synthetic data show that high-quality corpora can be produced even with sparing use of sensitive data and limited computational capacity. The thesis shows that automatic de-identification and data synthesis can reduce the risks of using sensitive data, while the data retain their utility, but that reliable methods for measuring and comparing different privacy-preserving techniques are lacking.

Place, publisher, year, edition, pages
Stockholm: Department of Computer and Systems Sciences, Stockholm University, 2025. p. 126
Series
Report Series / Department of Computer & Systems Sciences, ISSN 1101-8526 ; 26-001
Keywords
natural language processing, privacy, membership inference, training data extraction, automatic de-identification, synthetic data, named entity recognition, domain adaptation, large language models
National Category
Natural Language Processing
Research subject
Computer and Systems Sciences
Identifiers
URN: urn:nbn:se:su:diva-250015
ISBN: 978-91-8107-462-8
ISBN: 978-91-8107-463-5
Public defence
2026-01-13, Lilla hörsalen, NOD-huset, Borgarfjordsgatan 12, Kista, 13:30 (English)
Available from: 2025-12-17. Created: 2025-11-27. Last updated: 2025-12-10. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
PubMed
Scopus

Authority records

Vakili, Thomas; Henriksson, Aron; Dalianis, Hercules
