Detecting Suicidal Ideation on Social Media Using Large Language Models with Zero-Shot Prompting
Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences. ORCID iD: 0000-0001-9731-1048
Number of Authors: 4
2025 (English)
In: Proceedings of the 11th International Conference on Information and Communication Technologies for Ageing Well and e-Health, ICT4AWE - Volume 1, Science and Technology Publications, Lda, 2025, p. 259-267
Conference paper, Published paper (Refereed)
Abstract [en]

Detecting suicidal ideation in social media posts using Natural Language Processing (NLP) and Machine Learning has become an essential approach for early intervention and providing support to at-risk individuals. The role of data is critical in this process, as the accuracy of NLP models largely depends on the quality and quantity of labeled data available for training. Traditional methods, such as keyword-based approaches and models reliant on manually annotated datasets, face limitations due to the complex and time-consuming nature of data labeling. This shortage of high-quality labeled data creates a significant bottleneck, limiting model fine-tuning. With the recent emergence of Large Language Models (LLMs) in various NLP applications, we leverage their strengths to classify posts expressing suicidal ideation. Specifically, we apply zero-shot prompting with LLMs, enabling effective classification even in data-scarce environments without extensive fine-tuning, thus reducing the dependence on large annotated datasets. Our findings suggest that zero-shot LLMs can match or exceed the performance of traditional approaches such as fine-tuned RoBERTa in identifying suicidal ideation. Although no single LLM consistently outperforms the others across all tasks, their adaptability and effectiveness underscore their potential to detect suicidal thoughts without requiring manually labeled data.
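The zero-shot prompting approach described in the abstract can be sketched as follows. This is an illustrative outline only: the prompt wording, the label set, and the `query_llm` stub are assumptions for demonstration, not the authors' exact setup, and a real deployment would call an actual LLM API in place of the stub.

```python
# Illustrative sketch of zero-shot prompting for binary classification of
# social media posts, per the approach in the abstract. The prompt text,
# labels, and LLM stub are assumptions, not the paper's exact configuration.

LABELS = ("suicidal ideation", "no suicidal ideation")


def build_prompt(post: str) -> str:
    """Build a zero-shot prompt: task instruction plus the post, no labeled examples."""
    return (
        "You are a mental-health text classifier. "
        f"Label the following social media post as exactly one of: {', '.join(LABELS)}.\n"
        f"Post: {post}\n"
        "Label:"
    )


def parse_label(completion: str) -> str:
    """Map a free-text model completion onto the closest known label."""
    text = completion.strip().lower()
    # "no suicidal ideation" contains "suicidal ideation", so check it first.
    if "no suicidal ideation" in text:
        return LABELS[1]
    if "suicidal ideation" in text:
        return LABELS[0]
    return "unknown"


def classify(post: str, query_llm) -> str:
    """Zero-shot classification: one prompt per post, no fine-tuning or training data."""
    return parse_label(query_llm(build_prompt(post)))


if __name__ == "__main__":
    # Stand-in for a real LLM call (e.g. an API client); kept offline here.
    fake_llm = lambda prompt: "No suicidal ideation."
    print(classify("Had a great day hiking with friends.", fake_llm))
```

Because no labeled training data is involved, swapping in a different LLM only means replacing the `query_llm` callable, which is what allows the comparison across models that the abstract reports.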

Place, publisher, year, edition, pages
Science and Technology Publications, Lda, 2025, p. 259-267
Series
International Conference on Information and Communication Technologies for Ageing Well and e-Health, ICT4AWE - Proceedings, E-ISSN 2184-4984
Keywords [en]
Large Language Models, Natural Language Processing, Prompting, Suicidal Ideation Detection
National Category
Other Computer and Information Science
Identifiers
URN: urn:nbn:se:su:diva-243460
DOI: 10.5220/0013283400003938
Scopus ID: 2-s2.0-105003532350
OAI: oai:DiVA.org:su-243460
DiVA, id: diva2:1960926
Conference
11th International Conference on Information and Communication Technologies for Ageing Well and e-Health ICT4AWE, Porto, Portugal, 2025
Available from: 2025-05-26 Created: 2025-05-26 Last updated: 2025-05-26
Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text (Scopus)

Authority records

Henriksson, Aron

Search in DiVA

By author/editor
Henriksson, Aron
By organisation
Department of Computer and Systems Sciences
Other Computer and Information Science

Search outside of DiVA

Google
Google Scholar
