Document Type

Article

Source of Publication

Industrial Management and Data Systems

Publication Date

12-3-2025

Abstract

Purpose – This study seeks to bridge the gap between users’ multidimensional needs and the single-task capabilities of existing Mental Health Question Answering (MHQA) systems by tackling the underexplored challenge of jointly understanding medical informational needs and emotional support needs within complex consumer mental health inquiries.

Design/methodology/approach – Grounded in Rhetorical Structure Theory (RST), the proposed Multi-Needs and Context Recognition (MNCR) framework decomposes the mental health question understanding task into four interrelated subtasks: Medical Needs Recognition (MNR), Medical Needs-related Context Extraction (MNCE), Emotional Needs Recognition (ENR) and Emotional Needs-related Context Extraction (ENCE). A new benchmark dataset, MHQ-MedEmo, was constructed through multi-layered semantic annotation of 703 clinical queries sourced from real-world online health consultation platforms. The performance of six base LLMs and two fine-tuned LLMs was evaluated on precision, recall, F1 score and latency.

Findings – Dense, fine-tuned models strike the optimal balance between accuracy and latency for end-to-end MNCR tasks; subtask sensitivity varies markedly across model architectures; fine-tuning consistently enhances overall performance; the joint-prompt strategy consistently improves both effectiveness and efficiency over the separate-prompt strategy; and model architecture and scale significantly influence performance on MNCR subtasks.

Originality/value – This study introduces MNCR and MHQ-MedEmo, the first framework and benchmark for simultaneously understanding medical informational needs and emotional support needs in mental health questions. Comparative evaluation of eight LLMs reveals distinct model-specific strengths, guiding future architectures that balance accuracy and latency and offering concrete guidance for healthcare organizations seeking to deploy LLM-based MHQA solutions in practice.

ISSN

0263-5577

Publisher

Emerald

First Page

1

Last Page

26

Disciplines

Computer Sciences | Medicine and Health Sciences

Keywords

Large language models, Mental health, Rhetorical structure theory

Scopus ID

105025151085

Creative Commons License

Creative Commons Attribution 4.0 International License
This work is licensed under a Creative Commons Attribution 4.0 International License.

Indexed in Scopus

yes

Open Access

yes

Open Access Type

Hybrid: This publication is openly available in a subscription-based journal/series
