GNN as Explainable Tool with Heterogeneous and Homogeneous Data for Medical Claim Validation

Document Type

Conference Proceeding

Source of Publication

2024 International Conference on Computational Intelligence and Network Systems (CINS)

Publication Date

11-29-2024

Abstract

This paper investigates the explainability of Graph Neural Networks (GNNs) in detecting fraudulent medical insurance claims, a critical challenge in the healthcare industry. Given the complexity of healthcare data and the high stakes involved in fraud detection, understanding model decisions is essential. We apply two explainability techniques, GNNExplainer and PGExplainer, to two GNN architectures: HINormer, a heterogeneous GNN, and RE-GraphSAGE, a modified homogeneous GNN adapted for heterogeneous data. Both models achieved high classification accuracy (84% and 83%, respectively) and served as a basis for evaluating the reliability and practicality of explainability techniques in healthcare fraud detection, marking a pioneering effort in applying these methods to heterogeneous GNNs in medical claims. Using real-world data from the MENA region, we assess the ability of these explainers to provide meaningful interpretations of model decisions. Real-case scenarios reviewed by medical experts highlight that while these techniques can sometimes offer valid justifications, further development is required to ensure consistent reliability in practical settings. This work underscores the critical need for advanced explainability tools to foster trust and transparency in high-stakes medical decision-making.
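The record does not include code, so the following is only a minimal illustrative sketch of the kind of setup the abstract describes: attaching GNNExplainer to a trained GNN node classifier via PyTorch Geometric's explain API. The toy model, random claim graph, and node index below are hypothetical placeholders, not artifacts from the paper, and the paper's actual architectures (HINormer, RE-GraphSAGE) and data are not reproduced here.

import torch
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv
from torch_geometric.explain import Explainer, GNNExplainer

# Hypothetical stand-in for a trained claim classifier (NOT the paper's model).
class ClaimGNN(torch.nn.Module):
    def __init__(self, in_dim, hid=16):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hid)
        self.conv2 = GCNConv(hid, 1)  # one logit: fraudulent vs. legitimate
    def forward(self, x, edge_index):
        x = self.conv1(x, edge_index).relu()
        return self.conv2(x, edge_index)

# Placeholder graph: 50 dummy claim nodes with 8 features each.
data = Data(x=torch.randn(50, 8),
            edge_index=torch.randint(0, 50, (2, 200)))
model = ClaimGNN(in_dim=8)            # in practice, a trained model

explainer = Explainer(
    model=model,
    algorithm=GNNExplainer(epochs=200),
    explanation_type='model',         # explain the model's own prediction
    node_mask_type='attributes',      # learn an importance mask over features
    edge_mask_type='object',          # learn an importance mask over edges
    model_config=dict(mode='binary_classification',
                      task_level='node', return_type='raw'),
)

# Explain the prediction for one (hypothetical) claim node.
explanation = explainer(data.x, data.edge_index, index=10)
print(explanation.edge_mask.shape)    # per-edge importance scores
print(explanation.node_mask.shape)    # per-feature importance scores

In a real claim-validation pipeline, the learned edge and node masks are what a medical reviewer would inspect to judge whether the model's justification for flagging a claim is clinically plausible, which is the evaluation the abstract describes.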

ISBN

979-8-3315-0410-6

Publisher

IEEE

Volume

00

First Page

1

Last Page

6

Disciplines

Computer Sciences

Keywords

Graph Neural Networks, medical claim validation, explainability, fraud detection, healthcare data

Indexed in Scopus

no

Open Access

no
