Review
Copyright ©The Author(s) 2023. Published by Baishideng Publishing Group Inc. All rights reserved.
World J Psychiatry. Jan 19, 2023; 13(1): 1-14
Published online Jan 19, 2023. doi: 10.5498/wjp.v13.i1.1
Emotion recognition support system: Where physicians and psychiatrists meet linguists and data engineers
Peyman Adibi, Simindokht Kalani, Sayed Jalal Zahabi, Homa Asadi, Mohsen Bakhtiar, Mohammad Reza Heidarpour, Hamidreza Roohafza, Hassan Shahoon, Mohammad Amouzadeh
Peyman Adibi, Hassan Shahoon, Isfahan Gastroenterology and Hepatology Research Center, Isfahan University of Medical Sciences, Isfahan 8174673461, Iran
Simindokht Kalani, Department of Psychology, University of Isfahan, Isfahan 8174673441, Iran
Sayed Jalal Zahabi, Mohammad Reza Heidarpour, Department of Electrical and Computer Engineering, Isfahan University of Technology, Isfahan 8415683111, Iran
Homa Asadi, Mohammad Amouzadeh, Department of Linguistics, University of Isfahan, Isfahan 8174673441, Iran
Mohsen Bakhtiar, Department of Linguistics, Ferdowsi University of Mashhad, Mashhad 9177948974, Iran
Hamidreza Roohafza, Department of Psychocardiology, Cardiac Rehabilitation Research Center, Cardiovascular Research Institute (WHO-Collaborating Center), Isfahan University of Medical Sciences, Isfahan 8187698191, Iran
Mohammad Amouzadeh, School of International Studies, Sun Yat-sen University, Zhuhai 519082, Guangdong Province, China
Author contributions: Adibi P, Kalani S, Zahabi SJ, Asadi H, Bakhtiar M, Heidarpour MR, Roohafza H, Shahoon H, and Amouzadeh M all contributed to the conceptualization, identification of relevant studies, and framing of the results; Kalani S and Roohafza H wrote the psychology-related part of the paper; Zahabi SJ and Heidarpour MR wrote the data science-related part of the paper; Asadi H, Bakhtiar M, and Amouzadeh M wrote the phonetics-linguistic, cognitive-linguistic, and semantic-linguistic parts of the paper, respectively; Adibi P, Roohafza H, and Shahoon H supervised the study.
Conflict-of-interest statement: All the authors report no relevant conflicts of interest for this article.
Open-Access: This article is an open-access article that was selected by an in-house editor and fully peer-reviewed by external reviewers. It is distributed in accordance with the Creative Commons Attribution NonCommercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: https://creativecommons.org/Licenses/by-nc/4.0/
Corresponding author: Sayed Jalal Zahabi, PhD, Assistant Professor, Department of Electrical and Computer Engineering, Isfahan University of Technology, Isfahan 8415683111, Iran. zahabi@iut.ac.ir
Received: June 19, 2022
Peer-review started: June 19, 2022
First decision: September 4, 2022
Revised: September 18, 2022
Accepted: December 21, 2022
Article in press: December 21, 2022
Published online: January 19, 2023
Processing time: 207 Days and 17.4 Hours
Abstract

Understanding patients’ emotional states is an important factor in daily medical diagnosis and treatment. However, patients usually avoid expressing their emotions when describing their somatic symptoms and complaints to their non-psychiatrist doctor. Clinicians, in turn, typically lack the expertise (or time) required to mine the various verbal and non-verbal emotional signals of their patients. As a result, in many cases an emotion recognition barrier stands between the clinician and the patient, making all patients seem alike except for their differing somatic symptoms. In this review, we identify and combine the approaches of three major disciplines (psychology, linguistics, and data science) to detecting emotions from verbal communication, and we propose an integrated solution for emotion recognition support. Such a platform could provide the clinician with emotional cues and indices derived from verbal communication at consultation time.

Keywords: Physician-Patient relations; Emotions; Verbal behavior; Linguistics; Psychology; Data science

Core Tip: In the context of doctor-patient interactions, we focus on patient speech emotion recognition as a multifaceted problem viewed from three main perspectives: Psychology/psychiatry, linguistics, and data science. Reviewing the key elements and approaches within each of these perspectives, and surveying the current literature on them, we identify the lack of a systematic, comprehensive collaboration among the three disciplines. Motivated by the necessity of such multidisciplinary collaboration, we propose an integrated platform for patient emotion recognition as a collaborative framework for clinical decision support.