Deep convolution network based emotion analysis towards mental health care
Facial expressions play an important role in communication, conveying information about an individual's emotional state. Research suggests that automatic facial expression recognition is a promising avenue of enquiry in mental healthcare, as facial expressions can also reflect an individual's mental state. To develop user-friendly, low-cost and effective facial expression analysis systems for mental health care, this paper presents a novel deep convolution network based emotion analysis framework to support mental state detection and diagnosis. The proposed system processes facial images and interprets the temporal evolution of emotions through a new solution in which deep features are extracted from the fully connected layer 6 (FC6) of AlexNet, with a standard Linear Discriminant Analysis (LDA) classifier used to obtain the final classification outcome. The system is tested against five benchmark databases: JAFFE, KDEF, CK+, and two databases of images obtained 'in the wild', FER2013 and AffectNet. Compared with other state-of-the-art methods, the proposed method achieves higher overall facial expression recognition accuracy. Additionally, when compared to state-of-the-art deep learning architectures such as VGG16, GoogLeNet, ResNet and AlexNet, the proposed method demonstrates better efficiency and lower hardware requirements. The experiments presented in this paper show that the proposed method outperforms the other methods in terms of accuracy and efficiency, suggesting that it could act as a smart, low-cost, user-friendly cognitive aid for detecting, monitoring, and diagnosing a patient's mental health through automatic facial expression analysis.
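The pipeline described in the abstract (4096-dimensional FC6 features from a pretrained AlexNet fed to an LDA classifier) can be sketched as follows. This is a minimal illustration assuming a torchvision pretrained AlexNet and scikit-learn's LDA; variable names, preprocessing choices, and the placeholder training data are illustrative assumptions, not the authors' released code.

```python
# Sketch: extract FC6 features from AlexNet, classify emotions with LDA.
# Assumes torchvision and scikit-learn; not the authors' exact implementation.
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
alexnet.eval()  # disable dropout for deterministic features

# AlexNet's classifier is [Dropout, Linear(9216->4096) = FC6, ReLU, ...];
# truncate the network right after the FC6 linear layer.
fc6_extractor = torch.nn.Sequential(
    alexnet.features,
    alexnet.avgpool,
    torch.nn.Flatten(),
    alexnet.classifier[0],  # Dropout (identity in eval mode)
    alexnet.classifier[1],  # FC6: outputs 4096-d deep features
)

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_fc6(images):
    """Return 4096-d FC6 features for a list of PIL face images."""
    batch = torch.stack([preprocess(img) for img in images])
    with torch.no_grad():
        return fc6_extractor(batch).numpy()

# Fit LDA on features from labelled training faces, then predict emotions.
# train_images / train_labels / test_images are hypothetical placeholders
# standing in for a benchmark database such as JAFFE or CK+.
# X_train = extract_fc6(train_images)
# lda = LinearDiscriminantAnalysis()
# lda.fit(X_train, train_labels)
# predictions = lda.predict(extract_fc6(test_images))
```

Using a fixed pretrained feature extractor with a lightweight linear classifier, rather than fine-tuning the whole network, is consistent with the efficiency and low hardware requirements claimed in the abstract: only the LDA step is trained on the target data.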
Funding
The research work is funded by Strathclyde's Strategic Technology Partnership (STP) Programme with CAPITA (2016-2019). We thank Dr Neil Mackin and Miss Angela Anderson for their support. The contents of the paper are those of the authors alone and do not represent the views of CAPITA plc. Huiyu Zhou was partly funded by UK EPSRC under Grant EP/N011074/1 and by a Royal Society-Newton Advanced Fellowship under Grant NA160342. The authors thank Shanghai Mental Health Center for their help and support. We also thank Dr Fei Gao from Beihang University, China for his kind support and comments.
Citation
Neurocomputing, Volume 388, 7 May 2020, Pages 212-227

Version
- AM (Accepted Manuscript)