Automated Deception Detection: Tokyo University of Science Researchers Use Facial Expressions and Pulse Rates to Unmask Lies with Machine Learning

In the digital era, automated deception detection systems have become vital across various fields, with demand for accurate detection evident in commerce, medicine, education, law enforcement, and national security. The limitations of human interviewers carry risks of false accusations and missed deception. To address these challenges, Tokyo University of Science researchers propose a machine-learning approach that combines facial expressions and pulse rate data for comprehensive deception detection. The goal is to develop a fair and reliable system that can assist in interviews with crime victims, suspects, and individuals with mental health issues. The researchers emphasize the importance of precise classification to avoid misidentification and to uphold ethical and legal standards, and they recommend a human-in-the-loop approach in which the system supports, rather than replaces, the interviewer. Designed with these safeguards, the method could find wide application in high-stakes decision-making.

In related work, previous studies have explored deception detection using various methods. One study developed a “deception analysis and reasoning engine” that used multi-modal information from videos to detect deception with an AUC of approximately 87%. Another focused on differences in valence and arousal between truthful and deceptive speakers, achieving an AUC of 91% using emotional, visual, audio, and verbal features. AUC, the area under the ROC curve, is a commonly used metric for binary classification tasks such as deception detection. Additionally, a machine learning approach detected deception from non-verbal behavior (NVB), reaching an accuracy of approximately 80% by identifying cues such as facial micro-movements, changes in gaze, and blink rates. However, some of these studies were limited by unnatural role-playing approaches to data collection.
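For context, AUC measures how well a classifier ranks deceptive statements above truthful ones across all decision thresholds (1.0 is a perfect ranking, 0.5 is chance). Below is a minimal sketch of how the metric is computed with scikit-learn; the labels and scores are made up for illustration and do not come from any of the cited studies.

```python
# Minimal sketch: computing AUC for a binary deception classifier.
# The labels and predicted scores below are illustrative only.
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1, 0, 1, 1, 0]                            # 1 = deceptive, 0 = truthful
y_score = [0.12, 0.35, 0.78, 0.66, 0.20, 0.91, 0.55, 0.40]   # predicted P(deceptive)

auc = roc_auc_score(y_true, y_score)
print(f"AUC: {auc:.2f}")  # 1.0 = perfect ranking, 0.5 = chance level
```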

In contrast to these earlier setups, the new study adopts a more natural protocol in which subjects freely improvise deceptive behavior, yielding more realistic data for deception detection. The proposed method employs machine learning, specifically the Random Forest (RF) technique, to build a deception detection model that integrates facial expressions and pulse rate data. Data were collected from four male graduate students who discussed random images while occasionally making deceptive statements. Facial expressions were recorded with a web camera, and pulse rates were measured with a smartwatch during the interviews.

The process involves standard machine learning steps: data collection, labeling, feature extraction, preprocessing, and classification. Subjects were shown various images and encouraged to express their thoughts, including deceptive statements. The resulting dataset was labeled according to the subjects’ intentions, focusing specifically on intentional deception rather than errors or false memory. Facial landmarks were extracted from the recorded videos using the OpenFace library, and features such as eyebrow tilt, eye aspect ratio, mouth area, blink rate, gaze, and head tilt were derived from these landmarks and combined with the smartwatch pulse-rate measurements. Preprocessing involved removing missing values, filtering outliers, and applying undersampling to balance positive and negative cases.
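The paper does not ship code, but the feature-extraction and preprocessing stage can be sketched roughly as below, assuming OpenFace’s FeatureExtraction tool has already exported per-frame 68-point landmarks to a CSV. The file name, the “label” column (assumed to be added during the manual labeling step), and the outlier thresholds are illustrative assumptions rather than the authors’ exact pipeline.

```python
# Rough sketch of feature engineering and preprocessing, assuming an OpenFace
# per-frame CSV with 68-point landmarks (x_0..x_67, y_0..y_67). The file name
# and the "label" column (1 = deceptive, 0 = truthful) are hypothetical.
import numpy as np
import pandas as pd

df = pd.read_csv("subject01_openface.csv")
df.columns = df.columns.str.strip()  # OpenFace headers may carry leading spaces

def eye_aspect_ratio(row, idx):
    """Eye aspect ratio from the six landmarks of one eye (68-point model)."""
    p = [np.array([row[f"x_{i}"], row[f"y_{i}"]]) for i in idx]
    vertical = np.linalg.norm(p[1] - p[5]) + np.linalg.norm(p[2] - p[4])
    horizontal = np.linalg.norm(p[0] - p[3])
    return vertical / (2.0 * horizontal)

# Per-frame eye aspect ratios; a sustained drop can later be counted as a blink
df["ear_right"] = df.apply(lambda r: eye_aspect_ratio(r, range(36, 42)), axis=1)
df["ear_left"] = df.apply(lambda r: eye_aspect_ratio(r, range(42, 48)), axis=1)

# Preprocessing as described in the paper: drop missing values, filter
# outliers, then undersample the majority class to balance the labels.
df = df.dropna()
for col in ["ear_right", "ear_left"]:
    lo, hi = df[col].quantile([0.01, 0.99])
    df = df[df[col].between(lo, hi)]

pos, neg = df[df["label"] == 1], df[df["label"] == 0]
n = min(len(pos), len(neg))
balanced = pd.concat([pos.sample(n, random_state=0),
                      neg.sample(n, random_state=0)])
```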

Paper: https://link.springer.com/article/10.1007/s10015-023-00869-9

The Random Forest model was trained and evaluated using 10-fold cross-validation, with accuracy, precision, recall, and F1 score used to assess its effectiveness. Notably, experiments conducted during actual remote job interviews showed performance similar to the cross-validation results, supporting the method’s real-world applicability. Feature-importance analysis highlighted specific facial features, pulse rate, gaze, and head movements as significant indicators of deception, though the key cues varied across subjects. For example, changes in mouth area, silence, and blinking signaled deceptive behavior in some subjects, while others showed notable variations in pulse rate and gaze direction during deception.
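A rough sketch of this classification stage with scikit-learn is shown below. It reuses the `balanced` table from the preprocessing sketch and assumes the smartwatch pulse-rate readings have already been aligned with the video frames; the feature list and hyperparameters are illustrative, not the authors’ reported configuration.

```python
# Sketch: Random Forest with 10-fold cross-validation plus feature importance.
# Feature names, hyperparameters, and the merged pulse_rate column are assumptions.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_validate

features = ["ear_right", "ear_left", "gaze_angle_x", "gaze_angle_y", "pulse_rate"]
X, y = balanced[features], balanced["label"]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_validate(clf, X, y, cv=cv,
                        scoring=["accuracy", "precision", "recall", "f1"])
for metric in ["accuracy", "precision", "recall", "f1"]:
    print(f"{metric}: {scores['test_' + metric].mean():.2f}")

# Which cues the forest relied on most (e.g., blink rate, gaze, pulse rate)
clf.fit(X, y)
for name, importance in sorted(zip(features, clf.feature_importances_),
                               key=lambda t: -t[1]):
    print(f"{name}: {importance:.3f}")
```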

Overall, this research offers a practical and promising approach to detecting deceptive statements in remote interviews using machine learning and facial feature analysis. The proposed method, which reduces reliance on subjective human judgment, achieved accuracy and F1 scores between 0.75 and 0.88 across subjects, and common facial-expression and pulse-rate patterns during deception were observed among them. However, further studies are needed to handle multi-class classification and to incorporate psychological assessments for a more comprehensive analysis. Despite the small dataset, the work provides a foundation for interviewers interested in automatic deception detection systems while emphasizing ethical considerations and legal compliance in their application.


Check out the Paper and Blog Article. All Credit For This Research Goes To the Researchers on This Project.


Madhur Garg is a consulting intern at MarktechPost. He is currently pursuing his B.Tech in Civil and Environmental Engineering at the Indian Institute of Technology (IIT) Patna. He has a strong passion for Machine Learning and enjoys exploring the latest advancements in technology and their practical applications. With a keen interest in artificial intelligence and its diverse applications, Madhur is determined to contribute to the field of Data Science and leverage its potential impact in various industries.

