MEDIAPIPE DRIVEN SIGN TO SPEECH TRANSLATION SYSTEM USING HAND GESTURE RECOGNITION
DOI: https://doi.org/10.6084/m9.figshare.26090416

Abstract
The inability to communicate verbally presents a significant obstacle and is recognized as a disability. To overcome this challenge, individuals often use sign language, the prevalent communication method among the deaf and hard-of-hearing community. This paper explores the recognition of sign language gestures through advanced computer vision techniques, specifically leveraging MediaPipe, a framework for real-time perception tasks. The methodology comprises several stages: data acquisition, in which video sequences of sign language gestures are captured; preprocessing to enhance image quality and reduce noise; manipulation for alignment and standardization; feature extraction to capture the essential characteristics of each gesture; segmentation to isolate individual signs within continuous movements; and outcome evaluation to assess the accuracy and performance of the system. Through experimentation and analysis, we demonstrate the efficacy of our approach in accurately interpreting sign language gestures. We also discuss avenues for future research, including the integration of machine learning algorithms to improve recognition accuracy, the development of user-friendly interfaces to broaden accessibility, and the exploration of multi-modal approaches that combine visual and spatial cues for more robust recognition in diverse environments.
This research contributes to the advancement of sign language translation systems, ultimately facilitating more effective communication and inclusivity for individuals with hearing impairments.
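To make the feature-extraction stage concrete: MediaPipe Hands reports 21 (x, y, z) landmarks per detected hand, with coordinates normalized to the image frame. The sketch below shows one plausible way to turn those landmarks into a position- and scale-invariant feature vector suitable for a gesture classifier; the function name and the wrist-centered normalization scheme are illustrative assumptions, not the paper's exact implementation.

```python
import math

WRIST = 0  # MediaPipe Hands indexes the wrist landmark as 0


def landmarks_to_features(landmarks):
    """Convert 21 (x, y, z) hand landmarks into a translation- and
    scale-invariant 63-dimensional feature vector.

    `landmarks` is a list of 21 (x, y, z) tuples, e.g. as extracted from
    a MediaPipe Hands detection result.
    """
    wx, wy, wz = landmarks[WRIST]
    # Translate so the wrist sits at the origin (position invariance).
    centered = [(x - wx, y - wy, z - wz) for x, y, z in landmarks]
    # Scale by the largest wrist-to-landmark distance (hand-size invariance).
    scale = max(math.sqrt(x * x + y * y + z * z) for x, y, z in centered) or 1.0
    # Flatten to a 63-element vector: [x0, y0, z0, x1, y1, z1, ...].
    return [coord / scale for point in centered for coord in point]
```

A feature vector in this form can be fed directly to a downstream classifier (e.g. a nearest-neighbour or neural model) regardless of where the hand appears in the frame or how large it is.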