Signmeet: Word-Level Sign Language Recognition Using Deep Learning
Abstract
Effective communication is essential for exchanging information, ideas, and emotions. However, virtual meeting platforms such as Zoom, Microsoft Teams, and Google Meet are rarely tailored to people with hearing or speech impairments, creating a major barrier to inclusivity. Sign language provides a means of communication for the deaf, but interpretation remains a challenge for those who do not know sign language. Existing sign language recognition technologies are limited in accuracy and accessibility and are not suited to seamless integration into virtual platforms. This project introduces an AI-driven system to bridge the communication gap between hearing-impaired and hearing participants in virtual meetings. The system builds on advances in deep learning, particularly temporal convolutional networks (TCNs), to enable real-time two-way communication. It contains three core modules: a Sign Recognition Module (SRM) that interprets signs using TCNs, a Speech Recognition and Synthesis Module (SRSM) that uses hidden Markov models to convert spoken words into text, and an Avatar Module (AM) that translates spoken language into the corresponding signs visually. The Avatar Module is essential for expressing spoken language in sign language form, ensuring that non-sign-language users can communicate with sign language users in an intuitive and engaging way. Trained on Indian Sign Language, the system supports communication among diverse groups, including deaf, mute, hard-of-hearing, visually impaired, and non-signing users. It improves accessibility and participation by integrating with popular virtual meeting platforms through an easy-to-use web-based interface, representing a significant advance in promoting inclusiveness and accessibility in virtual meeting environments.
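To make the TCN-based Sign Recognition Module concrete, the sketch below shows one way a word-level sign classifier of this kind could be structured. It is a minimal, hypothetical PyTorch example: the input format (per-frame keypoint features, e.g. 126 values from two-hand landmark tracking), the layer sizes, and the 50-word vocabulary are illustrative assumptions, not details taken from this work.

```python
# Minimal sketch of a temporal convolutional network (TCN) for word-level
# sign recognition. Assumes each sample is a sequence of per-frame keypoint
# vectors; feature count, depth, and vocabulary size are placeholders.
import torch
import torch.nn as nn

class TemporalBlock(nn.Module):
    """Two dilated 1-D convolutions with a residual connection."""
    def __init__(self, in_ch, out_ch, kernel_size=3, dilation=1):
        super().__init__()
        pad = (kernel_size - 1) * dilation  # causal padding amount
        self.conv1 = nn.Conv1d(in_ch, out_ch, kernel_size,
                               padding=pad, dilation=dilation)
        self.conv2 = nn.Conv1d(out_ch, out_ch, kernel_size,
                               padding=pad, dilation=dilation)
        self.relu = nn.ReLU()
        self.downsample = (nn.Conv1d(in_ch, out_ch, 1)
                           if in_ch != out_ch else nn.Identity())

    def forward(self, x):
        length = x.size(-1)
        # Trim the padded outputs back to the input length (causal style).
        out = self.relu(self.conv1(x)[..., :length])
        out = self.relu(self.conv2(out)[..., :length])
        return self.relu(out + self.downsample(x))

class SignTCN(nn.Module):
    """Maps a (batch, time, features) keypoint sequence to word logits."""
    def __init__(self, num_features=126, num_words=50):
        super().__init__()
        self.blocks = nn.Sequential(
            TemporalBlock(num_features, 64, dilation=1),
            TemporalBlock(64, 128, dilation=2),
            TemporalBlock(128, 128, dilation=4),
        )
        self.classifier = nn.Linear(128, num_words)

    def forward(self, x):
        x = x.transpose(1, 2)            # (batch, features, time) for Conv1d
        h = self.blocks(x).mean(dim=-1)  # average pooling over the time axis
        return self.classifier(h)        # (batch, num_words) logits

# Example: a batch of 4 clips, 30 frames each, 126 keypoint features per frame.
logits = SignTCN()(torch.randn(4, 30, 126))
print(logits.shape)  # torch.Size([4, 50])
```

In practice the predicted word index would be mapped back to a vocabulary entry and passed on for display or speech synthesis; the dilated convolutions let the network cover the whole clip with a small number of layers, which is the property that makes TCNs attractive for real-time recognition.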