IIT Guwahati Develops Sensor for Voice Recognition in Speech-Impaired Individuals
The sensor is designed as an assistive tool for individuals with voice disabilities who cannot use conventional speech-based systems.
Researchers at the Indian Institute of Technology (IIT) Guwahati, in collaboration with Ohio State University, have developed an underwater vibration sensor that can enable contactless voice recognition.
The research, published in Advanced Functional Materials, involves contributions from Prof. Uttam Manna (Department of Chemistry), Prof. Roy P. Paily (Department of Electronics and Electrical Engineering), research scholars Debasmita Sarkar, Rajan Singh, Anirban Phukan, and Priyam Mondal of IIT Guwahati, and Prof. Xiaoguang Wang and Ufuoma I. Kara of The Ohio State University.
AI-Enabled Design Detects Air Disturbance Without Sound
The sensor identifies disturbances caused by exhaled air over a water surface, even when sound is not produced. Positioned just below the air-water interface, it detects minute vibrations from exhaled breath and converts them into electrical signals. The device uses a chemically reactive porous sponge and integrates Convolutional Neural Networks (CNNs) to interpret signal patterns, allowing recognition of attempted speech.
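For readers curious how a CNN might interpret such signals, the sketch below shows a minimal 1D convolutional classifier for short windows of sensor output. This is an illustrative assumption only; the architecture, window length, number of classes, and names used here are placeholders and do not reflect the team's published design.

```python
# Illustrative sketch only: a minimal 1D CNN that classifies short windows of
# vibration-sensor signals into attempted-speech categories. Architecture,
# window length, and class count are assumptions for illustration.
import torch
import torch.nn as nn

class VibrationCNN(nn.Module):
    def __init__(self, n_classes: int = 5, window_len: int = 1024):
        super().__init__()
        # Two 1D convolution blocks extract local patterns from the raw signal.
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(8),
        )
        # A small fully connected head maps pooled features to class scores.
        self.classifier = nn.Linear(32 * 8, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, window_len) raw electrical-signal samples
        return self.classifier(self.features(x).flatten(1))

# Example: classify one 1024-sample window of sensor output.
model = VibrationCNN()
window = torch.randn(1, 1, 1024)   # placeholder signal window
scores = model(window)             # unnormalized class scores
predicted = scores.argmax(dim=1)   # index of the most likely class
```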
The team stated that voice recognition technologies are inaccessible to many individuals with speech impairments, especially children and young adults aged 3 to 21. This device seeks to address that communication gap.
Clinical Validation and Cost Reduction in the Pipeline
The laboratory prototype currently costs around INR 3,000. The researchers are exploring industry partnerships to bring the cost down and make the technology more widely accessible.
Speaking on the technology, Prof. Uttam Manna said, “It is one of the rare designs of material allowing the recognition of voice based on monitoring the water wave formed at the air/water interface because of exhaling air from the mouth. This approach will likely provide a viable communication solution with individuals with partially or entirely damaged vocal cords.”
The sensor has demonstrated durability during prolonged underwater use and could be applied in other areas such as movement detection, exercise tracking, and underwater sensing. The team also plans to collect speech data from individuals with voice disabilities to refine the AI model and enable recognition of specific words or phrases for operating smart devices.