Seeing speech: New tech helps deaf individuals learn to speak using visual feedback
Peer-Reviewed Publication
A pioneering study has demonstrated the remarkable potential of visual feedback technology to assist profoundly deaf individuals in developing oral speech. By translating speech sounds into visual patterns, the technology enables users to "see" their vocal efforts and adjust them to match reference models. Early trials with 72 participants have shown significant progress, with many learning up to 18 phonetic sounds within six months. This groundbreaking approach could revolutionize speech rehabilitation, offering a viable alternative to traditional methods like sign language and cochlear implants, particularly for those without early auditory interventions.
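The release does not specify how the technology encodes speech visually, but a common way to render an utterance as a matchable picture is a log-magnitude spectrogram. The sketch below is a hypothetical illustration of that idea, not the study's method: the function names, parameters, and the cosine-similarity score are all assumptions, and it uses only NumPy and SciPy so a learner's attempt can be turned into a 2-D pattern and scored against a reference.

```python
# Minimal sketch of spectrogram-based visual feedback for speech practice.
# Hypothetical illustration only: the study's actual visual encoding is not
# described in the release, so a plain log-magnitude spectrogram stands in.
import numpy as np
from scipy.signal import spectrogram

def speech_to_image(samples: np.ndarray, sample_rate: int = 16_000) -> np.ndarray:
    """Turn a mono speech waveform into a 2-D pattern the learner can 'see'."""
    _, _, mag = spectrogram(samples, fs=sample_rate, nperseg=512, noverlap=384)
    return np.log1p(mag)  # log compression roughly tracks perceived loudness

def match_score(attempt: np.ndarray, reference: np.ndarray) -> float:
    """Cosine similarity between the learner's pattern and the reference (0..1)."""
    n = min(attempt.shape[1], reference.shape[1])      # trim to the shorter clip
    a, r = attempt[:, :n].ravel(), reference[:, :n].ravel()
    return float(np.dot(a, r) / (np.linalg.norm(a) * np.linalg.norm(r) + 1e-9))

if __name__ == "__main__":
    rate = 16_000
    t = np.linspace(0, 1, rate, endpoint=False)
    reference = speech_to_image(np.sin(2 * np.pi * 220 * t), rate)  # stand-in for a model utterance
    attempt = speech_to_image(np.sin(2 * np.pi * 230 * t), rate)    # learner's slightly-off attempt
    print(f"match score: {match_score(attempt, reference):.3f}")
```

In a real system the reference pattern would come from a recorded model speaker, and the score would drive the on-screen feedback the learner adjusts against.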
In a research paper, scientists from Tsinghua University proposed a novel enhanced Digital Light Processing (DLP) 3D printing technology capable of printing composite magnetic structures with different materials in a single step. Furthermore, they designed and printed a soft robot made of a composite of hard-magnetic and superparamagnetic materials.
A technique to cool the planet, in which particles are added to the atmosphere to reflect sunlight, would not require developing special aircraft but could be achieved using existing large planes, according to a new modelling study led by UCL (University College London) researchers.
Scientists aren’t comedians, but it turns out a joke or two can go a long way. That’s according to a new University of Georgia study that found when researchers use humor in their communication — particularly online — audiences are more likely to find them trustworthy and credible.
Most current AI models rely on high-quality scanned ECG images. But in the real world, doctors don’t always have access to perfect scans. They often rely on paper printouts from ECG machines, which they might photograph with a smartphone to share with colleagues or add to a patient’s records. These photographed images can be tilted, crumpled, or shadowed, making AI analysis much more difficult.
To solve this, Dr. Vadim Gliner, a former Ph.D. student in Prof. Yael Yaniv’s Biomedical Engineering Lab at the Technion, in collaboration with the Schuster Lab in the Henry and Marilyn Taub Faculty of Computer Science, has developed a new AI interpretability tool designed specifically for photographed ECG images. The paper was published in npj Digital Medicine. The method uses an advanced mathematical technique based on the Jacobian matrix to achieve pixel-level precision, meaning it can highlight even the smallest details within an ECG. Unlike previous models, it doesn’t get distracted by the background and can even explain why certain conditions don’t appear in a given ECG.
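The paper's exact formulation is not reproduced in this release, but the core idea of Jacobian-based attribution can be sketched briefly: the gradient of one class score with respect to every input pixel is one row of the Jacobian, yielding a per-pixel sensitivity map. The PyTorch sketch below is a hypothetical illustration, not the Technion tool; the toy model and the `pixel_saliency` helper are assumptions introduced only for demonstration.

```python
# Hedged sketch of Jacobian-based, pixel-level attribution for an image
# classifier. Computes one row of the Jacobian: the gradient of a single
# class score with respect to every input pixel. Not the published tool.
import torch
import torch.nn as nn

def pixel_saliency(model: nn.Module, image: torch.Tensor, class_idx: int) -> torch.Tensor:
    """Return |d score_class / d pixel| for every pixel of `image` (C, H, W)."""
    inp = image.unsqueeze(0).requires_grad_(True)   # add batch dim, track grads
    score = model(inp)[0, class_idx]                # scalar logit for one class
    score.backward()                                # fills inp.grad with the Jacobian row
    return inp.grad.abs().squeeze(0).max(dim=0).values  # collapse channels -> (H, W)

if __name__ == "__main__":
    # Toy stand-in for an ECG-photo classifier (hypothetical architecture).
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 5))
    photo = torch.rand(3, 64, 64)                   # stand-in for a photographed ECG
    heatmap = pixel_saliency(model, photo, class_idx=2)
    print(heatmap.shape)                            # torch.Size([64, 64])
```

Because every output class has its own Jacobian row, the same machinery can be pointed at a class the model did not predict, which is one way such a method can also explain why a given condition does not appear in an ECG.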