The MIT Technology Review just published an article about detecting PTSD and other conditions through voice analysis. The company behind it, Beyond Verbal, has worked in the emotion-detection space for many years, and it’s exciting to see it move in this direction.
While mental disorders can manifest themselves through language and emotion, other conditions can affect the voice directly, such as stroke, throat cancer, or even dental problems. The key requirement for developing such technology is collecting enough speech samples, labeled with diagnoses, to train a model that associates vocal patterns with specific conditions.
It’s a bit lamentable that Google Health shut down a few years back. By matching health records with speech samples, it could have started building these models. However, as new AI systems become better at pattern matching, fewer samples will likely be needed to train them.