On Friday, I had the privilege of being on a panel at the Toronto Tech Summit run by Genesys. It's been amazing to see this event take off over the years, growing from dozens of attendees to now hundreds. The panel was on Always On Voice and discussed the ramifications and opportunities of having voice always on.
The panelists talked about a couple of really interesting issues, especially privacy. Bianca Lopes of BioConnect spoke about the tokenization of our interactions and the possibility that we could share in the revenue these platforms make off of our data. Will Pate of Connected Lab spoke about the need for accessibility to be built into design from the start rather than as an afterthought. Sachin Mahajan of MobileLive mentioned that AI is about the feeling we derive from interacting with it, and Janelle Matthews of Genesys spoke about how always on voice is becoming another mode for companies to engage with clients.
When questions were lobbed my way, I spoke about how the biggest drivers of always on voice usage over the past three years have been advances in technology (speech recognition, DSP, etc.) and new services (AVS as an all-in-one service for third-party hardware makers). The ecosystem is now focused on building out two parts: endpoints and APIs. Hitching onto a point made by panelist Will Pate, I added that the next big push will be layering additional voice information on top of speech recognition to inform decision engines (the user's gender, age, health, and mood, for example), and that machines will be better at identifying these signals than us, probably within the next five years.
The TO Tech Summit normally runs twice a year, so make sure to catch it in the fall. Thanks to Genesys and Chris Connoly for hosting and moderating.