Adobe Turns Voice Assistants into Voice Analytics Sources with New Tool
Adobe has launched Adobe Sensei for Voice, a voice analytics tool within the Adobe Analytics Cloud that analyzes data from voice assistants. According to the company, the tool makes Adobe the first to analyze voice data from all the major assistants, including Siri, Alexa, Bixby and Cortana. Adobe is betting on the future of the voice assistant market, which could very well outgrow the phone market one day.

The voice assistant market has been booming, with all the major companies turning to it as the next big breakthrough in how we interact with technology. Conversational AI is increasingly becoming the talking point among the industry giants, and Adobe is cashing in on that.

"We're seeing a big shift from a user experience and interaction standpoint from touch to voice. In Mary Meeker's 2017 trends report, she said voice is beginning to replace typing and online queries," said Colin Morris, Director of Product Management for Adobe Analytics Cloud. "Voice query accuracy is higher now, whether it's interacting with an app, your in-car experience, or at home trying to make a purchase on your Amazon Echo. We want to make sure your brand can collect data from those interactions, whether it's Alexa, Google, Siri, or what have you."

According to Adobe, the new software can consume data from Alexa, Bixby, Siri and Cortana, including both contextual and user-intent data. Contextual data can be used by brands when targeting customers across other channels. Although Amazon offers its own voice analytics tool, Adobe says the difference is that its tool combines different datasets and tracks devices across multiple products.

This basically means that Adobe can track the actions users take with their conversational AI assistant of choice and the things they regularly interact with, giving companies more insight into how the AI is used. For example, a user might usually call an Uber and then listen to music right afterward. That pattern is recorded and can help the assistant adapt to the user's habits.
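To illustrate the idea, here is a minimal sketch of how sequential interaction data like this could be mined for common action pairs. The event log, action names, and function are all hypothetical, invented for illustration; they are not Adobe's actual schema or API.

```python
from collections import Counter

# Hypothetical event log: user id -> ordered list of voice-assistant actions.
# Action names ("call_ride", "play_music", ...) are illustrative only.
event_log = {
    "user_1": ["call_ride", "play_music", "check_weather"],
    "user_2": ["call_ride", "play_music"],
    "user_3": ["check_weather", "call_ride", "play_music"],
}

def common_action_pairs(log):
    """Count how often each consecutive pair of actions occurs across users."""
    pairs = Counter()
    for actions in log.values():
        # zip the list against itself shifted by one to get consecutive pairs
        for first, second in zip(actions, actions[1:]):
            pairs[(first, second)] += 1
    return pairs

pairs = common_action_pairs(event_log)
print(pairs.most_common(1))  # → [(('call_ride', 'play_music'), 3)]
```

A pattern like "call a ride, then play music" surfacing at the top is the kind of signal that could let an assistant pre-empt the next action, or let a brand target the follow-up moment.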

"With Adobe Sensei, we take that voice data and start to run behavioral analysis based on what people are saying across different assistants and channels, and cluster that based on what's valuable," Morris added. "You might get all sorts of new insights based on what people are asking."