Shyamala’s Substack
The Future is Spoken
Use Cases for Voice-First Interfaces

This week’s guest on The Future is Spoken is Brielle Nickoloff. 

Brielle is a conversational designer based in Washington DC. A passion for language and the patterns of language use led Brielle to a career in voice tech. 

She has always been curious about what people say and why they choose the words they do. In college, Brielle took an elective in conversation analysis, in which students analyzed conversations and word choices.

“We are such natural conversationalists, it’s second nature to us to speak. There are such interesting ways that we communicate - sometimes we don’t even think about it,” she observes.

In some settings, she notes, the context is so specific that we don't need to use language at all. She cites the example of buying something in a store: most of us simply place an item at the checkout, and few, if any, words are exchanged.

“We are just naturally aware of how to communicate,” she explains, adding that the college course was instrumental in piquing her interest in linguistics and left her wanting to explore the field further.

Brielle explains that everyone has their own idiolect: each individual has their own unique patterns of language use. So while it would be impractical to build a voice interface around any one person's idiolect, it is possible to design one for a group with shared patterns.
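
To make that idea concrete, here is a minimal sketch (a hypothetical illustration, not code discussed in the episode) of how a voice interface might accommodate different idiolects by mapping several sample phrasings onto a single intent:

```python
# Hypothetical sketch: different people phrase the same request
# differently, so one intent collects many sample utterances rather
# than a single canonical sentence.
from dataclasses import dataclass, field

@dataclass
class Intent:
    name: str
    sample_utterances: list[str] = field(default_factory=list)

lights_on = Intent(
    name="TurnOnLights",
    sample_utterances=[
        "turn on the lights",
        "lights on please",
        "can you switch the lights on",
        "it's dark in here",
    ],
)

def match_intent(utterance: str, intents: list[Intent]) -> Intent | None:
    """Very naive matcher: picks the intent whose samples share the most words."""
    words = set(utterance.lower().split())
    best, best_score = None, 0
    for intent in intents:
        score = max(len(words & set(s.split())) for s in intent.sample_utterances)
        if score > best_score:
            best, best_score = intent, score
    return best

print(match_intent("please switch the lights on", [lights_on]).name)  # TurnOnLights
```

A real system would use a trained natural-language-understanding model rather than word overlap, but the design principle is the same: cover a group's shared ways of phrasing a request instead of one person's.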

Voice tech is one of the few new technologies that can take us away from a screen-based world. Voice gives us different options for deciding how best to communicate with others, or with artificial intelligence.

Examples include communicating with a vehicle while driving, entering a home carrying groceries and asking to switch the lights on, or running on a treadmill and requesting a new song or different podcast episode.

Brielle says the starting point for interface design is usually to consider the user's physical environment and what their request may be.

“When you start looking at things this way, really amazing opportunities for use cases start to emerge,” says Brielle.

The next step is determining whether the voice interface needs to confirm a request. A request to switch on the lights, for example, warrants a response from the interface: if a bulb isn't working, the confirmation helps the user work out where the problem lies.
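
As a rough illustration of that design decision (again a hypothetical sketch, with an assumed `smart_home` client, not code from the episode), a lights-on handler can always return a spoken confirmation. If the light still doesn't come on, the user knows the request was heard and the command was sent, and can narrow the problem down to the bulb or the connection rather than the voice interface itself:

```python
# Hypothetical sketch: pair the device action with a spoken confirmation
# so the user can tell whether the system understood and acted.
def handle_turn_on_lights(room: str, smart_home) -> str:
    sent = smart_home.turn_on(room)  # assumed smart-home client call
    if sent:
        return f"Okay, turning on the {room} lights."
    return f"Sorry, I couldn't reach the {room} lights."
```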

Brielle explores other use cases for voice-first interfaces, our emotional responses to this technology, and lots more in this exciting episode!

Find Brielle on LinkedIn.
