How Voice Technology Is Changing the Way Brands Talk to Consumers
As sight and touch give way to voice and sound, serving ads will get even more personal and conversational, writes Leo Burnett Strategy Director Chadi Saab
If you own a smartphone, you have probably already interacted with Apple’s Siri or Google Assistant. While smartphones started as the hub for these voice-activated assistants, the technology has found new homes on devices such as Amazon’s Echo and Google’s Home. The Consumer Technology Association forecasts that U.S. unit sales of voice-activated systems will grow 52% year on year to reach 4.5 million units by the end of 2017.
In 2016, 55% of consumers started their online product searches on Amazon, a 125% increase over 2015. This number is expected to grow even further as Echo’s capabilities and third-party integrations expand. Why is this significant? It means the consumer decision journey could begin and end on Amazon before Google or any other platform has the chance to appeal to shoppers. We should expect Amazon, Google, IBM and Microsoft to rapidly build new and innovative connections to and for consumers.
Voice-activated systems come with a set of built-in capabilities, or “skills,” that perform specific tasks. Amazon Echo today has approximately 1,500 skills (with thousands more expected in 2017) that can check your bank account, read your Fitbit stats aloud, lock your door, order an Uber, buy items from Amazon.com, or pick a film and skip to a specific scene. Ultimately, vocal computing will have major implications for advertisers and brand custodians.
Say you want to know who the global CEO of Leo Burnett is. Chances are you’ll unlock your phone and either use your mobile browser’s search engine or prompt Siri to find the answer. You can either type or voice search. If you choose the keyboard, you’re likely to type “Leo Burnett Global CEO,” but if you voice search, chances are you’re going to say, “Who is the global CEO of Leo Burnett?” (Voice searches typically produce longer queries and use more question phrases than typed ones.) Whichever method you choose, the phone’s voice assistant most likely will show you results on the screen.
Now, say you start the same voice query (“Who is the global CEO of Leo Burnett?”) on Amazon’s Echo or Google Home. There is no screen. Your answer will be spoken aloud: “The global CEO of Leo Burnett is Rich Stoddart.”
These two very basic changes in behavior and UI – spoken queries and audible responses – have three strong implications for advertising:
- Better understanding of the consumer’s intent
- More addressable and personalized ads
- Conversational advertisements
Better Understanding of the Consumer’s Intent: As consumers start using natural language when initiating search by voice, their intent is revealed more clearly. For example, you can search “Galaxy S7” on your phone to look at pictures of the device, compare it with other smartphones, or read reviews. But with a voice query, you might ask, “Alexa, what colors does the Galaxy S7 come in?” or “What are the reviews of the Galaxy S7?” The conversational phrasing signals consideration or intent to purchase. Understanding that intent will help marketers and advertisers build user-intent models that locate the consumer on the customer journey: What are the common challenges at each phase, and what can the brand do to overcome them?
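To make the idea concrete, the intent models described above could start as something as simple as matching question phrases to journey stages. The sketch below is purely illustrative – the intent labels and keyword lists are invented assumptions, not any platform’s actual taxonomy.

```python
# Minimal sketch of a rule-based intent classifier for voice queries.
# Intent labels and keyword lists are illustrative assumptions only.

INTENT_KEYWORDS = {
    "purchase": ["buy", "order", "how much", "price"],
    "consideration": ["review", "compare", "better than"],
    "exploration": ["what colors", "photos", "pictures", "look like"],
}

def classify_intent(query: str) -> str:
    """Map a spoken query to a coarse stage of the customer journey."""
    q = query.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(kw in q for kw in keywords):
            return intent
    return "unknown"

print(classify_intent("Alexa, what colors does the Galaxy S7 come in?"))
print(classify_intent("What are the reviews of the Galaxy S7?"))
```

A typed query like “Galaxy S7” would fall through to “unknown,” which is exactly the point: the spoken question carries the signal.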
Addressable and Personalized Ads Will Become Inevitable: While 66% of marketers think they are doing an excellent or very good job at personalizing advertisements, only 31% of consumers agree. By understanding the consumer’s intent, advertisers and marketers will be able to build and serve ads that work harder at converting new prospects or deepening loyalty among current customers. These implications are not restricted to search-engine ads. Typing “Galaxy S7” does not show any specific interest or intent, but if you ask Google Assistant or Google Home to “Show me photos of the Samsung Galaxy S7,” brands can learn that this prospect is interested in the design of the phone rather than its features or battery life. Brands can then reach this prospect across all of Google’s advertising solutions: YouTube, Google Display Network and Google search.
Conversational Advertisements: With the rise of home-based voice assistants, information between the consumer and the assistant will be exchanged verbally. No more keyboard, no more screen. It will become harder for companies to target users with a visual ad, but there is an opportunity to create conversational ads and other content for consumers to engage with. Patron Tequila recently launched a partnership with Amazon Echo to provide drink recommendations and personalized spoken recipes. This allows Patron to extend its content strategy beyond its owned and social channels, creating compelling content in search that speaks to consumers and links back to an ecommerce platform for an easy sales conversion. Software giant Adobe, meanwhile, is helping retailers and brands develop deeper relationships and experiences with their customers: by boosting the home-based voice assistant’s IQ through Automated Insights, Adobe aims to help Amazon Echo and Google Home turn raw data into sentences that deeply connect with customers.
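Under the hood, a conversational experience like Patron’s is a custom skill that answers a spoken request with a spoken reply. The sketch below shows the general shape of such a response, following the Alexa Skills Kit JSON envelope; the recommendation logic and drink data are invented for illustration and are not Patron’s actual skill.

```python
# Minimal sketch of a spoken response a custom Echo skill might return.
# The envelope follows the Alexa Skills Kit response shape; the drink
# data is invented for illustration.

RECIPES = {
    "margarita": "Shake tequila, lime juice and orange liqueur with ice.",
    "paloma": "Mix tequila with grapefruit soda over ice and add a lime wedge.",
}

def build_speech_response(drink: str) -> dict:
    """Wrap a spoken recipe in an Alexa-style response envelope."""
    recipe = RECIPES.get(drink, "Sorry, I don't know that drink yet.")
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": recipe},
            "shouldEndSession": True,
        },
    }

print(build_speech_response("paloma")["response"]["outputSpeech"]["text"])
```

The brand’s “ad,” in other words, is the text placed in `outputSpeech` – content the consumer asked for, delivered as conversation.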
As our interaction with the machine shifts from sight and touch to voice and sound, the internet evolves from a marketplace that requires a few clicks to complete a purchase into one where a spoken command summons items to wherever you please.
Chadi Saab is a strategy director at Leo Burnett.