The Race for AI-Enabled, Natural-Language and Voice Interface Platforms
Major tech companies are creating voice interfaces backed by AI that will force developers to rethink not just user experience and user interfaces but also applications and platform allegiances.
May 2, 2017
By Chris Kocher
Did you ever stop to wonder: What is Amazon not doing with technology? These days, you’d be hard-pressed to answer that question, given the company’s incessant stream of updates and new products. The Seattle-based e-commerce giant is seemingly everywhere—whether it’s the latest cloud offerings in AWS, new entertainment shows on Prime, automated retail stores, leased fleets of Boeing jets, smart speakers, payment systems, autonomous cars and trucks, freight forwarding companies, or airborne warehouses. Amazon also happens to have warehouses within 20 miles of 44% of the population of the United States, according to Piper Jaffray analyst Gene Munster.
A Brave New World
In many of the company’s recent announcements, Amazon’s voice assistant Alexa plays a central role. It’s clear that artificial intelligence-enabled, natural-language, voice recognition systems are going to be even more important to Amazon in the future. In fact, company CEO Jeff Bezos says Alexa could be the fourth pillar of its business: complementing Amazon’s retail marketplace, AWS, and Amazon Prime, Alexa and its 10,000-plus “skills” could become one of the company’s core strategic assets.
Amazon is not alone, though: all the major tech companies are gearing up for a major competitive battle in this evolving platform war. Personal assistants are available from the likes of Google (Google Now), Microsoft (Cortana), Apple (Siri) and Samsung (Bixby), as well as Watson, the natural-language cognitive computing platform from IBM.
Even some small companies like Voysis (Dublin, Ireland) are developing voice systems on “neutral” third-party platforms for all the other tech players who can’t afford to compete head-on with the majors. According to a company statement, Voysis believes that “voice will soon be the first point of contact between ‘man’ and machine.”
It’s now clear that voice represents a whole new platform paradigm. Just as we went from DOS to Windows, or from PCs to mobile app-based interfaces, we are now seeing a whole new metaphor evolve around voice. This is why Amazon has released the Alexa APIs to encourage other developers to use its platform. And by the way, the more interaction flowing through Alexa, the more opportunity the company has to gather data, refine, and improve Amazon’s back-end AI systems.
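To make the “skills” model concrete: a custom Alexa skill is at heart a web service (commonly an AWS Lambda function) that receives a JSON request describing what the user said and returns a JSON response containing the text Alexa should speak. The following is a minimal sketch; the intent name `HelloIntent` is hypothetical, while the request/response envelope follows the Alexa Skills Kit custom-skill interface of the time.

```python
# Minimal sketch of an Alexa custom-skill handler (Lambda-style).
# The intent name "HelloIntent" is invented for illustration; the
# JSON envelope follows the Alexa Skills Kit custom-skill interface.

def build_speech_response(text, end_session=True):
    """Wrap plain text in the Alexa Skills Kit response envelope."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": end_session,
        },
    }

def lambda_handler(event, context):
    """Entry point invoked with a JSON request describing the utterance."""
    request = event["request"]
    if request["type"] == "LaunchRequest":
        # User opened the skill without a specific request; keep session open.
        return build_speech_response("Welcome. What would you like to do?", False)
    if request["type"] == "IntentRequest":
        intent = request["intent"]["name"]
        if intent == "HelloIntent":
            return build_speech_response("Hello from this skill.")
    return build_speech_response("Sorry, I didn't understand that.")
```

Note that the developer never touches audio: Amazon’s cloud handles wake-word detection, speech recognition, and intent resolution, which is exactly why every interaction also feeds data back into Amazon’s AI systems.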
Recently, Amazon even released the reference design for its “far-field speech processing hardware” and the patented seven-microphone array in the Echo device. Now other hardware vendors, appliance manufacturers, medical equipment makers, and industrial machinery vendors can incorporate Alexa voice control into their refrigerators, washing machines, entertainment systems, test instruments, cars, and everything else a person might wish for.
Even if these companies don’t have in-house microphone engineers, voice recognition specialists, or AI experts, they can still leverage Amazon’s systems to build voice-enabled control capabilities and AI-backed intelligence into their devices.
Alexa and the other AI-enabled, natural-language voice recognition systems are going to be a major part of the UI landscape of the future. Here’s what the vendors other than Amazon are doing:
Google is working to expand Google Now’s capabilities, though it remains far behind Alexa in volume of skills: Amazon’s platform already has 10,000-plus “skills,” thanks to solid support from its developer community.
Apple has had Siri available on its systems for years but still needs to extend it to other devices or risk having it be perceived as a “closed” system.
Microsoft is working hard with its long-established developer community to build out support for its Cortana-based voice UI and Azure cloud systems, which will serve as the repository for its AI initiatives.
Samsung appears to be late to the game with Bixby, which is in limited distribution, running only on the Galaxy S8 phones.
IBM has been touting Watson for AI since it first competed on Jeopardy! in 2011 and has since focused on a number of vertical markets. Watson has an opportunity to serve as an underlying AI technology, working with Google, Amazon, Apple, and Microsoft in vertical application areas like healthcare, or perhaps more broadly across industries and applications.
These are the contours of the looming platform wars. It’s early days as these large tech companies jockey for position. They’re all striving to attract developers and partners to their ecosystems as they seek to dominate the space that may shape the future of man/machine interfaces. What happens in this space will also heavily influence the development of IoT devices, networks, and user interaction across multiple industries and markets for the next 10 to 20 years.
As in other platform battles of the past, strong ecosystems will be critical. Robust APIs, good documentation, examples, tools and templates, consulting, support, and developer programs will be table stakes. Ultimately, use cases, efficiency and market opportunity will help sway developers to prioritize platforms.
For developers, this is a double-edged sword. Major players will be wooing them with all kinds of goodies and incentives. But they will have to bet on one or two platforms to be the most successful for their specific applications and audiences, and then focus their finite development resources there.
To Infinity and Beyond
Over the next decade, developers will need to rethink and rework their approach to usability, interaction, and how they provide value to their users through these new voice-enabled interfaces. This pending shift will impact UI and product design, required programming skills, development schedules, partnerships, and ultimately resources and budgets to build these new systems.
As in past platform wars, innovators will leapfrog the competition, legacy laggards will be marginalized by these changes, and many other companies will end up somewhere in between.
There may be a premium on new UI and UX skills to reimagine and build user-friendly, effective voice interfaces. Discipline, rigor, and expertise will also be required to guide and test these development initiatives against clear user-experience objectives. Just bolting a thin layer of voice features onto an existing app probably won’t cut it.
The consequences may be profound. Upstart developers that exploit voice and natural language in a compelling way may be able to attract disproportionate attention and grab market share from incumbent applications and devices. For the large platform vendors, getting this right can have huge impacts on their ecosystems, partnerships and platform momentum that may determine whether they ever reach critical mass. Welcome to the platform wars battle royale.