Apple has opened up its voice-based AI assistant to third-party developers. It's the big news out of this week's WWDC developers conference, and it validates the widespread speculation ahead of the event that Siri would take center stage.
In a presentation today, Apple SVP Craig Federighi explained that developers can now handle Siri-based user queries, letting users dictate a message via WeChat, for example, or book a ride with Uber. It's an expansion of a system Apple has already invested in considerably, refining Siri's speech and voice recognition capabilities since the program's launch five years ago.
While the move may surprise some given Apple's closed-ecosystem approach, which tends to shun third-party products in favor of Apple's own proprietary offerings, it reflects intensifying competition among the major tech giants to master voice-based AI, which many anticipate will be the key user interface for the emerging Internet of Things. Apple is in fact emulating Amazon, whose opening of its Alexa voice-based AI system to third-party development about a year ago has led to much richer functionality for end users today.
Apple may even take things a step further by adding facial recognition to Siri in the future, a move that could open a whole other can of worms. For now, though, it's promoting more voice-based interaction with what it hopes will be an increasingly useful AI assistant.