- Google is working on a new API for Android 16 that lets system apps perform actions on behalf of users inside applications.
- This new API is guarded by a permission that’ll be granted to the default assistant app, i.e., Gemini on new Android devices.
- This could let Gemini act as an AI agent on your phone, which is something Google originally promised the Pixel 4’s new Google Assistant would do.
Google is throwing everything it has at making its Gemini chatbot and large language models more successful, including integrating Gemini across its entire product suite. On Android, Gemini has become the default assistant service on many devices, and the number of things it can do continues to grow with each update. While Gemini can interact with some external services, its ability to control Android apps is very limited at the moment. However, that could change in a big way with next year’s Android 16 release, which is set to include a new API that lets services like Gemini perform actions on behalf of users inside applications.
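To make the idea concrete, here is a purely hypothetical Kotlin sketch of the shape such an API could take: an app exposes a named action with simple parameters, and a privileged caller (the default assistant) executes it on the user’s behalf. None of these names (AppAction, AppActionExecutor, the example action ID) come from Android 16 itself; they are illustrative stand-ins.

```kotlin
// Hypothetical sketch only: AppAction and AppActionExecutor are illustrative
// names, not the real Android 16 API surface.

/** A single action an app exposes for the assistant to invoke on the user's behalf. */
data class AppAction(
    val id: String,                      // e.g. "com.example.notes.CREATE_NOTE"
    val parameters: Map<String, String>  // simple key/value inputs for the action
)

/** What a privileged caller, such as the default assistant, might bind to. */
interface AppActionExecutor {
    /**
     * Executes [action] inside the target app and reports success or failure.
     * Access would be gated by a system permission granted only to the
     * default assistant app (Gemini on new devices).
     */
    fun execute(action: AppAction, onResult: (Boolean) -> Unit)
}

fun main() {
    // Example: the assistant asks a notes app to create a note for the user.
    val createNote = AppAction(
        id = "com.example.notes.CREATE_NOTE",
        parameters = mapOf("title" to "Groceries", "body" to "Milk, eggs, bread")
    )
    // A stub executor standing in for whatever system service brokers the call.
    val executor = object : AppActionExecutor {
        override fun execute(action: AppAction, onResult: (Boolean) -> Unit) {
            println("Executing ${action.id} with ${action.parameters}")
            onResult(true)
        }
    }
    executor.execute(createNote) { ok -> println("Action succeeded: $ok") }
}
```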
Gemini Extensions are how Google’s chatbot currently interacts with external services. Extensions give Gemini access to web services like Google Flights, Google Hotels, OpenStax, and more, allowing it to pull in data from these services when you ask it relevant questions. There are also extensions for things like Google Maps, Google Home, YouTube, and Google Workspace, all of which are available as apps on Android. However, these extensions let the chatbot use your account data when calling the backend APIs for these services rather than directly controlling the respective Android apps. Finally, there are some extensions like Utilities that do let Gemini control Android apps directly, but they only let the chatbot perform basic actions using well-defined intents.
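For a sense of what “basic actions using well-defined intents” means in practice, the sketch below fires Android’s public ACTION_SET_ALARM intent, the kind of standardized entry point any app (or an assistant’s utility layer) can target. This is a general illustration of the intent mechanism, not Gemini’s actual implementation; the setAlarm helper is our own.

```kotlin
import android.content.Context
import android.content.Intent
import android.provider.AlarmClock

// Fires the public ACTION_SET_ALARM intent. The calling app must declare the
// com.android.alarm.permission.SET_ALARM permission in its manifest.
fun setAlarm(context: Context, hour: Int, minute: Int, label: String) {
    val intent = Intent(AlarmClock.ACTION_SET_ALARM).apply {
        putExtra(AlarmClock.EXTRA_HOUR, hour)
        putExtra(AlarmClock.EXTRA_MINUTES, minute)
        putExtra(AlarmClock.EXTRA_MESSAGE, label)
        putExtra(AlarmClock.EXTRA_SKIP_UI, true)   // set the alarm without showing the clock UI
        addFlags(Intent.FLAG_ACTIVITY_NEW_TASK)    // required when called outside an Activity
    }
    // Only launch if a clock app is available to handle the intent.
    if (intent.resolveActivity(context.packageManager) != null) {
        context.startActivity(intent)
    }
}
```

The limitation is clear from the example: the chatbot can only trigger whatever intents app developers have already standardized, which is a far cry from freely performing arbitrary actions inside an app.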