Google has officially launched new camera and screen-sharing features through its Gemini Live app, making real-time AI interactions more intuitive and accessible. Initially thought to be exclusive to flagship devices like the Pixel 9 and Galaxy S25, these features are now available on any Android device running Android 10 or newer, provided users have a Gemini Advanced subscription.
Live Camera Integration:
Users can point their smartphone camera at objects, scenes, or text and ask Gemini contextual questions. For example, Gemini can identify objects, suggest decor ideas, or analyze printed text in real time.
Screen Sharing Capabilities:
With the "Share Screen With Live" feature, users can share their device screens with Gemini to receive instant AI-powered assistance. Whether browsing a website, shopping online, or reviewing documents, Gemini provides actionable insights based on on-screen content.
Enhanced Multimodal AI:
These features leverage Google's advanced multimodal AI from Project Astra to deliver seamless interactions, combining visual and contextual understanding for richer user experiences.
First showcased at Google I/O 2024 and demonstrated further at MWC 2025, Project Astra introduces a new way for users to interact with AI by combining live video and screen-sharing functionalities. From helping ceramicists choose glazes to suggesting outfit ideas based on images, Project Astra demonstrates practical applications for both personal and professional use.
Early adopters have shared positive experiences with Gemini's new features. A Reddit user demonstrated how the app analyzed their screen content and camera feed seamlessly. However, some users noted limitations like screen sharing not resuming automatically after interruptions.
Gemini's new camera and screen-sharing features mark a significant milestone in Google's effort to make AI assistance more immersive and integrated into daily life. As these capabilities roll out to more devices, they promise to redefine how users interact with technology.