TL;DR
Google has launched Gemini 3, a major AI model upgrade featuring enhanced reasoning capabilities, generative UI interfaces, and agentic task automation. The model comes in two versions: Gemini 3 Pro for everyday use and Gemini 3 Deep Think for complex reasoning. The release marks significant improvements in multimodal understanding and context handling, backed by a million-token context window.
Enhanced Reasoning and Multimodal Capabilities
Google is positioning Gemini 3 as a leap forward in reasoning rather than just raw size or speed. The company demonstrates the model’s capabilities through examples like parsing and translating handwritten recipes, merging them with voice notes, and writing cookbooks from the combination.
The model’s multimodal gains are most visible in video analysis. Gemini 3 now understands movement, timing, and other fine-grained details, making it capable of analysing sports games and suggesting training plans for players. Its million-token context window lets it track sprawling, real-world information without performance degrading partway through long sessions.
Gemini 3 comes in two versions. Gemini 3 Pro is the everyday, full-featured model, available immediately in apps, Search, and developer tools. Gemini 3 Deep Think is the “enhanced reasoning” mode with additional processing capabilities, currently in testing and limited to Google AI Ultra subscribers.
Gemini App Overhaul and Agentic Features
Gemini 3 arrives alongside one of the largest overhauls of the Gemini app, featuring a new interface navigation system and a “My Stuff” folder for all AI-generated content. The most noticeable change is new generative interfaces built in real time based on user requests, rather than using templates.
For example, asking for help planning a vacation might produce a magazine-style itinerary complete with visuals. Instead of walls of text answering complicated questions, users see visually rich layouts of diagrams, tables, and other illustrations.
The Gemini app is also introducing an agent to act on users’ behalf. If given a task requiring dozens of steps, it can carry them out using connected Google apps. The agent is starting with Google AI Ultra members and expanding from there.
Search Integration with Generative UI
For the first time, a Gemini model is available in Search immediately, with Google routing the toughest queries to it. US Google AI Pro and Ultra subscribers will see Gemini 3 Pro in AI Mode, with broader access coming soon.
Gemini 3 improves Google’s approach of searching multiple interpretations of questions to find relevant content. The model understands intent deeply enough to discover material that earlier versions routinely missed.
The most striking upgrade is the new generative UI in Search. When users ask complex questions, Gemini 3 constructs layouts with visuals, tables, grids, and even custom-coded interactive simulations alongside answers. A question about the three-body problem produces a manipulable model. A question about loans generates a calculator tailored to the details the user provided. Answers become more like small applications.
Google includes plenty of links to source material, which is meant to encourage users to follow up on answers. The company says this system will evolve, particularly as automatic model selection routes more queries to Gemini 3 behind the scenes.
Looking Forward
The Gemini 3 launch represents Google’s most confident positioning of its AI capabilities to date. The emphasis on reasoning, multimodal understanding, and generative interfaces suggests the company is moving beyond simple question-answering toward more complex task completion and information synthesis.
The agent capabilities, currently limited to Ultra subscribers, indicate Google’s strategic direction: AI systems that don’t just provide information but actively complete multi-step tasks across integrated services. How quickly these capabilities expand to broader user bases will likely determine Google’s competitive position against rivals like OpenAI and Anthropic.
Source: TechRadar