According to the company, “It boasts the most powerful coding capabilities and the capacity to process intricate prompts, coupled with superior understanding and reasoning about world knowledge compared to any previous models.” The new version also extends the context window to an impressive 2 million tokens, allowing it to handle far longer inputs in a single prompt.
Moreover, Google is introducing a more budget-friendly alternative: the 2.0 Flash-Lite model, now in public preview. Designed for affordability with improved performance, it can be accessed through Google AI Studio and Vertex AI.
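For developers, access through Google AI Studio typically means calling the Generative Language REST API with an API key. The sketch below, using only the Python standard library, builds such a request; the endpoint path and the `gemini-2.0-flash-lite` model identifier follow the public API's pattern but are assumptions here and should be checked against the current documentation. The request is constructed but not sent, since sending requires a valid key and network access.

```python
import json
import urllib.request

# Hypothetical sketch of calling a Gemini model via the Generative Language
# REST API (the API surfaced by Google AI Studio). Endpoint path and model
# name are assumptions -- verify against current Google documentation.
API_KEY = "YOUR_API_KEY"          # placeholder; obtain a key in Google AI Studio
MODEL = "gemini-2.0-flash-lite"   # assumed model identifier

url = (
    "https://generativelanguage.googleapis.com/v1beta/"
    f"models/{MODEL}:generateContent?key={API_KEY}"
)
payload = {
    "contents": [
        {"parts": [{"text": "Summarize the transformer architecture in one sentence."}]}
    ]
}

request = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Actually sending the request needs a real key and network access:
# with urllib.request.urlopen(request) as resp:
#     reply = json.load(resp)
#     print(reply["candidates"][0]["content"]["parts"][0]["text"])
print(request.full_url)
```

Vertex AI exposes the same models through a separate, authenticated endpoint aimed at enterprise deployments, so the URL and auth scheme differ there.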
What’s new in the Gemini mobile app?

The Gemini mobile app is also receiving these AI enhancements. Starting today, mobile users can select between the new Gemini 2.0 Flash Thinking Experimental model and the Gemini 2.0 Pro Experimental model.
Ranked first on the Chatbot Arena LLM Leaderboard, ahead of OpenAI’s GPT-4o and DeepSeek R1, the Gemini 2.0 Flash Thinking Experimental model represents a significant advancement. It can draw on information from platforms such as YouTube, Google Maps, and Search, allowing it to verify details across sources and deliver responses relevant to user queries.

Additionally, it incorporates advanced reasoning capabilities: users can watch in real time as the AI breaks down their prompts and synthesizes the results into coherent answers.
The improvements, as stated by Google, include enhanced explainability, speed, and performance. The model accepts both text and image inputs, supports a context window of up to one million tokens, and has a knowledge cutoff of June 2024.
Next is the Gemini 2.0 Pro Experimental model, available to those subscribing to Gemini Advanced. Google describes this model as “exceptional at intricate tasks,” particularly excelling in mathematical problem-solving and coding.
This multimodal AI system not only retrieves relevant data from Google Search but also leverages an enhanced understanding of the world to tackle more complex tasks. Users can access the new models from both the mobile app and the web interface.