2.0 Flash, Flash-Lite, Pro Experimental

In December, we started the agentic era by releasing an experimental version of Gemini 2.0 Flash, our highly efficient workhorse model for developers, with low latency and enhanced performance. Earlier this year, we updated 2.0 Flash Thinking Experimental in Google AI Studio, which improved its performance by combining Flash's speed with the ability to reason through more complex problems.

And last week, we made an updated 2.0 Flash available to all users of the Gemini app on desktop and mobile, helping everyone discover new ways to create, interact and collaborate with Gemini.

Today, we're making the updated Gemini 2.0 Flash generally available via the Gemini API in Google AI Studio and Vertex AI. Developers can now build production applications with 2.0 Flash.
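For orientation, here is a minimal sketch of calling 2.0 Flash through the Gemini API, assuming the google-genai Python SDK and an API key created in Google AI Studio; the prompt is illustrative only:

# Minimal sketch: call Gemini 2.0 Flash via the Gemini API.
# Assumes: pip install google-genai, and an API key from Google AI Studio.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="Write a one-sentence summary of what a context window is.",
)
print(response.text)  # generated text output

The same model can also be reached through Vertex AI by configuring the client for a Google Cloud project and location instead of an API key.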

We're also releasing an experimental version of Gemini 2.0 Pro, our best model yet for coding performance and complex prompts. It is available in Google AI Studio and Vertex AI, and in the Gemini app for Gemini Advanced users.

We're releasing a new model, Gemini 2.0 Flash-Lite, our most cost-effective model yet, in public preview in Google AI Studio and Vertex AI.

Finally, 2.0 Flash Thinking Experimental will be available to Gemini app users in the model dropdown on desktop and mobile.

All of these models will feature multimodal input with text output at release, with more modalities ready for general availability in the coming months. More information, including pricing details, can be found on the Google for Developers blog. As we look ahead, we're working on more updates and improved capabilities for the Gemini 2.0 family of models.

2.0 Flash: A new update for general availability

First introduced at I/O 2024, the Flash series of models is popular with developers as a powerful workhorse model, optimal for high-volume, high-frequency tasks at scale and highly capable of multimodal reasoning across vast amounts of information with a context window of 1 million tokens. We've been excited to see its reception by the developer community.

2.0 Flash is now generally available to more people across our AI products, alongside improved performance in key benchmarks, with image generation and text-to-speech coming soon.

Try Gemini 2.0 Flash in the Gemini app, or via the Gemini API in Google AI Studio and Vertex AI. Pricing information can be found on the Google for Developers blog.

2.0 Pro Experimental: Our best model yet for coding performance and complex prompts

As we've continued to share early, experimental versions of Gemini 2.0, such as Gemini-Exp-1206, we've received excellent feedback from developers about its strengths and best use cases, like coding.

Today, we're releasing an experimental version of Gemini 2.0 Pro that responds to that feedback. It has the strongest coding performance and ability to handle complex prompts, with better understanding and reasoning of world knowledge, than any model we've released so far. It comes with our largest context window yet at 2 million tokens, which enables it to analyze and understand vast amounts of information, as well as the ability to call tools like Google Search and code execution.
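As a rough sketch of what tool calling looks like in practice, the snippet below grounds a request with Google Search using the google-genai Python SDK; the experimental model identifier shown here is an assumption for illustration, so check the model list in Google AI Studio for the current name:

# Sketch: grounding a request to 2.0 Pro Experimental with the Google Search tool.
# The model id below is an assumed example, not a confirmed identifier.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

response = client.models.generate_content(
    model="gemini-2.0-pro-exp",  # assumed experimental model identifier
    contents="Summarize recent coverage of the Gemini 2.0 model family.",
    config=types.GenerateContentConfig(
        tools=[types.Tool(google_search=types.GoogleSearch())],
    ),
)
print(response.text)

Code execution can be enabled in a similar way by passing a code execution tool in the same tools list.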