Introducing Gemini 1.5, Google’s next-generation AI model
By Demis Hassabis, CEO of Google DeepMind, on behalf of the Gemini team
This is an exciting time for AI. New advances in the field have the potential to make AI more helpful for billions of people over the coming years. Since introducing Gemini 1.0, we’ve been testing, refining and enhancing its capabilities.
Today, we’re announcing our next-generation model: Gemini 1.5.
Gemini 1.5 delivers dramatically enhanced performance. It represents a step change in our approach, building upon research and engineering innovations across nearly every part of our foundation model development and infrastructure. This includes making Gemini 1.5 more efficient to train and serve, with a new Mixture-of-Experts (MoE) architecture.
The first Gemini 1.5 model we’re releasing for early testing is Gemini 1.5 Pro. It’s a mid-size multimodal model, optimized for scaling across a wide range of tasks, and it performs at a similar level to 1.0 Ultra, our largest model to date. It also introduces a breakthrough experimental feature in long-context understanding.
Gemini 1.5 Pro comes with a standard 128,000 token context window. But starting today, a limited group of developers and enterprise customers can try it with a context window of up to 1 million tokens via AI Studio and Vertex AI in private preview.
As we roll out the full 1 million token context window, we’re actively working on optimizations to improve latency, reduce computational requirements and enhance the user experience. We’re excited for people to try this breakthrough capability, and we share more details on future availability below.
These continued advances in our next-generation models will open up new possibilities for people, developers and enterprises to create, discover and build using AI.