Video: "New Google Gemini 3.2 + Omni LEAKS!" by Julian Goldie on YouTube.
What the leak actually shows
The leak came from UI strings inside the Gemini app, not a document dump or a code repository. Someone noticed that interface text referenced both a "Gemini 3.2" model and something called "Omni" inside the app's settings and model selector. Google has not confirmed either. UI string leaks like this are generally reliable as evidence that something is in development, because the strings ship in compiled app builds rather than in screenshots that could be fabricated. What they don't tell you is capability or release timing.
What makes this one interesting is the combination. Gemini 3.2 as a model name suggests an incremental update to the current 3.0 release, which is not especially surprising. The Omni label is the more significant signal, because it implies a multimodal system: one model handling video, images, text, and potentially browser-based actions within the same pipeline.
What Omni tells us about Google's direction
Google currently has separate models and separate tools for different modalities. Gemini handles text and some image work. Veo handles video generation. Deep Research handles longer information tasks. If Omni brings those capabilities under a single model inference layer — so that Gemini can take in a video, generate an image, and write a report in the same session — that is a meaningful architectural shift, not just a feature update.
That said, this is a leak, not an announcement. The Omni label might refer to something more limited: a video generation feature inside Gemini, rather than a full multimodal reasoning system. We will know more after I/O. What we can say is that Google is clearly moving towards collapsing its separate AI tools into fewer, more capable products.
How this might change search — and what it means for SEO
Google's AI Overviews already pull text from pages and present answers directly in the search results. If Google's AI can also process video, analyse images, and reason across document types, the range of content it can cite and summarise expands considerably. For sites that produce good video content, well-structured image libraries, or detailed PDF resources, that is potentially an opportunity. For sites that rely on thin text pages, the picture gets harder.
The practical implication: if Omni ships at I/O, Google's AI answers will increasingly reference and summarise non-text content. Optimising purely for text-based AI Overviews will not be enough. Sites that want to appear in Gemini-powered answers will need to think about how their video and image content is structured and labelled — the same E-E-A-T principles that apply to text apply to everything else too.
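One concrete way to label video content today is schema.org VideoObject markup, which Google already documents for video search. The sketch below builds that markup in Python; the property names (`name`, `description`, `uploadDate`, `thumbnailUrl`, `contentUrl`) are real schema.org VideoObject properties, but all the values and URLs are illustrative placeholders, and whether Omni-era AI answers will consume this markup is an assumption, not something the leak confirms.

```python
import json

def video_object_jsonld(name, description, upload_date, thumbnail_url, content_url):
    """Build a schema.org VideoObject dictionary, ready to serialise as JSON-LD.

    The property names are standard schema.org VideoObject fields; the
    calling code below fills them with made-up example values.
    """
    return {
        "@context": "https://schema.org",
        "@type": "VideoObject",
        "name": name,
        "description": description,
        "uploadDate": upload_date,        # ISO 8601 date
        "thumbnailUrl": thumbnail_url,
        "contentUrl": content_url,
    }

# Placeholder values for illustration only.
markup = video_object_jsonld(
    name="How our widget works",
    description="A two-minute walkthrough of the widget assembly process.",
    upload_date="2026-01-15",
    thumbnail_url="https://example.com/thumb.jpg",
    content_url="https://example.com/video.mp4",
)

# Embed as a JSON-LD script tag in the page's HTML.
snippet = '<script type="application/ld+json">' + json.dumps(markup) + "</script>"
```

The same pattern applies to images (`ImageObject`) and documents: the point is machine-readable labelling, not any one vocabulary.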
What to watch for at I/O and what to do now
Google I/O runs 19–20 May 2026. The most useful thing to watch for is not the announcement itself but the specific search integrations: does Omni power AI Overviews, and if so, for what query types? That will tell you much more about the SEO impact than any keynote framing. In the meantime, there is nothing to do differently right now — any tactical response should wait until the actual capability is clear.
Worth knowing: Gemini 3.2 and Omni are still speculative. Leaks from UI strings have a decent track record, but Google also tests things that never ship. Don't restructure a content strategy around a string in an app build.
Where this connects to NordSys
Google I/O announcements have a habit of changing what SEO actually requires. When AI Overviews expanded, sites that were already optimising for citations fared better than those that scrambled to adapt afterwards. If Gemini 3.2 and Omni represent the next step, the same principle applies — get your content quality and structure right before the announcement, not after. Our SEO & AI Ranking service covers exactly this: building a content foundation that performs in AI-driven search, whatever Google ships next.
See our SEO & AI Ranking service →