Gemini Flash model gets visual reasoning capability
Google has added an Agentic Vision capability to its Gemini 3 Flash model, which the company said combines visual reasoning with code execution to ground answers in visual evidence. The capability fundamentally changes how AI models process images, according to Google.
Introduced January 27, Agentic Vision is available via the Gemini API in the Google AI Studio development tool and in Vertex AI, as well as in the Gemini app.
Agentic Vision in Gemini Flash converts image understanding from a static act into an agentic process, Google said. By combining visual reasoning and code execution, the model formulates plans to zoom in, inspect, and manipulate images step by step. Until now, multimodal models typically processed the world in a single, static glance; if they missed a small detail, such as a serial number or a distant sign, they were forced to guess, Google said. By contrast, Agentic Vision makes image understanding an active investigation, introducing a “think, act, observe” loop into vision tasks, the company said.
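How this looks in practice depends on the Gemini API's code-execution tool. The following is a minimal sketch using the google-genai Python SDK; the model ID "gemini-3-flash", the image file, and the prompt are illustrative assumptions, not details confirmed by Google.

from google import genai
from google.genai import types

# Sketch: ask the model about a small detail in an image with the
# code-execution tool enabled, so it can write and run code (for example,
# to crop and zoom) instead of answering from a single static glance.
# NOTE: the model ID and image path below are assumptions.
client = genai.Client()  # reads the API key from the environment

with open("storefront.jpg", "rb") as f:
    image_bytes = f.read()

response = client.models.generate_content(
    model="gemini-3-flash",  # assumed model ID
    contents=[
        types.Part.from_bytes(data=image_bytes, mime_type="image/jpeg"),
        "What is the serial number printed on the meter in the background?",
    ],
    config=types.GenerateContentConfig(
        tools=[types.Tool(code_execution=types.ToolCodeExecution())],
    ),
)
print(response.text)  # final answer, grounded in whatever the code inspected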
Agentic Vision also allows the model to interact with its environment by annotating images. Instead of just describing what it sees, Gemini 3 Flash can execute code to draw directly on the canvas, grounding its reasoning in marked-up visual evidence. The capability can likewise parse high-density tables and execute Python code to visualize findings. Future plans for Agentic Vision include adding more implicit code-driven behaviors, equipping Gemini models with more tools, and delivering the capability in more model sizes beyond Flash.
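Because the loop is code-driven, its intermediate steps surface in the API response. Continuing the sketch above: executable_code, code_execution_result, and inline_data are the google-genai SDK's standard code-execution part types, though exactly how Agentic Vision populates them is an assumption here.

# Sketch: walk the response parts to see the model's "think, act, observe"
# trail: the Python it wrote, that code's printed output, and any annotated
# images it drew on the canvas.
for part in response.candidates[0].content.parts:
    if part.executable_code is not None:
        print("CODE:\n", part.executable_code.code)  # model-written Python
    if part.code_execution_result is not None:
        print("RESULT:\n", part.code_execution_result.output)  # stdout of that code
    if part.inline_data is not None:  # e.g., an annotated or zoomed image
        with open("annotated.png", "wb") as out:
            out.write(part.inline_data.data)
    if part.text is not None:
        print("TEXT:\n", part.text)  # the model's prose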
Original Link: https://www.infoworld.com/article/4123202/gemini-flash-model-gets-visual-reasoning-capability.html
Originally Posted: Wed, 28 Jan 2026 03:20:44 +0000