Google Enhances Gemini API with New Features for Gemini 3
Google has rolled out significant updates to its Gemini API to support the latest Gemini 3 AI model. These enhancements aim to improve the model’s reasoning, multimodal understanding, and autonomous capabilities, providing developers with more precise control over its functions.
Improved Reasoning and Media Control
The latest API updates introduce simplified controls for the model's reasoning processes through the thinking_level parameter. Developers can specify the depth of the model's internal reasoning: higher levels suit complex analyses, while lower levels return faster responses for latency-sensitive applications. Additionally, the API offers granular control over multimodal vision processing via the media_resolution parameter, which adjusts the fidelity of image, video, and document inputs, trading token usage against detail.
Higher resolution settings enhance the model’s ability to interpret small text or intricate details, while lower settings optimize for speed and cost-efficiency. These adjustments help tailor media processing to specific application needs.
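As a rough illustration, both controls can be set in a single generateContent-style request body. The sketch below assembles such a payload in Python; the parameter names follow those mentioned above, but the exact payload shape, field casing, and enum values are assumptions rather than verified API output.

```python
# Sketch of a Gemini API request body using the new controls described
# above. The thinking_level and media_resolution names come from the
# article; the payload layout and value strings are assumptions.

def build_request(prompt: str, thinking_level: str = "high",
                  media_resolution: str = "media_resolution_medium") -> dict:
    """Assemble a generateContent-style request body.

    thinking_level: depth of internal reasoning ("low" for
    latency-sensitive calls, "high" for complex analyses).
    media_resolution: fidelity/token budget for image, video,
    and document inputs.
    """
    return {
        "contents": [{"role": "user", "parts": [{"text": prompt}]}],
        "generationConfig": {
            "thinkingConfig": {"thinkingLevel": thinking_level},
            "mediaResolution": media_resolution,
        },
    }

# A low-latency request: shallow reasoning, cheaper media processing.
fast = build_request("Summarize this receipt.",
                     thinking_level="low",
                     media_resolution="media_resolution_low")

# A detail-heavy request: deep reasoning, high-fidelity vision input
# for small text and intricate details.
detailed = build_request("Read the fine print in this contract.",
                         thinking_level="high",
                         media_resolution="media_resolution_high")
```

The two calls show the trade-off in practice: the same helper serves both a speed-optimized path and a detail-optimized path by varying only these two settings.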
Introduction of Thought Signatures and Enhanced Web Integration
Starting with Gemini 3, the API reintroduces ‘thought signatures’—encrypted representations of the model’s internal reasoning. Passing these signatures back in subsequent API calls helps maintain the reasoning chain across multi-step workflows, which is crucial for complex, agentic tasks.
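The mechanics can be sketched as follows: the model's turn comes back with an opaque signature attached, and the client echoes that turn, signature included, in the next request. The field name thoughtSignature and the simplified response shape below are assumptions for illustration, not real API output.

```python
# Sketch of carrying a thought signature across turns in a multi-step
# (agentic) workflow. The thoughtSignature field and response shapes
# here are simplified stand-ins, not verified API structures.

def append_model_turn(history: list, model_parts: list) -> list:
    """Append the model's turn verbatim, including any thoughtSignature,
    so the next request preserves the reasoning chain."""
    history.append({"role": "model", "parts": model_parts})
    return history

history = [{"role": "user",
            "parts": [{"text": "Plan a 3-step data migration."}]}]

# Simulated model reply: a tool call carrying an encrypted signature.
model_parts = [{
    "functionCall": {"name": "list_tables", "args": {}},
    "thoughtSignature": "<opaque-encrypted-blob>",  # returned by the API
}]
append_model_turn(history, model_parts)

# The follow-up request keeps the signature untouched in the history,
# letting the model resume its prior chain of reasoning.
history.append({"role": "user", "parts": [
    {"functionResponse": {"name": "list_tables",
                          "response": {"tables": ["users", "orders"]}}}
]})
```

The key point is that the signature is treated as an opaque blob: the client never inspects or modifies it, only round-trips it.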
Developers can now combine structured outputs with Gemini’s integrated tools, such as Grounding with Google Search and URL Context. This capability allows for dynamic web data fetching and extraction in JSON format, supporting real-time information retrieval. Notably, Google has adjusted the pricing for Grounding with Google Search from a flat $35 per 1,000 prompts to a usage-based rate of $14 per 1,000 search queries, making it more cost-effective for agentic workflows.
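A combined request of this kind might look like the sketch below: a built-in search tool is enabled alongside a JSON response schema, so the model both fetches live web data and returns it in a machine-readable shape. The tool and schema field names follow the general shape of the Gemini REST API, but treat their exact spelling as an assumption.

```python
# Sketch of combining structured output with Grounding with Google
# Search, as described above. Field names (google_search,
# responseJsonSchema) are assumptions modeled on the Gemini REST API.

def grounded_json_request(question: str, schema: dict) -> dict:
    """Build a request body asking the model to consult web search
    and reply as JSON matching `schema`."""
    return {
        "contents": [{"role": "user", "parts": [{"text": question}]}],
        "tools": [{"google_search": {}}],
        "generationConfig": {
            "responseMimeType": "application/json",
            "responseJsonSchema": schema,
        },
    }

# Schema for a real-time retrieval task: extract a headline and its
# source from grounded search results.
schema = {
    "type": "object",
    "properties": {
        "headline": {"type": "string"},
        "source_url": {"type": "string"},
    },
    "required": ["headline", "source_url"],
}

req = grounded_json_request("What is the latest Gemini model release?",
                            schema)
```

Because the new pricing is metered per search query rather than per prompt, an agent that only occasionally triggers a search pays only for the queries it actually issues.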
These API updates reflect Google’s commitment to enhancing Gemini 3’s reasoning, media handling, and web interaction features, empowering developers to build more sophisticated AI applications.