Microsoft Adds Anthropic’s Claude to Its AI Toolbox for Better Flexibility
Microsoft is giving users more model options inside its Microsoft 365 Copilot suite. Alongside OpenAI’s popular GPT models, users can now choose Anthropic’s Claude models, including Claude Sonnet 4 and Claude Opus 4.1. The move gives businesses more control and flexibility when building AI-driven workflows and research tools.
This update means that users can switch between OpenAI and Anthropic models in certain Microsoft tools like the Researcher agent and Copilot Studio. The Researcher tool helps users develop detailed strategies, analyze trends, and generate reports by reasoning across web data, third-party sources, and internal company content like emails and files. With the new support, Researcher can now run on either OpenAI’s or Anthropic’s models, depending on what’s best for the task.
What Does This Mean for Businesses?
Microsoft describes Researcher as a reasoning agent that can work through complex, multi-step research tasks. It’s designed to help teams analyze market trends or compile reports by connecting data from many sources. Now, with Anthropic support, users can choose the model that fits their specific needs. For example, Claude models are known for deep reasoning and trustworthiness, while GPT models are faster and better at drafting content.
In Copilot Studio, enterprises can create custom AI agents powered by Claude or GPT. These agents can handle tasks like workflow automation or complex reasoning. The system supports mixing models from different vendors, including Anthropic, OpenAI, and others from Microsoft’s Azure AI Model Catalog. This flexibility allows companies to tailor AI tools to different types of workloads.
Microsoft’s goal isn’t to replace GPT models but to complement them. Claude models excel in tasks that need detailed reasoning or longer context, while GPT models are faster at generating drafts. The idea is to match the right model to the right job, which can improve results and efficiency.
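The "right model for the right job" idea can be sketched as a simple routing table. The Claude and GPT model names below reflect the models mentioned in the article, but the task categories and routing function are hypothetical, for illustration only; they are not part of any Microsoft API.

```python
# Hypothetical sketch: route a task category to the model best suited for it.
# Model names come from the article; the categories and routing logic
# are illustrative assumptions, not a real Copilot Studio API.

MODEL_ROUTES = {
    "deep_research": "claude-opus-4.1",   # long-context, multi-step reasoning
    "trend_analysis": "claude-sonnet-4",  # detailed reasoning at lower cost
    "draft_content": "gpt-4o",            # fast first drafts
}

DEFAULT_MODEL = "gpt-4o"

def pick_model(task_type: str) -> str:
    """Return the model mapped to a task type, falling back to a default."""
    return MODEL_ROUTES.get(task_type, DEFAULT_MODEL)

print(pick_model("deep_research"))  # claude-opus-4.1
print(pick_model("unknown_task"))   # gpt-4o
```

Even this toy version shows the core design choice: model selection lives in one place, so swapping vendors for a workload is a configuration change rather than a code rewrite.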
Why the Shift Toward Multiple Models?
The move to support Anthropic’s Claude models is part of Microsoft’s broader strategy to diversify its AI partnerships. OpenAI has suffered outages, such as the ChatGPT disruption earlier this year that temporarily cut off access to GPT-based tools. During those outages, Microsoft’s Copilot and Anthropic’s Claude kept working, which highlighted the importance of having backup options in AI systems.
Having multiple models from different providers helps reduce risk. If one system experiences issues, another can take over. Microsoft’s approach shows a clear shift toward multi-model strategies. It’s a way to ensure better resilience and avoid over-reliance on a single vendor.
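The failover pattern described above can be sketched in a few lines. The provider functions here are stand-ins invented for illustration; a real deployment would wrap actual OpenAI and Anthropic client calls behind them.

```python
# Hypothetical failover sketch: try providers in order, fall back on error.
# The provider callables are stand-ins, not real vendor SDK calls.
from typing import Callable, Sequence

def call_with_failover(providers: Sequence[Callable[[str], str]], prompt: str) -> str:
    """Try each provider in turn; return the first successful response."""
    last_error = None
    for provider in providers:
        try:
            return provider(prompt)
        except Exception as err:  # a real system would catch narrower errors
            last_error = err
    raise RuntimeError("all providers failed") from last_error

# Stand-in providers for illustration:
def gpt_provider(prompt: str) -> str:
    raise TimeoutError("GPT endpoint unavailable")  # simulate an outage

def claude_provider(prompt: str) -> str:
    return f"claude answer to: {prompt}"

print(call_with_failover([gpt_provider, claude_provider], "summarize Q3 trends"))
```

If the first provider fails, the request transparently moves to the next, which is exactly the resilience benefit a multi-model setup buys.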
Market experts see this as a smart move. Microsoft is responding to increased competition and the need for enterprises to have more flexible, reliable AI options. By supporting models hosted outside Microsoft’s own cloud—like Claude on AWS—they are also navigating the challenges of cross-cloud operations.
Challenges and Considerations of Cross-Cloud AI
Running Claude on AWS introduces some technical and governance challenges. Because Claude is hosted outside of Microsoft’s Azure cloud, every time it’s used, data must cross cloud borders. This can lead to issues with latency, costs for data egress, and compliance concerns about data sovereignty.
Experts warn that enterprises need to plan carefully. Routing requests to the right cloud region, caching data to reduce repeated calls, and setting up strict monitoring are essential. Enterprises should also document where models are used, enforce security guardrails, and ensure requests are tied to specific users and regions.
Gogia, an analyst, emphasizes that this isn’t as simple as flipping a switch. Companies should treat multi-model setups as multi-cloud environments. They need to prepare for increased latency when data leaves Azure, and be ready to answer questions from compliance teams about data boundaries. Good practices include pinning Claude usage to the nearest AWS region, pre-configuring firewall rules, and logging all interactions for audit purposes.
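The practices Gogia describes, pinning usage to a nearby region, caching to limit cross-cloud calls, and logging every interaction for audits, can be combined in one governance wrapper. Everything below is a hypothetical sketch: the region name, the `send` stub, and the function names are illustrative assumptions, not a real cross-cloud SDK.

```python
# Hypothetical governance wrapper for cross-cloud model calls:
# pin Claude traffic to a chosen AWS region, cache repeated prompts,
# and log every call with user and region for compliance audits.
# Region name and the send() stub are illustrative assumptions.
import functools
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("model-audit")

PINNED_REGION = "eu-west-1"  # nearest AWS region to the tenant (assumption)

def send(prompt: str, region: str) -> str:
    # Stand-in for the real cross-cloud request to Claude on AWS.
    return f"[{region}] response to: {prompt}"

@functools.lru_cache(maxsize=1024)
def cached_call(prompt: str) -> str:
    """Cache identical prompts so data crosses the cloud border only once."""
    return send(prompt, region=PINNED_REGION)

def governed_call(user: str, prompt: str) -> str:
    """Tie each request to a user and region, and log it for audits."""
    log.info("user=%s region=%s prompt_chars=%d", user, PINNED_REGION, len(prompt))
    return cached_call(prompt)

print(governed_call("alice@example.com", "compile market report"))
```

The cache addresses egress cost and latency, the pinned region addresses data-sovereignty questions, and the audit log gives compliance teams the per-user, per-region trail they ask for.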
Overall, Microsoft’s move to support Anthropic’s Claude models alongside GPT models aims to give enterprises more resilience, flexibility, and control. But with this comes the need for careful planning around security, latency, and governance to truly benefit from a multi-model AI environment.