What’s next for Azure containers?
The second part of Azure CTO Mark Russinovich's "Azure Innovations" presentation at Ignite 2025 covered software, taking a deeper look at the platforms he expects developers to use to build cloud-native applications.
Azure was born as a platform-as-a-service (PaaS) environment, providing the plumbing for your applications so you didn't have to think about infrastructure: it was all automated, hidden behind APIs, and configured through a web portal. Things have evolved over the years, and Azure now supports virtual infrastructure, a command line for managing both applications and the platform itself, and its own infrastructure-as-code (IaC) development tools and language.
Despite all this, the vision of a serverless Azure has been a key driver for many of its innovations, from the on-demand compute of Azure Functions to the massive data environment of Fabric to the hosted, scalable orchestration platform that underpins the microservice-focused Azure Container Instances. This vision shapes many of the new tools and services Russinovich talked about, delivering a platform that lets developers concentrate on code.
That approach doesn’t stop Microsoft from working on new hardware and infrastructure features for Azure; they remain essential for many workloads and are key to supporting the new cloud-native models. It’s important to understand what underlies the abstractions we’re using, as it defines the limits of what we can do with code.
Serverless containers at scale
One of the key serverless technologies in Azure is Azure Container Instances. ACI is perhaps best thought of as a way to get many of the benefits of Kubernetes without having to manage and run a Kubernetes environment. It hosts and manages containers for you, handling scaling and container life cycles. In his infrastructure presentation, Russinovich talked about how new direct virtualization tools made it possible to give ACI-hosted containers access to Azure hardware such as GPUs.
Microsoft is making a big bet on ACI, using it to host many elements of key services across Azure and Microsoft 365. These include Excel's Python support, the Copilot Actions agents, and Azure's deployment and automation services, with many more under development or in the process of migrating to the platform. Russinovich describes it as "the plan of record for Microsoft's internal infrastructure."
ACI development isn't only happening at the infrastructure level; it's also happening in the orchestration services that manage containers. One key new feature is a tool called NGroups, which lets you define fleets of a standard container image that can be scaled up and burst out on demand. This model supports the service's rapid-scaling standby pools, which can be deployed in seconds with any necessary customization applied.
With ACI needing to support multitenant operations, there’s a requirement for fair managed resource sharing between containers. Otherwise it would be easy for a hostile container to quickly take all the resources on a server. However, there’s still a need for containers within a subscription to be able to share resources as necessary, a model that Russinovich calls “resource oversubscription.”
This is related to a new feature that builds on the direct virtualization capabilities being added to Azure: stretchable instances. Here you define minimum and maximum CPU and memory allocations, and a container adjusts between them as load changes. Where traditionally containers have scaled out, stretchable instances can also scale up and down within the available headroom on a server.
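Whatever form the stretchable API finally takes, the existing ACI resource model already expresses the min/max idea: requests define the guaranteed floor and limits define the ceiling a container can grow into. Here's a minimal sketch using the azure-mgmt-containerinstance Python SDK; the subscription, resource group, image, and sizing values are placeholders.

```python
# A minimal sketch, not the new "stretchable instances" API itself: the
# existing ACI SDK already lets you state a guaranteed floor (requests)
# and a ceiling (limits) for CPU and memory. Values are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerinstance import ContainerInstanceManagementClient
from azure.mgmt.containerinstance.models import (
    Container,
    ContainerGroup,
    OperatingSystemTypes,
    ResourceLimits,
    ResourceRequests,
    ResourceRequirements,
)

client = ContainerInstanceManagementClient(
    DefaultAzureCredential(), "<subscription-id>"
)

worker = Container(
    name="worker",
    image="mcr.microsoft.com/azuredocs/aci-helloworld",
    resources=ResourceRequirements(
        # Minimum the container is always guaranteed.
        requests=ResourceRequests(cpu=1.0, memory_in_gb=2.0),
        # Maximum it can stretch into when the host has headroom.
        limits=ResourceLimits(cpu=2.0, memory_in_gb=4.0),
    ),
)

group = ContainerGroup(
    location="westus2",
    os_type=OperatingSystemTypes.LINUX,
    containers=[worker],
)

client.container_groups.begin_create_or_update(
    "my-resource-group", "stretch-demo", group
).result()
```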
Improving container networking with managed Cilium
Container networking, another area I've touched on in the past, is getting upgrades, with improvements to Azure's support for eBPF and specifically for the Cilium network observability and security tools. Extended Berkeley Packet Filter (eBPF) programs let you put probes and rules into the kernel securely, without affecting operations, on both Linux and Windows. It's a powerful way of managing networking in Kubernetes, where Cilium has become an important component of the security stack.
Until now, even though Azure has had deep eBPF support, you’ve had to bring your own eBPF tools and manage them yourself, which does require expertise to run at scale. Not everyone is a Kubernetes platform engineer, and with tools like AKS providing a managed environment for cloud-native applications, having a managed eBPF environment is an important upgrade. The new Azure Managed Cilium tool provides a quick way of getting that benefit in your applications, using it for host routing and significantly reducing the overhead that comes with iptables-based networking.
You'll see the biggest improvements in pod-to-pod routing with small message sizes. This shouldn't be a surprise: the smaller the message, the bigger the relative overhead of iptables routing. Understanding how this affects your applications can help you design better messaging; when small messages are delivered three times faster, it's worth optimizing applications to take advantage of the boost.
With this integration, Cilium becomes the default way to manage container networking on an AKS pod host (38% faster than a bring-your-own install), working as part of the familiar Advanced Container Networking Services tools. On top of that, Microsoft will keep your Cilium instance up to date and provide support that a bring-your-own instance won't get.
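For a sense of what enabling the managed dataplane looks like programmatically, here's a hedged sketch using the azure-mgmt-containerservice Python SDK, which in recent versions exposes the Cilium dataplane as a property on the cluster's network profile. Exact attribute names depend on your SDK and API version, and the resource names and VM sizes are placeholders.

```python
# A hedged sketch of creating an AKS cluster with the Cilium (eBPF)
# dataplane via azure-mgmt-containerservice. The network_dataplane
# property only exists in recent API versions; values are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerservice import ContainerServiceClient
from azure.mgmt.containerservice.models import (
    ContainerServiceNetworkProfile,
    ManagedCluster,
    ManagedClusterAgentPoolProfile,
    ManagedClusterIdentity,
)

client = ContainerServiceClient(DefaultAzureCredential(), "<subscription-id>")

cluster = ManagedCluster(
    location="westus2",
    dns_prefix="cilium-demo",
    identity=ManagedClusterIdentity(type="SystemAssigned"),
    agent_pool_profiles=[
        ManagedClusterAgentPoolProfile(
            name="system", mode="System", count=3, vm_size="Standard_D4s_v5"
        )
    ],
    network_profile=ContainerServiceNetworkProfile(
        network_plugin="azure",
        network_dataplane="cilium",  # eBPF-based routing instead of iptables
    ),
)

client.managed_clusters.begin_create_or_update(
    "my-resource-group", "cilium-demo", cluster
).result()
```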
Even though you are unlikely to interact directly with Azure’s hardware, many of the platform innovations Russinovich talks about depend on the infrastructure changes he discussed in a previous Ignite session, especially on things like the network accelerator in Azure Boost.
This underpins upgrades to Azure Container Storage, working with both local NVMe storage and remote storage using Azure's storage services. One upgrade here is a distributed cache that lets a Kubernetes cluster share data from local storage rather than downloading it to every pod each time it's needed, an increasingly common problem for applications that spin up new pods and nodes to handle inferencing. With the cache, a download that might take minutes becomes a local file access that takes seconds.
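From inside Kubernetes, that cache is consumed like any other storage: you request a volume from an Azure Container Storage storage class and mount it into the pods that need the data. The following is a rough sketch using the official kubernetes Python client; the storage class name and access mode are assumptions that depend on how the service is configured in your cluster.

```python
# A rough sketch, assuming Azure Container Storage is installed in the AKS
# cluster and exposes a storage class. The storage class name and access
# mode below are placeholders that depend on your configuration.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside the cluster

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "model-cache"},
    "spec": {
        "accessModes": ["ReadWriteMany"],       # assumed shared access
        "storageClassName": "acstor-example",   # placeholder class name
        "resources": {"requests": {"storage": "100Gi"}},
    },
}

# Pods that mount this claim read from the shared cache instead of
# re-downloading the data on every new pod or node.
client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```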
Securing containers at an OS level
It’s important to remember that Azure (and other hyperscalers) isn’t in the business of giving users their own servers; its model uses virtual machines and multiple tenants to get as much use out of its hardware as possible. That approach demands a deep focus on security, hardening images and using isolation to separate virtual infrastructures. In the serverless container world, especially with the new direct virtualization features, we need to lock down even more than in a VM, as our ACI-hosted containers are now sharing the same host OS.
Declarative policies let Azure lock down container features to reduce the risk of compromised container images affecting other users. At the same time, Microsoft is working to secure the underlying host OS, which for ACI is Linux. SELinux allows Microsoft to lock that image down, providing an immutable host OS. However, those SELinux policies don't cross the boundary into containers, leaving their userspace vulnerable.
Microsoft has been adding new capabilities to Linux that can verify the code running in a container. This feature, Integrity Policy Enforcement, is now part of what Microsoft calls OS Guard, alongside the Linux kernel's dm-verity. Device-mapper verity provides a verifiable hash of a container image and the layers that go into composing it, from the OS image all the way up to your binaries. This allows you to sign all the components of a container and use OS Guard to block containers that aren't signed and trusted.
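To make the idea concrete, here's a simplified illustration in Python of what a dm-verity-style check involves: hash each layer in fixed-size blocks, roll the digests up into a single root hash, and compare it against a signed, trusted value before a container is allowed to run. This is a conceptual sketch only, not the kernel's actual on-disk hash-tree format, and the layer file names are hypothetical.

```python
# Conceptual sketch of a dm-verity-style integrity check; not the kernel's
# real Merkle-tree format. Layer file names below are hypothetical.
import hashlib
from pathlib import Path

BLOCK_SIZE = 4096  # dm-verity hashes fixed-size blocks (typically 4 KiB)

def layer_digest(layer_path: Path) -> bytes:
    """Hash a layer block by block, then hash the concatenated block digests."""
    blocks = []
    with layer_path.open("rb") as f:
        while chunk := f.read(BLOCK_SIZE):
            blocks.append(hashlib.sha256(chunk).digest())
    return hashlib.sha256(b"".join(blocks)).digest()

def image_root_hash(layer_paths: list[Path]) -> str:
    """Combine per-layer digests, from the OS image up to the app binaries."""
    return hashlib.sha256(
        b"".join(layer_digest(p) for p in layer_paths)
    ).hexdigest()

def verify(layer_paths: list[Path], trusted_root: str) -> bool:
    """An OS Guard-style policy would refuse to run the container on mismatch."""
    return image_root_hash(layer_paths) == trusted_root

if __name__ == "__main__":
    layers = [Path("base-os.tar"), Path("runtime.tar"), Path("app.tar")]
    print("root hash:", image_root_hash(layers))
```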
Delivering secure hot patches
Having a policy-driven approach to security helps quickly remediate issues. If, say, a common container layer has a vulnerability, you can build and verify a patch layer and deploy it quickly. There's no need to patch everything in the container, only the relevant components. Microsoft has been doing this for OS features for some time now with its open source Project Copacetic, and it's extending the process to common runtimes and libraries, building patches with updated packages for tools like Python.
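In practice this is a two-step job with the open source tooling: scan an image for known OS-level vulnerabilities, then have copa assemble a patch layer containing only the updated packages. Here's a rough sketch driven from Python, assuming the trivy and copa CLIs are installed; the flags follow the projects' documented quick-start usage, and the image names and tags are placeholders.

```python
# A rough sketch wrapping the Copacetic ("copa") and Trivy CLIs from Python.
# Assumes both tools are installed; image names and tags are placeholders,
# and newer Trivy releases may prefer --pkg-types over --vuln-type.
import subprocess

IMAGE = "docker.io/library/nginx:1.21.6"   # hypothetical image with known CVEs
REPORT = "nginx-report.json"
PATCHED_TAG = "1.21.6-patched"

# 1. Scan the image and write a JSON vulnerability report.
subprocess.run(
    ["trivy", "image", "--vuln-type", "os", "--ignore-unfixed",
     "--format", "json", "-o", REPORT, IMAGE],
    check=True,
)

# 2. Build a patch layer containing only the updated packages; no full rebuild.
subprocess.run(
    ["copa", "patch", "-i", IMAGE, "-r", REPORT, "-t", PATCHED_TAG],
    check=True,
)
```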
As this approach is open source, Microsoft is working to upstream its container integrity work into the Linux kernel. You can think of the patching process as a way to deploy hot fixes to containers between new immutable image builds, quickly replacing problematic code and keeping your applications running while you build, test, and verify your next release. Russinovich describes it as rolling out "a hot fix in a few hours instead of days."
Providing the tools needed to secure application delivery is only part of Microsoft's move to define containers as the standard package for Azure applications. Providing better ways to scale fleets of containers is another key requirement, as is improved networking. Russinovich's focus on containers makes sense, as they allow you to wrap all the required components of a service and run it securely at scale.
With new software services building on improvements to Azure’s infrastructure, it’s clear that both sides of the Azure platform are working together to deliver the big picture, one where we write code, package it, and (beyond some basic policy-driven configuration) let Azure do the rest of the work for us. This isn’t something Microsoft will deliver overnight, but it’s a future that’s well on its way—one we need to get ready to use.
Original Link:https://www.infoworld.com/article/4112169/whats-next-for-azure-containers.html
Originally Posted: Thu, 01 Jan 2026 09:00:00 +0000