Smoother Kubernetes sailing with AKS Automatic

News | September 18, 2025 | Artifice Prime

Kubernetes is the default platform for cloud-native applications, but managing Kubernetes at scale isn’t trivial. New tools like Headlamp aim to reduce the overhead of managing and deploying Kubernetes applications, but it’s still easy to make mistakes that cause significant downtime. A recent Komodor survey of enterprise Kubernetes usage found that 79% of incidents in running environments are caused by system changes. On top of that, these outages take close to an hour to detect and resolve.

The result is significant expense, with major downtime costing about $1 million an hour. How can we keep those costs, and the loss of customer goodwill associated with outages, to a minimum?

Kubernetes tax avoidance

Microsoft calls these complexities “the Kubernetes tax”: the overhead that comes with running a platform built around a complex set of moving parts. On one hand, there’s an orchestrator that manages containers; on the other, a set of monitoring and observability tools; on yet another, the combination of operators and service mesh components that control networking and security. To get the most from Kubernetes, you need to be an expert or at least employ one.

Options for avoiding much of this tax include using a managed platform like Azure Kubernetes Service (AKS) or letting a fully managed service like Azure Container Instances (ACI) do all the work so you can focus purely on your workloads. Both build on the lessons Microsoft has learned from running its own cloud-native applications at scale and supporting millions of customers around the world.

ACI lets you completely ignore the underlying Kubernetes platform, leaving Microsoft to manage its configuration so that all you need to do is provide the containers with your code. AKS needs a lot more Kubernetes knowledge, as you still need to configure and manage your applications, as well as choose from a menu of features and options. Both are useful, but the path between the two can be complex. If you outgrow ACI, there isn’t a beginner-friendly route to using AKS.

We need something that sits between the two platforms and gives us the guardrails and guides needed to build a modern Kubernetes application: something that supports features that aren’t available in ACI and that automatically configures the key elements of an AKS implementation.

Introducing AKS Automatic

Microsoft recently announced a promising solution in the shape of AKS Automatic. Designed to provide a ready-to-run Kubernetes platform that offers support for best practices, it’s a way to quickly stand up a Kubernetes environment that’s both secure and reliable, avoiding the configuration issues that can quickly lead to expensive and time-consuming outages. It’s now generally available and ready for use.
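As a rough illustration, here’s a minimal sketch of standing up an AKS Automatic cluster with the Azure SDK for Python. The subscription, resource group, cluster name, and region are placeholders, and it assumes a recent azure-mgmt-containerservice package whose API version exposes the Automatic SKU; the Azure CLI’s az aks create command offers an equivalent path via its --sku option.

```python
# Minimal sketch: create an AKS Automatic cluster with the Azure SDK for Python.
# Assumes azure-identity and a recent azure-mgmt-containerservice are installed
# and that the installed API version exposes the "Automatic" managed cluster SKU.
# Subscription, resource group, cluster name, and region below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerservice import ContainerServiceClient
from azure.mgmt.containerservice.models import (
    ManagedCluster,
    ManagedClusterSKU,
    ManagedClusterAgentPoolProfile,
    ManagedClusterIdentity,
)

subscription_id = "<your-subscription-id>"
client = ContainerServiceClient(DefaultAzureCredential(), subscription_id)

cluster = ManagedCluster(
    location="eastus",
    dns_prefix="aks-automatic-demo",
    # The Automatic SKU is what opts the cluster into Microsoft's managed defaults.
    sku=ManagedClusterSKU(name="Automatic", tier="Standard"),
    # A small system pool; workload capacity is assumed to be handled by
    # node auto-provisioning, so no VM size is specified here.
    agent_pool_profiles=[
        ManagedClusterAgentPoolProfile(name="systempool", count=3, mode="System")
    ],
    identity=ManagedClusterIdentity(type="SystemAssigned"),
)

poller = client.managed_clusters.begin_create_or_update(
    "my-resource-group", "my-aks-automatic", cluster
)
print(poller.result().provisioning_state)
```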

AKS Automatic is intended to be opinionated. Your application will run on a cluster built according to Microsoft’s experience. Once you set up an AKS Automatic cluster, you’ll find a number of key services and features configured and ready to go, including several monitoring features: Prometheus is set up to capture cluster metrics, logs are collected in Container Insights, and a set of Grafana dashboards visualizes your cluster operations.

Other preconfigured features help manage nodes, clusters, and scaling, including defaulting to Azure Linux as the node operating system and using automated repair tools to keep your applications healthy. The service upgrades automatically, so you’re always using a current Kubernetes release and the latest security updates, while breaking-change detection ensures that an upgrade doesn’t deploy if there are significant changes to the Kubernetes APIs.

It’s important to note that the underlying platform is built on open source technologies, with support for existing Kubernetes APIs and extensions. It comes preloaded with features that extend how Kubernetes scales, including Kubernetes-based event-driven autoscaling (KEDA) and Karpenter. These let you scale workloads in response to events and provision just-in-time nodes to optimize capacity against existing resources. Like Kubernetes, they’re open source and provide a foundation for improving the economics of a Kubernetes environment. For example, Karpenter delivers node auto-provisioning out of the box, choosing the best VM configuration for your workload.
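To make the KEDA side of that concrete, here’s a minimal sketch that registers a ScaledObject for a hypothetical orders-api deployment through the official Kubernetes Python client. The deployment name, namespace, and thresholds are placeholders, and it assumes kubeconfig access to the cluster and that the KEDA custom resource definitions (keda.sh/v1alpha1) are installed, as AKS Automatic preconfigures.

```python
# Minimal sketch: define a KEDA ScaledObject for a hypothetical "orders-api"
# deployment using the official Kubernetes Python client. Assumes kubeconfig
# access to the cluster and that the KEDA CRDs (keda.sh/v1alpha1) are present.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

scaled_object = {
    "apiVersion": "keda.sh/v1alpha1",
    "kind": "ScaledObject",
    "metadata": {"name": "orders-api-scaler", "namespace": "default"},
    "spec": {
        "scaleTargetRef": {"name": "orders-api"},  # the deployment to scale
        "minReplicaCount": 1,
        "maxReplicaCount": 20,
        # A simple CPU trigger; event-source triggers (queues, streams, etc.)
        # follow the same shape with different metadata.
        "triggers": [
            {"type": "cpu", "metricType": "Utilization", "metadata": {"value": "60"}}
        ],
    },
}

api = client.CustomObjectsApi()
api.create_namespaced_custom_object(
    group="keda.sh",
    version="v1alpha1",
    namespace="default",
    plural="scaledobjects",
    body=scaled_object,
)
```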

Most instances will use a standard deployment, though if your application needs access to GPUs (for AI inferencing, for example), then AKS Automatic will choose appropriate VM types and will even configure the necessary drivers and operators.

The right Kubernetes for the job

Perhaps the most important aspect of AKS Automatic is that it’s designed to grow with you and with your application. You might be a lean startup without a platform engineering team, but you can now use existing developer resources to stand up a Kubernetes environment and connect it to your GitHub Actions-based continuous integration/continuous delivery (CI/CD) pipeline. Like any piece of modern infrastructure, AKS Automatic keeps itself up to date, letting you concentrate on your code.

Prior to launch, I spoke to Brendan Burns, corporate vice president, Azure OSS and Cloud Native, who was one of the original Kubernetes developers and now leads much of Microsoft’s Kubernetes program. Burns notes:

[AKS Automatic is about] letting us take over more and more automation and allow teams to focus on their applications. The truth is that we have developed a ton of core competencies around running Kubernetes at scale and managing Kubernetes at scale, but, obviously, the customer wants to run their application, and they are the subject matter experts in their applications.

Building on real-world experience

It’s not only Microsoft’s own experience with Kubernetes that helped build AKS Automatic, but also the lessons learned from its customers. Azure’s support organization can see what causes problems, while Microsoft’s consultancy arm has helped build and configure environments for customers of all sizes. As Burns points out:

We’ve got years and years of customer support tickets showing us how people can effectively make problems for themselves. We’re taking that knowledge and that learning and encoding it into policy-based best practices. It actually means that we can prevent developers from making those same mistakes that other people have made in the past. And I think that’s pretty cool too, right? I’ve talked a lot about the open ecosystem, but this is another version of that where we’re actually learning from everybody else’s experience.

This approach means Microsoft can give AKS Automatic the settings and features that work for most users and applications. With them, you should be able to stand up a Kubernetes environment and deploy code from an Open Container Initiative (OCI)-compliant registry, using familiar tools like Helm.
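As a rough sketch of that workflow, the snippet below shells out to Helm to install a chart directly from an OCI-compliant registry such as Azure Container Registry. The registry, chart, release name, and version are hypothetical, and it assumes Helm 3.8 or later (where OCI support is generally available) and a kubeconfig pointing at the cluster.

```python
# Minimal sketch: deploy a chart from an OCI-compliant registry by shelling
# out to Helm. Registry, chart, release name, and version are placeholders;
# assumes Helm 3.8+ on the PATH and cluster credentials in kubeconfig.
import subprocess

release = "orders-api"
chart_ref = "oci://myregistry.azurecr.io/helm/orders-api"  # hypothetical chart

subprocess.run(
    [
        "helm", "upgrade", "--install", release, chart_ref,
        "--version", "1.0.0",
        "--namespace", "default",
        "--create-namespace",
    ],
    check=True,
)
```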

A Kubernetes that grows with you

As your development team and your applications grow, you can start to evolve a platform engineering organization and add Kubernetes extensions without affecting your platform foundation, using Kubernetes’ existing devops and observability tools.

Changes in the way we build and run applications also need to be considered. As Burns says:

What we’re seeing is two things. One, there is this acceleration of the application development process, which means people are going from small scale to production even faster and [have] even less time to figure out the right ways to configure upgrades or autoscale or anything else like that. But they also want access to the Kubernetes open source ecosystem because that ecosystem is the place where AI is being innovated. [Two, you’re] using AI to enhance your productivity as a developer, so therefore, there are fewer people with you. In your startup at the beginning, you have even fewer operations people. And so, AKS Automatic fills a really strong need that blends the ease of use and automation of a platform but with the open source ecosystem.

Komodor’s survey makes it clear that Kubernetes is still hard to manage, especially for beginners. Giving experts the task of keeping it secure and up to date makes a lot of sense, especially for startups and organizations that are new to cloud-native development. AKS Automatic is a good middle ground between AKS and ACI: you can add customizations without incurring the full management overhead, with a configuration that’s right for your code.

Original Link:https://www.infoworld.com/article/4058764/smoother-kubernetes-sailing-with-aks-automatic.html
Originally Posted: Thu, 18 Sep 2025 09:00:00 +0000


Artifice Prime

Artifice Prime is an AI enthusiast with over 25 years of experience as a Linux Sys Admin. They have an interest in Artificial Intelligence, its use as a tool to further humankind, and its impact on society.
