Cloud native explained: How to build scalable, resilient applications

News | December 19, 2025 | Artifice Prime

What is cloud native?

The term “cloud-native computing” encompasses the modern approach to building and running software applications that exploit the flexibility, scalability, and resilience of cloud computing. The phrase is a catch-all that encompasses not just the specific architecture choices and environments used to build applications for the public cloud, but also the software engineering techniques and philosophies used by cloud developers.

The Cloud Native Computing Foundation (CNCF) is an open source organization that hosts many important cloud-related projects and helps set the tone for the world of cloud development. The CNCF offers its own definition of cloud native:

Cloud native practices empower organizations to develop, build, and deploy workloads in computing environments (public, private, hybrid cloud) to meet their organizational needs at scale in a programmatic and repeatable manner. It is characterized by loosely coupled systems that interoperate in a manner that is secure, resilient, manageable, sustainable, and observable.

Cloud native technologies and architectures typically consist of some combination of containers, service meshes, multi-tenancy, microservices, immutable infrastructure, serverless, and declarative APIs — this list is not exhaustive.

This definition is a good start, but as cloud infrastructure becomes ubiquitous, the cloud-native world is beginning to spread beyond the core of this definition. We’ll explore that evolution as well, and look into the near future of cloud-native computing.

Cloud native architectural principles

Let’s start by exploring the pillars of cloud-native architecture. Many of these technologies and techniques were considered innovative and even revolutionary when they hit the market over the past few decades, but now have become widely accepted across the software development landscape.

Microservices. One of the major cultural shifts that made cloud-native computing possible was the move from huge, monolithic applications to microservices: small, loosely coupled, and independently deployable components that work together to form a cloud-native application. These microservices can be scaled across cloud environments, though (as we’ll see in a moment) this makes systems more complex.
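To make the idea concrete, a microservice can be as small as a single HTTP endpoint. This stdlib-only Python sketch (the service name and `/health` route are illustrative, not from the article) shows one independently deployable unit:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    """A minimal microservice: one small, independently deployable endpoint."""

    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        # Suppress per-request logging noise
        pass

def serve(port=8080):
    """Start the service on a background thread and return the server handle."""
    server = HTTPServer(("127.0.0.1", port), HealthHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

In a real system, dozens of services like this one would each own a narrow responsibility and be deployed and scaled independently.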

Containers and orchestration. In cloud-native architectures, individual microservices are executed inside containers — lightweight, portable virtual execution environments that can run on a variety of servers and cloud platforms. Containers insulate developers from having to worry about the underlying machines on which their code will execute. That is, all they have to do is write to the container environment.
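As a small illustration, the container image for a service like the one above might be described by a Dockerfile along these lines (the base image tag and file names are placeholders):

```dockerfile
# Build a small image for a single microservice
FROM python:3.12-slim
WORKDIR /app
COPY service.py .
EXPOSE 8080
# The container, not the host machine, defines the runtime environment
CMD ["python", "service.py"]
```

The same image runs unchanged on a laptop, an on-prem server, or any cloud platform that can execute containers.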

Getting the containers to run properly and communicate with one another is where the complexity of cloud native computing starts to emerge. Initially, containers were created and managed by relatively simple platforms, the most common of which was Docker. But as cloud-native applications got more complex, container orchestration platforms that augmented Docker’s functionality emerged, such as Kubernetes, which allows you to deploy and manage multi-container applications at scale. Kubernetes is critical to cloud native computing as we know it — it’s worth noting that the CNCF was set up as a spinoff of the Linux Foundation on the same day that Kubernetes 1.0 was announced — and adhering to Kubernetes best practices is an important key to cloud native success. 
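To sketch what Kubernetes orchestration looks like in practice, here is a minimal Deployment manifest (service name and image are placeholders) that asks Kubernetes to keep three replicas of a containerized service running:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-service
spec:
  replicas: 3                # Kubernetes keeps three copies running
  selector:
    matchLabels:
      app: hello-service
  template:
    metadata:
      labels:
        app: hello-service
    spec:
      containers:
        - name: hello-service
          image: example.com/hello-service:1.0   # placeholder image
          ports:
            - containerPort: 8080
```

The manifest is declarative: you describe the desired state, and Kubernetes continuously reconciles reality toward it, restarting or rescheduling containers as needed.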

Open standards and APIs. The fact that containers and cloud platforms are largely defined by open standards and open source technologies is the secret sauce that makes all this modularity and orchestration possible, and standardized and documented APIs offer the means of communication between distributed components of a larger application. In theory, anyway, this standardization means that every component should be able to communicate with other components of an application without knowing about their inner workings, or about the inner workings of the various platform layers on which everything operates.

DevOps, agile methodologies, and infrastructure as code. Because cloud-native applications exist as a series of small, discrete units of functionality, cloud-native teams can build and update them using agile philosophies like DevOps, which promotes rapid, iterative CI/CD development. This enables teams to deliver business value more quickly and more reliably.

The virtualized nature of cloud environments also makes them great candidates for infrastructure as code (IaC), a practice in which teams use tools like Terraform, Pulumi, and AWS CloudFormation to manage infrastructure declaratively and version those declarations just like application code. IaC boosts automation, repeatability, and resilience across environments—all big advantages in the cloud world. IaC also goes hand-in-hand with the concept of immutable infrastructure—the idea that, once deployed, infrastructure-level entities like virtual machines, containers, or network appliances don’t change, which makes them easier to manage and secure. IaC stores declarative configuration code in version control, which creates an audit log of any changes.
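As a small example of declarative IaC, a Terraform configuration describes desired state rather than a sequence of steps. This hypothetical snippet declares an AWS S3 bucket (the bucket name and tags are placeholders):

```hcl
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

# Declarative: describe the bucket; Terraform computes the changes needed
resource "aws_s3_bucket" "app_artifacts" {
  bucket = "example-app-artifacts"   # placeholder name
  tags = {
    team = "platform"
  }
}
```

Because this file lives in version control alongside application code, every infrastructure change gets a reviewable, auditable history.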

Chart: five things to love and five things to fear when considering cloud native (image: Foundry)

There’s a lot to love about cloud-native architectures, but there are also several things to be wary of when adopting them.

How the cloud-native stack is expanding

As cloud-native development becomes the norm, the cloud-native ecosystem is expanding; the CNCF maintains a graphical representation of what it calls the cloud native landscape, which hammers home the expansive and bewildering variety of products, services, and open source projects that contribute to (and seek to profit from) cloud-native computing. And there are a number of areas where new and developing tools are complicating the picture sketched out by the pillars we discussed above.

An expanding Kubernetes ecosystem. Kubernetes is complex, and teams now rely on an entire ecosystem of projects to get the most out of it: Helm for packaging, ArgoCD for GitOps-style deployments, and Kustomize for configuration management. And just as Kubernetes augmented Docker for enterprise-scale deployments, Kubernetes itself has been augmented and expanded by service mesh offerings like Istio and Linkerd, which offer fine-grained traffic control and improved security.

Observability needs. The complex and distributed world of cloud-native computing requires in-depth observability to ensure that developers and admins have a handle on what’s happening with their applications. Cloud-native observability uses distributed tracing and aggregated logs to provide deep insight into performance and reliability. Tools like Prometheus, Grafana, Jaeger, and OpenTelemetry support comprehensive, real-time observability across the stack.
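On the metrics side of observability, most cloud-native services expose counters and gauges in the Prometheus text exposition format for scrapers to collect. This stdlib-only sketch (the metric and label names are made up) renders counters in that format:

```python
def render_prometheus_metrics(counters):
    """Render {metric_name: {labels_tuple: value}} in the Prometheus
    text exposition format: a # TYPE line followed by labeled samples."""
    lines = []
    for name, series in counters.items():
        lines.append(f"# TYPE {name} counter")
        for labels, value in series.items():
            label_str = ",".join(f'{k}="{v}"' for k, v in labels)
            lines.append(f"{name}{{{label_str}}} {value}")
    return "\n".join(lines) + "\n"

# Hypothetical request counters for a 'checkout' service
metrics = {
    "http_requests_total": {
        (("service", "checkout"), ("code", "200")): 1027,
        (("service", "checkout"), ("code", "500")): 3,
    }
}
```

A Prometheus server would scrape output like this from each service's `/metrics` endpoint, and tools like Grafana would chart it.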

Serverless computing. Serverless computing, particularly in its function-as-a-service (FaaS) guise, strips compute resources down to the bare minimum, with functions running on service-provider clouds using exactly as much as they need and no more. Because these services can be exposed as endpoints via APIs, they are increasingly integrated into distributed applications, operating side by side with functionality provided by containerized microservices. Watch out, though: the big FaaS providers (Amazon, Microsoft, and Google) would love to lock you into their ecosystems.
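To make the FaaS model concrete: a serverless function is typically just a handler with a provider-defined signature. This sketch follows the AWS Lambda convention for Python handlers (the event shape and response body are hypothetical):

```python
import json

def handler(event, context):
    """Lambda-style entry point: the platform provisions compute only
    while this function runs, then tears it down."""
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Note that the lock-in risk mentioned above starts right here: the handler signature, event format, and response shape are all provider-specific conventions.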

FinOps. Cloud computing was initially billed as a way to cut costs — no need to pay for an in-house data center that you barely use — but in practice it replaces capex with opex, and sometimes you can run up truly shocking cloud service bills if you aren’t careful. Serverless computing is one way to cut down on those costs, but financial operations, or FinOps, is a more systematic discipline that aims to align engineering, finance, and product to optimize cloud spending. FinOps best practices make use of those observability tools to best determine what departments and applications are eating up resources.
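A first step in any FinOps practice is attributing spend to its owners. This stdlib-only sketch (the billing records and tag scheme are hypothetical) aggregates tagged cloud billing line items by team:

```python
from collections import defaultdict

def cost_by_team(records):
    """Sum cloud billing line items by their 'team' tag.
    Untagged spend is grouped separately so it can be chased down."""
    totals = defaultdict(float)
    for rec in records:
        team = rec.get("tags", {}).get("team", "untagged")
        totals[team] += rec["cost_usd"]
    return dict(totals)

# Hypothetical monthly billing export
bills = [
    {"service": "ec2", "cost_usd": 1200.0, "tags": {"team": "search"}},
    {"service": "s3", "cost_usd": 310.5, "tags": {"team": "search"}},
    {"service": "gpu-cluster", "cost_usd": 9800.0, "tags": {"team": "ml"}},
    {"service": "nat-gateway", "cost_usd": 95.0, "tags": {}},
]
```

The "untagged" bucket is the interesting one: consistent resource tagging is usually the first fight a FinOps team has to win before any optimization is possible.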

How cloud-native architecture is adapting to AI workloads

As enterprises deploy larger AI models and rely on more and more real-time inference services, they are putting new demands on cloud-native systems and forcing them to adapt to remain scalable and reliable.

For instance, organizations are re-engineering cloud environments around GPU-accelerated clusters, low-latency networking, and predictable orchestration. These needs align with established cloud-native patterns: containers package AI services consistently, while Kubernetes provides resilient scheduling and horizontal scale for inference workloads that can spike without warning.

Kubernetes itself is changing to better support AI inference, adding hardware-aware scheduling for GPUs, model-specific autoscaling behavior, and deeper observability into inference pipelines. These enhancements make Kubernetes a more natural platform for serving generative AI workloads.
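Hardware-aware scheduling shows up directly in pod specs. The sketch below requests a GPU using the extended resource name exposed by the NVIDIA device plugin (the pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: inference-worker
spec:
  containers:
    - name: model-server
      image: example.com/model-server:1.0   # placeholder image
      resources:
        limits:
          nvidia.com/gpu: 1   # schedule only onto nodes exposing a GPU
```

The scheduler will place this pod only on a node that advertises an available GPU, which is the foundation the newer model-aware autoscaling behaviors build on.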

AI’s resource demands are amplifying traditional cloud-native challenges. Observability becomes more complex as inference paths span GPUs, CPUs, vector databases, and distributed storage. FinOps teams contend with cost volatility from training and inference bursts. And security teams must track new risks around model provenance, data access, and supply-chain integrity.

Application frameworks for building distributed cloud-native apps

Microsoft’s Aspire is one of the most visible examples of a shift toward application frameworks that simplify how teams build distributed systems. Opinionated frameworks like Aspire provide structure, observability, and integration out of the box, so developers don’t need to stitch together containers, microservices, and orchestration tooling by hand.

Aspire in particular is a prescriptive framework for cloud-native applications, bundling containerized services, environment configuration, health checks, and observability into a unified development model. Aspire provides defaults for service-to-service communication, configuration, and deployment, along with a built-in dashboard for visibility across distributed components.

While Aspire was originally aligned with Microsoft’s .NET platform, Redmond now sees it as having a polyglot future. This positions Aspire as part of a broader trend: frameworks that help teams build cloud-native, service-oriented systems without being locked into a single language ecosystem. Several other frameworks are gaining traction: Dapr provides a portable runtime that abstracts many of the plumbing tasks in cloud-native distributed applications, Orleans offers an actor-model-based framework for large-scale systems in the .NET world, and Akka gives JVM teams a mature, reactive toolkit for elastic, resilient services.

Frameworks and tools in the expanding cloud-native ecosystem

While frameworks like Aspire simplify how developers compose and structure distributed applications, most cloud-native systems still depend on a broader ecosystem of platforms and operational tooling. This deeper layer is where much of the complexity—and innovation—of cloud-native computing lives, particularly as Kubernetes continues to serve as the industry’s control plane for modern infrastructure.

Kubernetes provides the core abstractions for deploying and orchestrating containerized workloads at scale. Managed distributions such as Google Kubernetes Engine (GKE), Amazon EKS, Azure AKS, and Red Hat OpenShift build on these primitives with security, lifecycle automation, and enterprise support. Platform vendors are increasingly automating cluster operations—upgrades, scaling, remediation—to reduce the operational burden on engineering teams.

Surrounding Kubernetes is a rapidly expanding ecosystem of complementary frameworks and tools. Service meshes like Istio and Linkerd provide fine-grained traffic management, policy enforcement, and mTLS-based security across microservices. GitOps platforms such as Argo CD and Flux bring declarative, version-controlled deployments to cloud-native environments. Meanwhile, projects like Crossplane turn Kubernetes into a universal control plane for cloud infrastructure, letting teams provision databases, queues, and storage through familiar Kubernetes APIs. These tools illustrate how cloud-native development now spans multiple layers: developer-focused application frameworks like Aspire at the top, and a powerful, evolving Kubernetes ecosystem underneath that keeps modern distributed applications running.

Advantages and challenges for cloud-native development

Cloud native has become so ubiquitous that its advantages are almost taken for granted at this point, but it’s worth reflecting on the beneficial shift the cloud native paradigm represents. Huge, monolithic codebases that saw updates rolled out once every couple of years have been replaced by microservice-based applications that can be improved continuously. Cloud-based deployments, when managed correctly, make better use of compute resources and allow companies to offer their products as SaaS or PaaS services. 

But cloud-native deployments come with a number of challenges, too:

  • Complexity and operational overhead: You’ll have noticed by now that many of the cloud-native tools we’ve discussed, like service meshes and observability tools, are needed to deal with the complexity of cloud-native applications and environments. Individual microservices are deceptively simple, but coordinating them all in a distributed environment is a big lift.
  • Security: More services executing on more machines, communicating by open APIs, all adds up to a bigger attack surface for hackers. Containers and APIs each have their own special security needs, and a policy engine can be an important tool for imposing a security baseline on a sprawling cloud-native app. DevSecOps, which adds security to DevOps, has become an important cloud-native development practice to try to close these gaps.
  • Vendor lock-in: This may come as a surprise, since cloud-native is based on open standards and open source. But there are differences in how the big cloud and serverless providers work, and once you’ve written code with one provider in mind, it can be hard to migrate elsewhere.
  • A persistent skills gap: Cloud-native computing and development may have years under its belt at this point, but the number of developers who are truly skilled in this arena is a smaller portion of the workforce than you’d think. Companies face difficult choices in bridging this skills gap, whether that’s bidding up salaries, working to upskill current workers, or allowing remote work so they can cast a wide net. 

Cloud native in the real world

Cloud native computing is often associated with giants like Netflix, Spotify, Uber, and Airbnb, where many of its technologies were pioneered in the early 2010s. But the CNCF’s Case Studies page provides an in-depth look at how cloud native technologies are helping companies of all kinds.

Cloud-native infrastructure’s ability to scale quickly to large workloads also makes it an attractive platform for developing AI/ML applications: one of those CNCF case studies looks at how IBM uses Kubernetes to train its Watsonx assistant. The big three providers are putting a lot of effort into pitching their platforms as the place for you to develop your own generative AI tools, with offerings like Azure AI Foundry, Google Firebase Studio, and Amazon Bedrock. It seems clear that cloud native technology is ready for what comes next.

Original Link:https://www.infoworld.com/article/2255318/what-is-cloud-native-the-modern-way-to-develop-software.html
Originally Posted: Fri, 19 Dec 2025 09:00:00 +0000


Artifice Prime

Artifice Prime is an AI enthusiast with over 25 years of experience as a Linux sys admin. They have an interest in artificial intelligence, its use as a tool to further humankind, as well as its impact on society.
