Cloud-native, or containerised, applications are now mainstream. As many as 82% of enterprises now have Kubernetes in production, according to the Cloud Native Computing Foundation (CNCF), up from 66% in 2023. And a full 98% of organisations have at least some cloud-native applications, the industry body says.
But moving applications to cloud-native environments does not just mean creating new code. It also means adapting infrastructure: compute, networking and data storage all need to work with container environments. Not all systems can do this out of the box, especially on-premise hardware.
At the same time, enterprise IT architects need to consider the requirements of legacy applications and virtual machines (VMs) that are not being updated. And enterprises will want to make the most efficient use of their storage hardware, regardless of their application environments.
Moving to containers means adapting a technology that was not designed for persistent storage to handle business-critical data.
Stateless states
Containerised applications started out as stateless, or ephemeral. The designers never intended containers to hold persistent data. They expected that microservices or containerised applications would use no non-volatile storage and discard the contents of memory, and even their settings, once they had completed their tasks.
Instead, containerised applications rely on an external data store, usually a database or cache.
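The stateless pattern can be sketched as a Kubernetes Deployment that keeps no data of its own and reaches its database purely through configuration. This is an illustrative sketch; the image name and connection string are invented for the example.

```yaml
# Sketch of a stateless workload: any replica can be killed and
# replaced, because the data lives in an external database.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend
spec:
  replicas: 3                 # replicas are interchangeable
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
        - name: app
          image: example/web-frontend:1.0    # hypothetical image
          env:
            - name: DATABASE_URL             # state lives elsewhere
              value: "postgres://orders-db.internal:5432/orders"
```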
There are advantages to this approach, including simpler deployment, easier scaling, fault tolerance and recovery, and application portability. But many business applications, if not most, need persistent data.
“Most business applications require storage. In reality, unless you’re converting Fahrenheit to Celsius and back, you’re storing something somewhere,” says Dan Ciruli, vice-president and general manager for cloud native at Nutanix.
And the need to work with persistent data is all the more important, as enterprises look to containers as an alternative to conventional virtual machines.
But this means rethinking the way applications work. And it requires IT architects to update their storage systems to support modernised, cloud-native applications. This can be done directly, where array manufacturers support containers, or through a control plane such as Nutanix or Pure Storage’s Portworx.
Almost inevitably, changes are being driven by AI, as enterprises look to support its data-heavy workloads in modern, cloud-native environments. But there are other drivers, too, including a trend to move virtualised applications to containers and the need for cost controls.
“Kubernetes might be over a decade old, but it’s continuing to evolve as AI transforms the way we handle data. Already, Kubernetes has moved beyond the days when it was built only for ephemeral, stateless applications,” says Michael Cade, global field chief technology officer at Veeam Software.
“Today, stateful applications such as databases, machine learning pipelines and streaming systems are now being treated as first-class citizens [in containerised environments] and have been given the specialised tools they need to thrive.”
Storage connections
Connecting storage to Kubernetes, though, relies on support from both application developers and hardware suppliers.
The main way to connect storage to container environments is through the container storage interface (CSI). CSI needs to be supported directly by the storage provider, be that the hardware manufacturer, a cloud service, or a software-defined storage (SDS) supplier.
As the CNCF’s Kubernetes page notes: “CSI was developed as a standard for exposing arbitrary block and file storage systems to containerised workloads on container orchestration systems like Kubernetes.” CSI allows third-party storage providers to write, and deploy, plug-ins for storage without changing the core Kubernetes code.
SDS technologies, for their part, also use CSI drivers, but run on commodity hardware or hyper-converged infrastructure rather than dedicated storage arrays. The category also includes open source options, such as OpenEBS, Longhorn and Ceph.
“Every environment needs a storage back end, with a CSI driver that connects it to Kubernetes. It’s up to the storage provider to provide the CSI driver,” says Nigel Poulton, an author and independent expert in Kubernetes and containers.
“Most CSI drivers create at least one StorageClass that maps to a tier of storage and its capabilities. For example, a CSI driver might create a StorageClass called ‘fast-replicated’ that maps to high-speed flash storage with automatic replication to a remote location. Any application using this class automatically gets that tier and set of capabilities,” he adds.
This level of abstraction is highly useful for application developers, as they no longer have to worry about the physical capabilities of the storage system. That is handled by the CSI drivers.
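Poulton’s “fast-replicated” example can be sketched as a StorageClass plus a claim against it. The provisioner name and parameters here are hypothetical, since both are defined by whichever CSI driver the storage supplier ships.

```yaml
# Illustrative StorageClass: the provisioner and its parameters
# depend entirely on the vendor's CSI driver.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-replicated
provisioner: csi.example-vendor.com   # hypothetical CSI driver
parameters:
  tier: flash
  replication: "remote"
reclaimPolicy: Delete
allowVolumeExpansion: true
---
# An application claims storage from that class; the driver
# provisions a matching volume automatically.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: orders-db-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: fast-replicated
  resources:
    requests:
      storage: 100Gi
```

The application references only the claim, never the array or cloud service behind it, which is what gives developers the abstraction described above.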
“The CSI drivers enable us to give access to storage from the containerised application, but [for firms to] still administer the storage the way they do the storage that’s running under their VMs,” says Nutanix’s Ciruli. “And that’s a big advantage.” He also sees customers installing Kubernetes on bare metal clusters.
This also maintains separation between the Kubernetes workloads and the underlying storage hardware. On paper at least, enterprises can move their containerised applications to a different platform or supplier, or new storage hardware, without rewriting code and with minimal disruption.
In practice, large-scale moves of Kubernetes applications between platforms are still relatively rare. Enterprises tend to develop applications to run on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, or local hardware, depending on their business requirements.
Application portability, supported by CSI, is useful insurance, even if there are enough differences between platforms to suggest caution.
“We really don’t need to become an expert in how EBS [Elastic Block Store] works versus Azure disk, or local SSD [solid-state drives] and how that works,” says Greg Muscarella, general manager for Portworx at Pure Storage. “If you have to manage those things, it becomes somewhat complex. Companies tend to focus on a single cloud environment.”
Few organisations, he suggests, have code where they could “push a button and move it to a different cloud”, not least because of differences between storage architectures from both hardware suppliers and cloud providers. However, enterprises are moving more applications to cloud-native environments. And this increasingly includes databases and applications that previously ran in conventional virtual machines.
New platforms
One of the most significant trends in application modernisation is to move both virtual machines and database-driven applications to containers. Cost, avoiding supplier lock-in and the need to consolidate on fewer platforms are all drivers.
“The line between ‘containerised’ and ‘virtualised’ is blurring,” suggests Veeam’s Cade. “For a long time, containers and VMs were seen as two separate siloes. But as stateful applications have developed, and since VMs are essentially a typical stateful workload, we’re seeing a significant rise in businesses running them directly within Kubernetes using platforms such as Red Hat OpenShift Virtualization.”
Poulton agrees. He sees more organisations moving virtualised workloads to containers, via tools such as KubeVirt. But, although organisations are porting over virtualised applications, and databases, IT architects need to be sure that all the application’s requirements are met by the storage layer.
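With KubeVirt, a ported virtual machine ends up declared as a Kubernetes resource of its own, drawing its disk from the same CSI-provisioned storage as any container. This is a minimal sketch, with the names and sizes invented for illustration.

```yaml
# A migrated VM as a Kubernetes resource (KubeVirt). Its disk is an
# ordinary PersistentVolumeClaim, provisioned via a CSI driver.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: legacy-app-vm
spec:
  runStrategy: Always            # keep the VM running
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 4Gi
      volumes:
        - name: rootdisk
          persistentVolumeClaim:
            claimName: legacy-app-disk   # hypothetical PVC name
```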
“Databases have much more demanding requirements, including ordered startup, replication, automated failover and backup,” he cautions. “The two biggest changes are ensuring a CSI driver exists for the storage system and potentially deploying an operator.”
A Kubernetes operator encodes a database’s specific operational requirements, and sometimes its storage management, in software. Operator support is essential if databases are to deliver enterprise workloads on Kubernetes. Again, the operator supports the modern application goal of separating the code from the storage array or cloud storage service.
Percona, for example, provides operators for MySQL, PostgreSQL and MongoDB, as well as Everest. “The operators are basically the game changers,” says Kate Obiidykhata, the company’s general manager for cloud native. “They encode the human DBA knowledge into the software, and you have all those most important resilience components, backup, failover, replication and upgrades automated.”
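The operator pattern Obiidykhata describes boils down to a declarative custom resource: the administrator states the desired cluster, and the operator handles replication, failover and backup. The resource below is a hypothetical illustration of the shape, not any specific operator’s schema.

```yaml
# Hypothetical custom resource of the kind a database operator
# defines; real operators each publish their own schema, but the
# declarative shape is broadly similar.
apiVersion: example.io/v1
kind: PostgresCluster
metadata:
  name: orders-db
spec:
  instances: 3                  # operator wires up replication and failover
  storage:
    storageClassName: fast-ssd  # served by the cluster's CSI driver
    size: 200Gi
  backup:
    schedule: "0 2 * * *"       # nightly backups, automated by the operator
```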
Operators, she adds, help enterprises to adopt hybrid architectures or multicloud strategies, allowing data portability without the need to rewrite applications. But workloads that operate on VMs will not automatically run on containers, she says. Firms will need to plan, and test, their deployments with care.
“There are specific playbooks that you should apply and methodologies that are obviously different from the classic database setup on VMs,” says Obiidykhata. “But it’s all doable, and many companies are now running those databases on Kubernetes. They just have a different playbook to mitigate those issues.”
Firms also need to factor in how they run their ported applications in production. Development, understandably, attracts much of the attention. But how systems run from “day two” onwards is critical. This includes storage provisioning and tiering, as well as backup, recovery and security.
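Day-two storage operations also run through the CSI layer. Where a driver supports snapshots, a backup can be requested declaratively through the standard snapshot API; the class and claim names below are illustrative.

```yaml
# A point-in-time snapshot of a database volume, taken through the
# CSI driver's snapshot support (the snapshot.storage.k8s.io API).
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: orders-db-nightly
spec:
  volumeSnapshotClassName: csi-snapclass        # supplied by the CSI driver
  source:
    persistentVolumeClaimName: orders-db-data   # illustrative PVC
```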
The CSI drivers take care of much of the hard work, but enterprises are likely to invest in new hardware, or even storage from suppliers focused on cloud-native environments, to ease the migration to containers.
“This is usually by deploying new storage architectures, either via new storage products from existing vendors, but increasingly by engaging with new vendors,” says Poulton. Enterprises, he adds, might still be running older hardware systems, but they are unlikely to use them for Kubernetes.