Kubernetes changed how software is built and operated. It gave us a programmable, consistent way to run containers and infrastructure.
But most of that progress happened in the cloud.
On-prem, many teams are still stuck with fragmented tooling, legacy platforms, and manual processes. These systems don’t fit modern workflows, and they slow everyone down.
Some teams stay on-prem for data sovereignty or regulatory reasons. Others are moving workloads back on-prem to regain control over cost, complexity, or operational risk.
Hybrid Environments Are Fragmented
Most teams now operate in hybrid environments: some workloads in the cloud, some on-prem. Parts of the stack are modern and automated. Others still depend on legacy systems and manual effort.
This mix makes daily operations harder:
- Too much overhead from running different systems side by side
- Tools that break down when used across cloud and on-prem
- Poor visibility and control over the parts teams still manage themselves
- Rising costs with no clear way to modernize
 
We think Kubernetes can solve a lot of this.
It gives teams a single, declarative API to define workloads, network policies, storage, and security consistently: one way to run workloads across both cloud and on-prem.
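To make this concrete, here is a minimal sketch of that idea. The manifests below (the workload name and image are hypothetical) describe a workload and its network policy using only portable Kubernetes APIs, with nothing cloud-specific, so the same files apply unchanged to a managed cloud cluster or an on-prem one:

```yaml
# A workload definition that is identical on GKE, EKS, AKS, or bare metal:
# nothing here references a cloud provider.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: billing-api            # hypothetical example workload
spec:
  replicas: 3
  selector:
    matchLabels:
      app: billing-api
  template:
    metadata:
      labels:
        app: billing-api
    spec:
      containers:
        - name: billing-api
          image: registry.example.com/billing-api:1.4.2
          ports:
            - containerPort: 8080
---
# The same API expresses network intent declaratively:
# only labeled frontend pods may reach the workload on port 8080.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: billing-api-ingress
spec:
  podSelector:
    matchLabels:
      app: billing-api
  policyTypes: [Ingress]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend
      ports:
        - protocol: TCP
          port: 8080
```

Whether the enforcement below is a cloud load balancer or an on-prem CNI plugin, the declared intent stays the same.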
But too often, on-prem Kubernetes is deployed on top of legacy virtualization stacks and treated like an application rather than the platform. That keeps the complexity in place:
- Two control planes to operate and patch
- Extra layers between workloads and hardware
- Conflicting lifecycle models between the hypervisor and Kubernetes
 
The result: Kubernetes ends up buried under old layers, and the hybrid mess stays.
Kubernetes As the Infrastructure Platform
We believe Kubernetes shouldn’t run on infrastructure: it should be the infrastructure.
It provides a vendor-neutral, programmable interface that can span environments and automate everything below it: networking, security policies, storage, and even VMs, databases, or vendor software as workloads.
But to get the full value from Kubernetes, it has to become the foundation, not just sit on top of legacy stacks.
To achieve this foundation, we need a modern way to run Kubernetes natively on bare metal.
Why We’re Building meltcloud
We were surprised to find that even in 2025, there’s still no way to run Kubernetes on your own hardware as easily as you can in the cloud: no GKE, AKS, or EKS for your data center. On-prem Kubernetes is still a DIY project, stitched together from dozens of tools and layers.
As engineers, we’ve spent the past decade helping teams move to the cloud and build on Kubernetes. Now we want to bring that same speed and simplicity back on-prem.
With meltcloud, we’re building a platform that makes Kubernetes work like it does in the cloud, but on your hardware:
- Cloud experience, on-prem: Kubernetes that works like GKE, AKS, or EKS, but on your own hardware. From bootstrap to day-2, it behaves like the cloud.
- Bare-metal native: No hypervisor stack underneath. Run Kubernetes directly on the metal to cut complexity, cost, and failure surface.
- Containers and VMs together: Run both pods and virtual machines on one platform with KubeVirt, using the same Kubernetes APIs and tooling for everything.
 
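As a sketch of what that looks like in practice (the VM name and disk image are illustrative), a KubeVirt VirtualMachine is declared as an ordinary Kubernetes object, managed with the same kubectl and GitOps workflows as a pod:

```yaml
# A KubeVirt VirtualMachine: handled by the same API server and tooling
# as any pod. Name and image are illustrative.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: legacy-erp             # hypothetical VM workload
spec:
  runStrategy: Always          # keep the VM running, like a Deployment keeps pods
  template:
    metadata:
      labels:
        kubevirt.io/vm: legacy-erp
    spec:
      domain:
        resources:
          requests:
            memory: 4Gi
            cpu: "2"
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          containerDisk:       # boot disk shipped as a container image
            image: quay.io/containerdisks/fedora:41
```

After applying the manifest, `kubectl get vms` lists the VM alongside the cluster’s other workloads in the same namespace.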
Our goal is simple: make on-prem Kubernetes a commodity, something that works out of the box, not something that needs a team of consultants to implement and operate.
The Cloud-Native Data Center
We call this idea the Cloud-Native Data Center: an environment where Kubernetes isn’t just installed, but runs as the base layer everything else builds on.
If you’re curious what this could look like, we’re writing more about it here: Cloud Native Data Center: Has the Cloud Delivered on Its Promise? (Part 1)