Welcome to the inaugural issue of The Control Plane. Once a month, straight to your inbox: well-researched topics from across the cloud native industry.
Sovereignty is an architecture, not a contract
Conversations that used to start with "which cloud provider?" now start with "which jurisdiction?" The EU Data Act took effect in September. DORA is being enforced. The sovereign cloud market just crossed $80 billion.
The uncomfortable part: most "sovereign cloud" offerings aren't sovereign in any meaningful technical sense. Your data sits in Frankfurt. But the control plane, IAM, DNS? Still Virginia. When AWS US-East-1 went down last October, "sovereign" European services went dark because their identity providers sat 6,000 km away in a different legal jurisdiction.
Sovereignty is an architecture decision, not a contract addendum. In this issue: what a sovereign Kubernetes architecture actually looks like, a Kyverno policy that geofences workloads, and a war story about hidden dependencies on foreign regions.
Let's get to it.
🛠️ The Deep Dive
Your Sovereign Cloud Has a Virginia Problem
The sovereign cloud market is projected to hit $80.4 billion in 2026, a 35% jump from last year. Organizations are spending aggressively to comply with the EU Data Act, DORA, and a growing list of data residency laws from India to Saudi Arabia.
But spending isn't the same as compliance. The most common failure mode we see: data sits in the correct jurisdiction while the control plane (the API server, identity provider, and key management) lives somewhere else entirely. When AWS US-East-1 failed in October 2025, European services with data in Frankfurt went offline because their IAM depended on a region in Virginia.
If a foreign government can compel your cloud provider to hand over your encryption keys, your data residency is theater.
Decoupled Control Planes
The architectural pattern gaining the most traction in 2026 is control plane decoupling: separating where your cluster is *managed* from where your workloads *run*.
The pattern works like this:
The Seed Cluster runs in your trusted zone (your HQ datacenter, a certified sovereign cloud region). It hosts the control plane components: API server, controller manager, scheduler, etcd. These run as pods inside the Seed.

The User Clusters run in the target zones (AWS Frankfurt, a vSphere cluster in Riyadh, an edge node in a factory). They contain *only* worker nodes and kubelets. No etcd. No API server. Sensitive workload data stays here and never crosses the boundary.

The Connection: worker nodes connect outbound to the Seed's API server through a secure tunnel. The Seed holds configuration. The User Cluster holds data. They never mix.
This gives you centralized governance with localized execution. One platform team manages the fleet from the Seed. Workload data never leaves the sovereign boundary. And if a jurisdiction becomes compromised, you sever the tunnel: the Seed revokes credentials and the User Cluster is isolated instantly.
KKP pioneered this pattern with its Seed/User cluster architecture, and the approach has since been adopted across the ecosystem, with SAP's Gardener (seed/shoot) and Red Hat's HyperShift (hosted control planes) implementing similar models.
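To make the pattern concrete, here is a minimal, hypothetical Gardener Shoot manifest (all names, the cloud profile, and the credentials binding are placeholders). Notice what the manifest describes: only the workers and their target region. The control plane it implies is created as pods inside a seed cluster that never appears in this file.

```yaml
# Hypothetical Shoot (user cluster) definition. The control plane for this
# cluster runs as pods in a seed cluster in your trusted zone; only the
# worker pool below lands in the target region.
apiVersion: core.gardener.cloud/v1beta1
kind: Shoot
metadata:
  name: frankfurt-prod            # placeholder cluster name
  namespace: garden-sovereign     # placeholder project namespace
spec:
  cloudProfileName: aws
  region: eu-central-1            # where the workers (and the data) live
  secretBindingName: aws-sovereign-credentials   # placeholder credentials ref
  kubernetes:
    version: "1.31.0"
  provider:
    type: aws
    workers:
      - name: worker-pool-1
        machine:
          type: m5.xlarge
        minimum: 3
        maximum: 6
```

KKP's Cluster objects and HyperShift's HostedCluster resources express the same split in their own APIs: management metadata in the trusted zone, worker pools in the target zone.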
Enforcing Residency with Policy
Architecture alone isn't enough. You need policy enforcement at the scheduler level to prevent workloads from landing on the wrong nodes. Kyverno makes this straightforward. This policy blocks any pod from scheduling outside EU regions:
```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: enforce-eu-residency
spec:
  validationFailureAction: Enforce
  rules:
    - name: require-eu-node-affinity
      match:
        any:
          - resources:
              kinds:
                - Pod
              namespaces:
                - "sovereign-*"
      validate:
        message: >-
          Pods in sovereign namespaces must include a nodeAffinity that
          restricts scheduling to EU zones (eu-central-1, eu-west-1, eu-west-3).
        pattern:
          spec:
            affinity:
              nodeAffinity:
                requiredDuringSchedulingIgnoredDuringExecution:
                  nodeSelectorTerms:
                    - matchExpressions:
                        - key: topology.kubernetes.io/region
                          operator: In
                          values:
                            - eu-central-1
                            - eu-west-1
                            - eu-west-3
```
Any pod in a `sovereign-*` namespace that either omits the required node affinity or targets non-EU regions gets rejected by the admission controller. Residency enforced by the platform, not by convention.
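For reference, a compliant pod would carry an affinity block like the following (the pod name, namespace, and image are illustrative; `topology.kubernetes.io/region` is the standard well-known region label):

```yaml
# Illustrative pod that the admission controller would accept:
# scheduling is pinned to the allowed EU regions.
apiVersion: v1
kind: Pod
metadata:
  name: payments-api
  namespace: sovereign-payments
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: topology.kubernetes.io/region
                operator: In
                values:
                  - eu-central-1
                  - eu-west-1
                  - eu-west-3
  containers:
    - name: app
      image: registry.example.com/payments-api:1.0.0
```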
Don't Forget Your Secrets
The other gap: encryption key management. The Bring Your Own Key (BYOK) model has evolved into Hold Your Own Key (HYOK). The cloud provider never touches your key encryption keys (KEKs), not even in memory.
Use the KMS v2 provider API (stable since Kubernetes 1.29) to encrypt etcd data with keys stored in an HSM physically located inside your sovereign boundary. Pair it with External Secrets Operator to inject secrets from a locally-hosted Vault into pods at runtime. Secrets go straight to memory and are never persisted to the provider's storage layer.
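The API server side of that setup can be sketched as an EncryptionConfiguration, assuming a KMS plugin listening on a local Unix socket (the plugin name and socket path below are placeholders):

```yaml
# Hypothetical EncryptionConfiguration: etcd secrets are encrypted with a
# data encryption key, and that key is wrapped by a KEK that never leaves
# your HSM-backed KMS plugin inside the sovereign boundary.
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - kms:
          apiVersion: v2
          name: sovereign-hsm                      # placeholder plugin name
          endpoint: unix:///var/run/kms/sovereign.sock
          timeout: 3s
      - identity: {}    # plaintext fallback for reads during migration only
```

The `identity` provider should be removed once all secrets have been rewritten through the KMS provider; leaving it last in the chain only serves the migration window.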
The Takeaway
Sovereignty means three things: own your control plane, own your identity, own your keys. If any of those three lives outside your jurisdiction, you have a residency policy, not sovereignty.
Audit your stack today. Where does your API server run? Where are your KEKs stored? Where does your IdP authenticate? If the answer to any of those is "a different country than my data," you have work to do.
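As a starting point for that audit, a small helper like this can flag which resolved IPs fall outside your jurisdiction. The CIDR ranges below are samples for illustration only; substitute your provider's published IP range feed before relying on the output.

```python
import ipaddress

# Sample ranges for illustration only. Pull the real, current lists from
# your cloud provider's published IP range feed.
REGION_RANGES = {
    "eu-central-1": ["3.120.0.0/14"],
    "us-east-1": ["3.208.0.0/12"],
}

def region_of(ip: str) -> str:
    """Map an IP address to a region using the table above, or 'unknown'."""
    addr = ipaddress.ip_address(ip)
    for region, cidrs in REGION_RANGES.items():
        if any(addr in ipaddress.ip_network(cidr) for cidr in cidrs):
            return region
    return "unknown"

# Resolve your IdP, KMS, and API server hostnames, then check each IP:
print(region_of("3.121.45.10"))  # an address inside the eu-central-1 sample range
```

Feed it the addresses behind your IdP login URL and your KMS endpoint; any hit outside your jurisdiction is exactly the hidden dependency this issue is about.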
📡 The Edge Radar
Curated signals from the distributed frontier.
Must Read
EU Data Act: What Businesses Need to Know - Latham & Watkins The 30-day switching mandate and abolished egress fees are changing how enterprises architect multi-cloud. If you're locked into a single provider, this is your wake-up call.
The High Cost of Sovereignty in the Age of AI - IDC IDC predicts 60% of multinationals will split AI stacks across sovereign zones by 2028, tripling integration costs. The "sovereignty premium" is real; plan your FinOps accordingly.
Sovereignty & Multi-Cloud
Global Sovereign Cloud Market Hits $80B - ITdaily Europe is on track for second place behind China ($47.4B) in sovereign cloud spending. The growth is regulation-driven, not hype-driven, which means it's not slowing down.
Three Predictions for Sovereign Cloud in 2026 - Broadcom Broadcom (yes, that Broadcom) argues the "Sovereignty Washing" critique is valid β many offerings are data residency without operational sovereignty. Worth reading for the competitive context.
Security & Compliance
DORA Is Live: What Your Platform Team Needs to Know - Sysdig DORA's "Concentration Risk" requirement effectively mandates multi-cloud for financial services. If you serve banks or insurers, your architecture needs a documented exit strategy: tested, not theoretical.
🚀 Kubermatic Releases
What shipped this month from the Kubermatic ecosystem.
KubeLB v1.3 - The load balancer orchestration layer for Kubernetes got a major upgrade. v1.3 introduces Web Application Firewall (WAF) support with ModSecurity integration, a migration path from ingress-nginx to Gateway API, and full supply chain security: all artifacts are now signed with Sigstore Cosign and ship with SBOMs.
KubeLB Security Patches (v1.3.1 / v1.2.2) - Critical updates addressing ingress-nginx configuration injection vulnerabilities, envoy-gateway code execution via Lua scripts, and cert-manager runtime vulnerabilities. If you run KubeLB, upgrade now.
KKP v2.29.4 - Kubernetes support extended to v1.34.4, v1.33.8, and v1.32.12. Also ships Gateway API as an alternative to NGINX Ingress for external traffic routing and upgrades NGINX Ingress controller to v1.14.3.
📅 Community & Events
Where we've been. Where we'll be. What to submit to.
Upcoming Events
KubeCon + CloudNativeCon Europe 2026 - March 23-26, Amsterdam The flagship cloud-native event. Kubermatic will be on the ground; stop by the booth (#820) for live demos.
Cloud Native Rejekts EU 2026 - March 22, Amsterdam The unconference that runs the day before KubeCon. Great talks that didn't make the main CFP.
September 2-4 in Hamburg. The CFP wants talks on containers, Kubernetes, cloud-native operations, and platform engineering. Crazy Bird tickets are still on sale, but not for much longer.
🚨 The Panic Room
A war story from the trenches. Learn from failure.
The Day "Sovereign" Meant "Offline"
During the AWS US-East-1 outage (October 20, 2025), a European financial services company lost access to "sovereign" Kubernetes workloads in Frankfurt. Data and compute were in Germany, but IAM and service discovery had hard-coded dependencies on us-east-1. When Virginia went dark, Frankfurt couldn't authenticate sessions or scale pods.
The Fix: A local Keycloak instance inside the sovereign boundary, federated with their corporate IdP but keeping service account records local. Plus split-horizon DNS with local resolvers as primary.
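The split-horizon piece can be sketched as a CoreDNS Corefile (zone names and resolver addresses below are placeholders): internal zones resolve against the resolver inside the sovereign boundary first, and only everything else forwards upstream.

```
corp.example.internal:53 {
    # Answer internal names from the resolver inside the sovereign boundary,
    # so service discovery survives an upstream outage.
    forward . 10.10.0.53
    cache 30
}
.:53 {
    # Everything else goes upstream as usual.
    forward . /etc/resolv.conf
    cache 300
}
```

With this in place, an outage in a foreign region can still break external lookups, but it no longer takes internal authentication and service discovery down with it.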
The Lesson: Sovereignty has three layers: data, compute, and identity. Most teams nail the first two and forget the third. If your authentication path crosses a border, your sovereignty does too.