Changelog for Kubernetes 1.34
Versions
The deployed Kubernetes patch version varies based on when your cluster is deployed or upgraded. We strive to use the latest versions available.
The current release is based on Kubernetes 1.34. See the official release blog post and the corresponding official changelog for upstream details.
Optional addons
Major changes
- The 4k, 8k, 16k, and v1-dynamic-40 storage classes are removed in this version. Existing volumes are not affected, but the ability to create new volumes with these legacy storage classes is removed. Please migrate manifests that reference these storage classes to the storage classes prefixed with `v2-`, which have been available since Kubernetes 1.26 and have been the default since 2024-06-28, as stated in the announcement.
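As a sketch of the manifest migration, only the `storageClassName` field needs to change; the exact `v2-` class name used below (`v2-4k`) is an assumption and depends on the classes available in your cluster:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc            # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  # Before: storageClassName: 4k   (legacy class, removed in this version)
  storageClassName: v2-4k   # assumption: pick the matching v2- class in your cluster
```

Note that `storageClassName` is immutable on an existing PersistentVolumeClaim; the migration applies to manifests that create new volumes, while existing volumes remain unaffected.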
Noteworthy changes in upcoming versions
The following changes are scheduled for upcoming releases:
- We’ll remove the legacy `nodelocaldns` where it is still deployed. This is relevant only if the cluster was created before v1.26.
- The ingress-nginx controller will be fully deprecated from our management, following the upstream news.
- We will not handle migration of ingresses, but we aim to provide an API Gateway controller as an addon.
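As an illustration of what such a migration could eventually look like, the Gateway API expresses routing that an Ingress resource handles today. This is only a sketch: the gateway name and backend below are hypothetical and depend on the controller that ends up being provided.

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: web-route                  # hypothetical route name
spec:
  parentRefs:
    - name: example-gateway        # hypothetical Gateway provided by the future addon
  hostnames:
    - example.com
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: web-service        # hypothetical backend Service
          port: 80
```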
Is downtime expected?
The cluster is expected to remain up and running during the upgrade; however, pods will restart when they are migrated to a new node.
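To limit how many replicas of a workload restart at the same time while nodes are drained, a PodDisruptionBudget can be used. A minimal sketch, assuming a Deployment labeled `app: web` (a placeholder for your own workload labels):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb            # hypothetical name
spec:
  minAvailable: 1          # keep at least one replica running during node drains
  selector:
    matchLabels:
      app: web             # placeholder: match your workload's labels
```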
Known issues
Custom node taints and labels lost during upgrade.
Custom taints and labels on worker and control-plane nodes may be lost during the upgrade. We recommend auditing and reapplying any critical custom taints/labels via automation (e.g., cluster bootstrap, configuration management, or a post-upgrade job).
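One way to reapply them is a patch kept in version control and applied after the upgrade, e.g. with `kubectl patch node <node-name> --patch-file retaint.yaml`. The label and taint below are hypothetical examples; substitute your own:

```yaml
# retaint.yaml -- strategic-merge patch for a Node object,
# re-applying a custom label and taint lost during the upgrade.
metadata:
  labels:
    team: payments          # hypothetical custom label
spec:
  taints:
    - key: dedicated        # hypothetical custom taint
      value: payments
      effect: NoSchedule
```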
There is a label that persists across upgrades and can be used to direct workloads to particular node groups. Example of how to use it:
```yaml
spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: nodegroup.node.cluster.x-k8s.io
                    operator: In
                    values:
                      - worker1
```
Snapshots are not working.
There is currently a limitation in the snapshot controller: it is not topology-aware. As a result, snapshot behavior may be unreliable for topology-sensitive volumes. Avoid depending on snapshots for cross-zone/region recovery until a topology-aware snapshot controller is available or confirm your storage driver’s snapshot semantics.
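For reference, this is the kind of VolumeSnapshot object affected by the limitation; the snapshot class name and PVC name are assumptions that depend on your CSI driver and workload. Until the controller is topology-aware, treat restores from such snapshots as reliable only within the original volume's zone.

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: data-snap                          # hypothetical snapshot name
spec:
  volumeSnapshotClassName: csi-snapclass   # assumption: depends on your CSI driver
  source:
    persistentVolumeClaimName: data-pvc    # hypothetical PVC being snapshotted
```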