Istio 1.24 Upgrade Notes
Important changes to consider when upgrading to Istio 1.24.0.
When upgrading from Istio 1.23.x to Istio 1.24.x, please consider the changes on this page. These notes detail the changes which purposefully break backwards compatibility with Istio 1.23.x. The notes also mention changes which preserve backwards compatibility while introducing new behavior. Changes are only included if the new behavior would be unexpected to a user of Istio 1.23.x.
Updated compatibility profiles
To support compatibility with older versions, Istio 1.24 introduces a new 1.23 compatibility profile and updates its other profiles to account for changes in Istio 1.24.
This profile sets the following values:

```yaml
ENABLE_INBOUND_RETRY_POLICY: "false"
EXCLUDE_UNSAFE_503_FROM_DEFAULT_RETRY: "false"
PREFER_DESTINATIONRULE_TLS_FOR_EXTERNAL_SERVICES: "false"
ENABLE_ENHANCED_DESTINATIONRULE_MERGE: "false"
PILOT_UNIFIED_SIDECAR_SCOPE: "false"
ENABLE_DEFERRED_STATS_CREATION: "false"
BYPASS_OVERLOAD_MANAGER_FOR_STATIC_LISTENERS: "false"
```
See the individual change and upgrade notes for more information.
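One way to adopt the compatibility profile is to pass the compatibility version to the `istiod` chart when upgrading. The following is a sketch, assuming a Helm release named `istiod` in the `istio-system` namespace and the `istio/` repo alias:

```shell
# Sketch: opt in to the 1.23 compatibility profile during a Helm upgrade.
# Release name, namespace, and repo alias are assumptions for illustration.
helm upgrade istiod istio/istiod -n istio-system --set compatibilityVersion=1.23
```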
Istio CRDs are templated by default and can be installed and upgraded via helm install istio-base
This changes how CRDs are upgraded. Previously, we recommended and documented:

- Install: `helm install istio-base`
- Upgrade: `kubectl apply -f manifests/charts/base/files/crd-all.gen.yaml` (or similar)
- Uninstall: `kubectl get crd -oname | grep --color=never 'istio.io' | xargs kubectl delete`
This change allows:

- Install: `helm install istio-base`
- Upgrade: `helm upgrade istio-base`
- Uninstall: `kubectl get crd -oname | grep --color=never 'istio.io' | xargs kubectl delete`
Previously this only worked under certain conditions, and when certain install flags were used, could result in non-Helm-upgradable CRDs being generated that required manual intervention to fix.
As a necessary consequence of this, the labels on the CRDs are changed to be consistent with other Helm-installed resources.
If you previously installed or upgraded CRDs with `kubectl apply` and not Helm, you can continue to do so.

If you previously installed CRDs with `helm install istio-base` OR `kubectl apply`, you can begin safely upgrading Istio CRDs with `helm upgrade istio-base` from this and all subsequent releases, after running the below `kubectl` commands as a one-time migration:
```shell
kubectl label $(kubectl get crds -l chart=istio -o name && kubectl get crds -l app.kubernetes.io/part-of=istio -o name) "app.kubernetes.io/managed-by=Helm"
# Replace istio-base with your actual istio-base Helm release name
kubectl annotate $(kubectl get crds -l chart=istio -o name && kubectl get crds -l app.kubernetes.io/part-of=istio -o name) "meta.helm.sh/release-name=istio-base"
# Replace istio-system with your actual Istio namespace
kubectl annotate $(kubectl get crds -l chart=istio -o name && kubectl get crds -l app.kubernetes.io/part-of=istio -o name) "meta.helm.sh/release-namespace=istio-system"
```
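After running the migration, one way to confirm the CRDs now carry the Helm ownership label is the following check (a sketch, assuming the label applied above):

```shell
# List Istio CRDs that are now labeled as Helm-managed.
kubectl get crds -l app.kubernetes.io/managed-by=Helm -o name | grep 'istio.io'
```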
If desired, the legacy labels can be generated by setting `base.enableCRDTemplates=false` during `helm install base`, but this option will be removed in a future release.
`istiod-remote` chart replaced with `remote` profile
Installing Istio clusters with a remote/external control plane via Helm has never been officially documented or stable. This changes how clusters that use a remote Istio instance are installed, in preparation for documenting this.
The `istiod-remote` Helm chart has been merged with the regular `istio-discovery` Helm chart.

Previously:

```shell
helm install istiod-remote istio/istiod-remote
```

With this change:

```shell
helm install istiod istio/istiod --set profile=remote
```
Note that, as per the above upgrade note, installing the `istio-base` chart is now required in both local and remote clusters.
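Putting the two notes together, a remote-cluster install might look like the following sketch; the release names, namespace, and `istio/` repo alias are assumptions for illustration:

```shell
# The base chart (CRDs) is now required in remote clusters too.
helm install istio-base istio/base -n istio-system --create-namespace
# The istiod chart with the remote profile replaces the old istiod-remote chart.
helm install istiod istio/istiod -n istio-system --set profile=remote
```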
`Sidecar` scoping changes
During processing of services, Istio has a variety of conflict resolution strategies.
Historically, these have subtly differed when a user has a `Sidecar` resource defined, compared to when they do not. This applied even to a `Sidecar` resource with just `egress: "*/*"`, which should behave the same as not having one defined at all.
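For reference, such a `Sidecar` resource, which should now behave identically to having none, looks like this sketch (the name and namespace are illustrative):

```yaml
apiVersion: networking.istio.io/v1
kind: Sidecar
metadata:
  name: default
  namespace: my-namespace  # illustrative
spec:
  egress:
  - hosts:
    - "*/*"
```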
In this version, the behavior between the two has been unified:
Multiple services defined with the same hostname

- Behavior before, without `Sidecar`: prefer a Kubernetes `Service` (rather than a `ServiceEntry`), else pick an arbitrary one.
- Behavior before, with `Sidecar`: prefer the `Service` in the same namespace as the proxy, else pick an arbitrary one.
- New behavior: prefer the `Service` in the same namespace as the proxy, then the Kubernetes `Service` (not `ServiceEntry`), else pick an arbitrary one.
Multiple Gateway API Routes defined for the same service

- Behavior before, without `Sidecar`: prefer the local proxy namespace, to allow consumer overrides.
- Behavior before, with `Sidecar`: arbitrary order.
- New behavior: prefer the local proxy namespace, to allow consumer overrides.
The old behavior can be retained, temporarily, by setting `PILOT_UNIFIED_SIDECAR_SCOPE=false`.
Standardization of the peer metadata attributes
CEL expressions in the telemetry API must use the standard Envoy attributes instead of the custom Wasm extended attributes.
Peer metadata is now stored in `filter_state.downstream_peer` and `filter_state.upstream_peer` instead of `filter_state["wasm.downstream_peer"]` and `filter_state["wasm.upstream_peer"]`.

Node metadata is stored in `xds.node` instead of `node`.

Wasm attributes must be fully qualified, e.g. use `filter_state["wasm.istio_responseClass"]` instead of `istio_responseClass`.

The presence operator can be used for backwards-compatible expressions in a mixed-proxy scenario, e.g. `has(filter_state.downstream_peer) ? filter_state.downstream_peer.namespace : filter_state["wasm.downstream_peer"].namespace` to read the namespace of the peer.
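As an illustration, a Telemetry resource using the new attribute with the backwards-compatible presence check might look like the following sketch; the `prometheus` provider and the custom tag name `peer_namespace` are assumptions, not part of this change:

```yaml
apiVersion: telemetry.istio.io/v1
kind: Telemetry
metadata:
  name: peer-namespace-tag
  namespace: istio-system
spec:
  metrics:
  - providers:
    - name: prometheus
    overrides:
    - match:
        metric: REQUEST_COUNT
      tagOverrides:
        peer_namespace:  # assumed custom tag name
          value: "has(filter_state.downstream_peer) ? filter_state.downstream_peer.namespace : filter_state['wasm.downstream_peer'].namespace"
```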
The peer metadata uses baggage encoding with the following field attributes:

- `namespace`
- `cluster`
- `service`
- `revision`
- `app`
- `version`
- `workload`
- `type` (e.g. `"deployment"`)
- `name` (e.g. `"pod-foo-12345"`)