Gateway Connectivity

Follow this guide to install an Istio multicluster service mesh where the Kubernetes cluster services and the applications in each cluster can communicate with remote clusters only through the gateway IPs of those clusters.

Instead of using a central Istio control plane to manage the mesh, in this configuration each cluster has an identical Istio control plane installation, each managing its own endpoints. All of the clusters are under a shared administrative control for the purposes of policy enforcement and security.

A single Istio service mesh across the clusters is achieved by replicating shared services and namespaces and using a common root CA in all of the clusters. Cross-cluster communication occurs over Istio Gateways of the respective clusters.

Figure: Istio mesh spanning multiple Kubernetes clusters using Istio Gateway to reach remote pods


Prerequisites

  • Two or more Kubernetes clusters running version 1.10 or newer.

  • Authority to deploy the Istio control plane using Helm on each Kubernetes cluster.

  • The IP address of the istio-ingressgateway service in each cluster must be reachable from every other cluster; see the example after this list for one way to look it up.

  • A root CA. Cross-cluster communication requires a mutual TLS connection between services. To enable mutual TLS communication across clusters, each cluster’s Citadel will be configured with intermediate CA credentials generated by a shared root CA. For illustration purposes, we use a sample root CA certificate available in the Istio installation under the samples/certs directory.
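
For reference, one common way to find a cluster's gateway address is to read the external IP of the istio-ingressgateway service. This is only a sketch: on some platforms the load balancer exposes a hostname rather than an IP, in which case adjust the JSONPath accordingly.

$ kubectl get svc istio-ingressgateway -n istio-system \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}'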

Deploy the Istio control plane in each cluster

  1. Generate intermediate CA certificates for each cluster’s Citadel from your organization’s root CA. The shared root CA enables mutual TLS communication across different clusters.

  2. Generate a multicluster-gateways Istio configuration file using helm:

    $ cat install/kubernetes/helm/istio-init/files/crd-* > $HOME/istio.yaml
    $ helm template install/kubernetes/helm/istio --name istio --namespace istio-system \
        -f install/kubernetes/helm/istio/example-values/values-istio-multicluster-gateways.yaml >> $HOME/istio.yaml

    For further details and customization options, refer to the Installation with Helm instructions.

  3. Run the following commands in every cluster to deploy an identical Istio control plane configuration in all of them.

    • Create a Kubernetes secret for your generated CA certificates using a command similar to the following. See Certificate Authority (CA) certificates for more details.

      $ kubectl create namespace istio-system
      $ kubectl create secret generic cacerts -n istio-system \
          --from-file=samples/certs/ca-cert.pem \
          --from-file=samples/certs/ca-key.pem \
          --from-file=samples/certs/root-cert.pem \
          --from-file=samples/certs/cert-chain.pem
    • Use the Istio installation YAML file generated in the previous step to install Istio:

      $ kubectl apply -f $HOME/istio.yaml
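
Before moving on, it can help to confirm that the control plane pods are running and that the istiocoredns service, which the next section relies on, has been created. A minimal check could look like this:

$ kubectl get pods -n istio-system
$ kubectl get svc istiocoredns -n istio-system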

Set up DNS

Providing DNS resolution for services in remote clusters will allow existing applications to function unmodified, as applications typically expect to resolve services by their DNS names and access the resulting IP. Istio itself does not use the DNS for routing requests between services. Services local to a cluster share a common DNS suffix (e.g., svc.cluster.local). Kubernetes DNS provides DNS resolution for these services.

To provide a similar setup for services from remote clusters, we name services from remote clusters in the format <name>.<namespace>.global. Istio also ships with a CoreDNS server that will provide DNS resolution for these services. In order to utilize this DNS, Kubernetes’ DNS needs to be configured to point to CoreDNS as the DNS server for the .global DNS domain.

Create one of the following ConfigMaps, or update an existing one, in each cluster that will be calling services in remote clusters (every cluster in the general case):

For clusters that use kube-dns:

$ kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {"global": ["$(kubectl get svc -n istio-system istiocoredns -o jsonpath={.spec.clusterIP})"]}
EOF

For clusters that use CoreDNS:

$ kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        kubernetes cluster.local {
           pods insecure
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
    }
    global:53 {
        cache 30
        proxy . $(kubectl get svc -n istio-system istiocoredns -o jsonpath={.spec.clusterIP})
    }
EOF
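
Both ConfigMaps fill in the cluster IP of the istiocoredns service through command substitution. If you prefer to apply the manifests from a file and paste the address by hand, you can look it up first with the same command:

$ kubectl get svc -n istio-system istiocoredns -o jsonpath={.spec.clusterIP}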

Configure application services

Every service in a given cluster that needs to be accessed from a different remote cluster requires a ServiceEntry configuration in the remote cluster. The host used in the service entry should be of the form <name>.<namespace>.global where name and namespace correspond to the service’s name and namespace respectively.
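
For illustration only, here is a sketch of such a service entry. It assumes a hypothetical httpbin service listening on port 8000 in the bar namespace of the remote cluster; the 240.0.0.2 address and the remote gateway address are placeholders you must replace with values appropriate to your environment.

$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: httpbin-bar
spec:
  hosts:
  # must be of the form <name>.<namespace>.global
  - httpbin.bar.global
  # treat the remote service as part of the mesh, since all clusters share a common root of trust
  location: MESH_INTERNAL
  ports:
  - name: http1
    number: 8000
    protocol: http
  resolution: DNS
  addresses:
  # the IP that httpbin.bar.global resolves to; it only needs to be unique within this cluster
  # and need not be routable, since the sidecar intercepts traffic sent to it
  - 240.0.0.2
  endpoints:
  # the routable address of the remote cluster's istio-ingressgateway (see the prerequisites)
  - address: <REMOTE_GATEWAY_IP>
    ports:
      http1: 15443 # gateway port used for cross-cluster mutual TLS traffic
EOF

With this in place, traffic sent to 240.0.0.2 is captured by the client sidecar and forwarded to the remote cluster's gateway over mutual TLS, where it is routed on to the target service.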

To confirm that your multicluster configuration is working, we suggest you proceed to our simple multicluster using gateways example to test your setup.


Uninstalling

Uninstall Istio by running the following commands on every cluster:

$ kubectl delete -f $HOME/istio.yaml
$ kubectl delete ns istio-system


Summary

Using Istio gateways, a common root CA, and service entries, you can configure a single Istio service mesh across multiple Kubernetes clusters. Once configured this way, traffic can be transparently routed to remote clusters without any application involvement. Although this approach requires a certain amount of manual configuration for remote service access, the service entry creation process could be automated.