Understand your Mesh with Istioctl Describe

In Istio 1.3, we included the istioctl experimental describe command. This CLI command provides you with the information needed to understand the configuration impacting a pod. This guide shows you how to use this experimental sub-command to see if a pod is in the mesh and verify its configuration.

The basic usage of the command is as follows:

$ istioctl experimental describe pod <pod-name>[.<namespace>]

Appending a namespace to the pod name has the same effect as using the -n option of istioctl to specify a non-default namespace. You can also abbreviate the experimental keyword as x, as some of the examples below do.
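For example, the following two commands are equivalent (my-pod and my-namespace are placeholder names):

$ istioctl experimental describe pod my-pod.my-namespace
$ istioctl experimental describe pod -n my-namespace my-pod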

This guide assumes you have deployed the Bookinfo sample in your mesh. If you haven’t already done so, start the application’s services and determine the IP and port of the ingress before continuing.

Verify a pod is in the mesh

The istioctl describe command returns a warning if the Envoy proxy is not present in a pod or if the proxy has not started. Additionally, the command warns if some of the Istio requirements for pods are not met.

For example, the following command produces a warning indicating a kube-dns pod is not part of the service mesh because it has no sidecar:

$ export KUBE_POD=$(kubectl -n kube-system get pod -l k8s-app=kube-dns -o jsonpath='{.items[0].metadata.name}')
$ istioctl x describe pod -n kube-system $KUBE_POD
Pod: coredns-f9fd979d6-2zsxk
   Pod Ports: 53/UDP (coredns), 53 (coredns), 9153 (coredns)
WARNING: coredns-f9fd979d6-2zsxk is not part of mesh; no Istio sidecar
--------------------
2021-01-22T16:10:14.080091Z     error   klog    an error occurred forwarding 42785 -> 15000: error forwarding port 15000 to pod 692362a4fe313005439a873a1019a62f52ecd02c3de9a0957cd0af8f947866e5, uid : failed to execute portforward in network namespace "/var/run/netns/cni-3c000d0a-fb1c-d9df-8af8-1403e6803c22": failed to dial 15000: dial tcp4 127.0.0.1:15000: connect: connection refused[]
Error: failed to execute command on sidecar: failure running port forward process: Get "http://localhost:42785/config_dump": EOF

For a pod that is part of the mesh, such as the Bookinfo ratings service, the command produces no such warning; instead, it outputs the Istio configuration applied to the pod:

$ export RATINGS_POD=$(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}')
$ istioctl experimental describe pod $RATINGS_POD
Pod: ratings-v1-7dc98c7588-8jsbw
   Pod Ports: 9080 (ratings), 15090 (istio-proxy)
--------------------
Service: ratings
   Port: http 9080/HTTP targets pod port 9080

The output shows the following information:

  • The ports of the service container in the pod, 9080 for the ratings container in this example.
  • The ports of the istio-proxy container in the pod, 15090 in this example.
  • The protocol used by the service in the pod, HTTP over port 9080 in this example.

Verify destination rule configurations

You can use istioctl describe to see what destination rules apply to requests to a pod. For example, apply the Bookinfo mutual TLS destination rules:

$ kubectl apply -f @samples/bookinfo/networking/destination-rule-all-mtls.yaml@
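This file defines destination rules for all of the Bookinfo services. To confirm what was applied, you can list them with a standard kubectl query (assuming the default namespace):

$ kubectl get destinationrules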

Now describe the ratings pod again:

$ istioctl x describe pod $RATINGS_POD
Pod: ratings-v1-f745cf57b-qrxl2
   Pod Ports: 9080 (ratings), 15090 (istio-proxy)
--------------------
Service: ratings
   Port: http 9080/HTTP
DestinationRule: ratings for "ratings"
   Matching subsets: v1
      (Non-matching subsets v2,v2-mysql,v2-mysql-vm)
   Traffic Policy TLS Mode: ISTIO_MUTUAL

The command now shows additional output:

  • The ratings destination rule applies to requests to the ratings service.
  • The subset of the ratings destination rule that matches the pod, v1 in this example.
  • The other subsets defined by the destination rule.
  • The pod accepts either HTTP or mutual TLS requests but clients use mutual TLS.
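To relate this output to the configuration, here is a rough, abridged sketch of the ratings destination rule from the sample (consult the sample file itself for the authoritative version):

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: ratings
spec:
  host: ratings
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL   # reported as "Traffic Policy TLS Mode"
  subsets:
  - name: v1               # matches the pod's version=v1 label
    labels:
      version: v1
  - name: v2               # one of the non-matching subsets
    labels:
      version: v2
  ...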

Verify virtual service configurations

When virtual services configure routes to a pod, istioctl describe will also include the routes in its output. For example, apply the Bookinfo virtual services that route all requests to v1 pods:

$ kubectl apply -f @samples/bookinfo/networking/virtual-service-all-v1.yaml@

Then, describe a pod implementing v1 of the reviews service:

$ export REVIEWS_V1_POD=$(kubectl get pod -l app=reviews,version=v1 -o jsonpath='{.items[0].metadata.name}')
$ istioctl x describe pod $REVIEWS_V1_POD
...
VirtualService: reviews
   1 HTTP route(s)

The output contains similar information to that shown previously for the ratings pod, but it also includes the virtual service’s routes to the pod.
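For reference, the route that produces this output looks roughly like the following (an abridged sketch of the reviews rule in the sample file):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1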

The istioctl describe command doesn’t just show the virtual services impacting the pod. If a virtual service is configured for a pod’s service host but no traffic will reach the pod, the command’s output includes a warning. This can occur when the virtual service effectively blocks traffic by never routing it to the pod’s subset. For example:

$ export REVIEWS_V2_POD=$(kubectl get pod -l app=reviews,version=v2 -o jsonpath='{.items[0].metadata.name}')
$ istioctl x describe pod $REVIEWS_V2_POD
...
VirtualService: reviews
   WARNING: No destinations match pod subsets (checked 1 HTTP routes)
      Route to non-matching subset v1 for (everything)

The warning includes the cause of the problem, the number of routes checked, and information about the other routes in place. In this example, no traffic arrives at the v2 pod because the route in the virtual service directs all traffic to the v1 subset.
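If you want to double-check which subset a pod belongs to, you can inspect its labels directly with standard kubectl flags:

$ kubectl get pod $REVIEWS_V2_POD --show-labels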

If you now delete the Bookinfo destination rules:

$ kubectl delete -f @samples/bookinfo/networking/destination-rule-all-mtls.yaml@

You can see another useful feature of istioctl describe:

$ istioctl x describe pod $REVIEWS_V1_POD
...
VirtualService: reviews
   WARNING: No destinations match pod subsets (checked 1 HTTP routes)
      Warning: Route to subset v1 but NO DESTINATION RULE defining subsets!

The output shows you that you deleted the destination rule but not the virtual service that depends on it. The virtual service routes traffic to the v1 subset, but there is no destination rule defining the v1 subset. Thus, traffic destined for version v1 can’t flow to the pod.
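You can confirm that the rule is gone with a standard kubectl query; the exact wording may vary, but the server should report that the resource no longer exists:

$ kubectl get destinationrule reviews
Error from server (NotFound): destinationrules.networking.istio.io "reviews" not found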

If you refresh the browser to send a new request to Bookinfo at this point, you will see the following message: Error fetching product reviews. To fix the problem, reapply the destination rule:

$ kubectl apply -f @samples/bookinfo/networking/destination-rule-all-mtls.yaml@

Reloading the browser shows the app working again, and running istioctl experimental describe pod $REVIEWS_V1_POD no longer produces warnings.

Verify traffic routes

The istioctl describe command shows split traffic weights too. For example, run the following command to route 90% of traffic to the v1 subset and 10% to the v2 subset of the reviews service:

$ kubectl apply -f @samples/bookinfo/networking/virtual-service-reviews-90-10.yaml@

Now describe the reviews v1 pod:

$ istioctl x describe pod $REVIEWS_V1_POD
...
VirtualService: reviews
   Weight 90%

The output shows that the reviews virtual service has a weight of 90% for the v1 subset.
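The weight comes directly from the route definition, which looks roughly like this (an abridged sketch of the sample file):

http:
- route:
  - destination:
      host: reviews
      subset: v1
    weight: 90
  - destination:
      host: reviews
      subset: v2
    weight: 10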

This output is also helpful for other types of routing. For example, you can deploy header-based routing:

$ kubectl apply -f @samples/bookinfo/networking/virtual-service-reviews-jason-v2-v3.yaml@

Then, describe the pod again:

$ istioctl x describe pod $REVIEWS_V1_POD
...
VirtualService: reviews
   WARNING: No destinations match pod subsets (checked 2 HTTP routes)
      Route to non-matching subset v2 for (when headers are end-user=jason)
      Route to non-matching subset v3 for (everything)

The output includes a warning because you are describing a pod in the v1 subset, while the virtual service configuration you applied routes traffic to the v2 subset when the request header contains end-user=jason and to the v3 subset in all other cases.
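For reference, the header-based routing that triggers this warning looks roughly like the following (an abridged sketch of the sample file):

http:
- match:
  - headers:
      end-user:
        exact: jason
  route:
  - destination:
      host: reviews
      subset: v2
- route:
  - destination:
      host: reviews
      subset: v3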

Verify strict mutual TLS

Following the mutual TLS migration instructions, you can enable strict mutual TLS for the ratings service:

$ kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: ratings-strict
spec:
  selector:
    matchLabels:
      app: ratings
  mtls:
    mode: STRICT
EOF

Run the following command to describe the ratings pod:

$ istioctl x describe pod $RATINGS_POD
Pilot reports that pod enforces mTLS and clients speak mTLS

The output reports that requests to the ratings pod are now locked down and secure.
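To review the policy you just applied, you can fetch it back from the cluster:

$ kubectl get peerauthentication ratings-strict -o yaml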

Sometimes, however, a deployment breaks when you switch mutual TLS to STRICT mode. The likely cause is a destination rule that doesn’t match the new configuration. For example, configure the Bookinfo clients not to use mutual TLS by applying the plain HTTP destination rules:

$ kubectl apply -f @samples/bookinfo/networking/destination-rule-all.yaml@

If you open Bookinfo in your browser, you see Ratings service is currently unavailable. To learn why, run the following command:

$ istioctl x describe pod $RATINGS_POD
...
WARNING Pilot predicts TLS Conflict on ratings-v1-f745cf57b-qrxl2 port 9080 (pod enforces mTLS, clients speak HTTP)
  Check DestinationRule ratings/default and AuthenticationPolicy ratings-strict/default

The output includes a warning describing the conflict between the destination rule and the authentication policy.
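You can confirm the mismatch by querying the destination rule’s TLS settings directly; with the plain HTTP rules applied, the field is simply absent and the command prints nothing:

$ kubectl get destinationrule ratings -o jsonpath='{.spec.trafficPolicy.tls.mode}'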

You can restore correct behavior by applying a destination rule that uses mutual TLS:

$ kubectl apply -f @samples/bookinfo/networking/destination-rule-all-mtls.yaml@

Conclusion and cleanup

Our goal with the istioctl x describe command is to help you understand the traffic and security configurations in your Istio mesh.

We would love to hear your ideas for improvements! Please join us at https://discuss.istio.io.

To remove the Bookinfo pods and configurations used in this guide, run the following commands:

$ kubectl delete -f @samples/bookinfo/platform/kube/bookinfo.yaml@
$ kubectl delete -f @samples/bookinfo/networking/bookinfo-gateway.yaml@
$ kubectl delete -f @samples/bookinfo/networking/destination-rule-all-mtls.yaml@
$ kubectl delete -f @samples/bookinfo/networking/virtual-service-all-v1.yaml@
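If you also applied the strict mutual TLS policy from this guide, remove it as well:

$ kubectl delete peerauthentication ratings-strict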