In addition to capturing application traffic, Istio can also capture DNS requests to improve the performance and usability of your mesh.
When proxying DNS, all DNS requests from an application will be redirected to the sidecar, which stores a local mapping of domain names to IP addresses. If the request can be handled by the sidecar, it will directly return a response to the application, avoiding a roundtrip to the upstream DNS server. Otherwise, the request is forwarded upstream following the standard `/etc/resolv.conf` DNS configuration.
While Kubernetes provides DNS resolution for Kubernetes `Services` out of the box, any custom `ServiceEntries` will not be recognized. With this feature, `ServiceEntry` addresses can be resolved without requiring custom configuration of a DNS server. For Kubernetes `Services`, the DNS response will be the same, but with reduced load on `kube-dns` and increased performance.
This functionality is also available for services running outside of Kubernetes. This means that all internal services can be resolved without clunky workarounds to expose Kubernetes DNS entries outside of the cluster.
This feature is not currently enabled by default. To enable it, install Istio with the following settings:
```
$ cat <<EOF | istioctl install -y -f -
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      proxyMetadata:
        # Enable basic DNS proxying
        ISTIO_META_DNS_CAPTURE: "true"
        # Enable automatic address allocation, optional
        ISTIO_META_DNS_AUTO_ALLOCATE: "true"
EOF
```
This can also be enabled on a per-pod basis with the `proxy.istio.io/config` annotation, regardless of the mesh-wide setting.
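As a sketch of the per-pod approach, the `proxy.istio.io/config` annotation can carry the same `proxyMetadata` settings; the pod name and image below are illustrative, not part of the original example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-capture-test          # hypothetical pod name
  annotations:
    proxy.istio.io/config: |
      proxyMetadata:
        ISTIO_META_DNS_CAPTURE: "true"
spec:
  containers:
  - name: app
    image: curlimages/curl        # any workload image works here
    command: ["sleep", "infinity"]
```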
DNS capture in action
To try out the DNS capture, first set up a `ServiceEntry` for some external service:
```yaml
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-address
spec:
  addresses:
  - 198.51.100.1
  hosts:
  - address.internal
  ports:
  - name: http
    number: 80
    protocol: HTTP
```
Without the DNS capture, a request to `address.internal` would likely fail to resolve. Once this is enabled, you should instead get a response back based on the configured address:
```
$ curl -v address.internal
* Trying 198.51.100.1:80...
```
Address auto allocation
In the above example, you had a predefined IP address for the service to which you sent the request. However, it’s common to access external services that do not have stable addresses, and instead rely on DNS. In this case, the DNS proxy will not have enough information to return a response, and will need to forward DNS requests upstream.
This is especially problematic with TCP traffic. Unlike HTTP requests, which are routed based on `Host` headers, TCP carries much less information: you can only route on the destination IP address and port number. Because you don't have a stable IP for the backend, you cannot route based on that either, leaving only the port number, which leads to conflicts when multiple `ServiceEntries` for TCP services share the same port.
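To illustrate the conflict, consider two hypothetical TCP `ServiceEntries` (names and hosts invented for this sketch) that share a port and define no addresses. With only the destination port `9000` to match on, the proxy cannot tell traffic for one apart from the other:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: tcp-service-one      # hypothetical
spec:
  hosts:
  - one.internal
  ports:
  - name: tcp
    number: 9000
    protocol: TCP
---
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: tcp-service-two      # hypothetical; same port, no address to disambiguate
spec:
  hosts:
  - two.internal
  ports:
  - name: tcp
    number: 9000
    protocol: TCP
```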
To work around these issues, the DNS proxy additionally supports automatically allocating addresses for `ServiceEntries` that do not explicitly define one. This is configured by the `ISTIO_META_DNS_AUTO_ALLOCATE` option shown in the installation settings above.
When this feature is enabled, the DNS response will include a distinct, automatically assigned address for each `ServiceEntry`. The proxy is then configured to match requests to this IP address, and forward the request to the corresponding `ServiceEntry`.
To try this out, configure another `ServiceEntry`, this time without any address explicitly defined:
```yaml
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-auto
spec:
  hosts:
  - auto.internal
  ports:
  - name: http
    number: 80
    protocol: HTTP
  resolution: STATIC
  endpoints:
  - address: 198.51.100.2
```
Now, send a request:
```
$ curl -v auto.internal
* Trying 240.240.0.1:80...
```
As you can see, the request is sent to an automatically allocated address, `240.240.0.1`. These addresses will be picked from the `240.240.0.0/16` reserved IP address range to avoid conflicting with real services.
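The range check above can be verified mechanically. A minimal Python sketch, using the standard library's `ipaddress` module (the helper name is illustrative):

```python
import ipaddress

# Istio's auto-allocated virtual IPs come from this reserved block,
# so they never collide with routable service addresses.
AUTO_ALLOC_RANGE = ipaddress.ip_network("240.240.0.0/16")

def is_auto_allocated(addr: str) -> bool:
    """Return True if addr falls inside the auto-allocation range."""
    return ipaddress.ip_address(addr) in AUTO_ALLOC_RANGE

print(is_auto_allocated("240.240.0.1"))   # the address from the curl output: True
print(is_auto_allocated("198.51.100.2"))  # an explicitly configured endpoint: False
```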