Multi-Mesh Deployments for Isolation and Boundary Protection

Deploy environments that require isolation into separate meshes and enable inter-mesh communication through mesh federation.

Oct 2, 2019 | By Vadim Eisenberg - IBM

Various compliance standards require protection of sensitive data environments. Some of the important standards and the types of sensitive data they protect appear in the following table:

Standard | Sensitive data
-------- | --------------
PCI DSS  | payment card data
FedRAMP  | federal information, data and metadata
HIPAA    | personal health data
GDPR     | personal data

PCI DSS, for example, recommends putting the cardholder data environment on a network separate from the rest of the system. It also requires using a DMZ, with firewalls between the public Internet and the DMZ, and between the DMZ and the internal network.

Isolating sensitive data environments from the rest of the information system can reduce the scope of compliance checks and improve the security of the sensitive data. Reducing the scope lowers the risk of failing a compliance check and reduces the cost of compliance, since there are fewer components to check and secure according to the compliance requirements.

You can achieve isolation of sensitive data by separating the parts of the application that process that data into a separate service mesh, preferably on a separate network, and then connecting the meshes with different compliance requirements into a multi-mesh deployment. The process of connecting applications across meshes is called mesh federation.

Note that using mesh federation to create a multi-mesh deployment is very different from creating a multicluster deployment, which defines a single service mesh composed of services spanning more than one cluster. Unlike multi-mesh, a multicluster deployment is not suitable for applications that require isolation and boundary protection.

In this blog post I describe the requirements for isolation and boundary protection, and outline the principles of multi-mesh deployments. Finally, I touch on the current state of mesh-federation support and automation work under way for Istio.

Isolation and boundary protection

Isolation and boundary protection mechanisms are explained in the NIST Special Publication 800-53, Revision 4, Security and Privacy Controls for Federal Information Systems and Organizations, Appendix F, Security Control Catalog, SC-7 Boundary Protection.

In particular, the Boundary protection, isolation of information system components control enhancement recommends employing boundary protection mechanisms to separate system components that support different missions or business functions.

Various compliance standards recommend isolating environments that process sensitive data from the rest of the organization. The Payment Card Industry (PCI) Data Security Standard recommends implementing network isolation for the cardholder data environment and requires isolating this environment from the DMZ. FedRAMP Authorization Boundary Guidance describes the authorization boundary for federal information and data, while NIST Special Publication 800-37, Revision 2, Risk Management Framework for Information Systems and Organizations: A System Life Cycle Approach for Security and Privacy, recommends protecting such a boundary in Appendix G, Authorization Boundary Considerations.

Boundary protection, in particular, means exposing nothing by default, explicitly exposing only designated services to the outside, and monitoring and controlling the traffic that crosses the boundary of each environment.

Multi-mesh deployments facilitate dividing a system into subsystems with different security and compliance requirements, and facilitate boundary protection. You put each subsystem into a separate service mesh, preferably on a separate network, and connect the Istio meshes using gateways. The gateways monitor and control cross-mesh traffic at the boundary of each mesh.
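As a rough sketch of that pattern (the reviews service, its default namespace, port 9080, and the gateway port 15443 are assumptions borrowed from the Bookinfo sample and common Istio conventions, not part of any federation API), the exposing mesh could declare a TLS-passthrough Gateway and a VirtualService that route only the explicitly exposed host:

```yaml
# A sketch only: service name, namespace, and ports are assumptions taken
# from the Bookinfo sample, not a prescribed federation API.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: cross-mesh-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway          # bind to the mesh's ingress gateway pods
  servers:
  - port:
      number: 15443
      name: tls-cross-mesh
      protocol: TLS
    tls:
      mode: PASSTHROUGH            # do not terminate TLS; route by SNI
    hosts:
    - reviews.default.svc.cluster.local   # only this service is exposed
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews-exposure
  namespace: istio-system
spec:
  hosts:
  - reviews.default.svc.cluster.local
  gateways:
  - cross-mesh-gateway
  tls:
  - match:
    - port: 15443
      sniHosts:
      - reviews.default.svc.cluster.local
    route:
    - destination:
        host: reviews.default.svc.cluster.local
        port:
          number: 9080
```

Connections for any SNI host that is not listed are rejected at the gateway, so the rest of the mesh stays unreachable from the outside.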

Features of multi-mesh deployments

A multi-mesh deployment provides expose-nothing by default, boundary protection, non-uniform naming, and the possibility that common trust does not exist between the meshes. While expose-nothing by default and boundary protection are required to facilitate compliance and improve security, non-uniform naming and the absence of common trust are required when connecting meshes of different organizations, or of an organization that cannot enforce uniform naming, or cannot or may not establish common trust between the meshes.

An optional feature that you may want to use is service location transparency: consuming services send requests to the exposed services in remote meshes using local service names. The consuming services are oblivious to the fact that some of the destinations are in remote meshes while others are local. Access is uniform, using local service names, for example reviews.default.svc.cluster.local in Kubernetes. Service location transparency is useful when you want to be able to change the location of consumed services, for example when a service is migrated from a private cloud to a public cloud, without changing the code of your applications.
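One way to sketch such a binding on the consuming side, assuming the remote mesh's ingress gateway is reachable at the placeholder address 192.0.2.10 on port 15443 and exposes reviews on port 9080 as above, is a ServiceEntry that maps the local-style name onto the remote gateway, plus a DestinationRule that originates TLS with the exposed name as the SNI. The certificate paths assume the certificates that Istio mounts into the sidecars; a complete setup also has to make the name resolvable by local DNS and to establish trust between the meshes, which is omitted here:

```yaml
# A sketch only: 192.0.2.10 is a placeholder for the remote mesh's ingress
# gateway address; certificate paths assume sidecar-mounted certs under /etc/certs.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: reviews-in-remote-mesh
spec:
  hosts:
  - reviews.default.svc.cluster.local    # local-style name used by consumers
  ports:
  - number: 9080
    name: http
    protocol: HTTP
  resolution: STATIC
  endpoints:
  - address: 192.0.2.10                  # remote mesh's ingress gateway
    ports:
      http: 15443                        # cross-mesh port on that gateway
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews-in-remote-mesh
spec:
  host: reviews.default.svc.cluster.local
  trafficPolicy:
    tls:
      mode: MUTUAL
      clientCertificate: /etc/certs/cert-chain.pem
      privateKey: /etc/certs/key.pem
      caCertificates: /etc/certs/root-cert.pem
      sni: reviews.default.svc.cluster.local   # matched by the remote Gateway
```

With this binding in place, applications keep calling reviews.default.svc.cluster.local; moving the service back into the local mesh only requires removing these resources, not changing application code.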

The current mesh-federation work

While you can already perform mesh federation today using standard Istio configuration, doing so requires writing a lot of boilerplate YAML and is error-prone. There is an effort under way to automate the mesh federation process. In the meantime, you can look at these multi-mesh deployment examples to get an idea of what a generated federation might include.

Summary

In this blog post I described the requirements for isolation and boundary protection of sensitive data environments using Istio multi-mesh deployments. I outlined the principles of Istio multi-mesh deployments and reported on the current mesh federation work in Istio.

I will be happy to hear your opinion about multi-mesh and multicluster at discuss.istio.io.
