From multi-mesh to multi-cluster, Meshery is continuously expanding its capabilities to give developers, operators, and security engineers more control over their infrastructure. In this post, we'll take a look behind the scenes at how each component in Meshery's architecture plays a role in managing many Kubernetes clusters.
## Philosophy behind Meshery's multi-cluster management approach
While designing Meshery for a world of many clouds and many Kubernetes clusters, much care has been taken to ensure that Meshery is an extensible management platform, ready to handle new types of infrastructure and new use cases rapidly through its plugin model. Under the hood, Meshery Server acts as a delegator of operations: it figures out which Meshery Adapter registered a capability for a given operation and sends the operation to that component (for example, Meshery's Istio adapter) via a gRPC call. When an operation involves one or more Kubernetes clusters, the corresponding kubeconfigs are sent as a parameter to the RPC. It is then the job of the handling adapter to respect that parameter and perform the operation across the clusters described by those kubeconfigs. Operations that do not require a kubeconfig are managed through the same RPC; the only difference is that the handling component ignores the kubeconfigs field altogether, which makes the system work not just for Kubernetes but for other cloud native use cases. Reusing the same RPC for different types of requests is a fairly common approach, and it is sometimes debated against the alternative of defining stricter, purpose-specific RPCs. It is this approach that makes Meshery completely pluggable and extensible.
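To make that delegation model concrete, here is a minimal sketch of what such a shared RPC handler could look like on the adapter side. The request type, field names, and helper functions are hypothetical illustrations for this post, not Meshery's actual gRPC definitions.

```go
package adapter

import (
	"context"
	"fmt"
)

// OperationRequest is a hypothetical stand-in for the request message an
// adapter might receive over gRPC. Operations that do not target
// Kubernetes simply leave KubeConfigs empty.
type OperationRequest struct {
	OperationName string
	KubeConfigs   [][]byte // one kubeconfig per selected cluster
}

// ApplyOperation dispatches the requested operation. The same RPC serves
// both Kubernetes and non-Kubernetes use cases: the handler decides what
// to do with the kubeconfigs field.
func ApplyOperation(ctx context.Context, req *OperationRequest) error {
	if len(req.KubeConfigs) == 0 {
		// Non-Kubernetes operation: the kubeconfigs field is ignored.
		return runClusterlessOperation(ctx, req.OperationName)
	}
	// Kubernetes operation: apply against every cluster that was passed in.
	for i, kubeconfig := range req.KubeConfigs {
		if err := runOnCluster(ctx, req.OperationName, kubeconfig); err != nil {
			return fmt.Errorf("cluster %d: %w", i, err)
		}
	}
	return nil
}

// runClusterlessOperation and runOnCluster are placeholders for
// adapter-specific logic.
func runClusterlessOperation(ctx context.Context, op string) error {
	fmt.Println("running", op, "without a cluster")
	return nil
}

func runOnCluster(ctx context.Context, op string, kubeconfig []byte) error {
	fmt.Println("running", op, "against a cluster of kubeconfig size", len(kubeconfig))
	return nil
}
```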
## Using multi-cluster with Meshery
From a client's perspective, there are two main uses of the multi-context feature. While deploying a MeshMap design or performing any other operation on their clusters, users can select any number of Kubernetes contexts and have the operation performed uniformly and in parallel across those clusters. And while visualizing the state of their clusters, the same context switcher lets them filter down to the clusters they want to see.
All cluster-specific operations are now applied uniformly over any number of clusters. So if you have 10 clusters to manage and 8 of them start out with the exact same set of pods, deployments, services, and so on, Meshery can help you apply these operations quickly and easily.
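Conceptually, this uniform fan-out is a parallel apply over every selected context. The sketch below illustrates the idea with hypothetical types and function names; it is not Meshery's internal implementation.

```go
package multicluster

import (
	"context"
	"sync"
)

// ClusterContext is a hypothetical handle for one selected Kubernetes context.
type ClusterContext struct {
	Name       string
	KubeConfig []byte
}

// ApplyToSelected fans an operation out to every selected context in
// parallel, which is conceptually what happens when several clusters are
// checked in the context switcher and a design or sample app is deployed.
func ApplyToSelected(ctx context.Context, selected []ClusterContext,
	apply func(context.Context, ClusterContext) error) map[string]error {

	var (
		wg      sync.WaitGroup
		mu      sync.Mutex
		results = make(map[string]error, len(selected))
	)
	for _, cc := range selected {
		wg.Add(1)
		go func(cc ClusterContext) {
			defer wg.Done()
			err := apply(ctx, cc) // same operation, applied uniformly
			mu.Lock()
			results[cc.Name] = err
			mu.Unlock()
		}(cc)
	}
	wg.Wait()
	return results
}
```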
It is as simple as selecting the specific cluster(s) from the Kubernetes context switcher in the navbar, and then applying whatever operation you want, whether that is deploying a sample app, a Prometheus daemonset, or a MeshMap design.
Just before the operation is applied, you will be prompted with a confirmation modal that tells you which cluster(s) the operation will be performed against. As the user interface improves, this same modal will also convey more useful information about the operation you are about to perform.
## Using MeshMap visualizer
You can switch between views of your clusters in visualizer mode while using MeshMap.
## Managing Meshery on multiple clusters
From the settings page, users can perform cluster-related operations such as adding more clusters, removing data from existing clusters, and removing existing clusters.
Meshery also deploys the Meshery Operator to each cluster it is about to manage. This operator manages the lifecycle of Meshery Broker and MeshSync. MeshSync pumps the blood into Meshery's core: it is responsible for watching all the different types of resources by establishing a watch stream over each of them. MeshSync then pumps that data into the NATS server, of which Meshery Server itself is a client. From there, Meshery Server gets all the relevant data about activity in the cluster.
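On the consuming end of that pipeline, Meshery Server behaves like any other NATS subscriber. The following sketch shows what such a subscription could look like using the nats.go client library; the URL and subject name are placeholders, not Meshery's actual configuration.

```go
package meshsyncconsumer

import (
	"log"

	"github.com/nats-io/nats.go"
)

// Subscribe illustrates the server side of the MeshSync pipeline: MeshSync
// publishes discovered cluster state to a NATS subject, and Meshery Server,
// as a NATS client, consumes it.
func Subscribe(natsURL, subject string) (*nats.Subscription, error) {
	nc, err := nats.Connect(natsURL)
	if err != nil {
		return nil, err
	}
	return nc.Subscribe(subject, func(msg *nats.Msg) {
		// In Meshery, messages like this carry Kubernetes resource events;
		// here we only log the raw payload size.
		log.Printf("received %d bytes on %s", len(msg.Data), msg.Subject)
	})
}
```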
By default, Meshery wants to be as aware of your infrastructure as possible in order to provide value, so it deploys its operator to each detected cluster. But you can fine-tune this configuration by reviewing each cluster in the table shown.
If you disconnect a cluster and do not want to persist the data from that cluster, you can perform a fine-grained deletion by deleting all of the MeshSync data (the Kubernetes objects) for that specific cluster.
## Future of multi-cluster

Meshery, as an extension point to your infrastructure, provides out-of-the-box value by adding components, whether Kubernetes-specific, service mesh-specific, or custom components that add new functionality. We can now add multi-cluster-specific components to provide more abstraction. This model can be used along with Meshery's multi-mesh capabilities to give users an overall multi-mesh, multi-cluster experience. For instance, an Istio service mesh spanning multiple clusters can be abstracted and managed by Meshery using custom components such as VirtualGateway and VirtualDestinationRules. In this case, Meshery's Istio adapter handles the logic of converting a VirtualGateway into gateways across the clusters. This abstraction provides high value by letting the service mesh span the clusters while the Ops team configures the mesh with minimal effort.
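To illustrate the kind of translation described above, the sketch below expands one abstract gateway into per-cluster resources. The VirtualGateway and Gateway types here are illustrative placeholders, not actual Meshery or Istio APIs.

```go
package multimesh

// VirtualGateway is the kind of multi-cluster abstraction described above;
// the fields are illustrative only.
type VirtualGateway struct {
	Name     string
	Hosts    []string
	Clusters []string // clusters the gateway should span
}

// Gateway stands in for a per-cluster gateway resource.
type Gateway struct {
	Name    string
	Cluster string
	Hosts   []string
}

// Expand converts one VirtualGateway into concrete per-cluster gateways,
// which is the kind of translation an adapter could then apply to each
// cluster it manages.
func Expand(vg VirtualGateway) []Gateway {
	gateways := make([]Gateway, 0, len(vg.Clusters))
	for _, cluster := range vg.Clusters {
		gateways = append(gateways, Gateway{
			Name:    vg.Name + "-" + cluster,
			Cluster: cluster,
			Hosts:   vg.Hosts,
		})
	}
	return gateways
}
```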
Just like the example above, Meshery has many such extension points into which logic can be added to provide useful functionality. And as more of these extension points are added, Meshery will continue to give more and more power to your cloud native infrastructure.