Istio's data plane touches every packet and request in the system, performing health checking, routing, load balancing, authentication, authorization, and generation of observable signals. Service proxies sit transparently in-band, and applications are unaware of the data plane's presence when they make service-to-service calls. The data plane handles intra-cluster communication as well as inbound (ingress) and outbound (egress) cluster network traffic. Whether traffic is entering or leaving the mesh, application service traffic is directed first to the service proxy for handling. Istio transparently intercepts traffic and redirects it to the service proxy using iptables rules.
Istio's control plane provides a single point of administration for service proxies. Because there may be many proxies, and because their configuration must be updated in real time as services are rescheduled across your environment (i.e., container cluster), proxies need to be configured programmatically. The control plane supplies policy and configuration for the mesh's services, transforming a collection of isolated, stateless proxies into a service mesh. It does not directly touch any network packets in the mesh; it operates out-of-band. Control planes typically offer a command-line interface and a user interface, both of which front a centralized API for governing proxy behavior holistically. Changes to control plane configuration can be automated through its APIs (for example, from a CI/CD pipeline), though in practice configuration is usually version-controlled and applied through those same APIs. To summarize, Istio's control plane:

- Provides policy and configuration for the services in the mesh
- Combines a set of isolated, stateless proxies into a service mesh
- Does not directly touch network packets; it operates out-of-band
Pilot is the head of the ship in an Istio mesh. It stays in sync with the underlying platform (e.g., Kubernetes), tracking the state and location of running services and representing them to the data plane. Pilot interfaces with your environment's service discovery system and generates configuration for the data plane's service proxies.
As Istio evolves, Pilot's focus will shift from interfacing with underlying platforms towards scalable serving of proxy configuration. It provides Envoy-compatible configuration by integrating configuration and endpoint information from multiple sources and translating it into xDS objects. Galley, another component, will eventually take responsibility for interfacing directly with underlying platforms.
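To make that translation concrete, here is a minimal sketch of the kind of platform-neutral configuration Pilot consumes (the rule name and host are illustrative); Pilot merges it with endpoint information from service discovery and serves the result to each Envoy proxy as xDS objects:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews            # illustrative name
spec:
  host: reviews            # in-mesh service, resolved via service discovery
  trafficPolicy:
    loadBalancer:
      simple: RANDOM       # becomes Envoy cluster (CDS) load-balancing config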
Galley is Istio's configuration aggregation and dissemination component. As its role progresses, it will insulate the rest of Istio's components from the underlying platform and user-supplied configuration by ingesting and validating configuration. Galley uses the Mesh Configuration Protocol (MCP) as a mechanism for serving and distributing configuration.
Mixer is a standalone control plane component that abstracts infrastructure backends from the rest of Istio. Infrastructure backends include systems such as Stackdriver and New Relic. Precondition checking, quota management, and telemetry reporting are all responsibilities of Mixer.
Service proxies and gateways call Mixer to execute precondition checks that determine whether a request should be allowed to proceed (check) or has exceeded quota, based on communication between the caller and the service, and to report telemetry once a request has completed (report). Mixer interfaces with infrastructure backends through a set of native and third-party adapters. Adapter configuration determines which telemetry is sent to which backend and when. Service mesh operators can use Mixer's adapters, which act as an attribute-processing and routing engine, as a point of integration and intermediation with their infrastructure backends.
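As a sketch of how this adapter configuration fits together (the resource names are illustrative; the instance/handler/rule kinds follow Mixer's config model), the following routes a simple request-count metric to the Prometheus adapter:

apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
  name: requestcount        # illustrative name
  namespace: istio-system
spec:
  compiledTemplate: metric
  params:
    value: "1"              # count each request once
    dimensions:
      destination: destination.service.host | "unknown"
---
apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: prom-handler        # illustrative name
  namespace: istio-system
spec:
  compiledAdapter: prometheus
  params:
    metrics:
    - name: request_count   # Prometheus metric name
      instance_name: requestcount.instance.istio-system
      kind: COUNTER
      label_names:
      - destination
---
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: prom-requestcount   # illustrative name
  namespace: istio-system
spec:
  match: context.protocol == "http"   # only report for HTTP traffic
  actions:
  - handler: prom-handler
    instances: [ requestcount ]

The rule is the routing piece: it decides which instances are sent to which handlers, and under what match conditions.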
Citadel gives Istio the ability to deliver strong service-to-service and end-user authentication via mutual TLS, as well as built-in identity and credential management. Citadel's certificate authority (CA) component handles key and certificate generation, deployment, rotation, revocation, and approving and signing certificate signing requests (CSRs) sent by Citadel agents. Citadel has the (optional) ability to interact with an Identity Directory during the certificate approval process.
Citadel offers a pluggable architecture that allows an alternative certificate authority (CA) to sign workload certificates in place of Citadel's self-generated, self-signed signing key and certificate. This CA pluggability enables Istio to integrate with your organization's existing CA and public key infrastructure (PKI).
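For example, one common pattern (described in Istio's "Plug in CA Certificates" documentation) is to supply your own intermediate CA to Citadel through a Kubernetes secret named cacerts. The file names below are the ones Citadel expects; the bracketed values are placeholders for your own certificate material:

apiVersion: v1
kind: Secret
metadata:
  name: cacerts             # the secret name Citadel looks for
  namespace: istio-system
type: Opaque
data:
  ca-cert.pem: <base64-encoded intermediate CA certificate>
  ca-key.pem: <base64-encoded intermediate CA private key>
  root-cert.pem: <base64-encoded root certificate>
  cert-chain.pem: <base64-encoded certificate chain>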
To mediate both inbound and outbound traffic for all services in the service mesh, Istio uses an extended version of Envoy, a high-performance proxy written in C++. Istio utilizes Envoy's capabilities like dynamic service discovery, load balancing, TLS termination, HTTP/2 and gRPC proxying, circuit breakers, health checks, staged rollouts with %-based traffic split, fault injection, and rich metrics.
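For example, the %-based traffic splitting Istio gets from Envoy is expressed with a VirtualService like the following sketch (the service name is illustrative, and subsets v1 and v2 are assumed to be defined in a DestinationRule):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews            # illustrative name
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90           # 90% of traffic stays on v1
    - destination:
        host: reviews
        subset: v2
      weight: 10           # staged rollout: 10% of traffic goes to v2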
Envoy is deployed as a sidecar to the relevant service in the same Kubernetes pod. This allows Istio to extract a wealth of signals about traffic behavior as attributes, which it can use in Mixer to enforce policy decisions and send to monitoring systems to provide information about the behavior of the entire mesh.
With the sidecar proxy model, you can add Istio capabilities to an existing deployment without re-architecting your system or rewriting code. This is one of the most compelling reasons to use Istio: it promises immediate visibility into top-level service metrics, fine-grained traffic control, and automatic authentication and encryption between all services, without requiring you to do either of those things.
Using the canonical sample application, BookInfo, you can see how service proxies come into play and form a mesh.
In Kubernetes, automatic proxy injection is implemented as a mutating admission webhook, invoked by the Kubernetes API server's MutatingAdmissionWebhook admission controller. It is stateless, relying solely on the injection template and mesh configuration ConfigMaps and on the pod object being injected. It can easily be scaled horizontally, either manually via its Deployment object or automatically via a Horizontal Pod Autoscaler.
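With automatic injection enabled, labeling a namespace is all it takes for the webhook to mutate every pod created there and add the sidecar (a minimal sketch; the namespace name is illustrative):

apiVersion: v1
kind: Namespace
metadata:
  name: default
  labels:
    istio-injection: enabled   # webhook injects sidecars into pods created here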
Istio addresses one of the best-known fallacies of distributed systems: the assumption that networks are homogeneous, reliable, and unchanging. It does so by deploying lightweight proxies between your application containers and the network.
Ingress and egress gateways were first introduced in Istio 0.8. The two are essentially symmetrical, serving as reverse and forward proxies, respectively, for traffic entering and leaving the mesh. Like that of other Istio components, Istio gateways' behavior is defined and controlled through configuration, allowing you to specify which traffic to let in and out of the service mesh, at what rate, and so on.
Ingress gateway configuration lets you define entry points into the service mesh through which incoming traffic flows. Ingressing traffic into the mesh is a reverse-proxy situation, akin to traditional web server load balancing. Egressing traffic out of the mesh is a forward-proxy situation, similar to a traditional forward proxy, in which you determine which traffic is allowed out of the mesh and where it should be routed.
For example, the following Gateway configuration sets up a proxy to act as a load balancer exposing ports 80 and 9080 (HTTP), 443 (HTTPS), and 2379 (TCP) for ingress. The gateway will be applied to the proxy running on a pod with the label app: my-gateway-controller. While Istio will configure the proxy to listen on these ports, it is the user's responsibility to ensure that external traffic to these ports is allowed into the mesh.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: my-gateway
spec:
  selector:
    app: my-gateway-controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - uk.bookinfo.com
    - eu.bookinfo.com
    tls:
      httpsRedirect: true # sends 301 redirect for http requests
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - uk.bookinfo.com
    - eu.bookinfo.com
    tls:
      mode: SIMPLE # enables HTTPS on this port
      serverCertificate: /etc/certs/servercert.pem
      privateKey: /etc/certs/privatekey.pem
  - port:
      number: 9080
      name: http-wildcard
      protocol: HTTP
    hosts:
    - "*"
  - port:
      number: 2379 # to expose internal service via external port 2379
      name: mongo
      protocol: MONGO
    hosts:
    - "*"
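A Gateway on its own only configures listeners; to actually route the traffic it admits, you bind a VirtualService to it. A minimal sketch (the resource name and destination service are illustrative):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo            # illustrative name
spec:
  hosts:
  - uk.bookinfo.com
  - eu.bookinfo.com
  gateways:
  - my-gateway              # binds to the Gateway defined above
  http:
  - route:
    - destination:
        host: productpage   # assumed in-mesh service
        port:
          number: 9080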
Traffic can exit an Istio service mesh in two ways: directly from the sidecar, or funneled through an egress gateway, where traffic policy can be applied.
You can configure the istio-sidecar-injector ConfigMap so that traffic headed for an external destination bypasses the egress gateway. The following setting identifies your cluster-local networks, so that only traffic destined for those ranges is intercepted and kept within the mesh, while traffic to all other destinations flows out directly.
--set global.proxy.includeIPRanges="10.0.0.1/24"
Once this configuration is deployed and the Istio proxies are updated, external requests bypass the sidecar and route directly to their intended destination. Only internal requests within the cluster are intercepted and managed by the Istio sidecar.
Istio monitoring and route rules can be applied to traffic leaving the mesh through an egress gateway. An egress gateway also enables external communication for applications running in a cluster whose nodes lack public IP addresses and therefore cannot reach the internet directly. By defining an egress gateway, directing all egress traffic through it, and allocating public IPs to the egress gateway nodes, the nodes (and the applications running on them) can access external services in a regulated manner.
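As a sketch of this pattern (the external host is illustrative; the istio: egressgateway selector matches Istio's default egress gateway deployment), a ServiceEntry first admits the external host into the mesh's registry, and a Gateway then exposes a listener for it on the egress gateway:

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-svc        # illustrative name
spec:
  hosts:
  - api.example.com         # illustrative external host
  location: MESH_EXTERNAL
  resolution: DNS
  ports:
  - number: 80
    name: http
    protocol: HTTP
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: istio-egressgateway
spec:
  selector:
    istio: egressgateway    # Istio's default egress gateway deployment
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - api.example.com

A VirtualService is still needed to route traffic from the sidecars to the egress gateway, and from the gateway on to the external host; that routing is elided here.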
Why use Istio Gateways and not Kubernetes Ingresses?
In general, the Istio v1alpha3 APIs use Gateways for richer functionality, as the Kubernetes Ingress API has proven insufficient for Istio applications. Compared with Kubernetes Ingress, an Istio Gateway can function as a pure L4 TCP proxy and supports every protocol that Envoy supports.
Another factor to consider is the division of trust domains between organizational teams. The Kubernetes Ingress API combines L4 through L7 specification in a single resource, which makes it difficult for teams with separate trust domains (such as SecOps, NetOps, ClusterOps, and Developers) to own different aspects of ingress traffic management.