
Addons

Various addons (additional services) are available for Cloud Containers clusters. They can be selected in any combination and installed either when creating a cluster using Terraform, or later in an already existing cluster. The installation process is automated and requires minimal user intervention.

Features of installing addons

  • Addons are installed on the worker nodes of the cluster and consume their computing resources.

    The system requirements of the addons given below are based on the standard values of requests and limits for Kubernetes resources in the addon setup code. If non-standard values are used, the system requirements of the addons change accordingly.

  • Addons can be installed on a dedicated group of worker nodes or on Kubernetes worker nodes selected by the scheduler. The first approach prevents addons from affecting the production services deployed in the cluster (see the placement sketch after this list).

    The computing resources of a dedicated group of worker nodes should be sufficient for all addons, even if each addon consumes the maximum resources specified in the system requirements. It is recommended to set up automatic scaling for such a group of nodes.

  • There are three options for installing addons:

    • Standard installation on Kubernetes worker nodes selected by the scheduler with a change in the addon configuration code.
    • Installation on dedicated worker nodes with a change in the addon configuration code.
    • Quick installation on Kubernetes worker nodes selected by the scheduler without changing the addon setup code (with default settings).

    Not all addons support all three installation options.

    The installation process is described in the section Configuring and installing addons.
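
To illustrate the second option, below is a minimal sketch of pinning an addon to a dedicated node group via its configuration code, assuming the code accepts Helm-style values. The label, taint, and resource figures are placeholders, not VK Cloud defaults.

```yaml
# Hedged sketch: placing an addon on a dedicated worker node group.
# The label "node-role/addons" and the matching taint are assumptions;
# use the actual label and taint of your node group.
nodeSelector:
  node-role/addons: "true"
tolerations:
  - key: node-role/addons
    operator: Exists
    effect: NoSchedule
# Explicit requests/limits keep the resource consumption predictable
# (the values here are placeholders, not the documented standard ones).
resources:
  requests:
    cpu: 200m
    memory: 256Mi
  limits:
    cpu: 500m
    memory: 512Mi
```

The taint keeps ordinary workloads off the dedicated nodes, while the toleration and nodeSelector let the addon pods schedule onto them.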

Available addons

Capsule

Kubernetes clusters allow you to logically divide Kubernetes resources at the level of individual namespaces. However, this may not be enough to achieve resource separation and isolation in complex scenarios. For example, suppose you want to provide isolated sets of resources to several development teams so that they are not accessible to each other. A typical solution is to create a separate cluster for each team. With this approach, the number of clusters grows with the number of teams, which complicates cluster administration.

Capsule allows you to organize isolated sets of resources within one cluster using tenants. An individual tenant is a set of namespaces assigned to a group of users, combined with restrictions on the creation and consumption of Kubernetes resources. The Capsule policy engine not only enforces resource usage policies within a tenant, but also isolates tenants from each other. This makes it possible to organize the work of several teams within one multi-tenant cluster without administering additional clusters.
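
For illustration, a minimal tenant description might look like the sketch below; the tenant name, owner, and namespace quota are assumptions for the example rather than recommended values.

```yaml
# Hedged sketch of a Capsule tenant for one development team.
# The names and the namespace quota are illustrative assumptions.
apiVersion: capsule.clastix.io/v1beta2
kind: Tenant
metadata:
  name: dev-team-a
spec:
  owners:
    - name: dev-team-a-lead     # user (or group) that manages the tenant
      kind: User
  namespaceOptions:
    quota: 5                    # the tenant may create at most 5 namespaces
```

Users listed as owners can then create namespaces inside the tenant, while Capsule keeps those namespaces within the declared limits.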

cert-manager

cert-manager helps to manage certificates in Kubernetes clusters:

  • Issue certificates, including self-signed certificates. To do this, cert-manager sends requests to sources acting as a certificate authority (CA).

    Examples of the sources:

    • Cybersecurity solutions providers such as Venafi.
    • Certificate providers, such as Let’s Encrypt.
    • Storage for secrets, such as HashiCorp Vault.
    • Local key containers that hold the public part of a certificate and the private key.
  • Automatically reissue expiring certificates.

A certificate issued with cert-manager will be available to other Kubernetes resources. For example, it can be used by Ingress.
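
For example, a self-signed issuer and a certificate requested from it could be described as in the sketch below; the resource names and the DNS name are placeholders.

```yaml
# Hedged sketch: a self-signed Issuer and a Certificate that uses it.
# The metadata names and dnsNames are placeholder assumptions.
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned-issuer
  namespace: default
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: demo-cert
  namespace: default
spec:
  secretName: demo-cert-tls      # cert-manager stores the issued key pair here
  dnsNames:
    - demo.example.com
  issuerRef:
    name: selfsigned-issuer
    kind: Issuer
```

cert-manager places the issued key pair in the demo-cert-tls secret, which can then be referenced, for example, from the tls section of an Ingress.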

Docker Registry

Docker Registry is designed to host and store Docker images. It works in a high availability (HA) configuration. Registry images can be used when deploying services in a cluster.

See Connecting to the Docker registry for details.
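
As a hedged sketch, a Deployment that pulls an image from the registry might look like this; the registry address, image tag, and the regcred pull secret are placeholders (take the actual address and credentials from Connecting to the Docker registry).

```yaml
# Hedged sketch: a Deployment that pulls an image from the cluster's Docker registry.
# registry.example.com, regcred, and the image tag are placeholder assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      imagePullSecrets:
        - name: regcred            # docker-registry secret with the registry credentials
      containers:
        - name: demo-app
          image: registry.example.com/demo-app:1.0.0
```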

Fluent Bit for Cloud Logging (logaas-integration)

Fluent Bit, in combination with special filters written in Lua, delivers logs from the Cloud Containers cluster to the Cloud Logging service for further analysis.

The sources of the logs are kubelet services and pods located on cluster nodes. For more information about how the addon works, see the section about installing it.

Ingress Controller (NGINX)

The NGINX-based Ingress controller works as a reverse proxy and provides a single entry point for services in the cluster that work over HTTP or HTTPS.

If the controller is installed, it is sufficient to create an Ingress resource to make such services available from outside the Cloud Containers cluster.
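
A minimal sketch of such an Ingress resource is shown below; the host name, backend service, and port are placeholders.

```yaml
# Hedged sketch: exposing a ClusterIP service through the NGINX Ingress controller.
# Host, service name, and port are placeholder assumptions.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: demo.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-app
                port:
                  number: 80
```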

The pre-installed Ingress controller integrates tightly with the VK Cloud platform. For more information, see Network in cluster.

Istio

Istio is a framework that implements the service mesh concept, in which a separate layer is allocated for interaction between application services. Istio provides traffic management for services without changing their code (sidecar containers are used for this). Istio benefits (see the routing sketch after the list):

  • Expanded secure traffic transfer capabilities:

    • Traffic policies can be configured.
    • TLS can be used to communicate between services.
  • Expanded traffic monitoring capabilities.

  • Complex routing and balancing of traffic between services can be done.
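
As a sketch of the routing capabilities, the VirtualService below splits traffic between two versions of a service; the host, subset names, and weights are assumptions and require matching subsets in a DestinationRule.

```yaml
# Hedged sketch: weighted traffic split between two versions of a service.
# The host, subset names, and weights are assumptions; the subsets must be
# defined in a corresponding DestinationRule.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: demo-app
spec:
  hosts:
    - demo-app                     # Kubernetes service name
  http:
    - route:
        - destination:
            host: demo-app
            subset: v1
          weight: 90
        - destination:
            host: demo-app
            subset: v2
          weight: 10
```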

Jaeger

In distributed systems based on microservices, requests are constantly exchanged. The Jaeger platform was created for distributed request tracing. Jaeger tracks the flow of requests through microservices and allows you to:

  • collect information about the interrelationships of the system components in terms of the flow of requests;
  • detect problems or bottlenecks in the system architecture related to processing the request stream.

Such a tool is necessary because request-related factors can significantly affect the behavior and performance of these systems as a whole. It is not enough to monitor only individual microservices.

Jaeger performs request tracing based on the data it receives from microservices. Therefore, the OpenTelemetry toolkit must be integrated into the microservices so that they send request data. You can get acquainted with integrating OpenTelemetry into a microservice application using the Hot R.O.D. example.
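
As an illustration, if request data is first gathered by an OpenTelemetry Collector, a minimal pipeline that forwards traces to Jaeger over OTLP might look like the sketch below; the collector endpoint address is an assumption and depends on how Jaeger is deployed in the cluster.

```yaml
# Hedged sketch of an OpenTelemetry Collector configuration:
# receive traces over OTLP and forward them to the Jaeger collector.
# The jaeger-collector endpoint is an assumption for this example.
receivers:
  otlp:
    protocols:
      grpc: {}
exporters:
  otlp/jaeger:
    endpoint: jaeger-collector.observability.svc:4317
    tls:
      insecure: true
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp/jaeger]
```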

Kiali

Kiali is a web interface for working with Istio. It allows you to manage, monitor, and visualize a service mesh.

Kube Prometheus Stack

The system for monitoring the status of the cluster and the services deployed in it is based on Prometheus and the Grafana visualization tool.

See Cluster Monitoring for details.
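
For example, metrics of your own service can be added to this monitoring with a ServiceMonitor resource from the Prometheus Operator included in the stack; in the hedged sketch below the label selector, namespace, and port name are placeholders.

```yaml
# Hedged sketch: scraping metrics from a service labeled app: demo-app.
# The label, namespace, and port name are placeholder assumptions.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: demo-app
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: demo-app
  namespaceSelector:
    matchNames:
      - default
  endpoints:
    - port: http-metrics           # name of the metrics port in the Service
      interval: 30s
```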