
Fluent Bit for Cloud Logging (logaas-integration)

Preparatory steps

Connect the Cloud Logging service to the project if this has not already been done. To do this, contact technical support.

Installing the addon

Two installation options are available for the addon:

  • standard installation;
  • quick installation.

Regardless of the selected installation option, the addon is installed as a DaemonSet on all nodes of the cluster, including the master nodes.

Take into account the total maximum system requirements of the addons that will be placed on the worker node groups. If necessary, scale the worker node groups manually or configure automatic scaling before installation.

  1. Install the addon:

    1. Go to your VK Cloud personal account.

    2. Select the project where the cluster is located.

    3. Go to Containers → Kubernetes clusters.

    4. Click on the name of the desired cluster.

    5. Go to the Addons tab.

    6. If the cluster already has installed addons, click the Add addon button.

    7. Click the Install addon button on the logaas-integration addon card.

    8. Select the desired addon version from the drop-down list.

    9. Edit if necessary:

      • the selected version;

      • application name;

      • the name of the namespace where the addon will be installed;

      • addon settings code.

    10. Click the Install addon button.

      The installation of the addon in the cluster will begin. This process can take a long time.

  2. (Optional) View logs in the Cloud Logging service to make sure that the addon is working properly.
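
In addition, you can check from the cluster side that the addon's pods are running on every node. Below is a minimal sketch using kubectl; the namespace and pod names are assumptions that depend on what was chosen during installation:

    # Find the addon's DaemonSet (the grep pattern is illustrative)
    kubectl get daemonsets --all-namespaces | grep -i fluent

    # Verify that a pod is running on every node, then inspect a pod's logs
    kubectl get pods -n <addon namespace> -o wide
    kubectl logs -n <addon namespace> <fluent-bit-pod-name>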

Editing the addon setup code during installation

Editing the addon code applies only to the standard installation.

The full addon setup code along with the description of the fields is available:

  • in your personal account;
  • in the configuration_values attribute from the data source vkcs_kubernetes_addon if Terraform is used.
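
If Terraform is used, reading that attribute might look like the following minimal sketch. Only the data source name and the configuration_values attribute come from the documentation above; the argument names and values shown are assumptions to be checked against the vkcs provider reference:

    # Sketch only: the arguments below are assumptions, not confirmed provider syntax.
    data "vkcs_kubernetes_addon" "logaas" {
      cluster_id = "<cluster id>"       # hypothetical cluster identifier
      name       = "logaas-integration"
      version    = "<addon version>"
    }

    output "logaas_configuration_values" {
      value = data.vkcs_kubernetes_addon.logaas.configuration_values
    }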

The Fluent Bit configuration code that serves as the basis for this addon is also available on GitHub.

Read more about pipeline and configuration file settings in the official Fluent Bit documentation.

Fine-tuning the behavior of the addon when working with different severity levels

Before sending logs to the Cloud Logging service, the addon performs the following actions:

  1. Determines the severity level of individual log entries. This is done using Fluent Bit parsers.

  2. Adds additional metadata to the logs, which makes it easier to work with Cloud Containers cluster logs (for example, when searching for the necessary logs in Cloud Logging). This is done by special Fluent Bit filters written in Lua. Among other things, this metadata contains the severity level of logged events.
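
To illustrate how such a filter operates, here is a minimal sketch of a Fluent Bit Lua filter that enriches a record. It is not the addon's actual script; the severity field name, the log field name, and the letter codes are assumptions:

    -- Fluent Bit calls a Lua filter as (tag, timestamp, record) and expects
    -- (code, timestamp, record) back; returning 1 marks the record as modified.
    function add_severity(tag, timestamp, record)
        local msg = record["log"] or ""       -- assumed field holding the log line
        local level = "I"                     -- assumed default: informational
        if string.find(msg, "error") then
            level = "E"                       -- assumed letter code for errors
        elseif string.find(msg, "warn") then
            level = "W"                       -- assumed letter code for warnings
        end
        record["severity"] = level            -- hypothetical metadata field
        return 1, timestamp, record
    end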

You can fine-tune how the addon handles severity levels by setting one or more rules for custom CustomFilter filters in the addon code, so that only logs with the specified minimum severity level get into Cloud Logging. These rules can be configured at the level of a specific namespace and at the level of specific pods within a namespace:

    customFilter:
      - namespace: <namespace>
        rules: # One rule for the namespace
          - min_level: <letter designation of the minimum severity level>
      - namespace: <the name of another namespace>
        rules: # A few rules for the pods in the namespace
          - podprefix: <the prefix of the pod name>
            min_level: <letter designation of the minimum severity level>
          - podprefix: <the prefix of another pod name>
            min_level: <letter designation of the minimum severity level>
      - namespace: <the name of another namespace>
        rules: # A combination of namespace rules and pod rules
          - min_level: <letter designation of the minimum severity level>
          - podprefix: <the prefix of the pod name>
            min_level: <letter designation of the minimum severity level>
          - podprefix: <the prefix of the pod name>
            min_level: <letter designation of the minimum severity level>

A prefix, rather than a full pod name, is configured so that you can receive logs from several replica pods that belong to the same workload.
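
For instance, a filled-in configuration might look like the following minimal sketch. The namespaces, prefixes, and letter codes below are illustrative assumptions; take the actual letter designations from the addon settings code:

    # Illustrative values only: namespaces, prefixes, and letter codes are assumptions.
    customFilter:
      - namespace: kube-system
        rules:
          - min_level: E          # assumed code: keep only error-level logs and above
      - namespace: my-app
        rules:
          - podprefix: backend-   # matches replicas such as backend-5f7d9c8-abcde
            min_level: I          # assumed code: informational and above
          - podprefix: worker-
            min_level: E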