

    Load balancers

    Description

    A load balancer is a method of distributing jobs (traffic) across multiple servers in order to optimize resource utilization, reduce request service time, scale a cluster out (by dynamically adding and removing devices), and provide fault tolerance.

    Balancer Management

    To work with balancers in a Kubernetes cluster, you need the kubectl utility.

    You can control various options of the NGINX Ingress Controller using a ConfigMap or annotations. Parameters set via ConfigMap are applied globally to all Ingress resources; parameters set via annotations apply only to the Ingress that uses them. The following table shows the correspondence between the available annotations and ConfigMap keys.

    Matching annotations

    | Annotation | ConfigMap key | Description | Default |
    | --- | --- | --- | --- |
    | kubernetes.io/ingress.class | N/A | Determines which Ingress Controller should handle the Ingress resource. Set the value to nginx to have the NGINX Ingress Controller handle it. | N/A |
    | nginx.org/proxy-connect-timeout | proxy-connect-timeout | Sets the value of the proxy_connect_timeout directive. | 60s |
    | nginx.org/proxy-read-timeout | proxy-read-timeout | Sets the value of the proxy_read_timeout directive. | 60s |
    | nginx.org/client-max-body-size | client-max-body-size | Sets the value of the client_max_body_size directive. | 1m |
    | nginx.org/proxy-buffering | proxy-buffering | Enables or disables buffering of responses from the proxied server. | True |
    | nginx.org/proxy-buffers | proxy-buffers | Sets the value of the proxy_buffers directive. | Depends on the platform. |
    | nginx.org/proxy-buffer-size | proxy-buffer-size | Sets the value of the proxy_buffer_size directive. | Depends on the platform. |
    | nginx.org/proxy-max-temp-file-size | proxy-max-temp-file-size | Sets the value of the proxy_max_temp_file_size directive. | 1024m |
    | nginx.org/proxy-hide-headers | proxy-hide-headers | Sets the value of one or more proxy_hide_header directives. Example: "nginx.org/proxy-hide-headers": "header-a, header-b" | N/A |
    | nginx.org/proxy-pass-headers | proxy-pass-headers | Sets the value of one or more proxy_pass_header directives. Example: "nginx.org/proxy-pass-headers": "header-a, header-b" | N/A |
    | N/A | server-names-hash-bucket-size | Sets the value of the server_names_hash_bucket_size directive. | Depends on the size of the processor's cache line. |
    | N/A | server-names-hash-max-size | Sets the value of the server_names_hash_max_size directive. | 512 |
    | N/A | http2 | Enables HTTP/2 on SSL-enabled servers. | False |
    | nginx.org/redirect-to-https | redirect-to-https | Sets a 301 redirect rule based on the value of the $http_x_forwarded_proto header in the server block to force incoming traffic over HTTPS. Useful when SSL is terminated on a load balancer in front of the Ingress Controller. | False |
    | ingress.kubernetes.io/ssl-redirect | ssl-redirect | Sets an unconditional 301 redirect rule for all incoming HTTP traffic to force it over HTTPS. | True |
    | N/A | log-format | Sets a custom log format. | See the template file. |
    | nginx.org/hsts | hsts | Enables HTTP Strict Transport Security (HSTS): the HSTS header is added to responses from backends. The preload directive is included in the header. | False |
    | nginx.org/hsts-max-age | hsts-max-age | Sets the value of the max-age directive of the HSTS header. | 2592000 (1 month) |
    | nginx.org/hsts-include-subdomains | hsts-include-subdomains | Adds the includeSubDomains directive to the HSTS header. | False |
    | N/A | ssl-protocols | Sets the value of the ssl_protocols directive. | TLSv1 TLSv1.1 TLSv1.2 |
    | N/A | ssl-prefer-server-ciphers | Enables or disables the ssl_prefer_server_ciphers directive. | False |
    | N/A | ssl-ciphers | Sets the value of the ssl_ciphers directive. | HIGH:!aNULL:!MD5 |
    | N/A | ssl-dhparam-file | Sets the content of the dhparam file. The controller creates the file and sets the ssl_dhparam directive to the path of that file. | N/A |
    | N/A | set-real-ip-from | Sets the value of the set_real_ip_from directive. | N/A |
    | N/A | real-ip-header | Sets the value of the real_ip_header directive. | X-Real-IP |
    | N/A | real-ip-recursive | Enables or disables the real_ip_recursive directive. | False |
    | nginx.org/server-tokens | server-tokens | Enables or disables the server_tokens directive. In addition, with NGINX Plus you can provide a custom string value, including an empty string value that disables the Server field. | True |
    | N/A | main-snippets | Sets a custom snippet in the main context. | N/A |
    | N/A | http-snippets | Sets a custom snippet in the http context. | N/A |
    | nginx.org/location-snippets | location-snippets | Sets a custom snippet in the location context. | N/A |
    | nginx.org/server-snippets | server-snippets | Sets a custom snippet in the server context. | N/A |
    | nginx.org/lb-method | lb-method | Sets the load balancing method. The default value "" specifies the round-robin method. | "" |
    | nginx.org/listen-ports | N/A | Configures the HTTP ports that NGINX will listen on. | [80] |
    | nginx.org/listen-ports-ssl | N/A | Configures the HTTPS ports that NGINX will listen on. | [443] |
    | N/A | worker-processes | Sets the value of the worker_processes directive. | auto |
    | N/A | worker-rlimit-nofile | Sets the value of the worker_rlimit_nofile directive. | N/A |
    | N/A | worker-connections | Sets the value of the worker_connections directive. | 1024 |
    | N/A | worker-cpu-affinity | Sets the value of the worker_cpu_affinity directive. | N/A |
    | N/A | worker-shutdown-timeout | Sets the value of the worker_shutdown_timeout directive. | N/A |
    | nginx.org/keepalive | keepalive | Sets the value of the keepalive directive. Note that proxy_set_header Connection ""; is added to the generated configuration when the value is greater than 0. | 0 |
    | N/A | proxy-protocol | Enables the PROXY protocol for incoming connections. | False |
    | nginx.org/rewrites | N/A | Configures URL rewriting. | N/A |
    | nginx.org/ssl-services | N/A | Enables HTTPS when connecting to the endpoints of services. | N/A |
    | nginx.org/websocket-services | N/A | Enables WebSocket support for services. | N/A |
    | nginx.org/max-fails | max-fails | Sets the value of the max_fails parameter of the server directive. | 1 |
    | nginx.org/fail-timeout | fail-timeout | Sets the value of the fail_timeout parameter of the server directive. | 10s |

    When a new Kubernetes cluster is created, the Ingress Controller is already pre-installed.

    After the cluster creation steps are completed, a load balancer for the Ingress Controller is created automatically; it appears under Virtual Networks - Load Balancers.

    This feature eliminates the separate step of creating a balancer for the cluster.

    Important

    Some parameters can be configured only via ConfigMap, and others only via annotations.

    Configuration with ConfigMaps

    Edit the nginx-config.yaml file and set the required parameters.
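
    For example, a minimal nginx-config.yaml might look like the sketch below. The ConfigMap name and namespace are assumptions; they must match the ones your NGINX Ingress Controller was deployed with, and the keys in data are taken from the table above:

     apiVersion: v1
    kind: ConfigMap
    metadata:
      name: nginx-config         # assumed name; use the ConfigMap your controller actually reads
      namespace: nginx-ingress   # assumed namespace of the Ingress Controller
    data:
      proxy-connect-timeout: "30s"
      proxy-read-timeout: "20s"
      client-max-body-size: "8m"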

    Then apply this file to the Kubernetes cluster:

     $ kubectl apply -f nginx-config.yaml

    This will change the NGINX Ingress Controller configuration.

    To update any parameters later, modify the nginx-config.yaml file and run the same command again:

     $ kubectl apply -f nginx-config.yaml
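
    To make sure the new parameters have reached the controller, you can inspect the ConfigMap and, if necessary, the configuration generated inside the controller pod. The names and namespace below are assumptions; substitute the ones used in your installation:

     $ kubectl get configmap nginx-config -n nginx-ingress -o yaml
    $ kubectl exec -n nginx-ingress <ingress-controller-pod> -- nginx -T | grep proxy_connect_timeout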

    Configuration using annotations

    If you need to customize settings for a specific Ingress, the easiest way is to use annotations. Values set in annotations take precedence over those in the ConfigMap.

    For example (cafe-ingress-with-annotations.yaml):

     apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: cafe-ingress-with-annotations
      annotations:
        nginx.org/proxy-connect-timeout: "30s"
        nginx.org/proxy-read-timeout: "20s"
        nginx.org/client-max-body-size: "4m"
        nginx.org/location-snippets: |
          if ($ssl_client_verify = SUCCESS) {
            set $auth_basic off;
          }
          if ($ssl_client_verify != SUCCESS) {
            set $auth_basic "Restricted";
          }
          auth_basic $auth_basic;
          auth_basic_user_file "/var/run/secrets/nginx.org/auth-basic-file";
        nginx.org/server-snippets: |
          ssl_verify_client optional;
    spec:
      rules:
      - host: cafe.example.com
        http:
          paths:
          - path: /tea
            backend:
              serviceName: tea-svc
              servicePort: 80
          - path: /coffee
            backend:
              serviceName: coffee-svc
              servicePort: 80
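
    The Ingress resource with these annotations is applied like any other manifest:

     $ kubectl apply -f cafe-ingress-with-annotations.yaml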

    Balancer with internal IP address

    This example shows how to create a service that is accessible through a LoadBalancer but has no external IP address. The annotation service.beta.kubernetes.io/openstack-internal-load-balancer: "true" activates this behavior: an internal IP address is allocated instead of a public one. This can be useful in hybrid scenarios where the service consumers are applications on the internal network outside the Kubernetes cluster.

     $ kubectl create -f load-balancer-internal/service-internal-lb.yaml

    Manifest

     ---
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-internal-lb
      labels:
        k8s-app: nginx-backend
      annotations:
        service.beta.kubernetes.io/openstack-internal-load-balancer: "true"
    spec:
      type: LoadBalancer
      externalTrafficPolicy: Cluster
      selector:
        k8s-app: nginx-backend
      ports:
      - port: 80
        name: http
        targetPort: 80
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-backend
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: nginx-webservice
      minReadySeconds: 5
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 1
          maxSurge: 1
      template:
        metadata:
          labels:
            app: nginx-webservice
            k8s-app: nginx-backend  # matches the Service selector above
        spec:
          containers:
          - name: nginx
            image: library/nginx:1.15-alpine
            ports:
            - containerPort: 80

    After applying the manifest:

     $ watch kubectl get service
    NAME                CLUSTER-IP   EXTERNAL-IP     PORT(S)        AGE
    nginx-internal-lb   10.0.0.10    192.168.0.181   80:30000/TCP   5m
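
    Since only an internal address is allocated, the service is reachable only from the same internal network, for example from a VM in that network (the address is taken from the EXTERNAL-IP column above):

     $ curl -s http://192.168.0.181/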

    Balancer with static IP address

    This example shows how to create a service accessible through a LoadBalancer using a reserved public IP address.

    Manifest

     ---
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-existing-lb-ip-2
      labels:
        k8s-app: nginx-backend
    spec:
      type: LoadBalancer
      externalTrafficPolicy: Cluster
      loadBalancerIP: 95.163.250.115
      selector:
        k8s-app: nginx-backend
      ports:
      - port: 80
        name: http
        targetPort: 80
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-backend
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: nginx-webservice
      minReadySeconds: 5
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 1
          maxSurge: 1
      template:
        metadata:
          labels:
            app: nginx-webservice
            k8s-app: nginx-backend  # matches the Service selector above
        spec:
          containers:
          - name: nginx
            image: library/nginx:1.15-alpine
            ports:
            - containerPort: 80
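
    The manifest is applied in the same way as in the previous examples; the file name below is only an illustration. Once the balancer is created, the EXTERNAL-IP column should show the reserved address specified in loadBalancerIP (95.163.250.115 in this manifest):

     $ kubectl create -f load-balancer-static/service-static-lb.yaml
    $ watch kubectl get service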

    Balancer using sessions

    This example shows how to create a service, available through a LoadBalancer, that directs traffic to target pods based on previous requests from the same client rather than by round-robin balancing. This can help with many issues in traditional stateful web applications. The sessionAffinity: ClientIP parameter activates so-called Session Affinity: all requests from the same user go to the same pod as long as that pod is alive.

     $ kubectl create -f load-balancer-sticky/service-sticky-lb.yaml

    Manifest

     ---
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-internal-lb
      labels:
        k8s-app: nginx-backend
      annotations:
        service.beta.kubernetes.io/openstack-internal-load-balancer: "true"
    spec:
      type: LoadBalancer
      sessionAffinity: ClientIP
      externalTrafficPolicy: Cluster
      selector:
        k8s-app: nginx-backend
      ports:
      - port: 80
        name: http
        targetPort: 80
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-backend
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: nginx-webservice
      minReadySeconds: 5
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 1
          maxSurge: 1
      template:
        metadata:
          labels:
            app: nginx-webservice
            k8s-app: nginx-backend  # matches the Service selector above
        spec:
          containers:
          - name: nginx
            image: library/nginx:1.15-alpine
            ports:
            - containerPort: 80

    After applying the manifest:

     $ watch kubectl get service
    NAME                CLUSTER-IP   EXTERNAL-IP     PORT(S)        AGE
    nginx-internal-lb   10.0.0.10    192.168.0.181   80:30000/TCP   5m
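
    A simple way to check that Session Affinity works is to send several requests from the same client and see which pods served them; all of them should land on the same pod while it is alive. A sketch, assuming the balancer address from the output above and the pod label from the Deployment:

     $ for i in $(seq 1 5); do curl -s -o /dev/null http://192.168.0.181/; done
    $ kubectl logs -l app=nginx-webservice --prefix --tail=5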

