Updated at April 15, 2024   08:50 AM

Configuring Application Auto-Deployment to a Kubernetes Cluster

In this article, we will look at how to set up auto-deployment of an application to a Kubernetes cluster.

Before you begin:

  1. Install and configure Docker.
  2. Install and configure GitLab.
  3. Install and configure Harbor.

Set up GitLab-runner

GitLab Runner is the agent that runs GitLab CI/CD jobs. To set up automatic builds, install a runner and register it with GitLab. You can register a runner for a single project (specific runner) or one shared by multiple projects (shared runner). Let's set up a shared runner.

For this:

  1. Log in to the GitLab web interface with administrator rights.

  1. Copy the registration token and run the following in the console on the server where GitLab Runner is installed:

```shell
root@ubuntu-std3-2-4-40gb:~# docker exec -it gitlab-runner gitlab-runner register -n \
  --url https://<SERVER_DNS_NAME>/ \
  --executor docker \
  --registration-token ua2k238fbMtAxMBBRf_z \
  --description "shared-runner" \
  --docker-image="docker:dind" \
  --tag-list "shared_runner" \
  --docker-privileged \
  --docker-volumes /var/run/docker.sock:/var/run/docker.sock
```

As a result, runner will be displayed in the web interface:

  1. Set the runtime variables. To do this, select Settings / CI/CD and next to Variables click Expand:

  1. Set a few variables that will be used later in the autobuild file .gitlab-ci.yml:

Variables:

  • DOCKER_USER - user to access the repository in Harbor. In our example, k8s.
  • DOCKER_PASSWORD - The password for the k8s user that you entered when you created the user in Harbor.

Please note that Masked is enabled for the password: when a script tries to print the variable, its value is masked and the password is not exposed in the job logs.

  • DOCKER_REGISTRY - the name of the host where Harbor is located. In our example, <SERVER_DNS_NAME>.
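To see how these variables combine into the image path the pipeline pushes to, here is a quick local sketch. The registry host and credentials are placeholder values; in a real job GitLab injects them from the CI/CD settings:

```shell
# Placeholder values for illustration; GitLab supplies the real ones in CI.
DOCKER_REGISTRY=harbor.example.com
DOCKER_USER=k8s
CI_PROJECT_NAME=k8s-conf-demo
CI_COMMIT_REF_NAME=master

# The same composition used in the autobuild file's IMAGE variable.
IMAGE="$DOCKER_REGISTRY:8443/$DOCKER_USER/$CI_PROJECT_NAME:$CI_COMMIT_REF_NAME"
echo "$IMAGE"
```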

Set up the autobuild file

Go to the folder with the cloned repository and, in a text editor, create a .gitlab-ci.yml file with the following content:

```yaml
image: docker:latest

stages:
  - build
  - test
  - release

variables:
  REGISTRY_URL: https://$DOCKER_REGISTRY:8443
  IMAGE: $DOCKER_REGISTRY:8443/$DOCKER_USER/$CI_PROJECT_NAME:$CI_COMMIT_REF_NAME
  RELEASE: $DOCKER_REGISTRY:8443/$DOCKER_USER/$CI_PROJECT_NAME:latest

before_script:
  - docker login $REGISTRY_URL -u $DOCKER_USER -p $DOCKER_PASSWORD

build:
  stage: build
  tags:
    - shared_runner
  script:
    - cd app && docker build --pull -t $IMAGE .
    - docker push $IMAGE

release:
  stage: release
  tags:
    - shared_runner
  script:
    - docker pull $IMAGE
    - docker tag $IMAGE $RELEASE
    - docker push $RELEASE
  only:
    - master
```

Let's examine this file.

General Purpose Variables:

  • image - the Docker image in which the build runs. Since we are building a Docker image, we need an image that contains the required build utilities; docker:latest is usually used.
  • stages - describes the pipeline stages. In our example, the test stage is skipped.

Variables used for work:

  • before_script - the stage that is executed first. Here we log in to the registry using the variables specified in the GitLab runner settings.
  • build - the image build. A standard build of a Docker image using the Dockerfile in the repository.
  • release - the final image generation section. In our example, we simply take the image built in the previous stage, add the latest tag to it, and push it to the repository.
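The stages list declares a test stage, but no job uses it yet. If you add tests later, a minimal job slot might look like this (a sketch; the check shown is a hypothetical placeholder):

```yaml
test:
  stage: test
  tags:
    - shared_runner
  script:
    # Hypothetical sanity check: the build context must contain a Dockerfile.
    - test -f app/Dockerfile
```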

Upload the created file to the repository:

```shell
ash-work:k8s-conf-demo git add .
ash-work:k8s-conf-demo git commit -m "create .gitlab-ci.yml"
[master 55dd5fa] create .gitlab-ci.yml
 1 file changed, 1 insertion(+), 1 deletion(-)
ash-work:k8s-conf-demo git push
Enumerating objects: 5, done.
Counting objects: 100% (5/5), done.
Delta compression using up to 4 threads
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 299 bytes | 299.00 KiB/s, done.
Total 3 (delta 2), reused 0 (delta 0)
To testrom.ddns.net:ash/k8s-conf-demo.git
   7c91eab..55dd5fa  master -> master
```

As soon as the .gitlab-ci.yml file appears in the repository, GitLab will automatically start the build pipeline.

You can see how the build is going in the GitLab web interface, in the project under CI/CD / Pipelines:

By clicking on running, you can see the current progress of the build:

By clicking on the build stage, you can see the build console and what is happening in it. Example for the build stage:

Example for the release stage:

The console logs show that both the build and release stages completed successfully. The assembled image was pushed to the Harbor repository, as can be seen in its web interface:

Deploying an application to a Kubernetes cluster

After successfully building the project, we will set up auto-deployment of the application to the Kubernetes cluster. As an example, we will use a Kubernetes cluster from VK Cloud.

After the cluster is deployed in the cloud, a configuration file named like kubernetes-cluster-5011_kubeconfig.yaml is downloaded to the local computer; it is used to authorize utilities such as kubectl in the cluster.

  1. Connect the configuration file:

```shell
ash-work:~ export KUBECONFIG=kubernetes-cluster-5011_kubeconfig.yaml
```
  1. Make sure that authorization is successful and the cluster is healthy:

```shell
ash-work:~ kubectl cluster-info
Kubernetes master is running at https://89.208.197.244:6443
CoreDNS is running at https://89.208.197.244:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
```

The cluster responds.

  1. Grant the cluster access rights to the Harbor image repository. To do this, create the following secret:

```shell
ash-work:~ kubectl create secret docker-registry myprivateregistry --docker-server=https://<SERVER_DNS_NAME>:8443 --docker-username=k8s --docker-password=<PASSWORD>
secret/myprivateregistry created
```

where <SERVER_DNS_NAME> is the Harbor server name, <PASSWORD> is the Harbor user k8s password.
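Under the hood, this secret stores a .dockerconfigjson whose auth field is simply base64 of "user:password". A local sketch of that encoding, with made-up credentials:

```shell
# Made-up credentials for illustration only.
USER=k8s
PASSWORD=secret

# kubectl stores base64("user:password") in the "auth" field of .dockerconfigjson.
AUTH=$(printf '%s:%s' "$USER" "$PASSWORD" | base64)
echo "$AUTH"

# Round-trip check: decoding returns the original pair.
printf '%s' "$AUTH" | base64 --decode
```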

  1. Verify that the secret was successfully created:

```shell
ash-work:~ kubectl get secret myprivateregistry --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode
{"auths":{"https://<SERVER_DNS_NAME>:8443":{"username":"k8s","password":"<PASSWORD>","auth":"sdasdsdsdsdsdsdsdsdsdssd=="}}}
```
  1. Create a deployment.yaml file with the following content:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  selector:
    matchLabels:
      run: myapp
  template:
    metadata:
      labels:
        run: myapp
    spec:
      containers:
      - name: myapp
        image: <SERVER_DNS_NAME>:8443/k8s/k8s-conf-demo:latest
        imagePullPolicy: Always
        env:
        - name: HTTP_PORT
          value: "8081"
        ports:
        - containerPort: 8081
      imagePullSecrets:
      - name: myprivateregistry
```
  1. Apply this file:

```shell
ash-work:~ kubectl create -f deployment.yaml
deployment.apps/myapp created
```
  1. After a while, make sure that the pod is up and running:

```shell
ash-work:~ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
myapp-66d55bcbd5-s86m6   1/1     Running   0          39s
```
  1. Create a service.yaml file:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-svc
  labels:
    run: myapp
spec:
  ports:
  - protocol: TCP
    port: 8081
    targetPort: 8081
  selector:
    run: myapp
```
  1. Create the service:

```shell
ash-work:~ kubectl create -f service.yaml
service/myapp-svc created
```
  1. To provide access to the application from the external network, configure an ingress. To do this, create an ingress.yaml file:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp-ingress
spec:
  rules:
  - host: echo.com
    http:
      paths:
      - path: /
        backend:
          serviceName: myapp-svc
          servicePort: 8081
```

In this file, specify the domain that will be used to access the application. You can specify any domain; we will resolve it locally for testing.

  1. Apply the ingress manifest:

```shell
ash-work:~ kubectl create -f ingress.yaml
ingress.extensions/myapp-ingress created
```
  1. View the state of the ingress:

```shell
ash-work:~ kubectl describe ingress myapp-ingress
Name:             myapp-ingress
Namespace:        default
Address:
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host      Path  Backends
  ----      ----  --------
  echo.com  /     myapp-svc:8081 (10.100.69.71:8081)
Annotations:
Events:
  Type    Reason  Age  From                      Message
  ----    ------  ---  ----                      -------
  Normal  CREATE  45s  nginx-ingress-controller  Ingress default/myapp-ingress
  Normal  UPDATE  5s   nginx-ingress-controller  Ingress default/myapp-ingress
```

The external IP address associated with the ingress controller can be viewed in the VK Cloud web interface; it is the IP address of the load balancer for the ingress controller. Let's designate it <INGRESS_EXTERNAL_IP>.

  1. Let's test the application:

```shell
ash-work:~ curl --resolve echo.com:80:<INGRESS_EXTERNAL_IP> http://echo.com/handler
OK
```

The --resolve option makes curl resolve the domain locally, since we invented the domain ourselves and it has no real DNS record.

Thus, we have deployed the application to the Kubernetes cluster manually.

  1. Delete the created resources:

```shell
ash-work:~ kubectl delete -f ingress.yaml
ingress.extensions "myapp-ingress" deleted
ash-work:~ kubectl delete -f service.yaml
service "myapp-svc" deleted
ash-work:~ kubectl delete -f deployment.yaml
deployment.apps "myapp" deleted
```

Deploying an application to a Kubernetes cluster using GitLab CI/CD

GitLab supports Kubernetes cluster integration out of the box. To set up the integration, you need to obtain several cluster parameters.

For this:

  1. Get the API URL:

```shell
ash-work:~ kubectl cluster-info | grep 'Kubernetes master' | awk '/http/ {print $NF}'
https://89.208.197.244:6443
```
  1. Get a list of cluster secrets:

```shell
ash-work:~ kubectl get secrets
NAME                       TYPE                                  DATA   AGE
dashboard-sa-token-xnvmp   kubernetes.io/service-account-token   3      41h
default-token-fhvxq        kubernetes.io/service-account-token   3      41h
myprivateregistry          kubernetes.io/dockerconfigjson        1      39h
regcred                    kubernetes.io/dockerconfigjson        1      39h
```
  1. Get the PEM certificate from the default-token-* secret:

```shell
ash-work:~ kubectl get secret default-token-fhvxq -o jsonpath="{['data']['ca\.crt']}" | base64 --decode
-----BEGIN CERTIFICATE-----
MIIC9DCCAdygAwIBAgIQQf4DP2XYQaew1MEtxJtVBzANBgkqhkiG9w0BAQsFADAiMSAwHgYDVQQDDBdrdWJlcm5ldGVzLWNsdXN0ZXItNTAxMTAeFw0xOTEyMDkxNTI2MDlaFw0yNDEyMDgxNTI2MDlaMCIxIDAeBgNVBAMMF2t1YmVybmV0ZXMtY2x1c3Rlci01MDExMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA47Nd4cEMEdtWyxo3VEm02wB+k7HytzchyYOlxJdYhQV4yjWR8MpAd9JKWgOdJ/qzitIjYdr0cKCIdLxRmKWGJJhTYZ4yBQS3XJ52n6bpV1Nzj0Xsq9Bxs7OgG1T4oZn7FXY4ZrJ10w0swa0w5AbU2LbpprWsNki2uFkUusgtUSLSSwe90yVKT5ZnW3kUrmMZlY3ys4KLhDbACS5xs03t10apRjfRq4WQ0ja+AYkzvwnpiX5nnJk2YCn31c4tVUSuoblzoWEokD2vDLzZaHAg53Payp2PUP7S5kMCjfrRIEBO7SULve/P/7GRJEHzzOREn/qMSOWK5u1Ok1yk4ARP4wIDAQABoyYwJDASBgNVHRMBAf8ECDAGAQH/AgEAMA4GA1UdDwEB/wQEAwICBDANBgkqhkiG9w0BAQsFAAOCAQEAYxdbkMTOL4/pbQmJNIY54y+L8eGqEzTcis9KAZmoD4t4A88r1xZ/dp/3PVhFaOutQh6H7FzNDEiWbTFUa3edGXBmL4jB05Tmepj1iYEY6Nv5KGoSZutZ11Y8W+77pu9zKzzbtXMyCsYpPWrPyXiP1Z1zY6F4MtAQGF9ONh9lDVttkFjaerKR4y4/E/X+e2Mi2dsyJmVHCrZTHozy8oZayC//JfzS+pK92IvcwlBgp9q4VO+lmkozWzWcO5mjk/70t7w5UHNpJOxeMzbhx6JkWZ9bN+Ub7RHN1PUeNfZJKHEgSZw8M+poK3SqsyGMQ13geGXpM85VQvrqCW43YfgjiQ==
-----END CERTIFICATE-----
```
  1. Now create a gitlab-admin-service-account.yaml file that describes GitLab's access rights to the cluster. File contents:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: gitlab-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: gitlab-admin
  namespace: kube-system
```
  1. Apply the rights:

```shell
ash-work:~ kubectl apply -f gitlab-admin-service-account.yaml
serviceaccount/gitlab-admin created
clusterrolebinding.rbac.authorization.k8s.io/gitlab-admin created
```

And get the cluster access token:

```shell
ash-work:~ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep gitlab-admin | awk '{print $1}')
Name:         gitlab-admin-token-kcmd8
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: gitlab-admin
              kubernetes.io/service-account.uid: d9aa6095-6086-4430-b1ae-711df5765064

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1087 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJnaXRsYWItYWRtaW4tdG9rZW4ta2NtZDgiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZ2l0bGFiLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZDlhYTYwOTUtNjA4Ni00NDMwLWIxYWUtNzExZGY1NzY1MDY0Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmdpdGxhYi1hZG1pbiJ9.CaBJMUdWwTPGBla1OZZnsftdUue1-XSyF-SEaHhNdWaUkX_5aUi4uZrgx0UGLbSOFkTmij2_lv1lAkm9-W4VCi4z9cVjw41o6TA6279rx_HEammNzFV8v1HvpSkMXH8wVzaoLwtVQehM7fozykgv4y3wmHAe-T0vXNRN48FYmDXReRSdGuldV--OZLZeOVGrRIkttXoMoSVW_LbnOiBJU4NUQq4dNpvklQkLTSBowu-E0lDJJoMQjniSO1j8H8fmy7Micpgy20Hi1RIoJWfPj-EY3CyhjMht8iTIokQHgHgpCY_RQPexJqHiXTQgyZ93WNw8foIfISduNXyynfGzmQ
```
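The subshell in that command relies on a common grep/awk pattern to pull the secret name out of kubectl's table output. A standalone demo of the same pipeline on canned output (the sample lines are invented):

```shell
# Canned "kubectl get secret" output, for illustration only.
SECRETS='gitlab-admin-token-kcmd8   kubernetes.io/service-account-token   3   41h
default-token-fhvxq        kubernetes.io/service-account-token   3   41h'

# Same pipeline as in the command above: filter the row, print the first column.
NAME=$(printf '%s\n' "$SECRETS" | grep gitlab-admin | awk '{print $1}')
echo "$NAME"
```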
  1. Go to the GitLab admin interface and click Add Kubernetes Cluster:

  1. Select the Add Existing cluster tab, enter the previously remembered parameters (API URL, PEM, Token) and click Add Kubernetes Cluster:

  1. Cluster added:

Place the files deployment.yaml, service.yaml, and ingress.yaml in the manifests folder of the project.

Add the deploy section to the .gitlab-ci.yml file:

```yaml
image: docker:latest

stages:
  - build
  - test
  - release
  - deploy

variables:
  REGISTRY_URL: https://$DOCKER_REGISTRY:8443
  IMAGE: $DOCKER_REGISTRY:8443/$DOCKER_USER/$CI_PROJECT_NAME:$CI_COMMIT_REF_NAME
  RELEASE: $DOCKER_REGISTRY:8443/$DOCKER_USER/$CI_PROJECT_NAME:latest

before_script:
  - docker login $REGISTRY_URL -u $DOCKER_USER -p $DOCKER_PASSWORD

build:
  stage: build
  tags:
    - shared_runner
  script:
    - cd app && docker build --pull -t $IMAGE .
    - docker push $IMAGE

release:
  stage: release
  tags:
    - shared_runner
  script:
    - docker pull $IMAGE
    - docker tag $IMAGE $RELEASE
    - docker push $RELEASE
  only:
    - master

deploy:
  stage: deploy
  before_script:
    - apk add --no-cache curl
    - curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
    - chmod +x ./kubectl
  tags:
    - shared_runner
  environment:
    name: production
  script:
    - echo $KUBECONFIG
    - export KUBECONFIG=$KUBECONFIG
    - ./kubectl create secret docker-registry myprivateregistry --docker-server=$REGISTRY_URL --docker-username=$DOCKER_USER --docker-password=$DOCKER_PASSWORD --dry-run -o yaml | ./kubectl apply -f -
    - ./kubectl apply -f manifests/deployment.yaml
    - ./kubectl apply -f manifests/service.yaml
    - ./kubectl apply -f manifests/ingress.yaml
    - ./kubectl rollout restart deployment
```

Let's examine the deploy section.

In the before_script section, curl is installed into the job image; it is then used to download the latest stable version of kubectl.

In the script section: since the cluster is managed by GitLab, preset variables are available. KUBECONFIG stores the path to the cluster access configuration file.

Since the namespace is set automatically, you need to create in that namespace a secret with the login and password for our registry, which stores the application image built at the release stage.

Next, the deployment, service, and ingress manifests are applied.

The last command restarts the deployment so that it pulls the new version of the application.

The result of executing the deploy section:

  1. Check what was created in the cluster. Look at the namespaces:

```shell
ash-work:~ kubectl get namespaces
NAME                         STATUS   AGE
default                      Active   45h
gitlab-managed-apps          Active   67m
ingress-nginx                Active   45h
k8s-conf-demo-1-production   Active   57m
kube-node-lease              Active   45h
kube-public                  Active   45h
kube-system                  Active   45h
magnum-tiller                Active   45h
```
  1. Our namespace is k8s-conf-demo-1-production. Let's look at the pods, services, and ingress:

```shell
ash-work:~ kubectl get pods -n k8s-conf-demo-1-production
NAME                     READY   STATUS    RESTARTS   AGE
myapp-65f4bf95b5-m9s8l   1/1     Running   0          39m
ash-work:~ kubectl get services -n k8s-conf-demo-1-production
NAME        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
myapp-svc   ClusterIP   10.254.243.199   <none>        8081/TCP   32m
ash-work:~ kubectl get ingress -n k8s-conf-demo-1-production
NAME            HOSTS      ADDRESS   PORTS   AGE
myapp-ingress   echo.com             80      32m
```
  1. Check the health of the application:

```shell
ash-work:~ curl --resolve echo.com:80:<INGRESS_EXTERNAL_IP> http://echo.com/handler
OK
```
  1. To test auto-deploy, modify the application code a little. In the app/app.py repository file, change the return 'OK' line to return 'HANDLER OK'.

  2. Commit the changes:

```shell
ash-work:k8s-conf-demo git add . && git commit -m "update" && git push
[master b863fad] update
 1 file changed, 1 insertion(+), 1 deletion(-)
Enumerating objects: 7, done.
Counting objects: 100% (7/7), done.
Delta compression using up to 4 threads
Compressing objects: 100% (4/4), done.
Writing objects: 100% (4/4), 359 bytes | 359.00 KiB/s, done.
Total 4 (delta 3), reused 0 (delta 0)
```
  1. Wait for the CI/CD pipeline to finish and check the application output again:

```shell
ash-work:~ curl --resolve echo.com:80:<INGRESS_EXTERNAL_IP> http://echo.com/handler
HANDLER OK
```

The auto-deployment of the new version was successful.