
Configuring Application Auto-Deployment to a Kubernetes Cluster

In this article, we will look at how to set up auto-deployment of an application to a Kubernetes cluster.

Before you begin:

  1. Install and configure Docker.
  2. Install and configure GitLab.
  3. Install and configure Harbor.

Set up GitLab-runner

GitLab Runner is the agent that executes GitLab CI/CD jobs. To set up automatic builds, install the runner and register it with GitLab. You can register a runner for a single project (a specific runner) or one shared by multiple projects (a shared runner). Let's set up a shared runner.

To do this:

  1. Log in to the GitLab web interface with administrator rights:

  2. Copy the registration token and run the following in the console on the server where GitLab Runner is installed:
root@ubuntu-std3-2-4-40gb:~# docker exec -it gitlab-runner gitlab-runner register -n --url https://<SERVER_DNS_NAME>/ --executor docker --registration-token ua2k238fbMtAxMBBRf_z --description "shared-runner" --docker-image="docker:dind" --tag-list "shared_runner" --docker-privileged --docker-volumes /var/run/docker.sock:/var/run/docker.sock

As a result, the runner will be displayed in the web interface:

  3. Set the runtime variables. To do this, select Settings / CI/CD and next to Variables click Expand:

  4. Set a few variables that will be used later in the autobuild file .gitlab-ci.yml:

Variables:

  • DOCKER_USER - user to access the repository in Harbor. In our example, k8s.
  • DOCKER_PASSWORD - the password for the k8s user that you entered when creating the user in Harbor.

Note that Masked is enabled for the password: if the variable is printed in a script, its value is masked and the password is not revealed.

  • DOCKER_REGISTRY - the name of the host where Harbor is located. In our example, <SERVER_DNS_NAME>.

Set up the autobuild file

Go to the folder with the cloned repository and create a .gitlab-ci.yml file in a text editor with the following content:

image: docker:latest

stages:
  - build
  - test
  - release

variables:
  REGISTRY_URL: https://$DOCKER_REGISTRY:8443
  IMAGE: $DOCKER_REGISTRY:8443/$DOCKER_USER/$CI_PROJECT_NAME:$CI_COMMIT_REF_NAME
  RELEASE: $DOCKER_REGISTRY:8443/$DOCKER_USER/$CI_PROJECT_NAME:latest

before_script:
  - docker login $REGISTRY_URL -u $DOCKER_USER -p $DOCKER_PASSWORD

build:
  stage: build
  tags:
    - shared_runner
  script:
    - cd app && docker build --pull -t $IMAGE .
    - docker push $IMAGE

release:
  stage: release
  tags:
    - shared_runner
  script:
    - docker pull $IMAGE
    - docker tag $IMAGE $RELEASE
    - docker push $RELEASE
  only:
    - master

Let's examine this file.

General settings:

  • image - specifies the Docker image in which the build runs. Since we are building a Docker image, we need an image that contains the required build utilities; docker:latest is typically used.
  • stages - describes the stages of the image build. In our example, the test stage is skipped.

Job sections:

  • before_script - the stage that is executed first. We log in to the registry using the variables specified in the GitLab Runner settings.
  • build - the image build. A standard Docker image build using the Dockerfile from the repository.
  • release - the final image generation section. In our example, we simply take the image built at the previous stage, add the latest tag to it, and push it to the repository.
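
For reference, outside of CI the login, build, and release stages boil down to roughly the following manual commands (the variables substituted by hand; registry host, user, and password are the placeholders used throughout this article):

# before_script: log in to the Harbor registry
docker login https://<SERVER_DNS_NAME>:8443 -u k8s -p <PASSWORD>
# build stage: build the image from the Dockerfile in app/ and push it
docker build --pull -t <SERVER_DNS_NAME>:8443/k8s/k8s-conf-demo:master app/
docker push <SERVER_DNS_NAME>:8443/k8s/k8s-conf-demo:master
# release stage: re-tag the branch image as latest and push it
docker tag <SERVER_DNS_NAME>:8443/k8s/k8s-conf-demo:master <SERVER_DNS_NAME>:8443/k8s/k8s-conf-demo:latest
docker push <SERVER_DNS_NAME>:8443/k8s/k8s-conf-demo:latest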

Upload the created file to the repository:

ash-work:k8s-conf-demo git add .
ash-work:k8s-conf-demo git commit -m "create .gitlab-ci.yml"
[master 55dd5fa] create .gitlab-ci.yml
 1 file changed, 1 insertion(+), 1 deletion(-)
ash-work:k8s-conf-demo git push
Enumerating objects: 5, done.
Counting objects: 100% (5/5), done.
Delta compression using up to 4 threads
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 299 bytes | 299.00 KiB/s, done.
Total 3 (delta 2), reused 0 (delta 0)
To testrom.ddns.net:ash/k8s-conf-demo.git
   7c91eab..55dd5fa  master -> master

As soon as the .gitlab-ci.yml file appears in the repository, GitLab automatically starts the build.

You can watch the build progress in the GitLab web interface, in the project's CI/CD / Pipelines section:

By clicking on the running pipeline, you can see the current progress of the build:

By clicking on the build stage, you can see the build console and what is happening in it. Example for the build stage:

Example for the release stage:

The console logs show that both build and release completed successfully. The built image was pushed to the Harbor registry, as can be seen in its web interface:

Deploying an application to a Kubernetes cluster

After successfully building the project, we will set up auto-deployment of the application to the Kubernetes cluster. As an example, we use a Kubernetes cluster from VK Cloud.

After the cluster is deployed in the cloud, a configuration file named kubernetes-cluster-5011_kubeconfig.yaml is downloaded to the local computer; utilities such as kubectl use it to authorize with the cluster.

  1. Export the configuration file:
ash-work:~ export KUBECONFIG=kubernetes-cluster-5011_kubeconfig.yaml
  2. Make sure that authorization is successful and the cluster is healthy:
ash-work:~ kubectl cluster-info
Kubernetes master is running at https://89.208.197.244:6443
CoreDNS is running at https://89.208.197.244:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

The cluster responds.

  3. Grant the cluster access rights to the Harbor image registry. To do this, create the following secret:

ash-work:~ kubectl create secret docker-registry myprivateregistry --docker-server=https://<SERVER_DNS_NAME>:8443 --docker-username=k8s --docker-password=<PASSWORD>
secret/myprivateregistry created

where <SERVER_DNS_NAME> is the Harbor server name and <PASSWORD> is the password of the Harbor user k8s.

  4. Verify that the secret was successfully created:
ash-work:~ kubectl get secret myprivateregistry --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode
{"auths":{"https://<SERVER_DNS_NAME>:8443":{"username":"k8s","password":"<PASSWORD>","auth":"sdasdsdsdsdsdsdsdsdsdssd=="}}}%
  5. Create a deployment.yaml file with the following content:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  selector:
    matchLabels:
      run: myapp
  template:
    metadata:
      labels:
        run: myapp
    spec:
      containers:
      - name: myapp
        image: <SERVER_DNS_NAME>:8443/k8s/k8s-conf-demo:latest
        imagePullPolicy: Always
        env:
        - name: HTTP_PORT
          value: "8081"
        ports:
        - containerPort: 8081
      imagePullSecrets:
      - name: myprivateregistry
  6. Apply this file:
ash-work:~ kubectl create -f deployment.yaml
deployment.apps/myapp-deployment created
  7. After a while, make sure that the pod is up and running:
ash-work:~ kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
myapp-deployment-66d55bcbd5-s86m6   1/1     Running   0          39s
  8. Create a service.yaml file:
apiVersion: v1
kind: Service
metadata:
  name: myapp-svc
  labels:
    run: myapp
spec:
  ports:
  - protocol: TCP
    port: 8081
    targetPort: 8081
  selector:
    run: myapp
  9. Create the service:
ash-work:~ kubectl create -f service.yaml
service/myapp-svc created
  10. To provide access to the application from the external network, configure an Ingress. To do this, create an ingress.yaml file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp-ingress
spec:
  rules:
  - host: echo.com
    http:
      paths:
      - path: /
        backend:
          serviceName: myapp-svc
          servicePort: 8081

In this file, specify the domain through which the application will be accessed. Any domain can be specified; we will resolve it locally for testing.

  11. Apply the ingress manifest:
ash-work:~ kubectl create -f ingress.yaml
ingress.extensions/myapp-ingress created
  12. View the state of the ingress:
ash-work:~ kubectl describe ingress myapp-ingress
Name:             myapp-ingress
Namespace:        default
Address:
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host      Path  Backends
  ----      ----  --------
  echo.com
            /     myapp-svc:8081 (10.100.69.71:8081)
Annotations:
Events:
  Type    Reason  Age  From                      Message
  ----    ------  ---  ----                      -------
  Normal  CREATE  45s  nginx-ingress-controller  Ingress default/myapp-ingress
  Normal  UPDATE  5s   nginx-ingress-controller  Ingress default/myapp-ingress

The external IP address associated with the ingress controller can be viewed in the VK Cloud web interface: it is the IP address of the load balancer for the ingress controller. Let's denote it as <INGRESS_EXTERNAL_IP>.
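
The same address can also be obtained from the command line: the ingress controller is exposed through a LoadBalancer service, so (assuming the controller runs in the ingress-nginx namespace, as in this cluster) the EXTERNAL-IP column of that service shows <INGRESS_EXTERNAL_IP>:

ash-work:~ kubectl get services -n ingress-nginx
# the EXTERNAL-IP of the LoadBalancer-type service is <INGRESS_EXTERNAL_IP>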

  13. Test the application:
ash-work:~ curl --resolve echo.com:80:<INGRESS_EXTERNAL_IP> http://echo.com/handler
OK%

The --resolve option makes curl resolve the domain locally, since we invented the domain ourselves and it has no real DNS record.
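
An alternative to --resolve is a local hosts entry, which makes the domain resolvable for any tool, not just curl (requires root; remove the entry after testing):

echo "<INGRESS_EXTERNAL_IP> echo.com" | sudo tee -a /etc/hosts
curl http://echo.com/handler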

Thus, we have deployed the application to the Kubernetes cluster manually.

  14. Delete the created resources:
ash-work:~ kubectl delete -f ingress.yaml
ingress.extensions "myapp-ingress" deleted
ash-work:~ kubectl delete -f service.yaml
service "myapp-svc" deleted
ash-work:~ kubectl delete -f deployment.yaml
deployment.apps "myapp" deleted

Deploying an application to a Kubernetes cluster using GitLab CI/CD

GitLab supports Kubernetes cluster integration out of the box. To set up the integration, you need a few cluster parameters.

To do this:

  1. Get the API URL:
ash-work:~ kubectl cluster-info | grep 'Kubernetes master' | awk '/http/ {print $NF}'
https://89.208.197.244:6443
  2. Get a list of cluster secrets:
ash-work:~ kubectl get secrets
NAME                       TYPE                                  DATA   AGE
dashboard-sa-token-xnvmp   kubernetes.io/service-account-token   3      41h
default-token-fhvxq        kubernetes.io/service-account-token   3      41h
myprivateregistry          kubernetes.io/dockerconfigjson        1      39h
regcred                    kubernetes.io/dockerconfigjson        1      39h
  3. Get the PEM certificate from the default-token-* secret:
ash-work:~ kubectl get secret default-token-fhvxq -o jsonpath="{['data']['ca\.crt']}" | base64 --decode
-----BEGIN CERTIFICATE-----
MIIC9DCCAdygAwIBAgIQQf4DP2XYQaew1MEtxJtVBzANBgkqhkiG9w0BAQsFADAi
MSAwHgYDVQQDDBdrdWJlcm5ldGVzLWNsdXN0ZXItNTAxMTAeFw0xOTEyMDkxNTI2
MDlaFw0yNDEyMDgxNTI2MDlaMCIxIDAeBgNVBAMMF2t1YmVybmV0ZXMtY2x1c3Rl
ci01MDExMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA47Nd4cEMEdtW
yxo3VEm02wB+k7HytzchyYOlxJdYhQV4yjWR8MpAd9JKWgOdJ/qzitIjYdr0cKCI
dLxRmKWGJJhTYZ4yBQS3XJ52n6bpV1Nzj0Xsq9Bxs7OgG1T4oZn7FXY4ZrJ10w0s
wa0w5AbU2LbpprWsNki2uFkUusgtUSLSSwe90yVKT5ZnW3kUrmMZlY3ys4KLhDbA
CS5xs03t10apRjfRq4WQ0ja+AYkzvwnpiX5nnJk2YCn31c4tVUSuoblzoWEokD2v
DLzZaHAg53Payp2PUP7S5kMCjfrRIEBO7SULve/P/7GRJEHzzOREn/qMSOWK5u1O
k1yk4ARP4wIDAQABoyYwJDASBgNVHRMBAf8ECDAGAQH/AgEAMA4GA1UdDwEB/wQE
AwICBDANBgkqhkiG9w0BAQsFAAOCAQEAYxdbkMTOL4/pbQmJNIY54y+L8eGqEzTc
is9KAZmoD4t4A88r1xZ/dp/3PVhFaOutQh6H7FzNDEiWbTFUa3edGXBmL4jB05Tm
epj1iYEY6Nv5KGoSZutZ11Y8W+77pu9zKzzbtXMyCsYpPWrPyXiP1Z1zY6F4MtAQ
GF9ONh9lDVttkFjaerKR4y4/E/X+e2Mi2dsyJmVHCrZTHozy8oZayC//JfzS+pK9
2IvcwlBgp9q4VO+lmkozWzWcO5mjk/70t7w5UHNpJOxeMzbhx6JkWZ9bN+Ub7RHN
1PUeNfZJKHEgSZw8M+poK3SqsyGMQ13geGXpM85VQvrqCW43YfgjiQ==
-----END CERTIFICATE-----
  4. Now create a gitlab-admin-service-account.yaml file that describes GitLab's access rights to the cluster. File contents:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: gitlab-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: gitlab-admin
  namespace: kube-system
  5. Apply the file:
ash-work:~ kubectl apply -f gitlab-admin-service-account.yaml
serviceaccount/gitlab-admin created
clusterrolebinding.rbac.authorization.k8s.io/gitlab-admin created

And get the cluster access token:

ash-work:~ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep gitlab-admin | awk '{print $1}')
Name:         gitlab-admin-token-kcmd8
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: gitlab-admin
              kubernetes.io/service-account.uid: d9aa6095-6086-4430-b1ae-711df5765064

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1087 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJnaXRsYWItYWRtaW4tdG9rZW4ta2NtZDgiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZ2l0bGFiLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZDlhYTYwOTUtNjA4Ni00NDMwLWIxYWUtNzExZGY1NzY1MDY0Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmdpdGxhYi1hZG1pbiJ9.CaBJMUdWwTPGBla1OZZnsftdUue1-XSyF-SEaHhNdWaUkX_5aUi4uZrgx0UGLbSOFkTmij2_lv1lAkm9-W4VCi4z9cVjw41o6TA6279rx_HEammNzFV8v1HvpSkMXH8wVzaoLwtVQehM7fozykgv4y3wmHAe-T0vXNRN48FYmDXReRSdGuldV--OZLZeOVGrRIkttXoMoSVW_LbnOiBJU4NUQq4dNpvklQkLTSBowu-E0lDJJoMQjniSO1j8H8fmy7Micpgy20Hi1RIoJWfPj-EY3CyhjMht8iTIokQHgHgpCY_RQPexJqHiXTQgyZ93WNw8foIfISduNXyynfGzmQ
  6. Go to the GitLab admin interface and click Add Kubernetes Cluster:

  7. Select the Add Existing cluster tab, enter the previously obtained parameters (API URL, PEM certificate, token) and click Add Kubernetes Cluster:

  8. The cluster is added:

Place the files deployment.yaml, service.yaml, ingress.yaml in the manifests folder of the project.

Add the deploy section to the .gitlab-ci.yml file:

image: docker:latest

stages:
  - build
  - test
  - release
  - deploy

variables:
  REGISTRY_URL: https://$DOCKER_REGISTRY:8443
  IMAGE: $DOCKER_REGISTRY:8443/$DOCKER_USER/$CI_PROJECT_NAME:$CI_COMMIT_REF_NAME
  RELEASE: $DOCKER_REGISTRY:8443/$DOCKER_USER/$CI_PROJECT_NAME:latest

before_script:
  - docker login $REGISTRY_URL -u $DOCKER_USER -p $DOCKER_PASSWORD

build:
  stage: build
  tags:
    - shared_runner
  script:
    - cd app && docker build --pull -t $IMAGE .
    - docker push $IMAGE

release:
  stage: release
  tags:
    - shared_runner
  script:
    - docker pull $IMAGE
    - docker tag $IMAGE $RELEASE
    - docker push $RELEASE
  only:
    - master

deploy:
  stage: deploy
  before_script:
    - apk add --no-cache curl
    - curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
    - chmod +x ./kubectl
  tags:
    - shared_runner
  environment:
    name: production
  script:
    - echo $KUBECONFIG
    - export KUBECONFIG=$KUBECONFIG
    - ./kubectl create secret docker-registry myprivateregistry --docker-server=$REGISTRY_URL --docker-username=$DOCKER_USER --docker-password=$DOCKER_PASSWORD --dry-run -o yaml | ./kubectl apply -f -
    - ./kubectl apply -f manifests/deployment.yaml
    - ./kubectl apply -f manifests/service.yaml
    - ./kubectl apply -f manifests/ingress.yaml
    - ./kubectl rollout restart deployment

Let's examine the deploy section.

In the before_script section, curl is installed into the system and used to download the latest stable version of kubectl.
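
Note that stable.txt always points at the newest kubectl release, so the job environment can change between runs. If reproducibility matters, a specific version can be pinned instead; a minimal sketch (the version number here is only an example):

  before_script:
    - curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.16.3/bin/linux/amd64/kubectl
    - chmod +x ./kubectl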

The script section: since the cluster is managed by GitLab, preset variables are available; KUBECONFIG stores the path to the cluster access configuration file.

Since GitLab sets the namespace automatically, the secret with the login and password for our registry (which stores the application image built at the release stage) must be created in that namespace.
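
This is also why the secret is rendered with --dry-run and piped into kubectl apply: a plain create would fail on every pipeline run after the first, once the secret already exists, while apply creates it the first time and updates it in place afterwards. The same pattern run by hand:

# First run creates the secret; subsequent runs update it in place.
kubectl create secret docker-registry myprivateregistry --docker-server=https://<SERVER_DNS_NAME>:8443 --docker-username=k8s --docker-password=<PASSWORD> --dry-run -o yaml | kubectl apply -f -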

Next, the deployment, service, and ingress manifests are applied.

The last command restarts the deployment so that the new version of the application image is pulled.
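
To make the job fail if the new version does not come up, the rollout can additionally be waited on; a possible extra script line (the deployment name is taken from our manifest):

    - ./kubectl rollout status deployment/myapp --timeout=120s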

The result of executing the deploy section:

  1. Check what was created in the cluster. Look at the namespaces:
ash-work:~ kubectl get namespaces
NAME                         STATUS   AGE
default                      Active   45h
gitlab-managed-apps          Active   67m
ingress-nginx                Active   45h
k8s-conf-demo-1-production   Active   57m
kube-node-lease              Active   45h
kube-public                  Active   45h
kube-system                  Active   45h
magnum-tiller                Active   45h
  2. Our namespace is k8s-conf-demo-1-production. Let's look at the pods, services, and ingress:
ash-work:~ kubectl get pods -n k8s-conf-demo-1-production
NAME                     READY   STATUS    RESTARTS   AGE
myapp-65f4bf95b5-m9s8l   1/1     Running   0          39m
ash-work:~ kubectl get services -n k8s-conf-demo-1-production
NAME        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
myapp-svc   ClusterIP   10.254.243.199   <none>        8081/TCP   32m
ash-work:~ kubectl get ingress -n k8s-conf-demo-1-production
NAME            HOSTS      ADDRESS   PORTS   AGE
myapp-ingress   echo.com             80      32m
ash-work:~
  3. Check the health of the application:
ash-work:~ curl --resolve echo.com:80:<INGRESS_EXTERNAL_IP> http://echo.com/handler
OK%
  4. To test auto-deploy, modify the application code a little. In the app/app.py repository file, change the return 'OK' line to return 'HANDLER OK' (a minimal sketch of such an application is shown after this list).

  5. Commit the changes:

ash-work:k8s-conf-demo git add . && git commit -m "update" && git push
[master b863fad] update
 1 file changed, 1 insertion(+), 1 deletion(-)

Enumerating objects: 7, done.
Counting objects: 100% (7/7), done.
Delta compression using up to 4 threads
Compressing objects: 100% (4/4), done.
Writing objects: 100% (4/4), 359 bytes | 359.00 KiB/s, done.
Total 4 (delta 3), reused 0 (delta 0)
  6. Wait for the CI/CD pipeline to finish and check the application output again:
ash-work:~ curl --resolve echo.com:80:<INGRESS_EXTERNAL_IP> http://echo.com/handler
HANDLER OK%
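
For reference, a minimal sketch of what app/app.py could look like, assuming a Flask application (the actual code in the demo repository may differ): it listens on HTTP_PORT, which the deployment manifest sets to 8081, and answers on /handler as checked above.

import os
from flask import Flask

app = Flask(__name__)

# The endpoint requested with curl in the checks above.
@app.route('/handler')
def handler():
    return 'HANDLER OK'

if __name__ == '__main__':
    # HTTP_PORT comes from the container environment (see deployment.yaml).
    app.run(host='0.0.0.0', port=int(os.environ.get('HTTP_PORT', '8081')))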

The auto-deployment of the new version was successful.