Updated at March 20, 2024, 06:18 AM

Quick start

This quickstart will help you get started with the service and become familiar with its features.

After going through all the steps of the quickstart, you will:

  1. Create a small Kubernetes cluster.
  2. Learn how to connect to it.
  3. Become familiar with Kubernetes and its addons:
    1. Connect management and monitoring tools.
    2. Load Docker images into the Docker registry.
    3. Deploy simple applications based on the uploaded images, with the ability to use Cloud Storage.
    4. Provide access to the deployed applications using the Ingress controller.
    5. Make sure that these applications actually work.

1. Preparatory steps

1.1. Create a cluster

  1. Go to your VK Cloud personal account.

  2. Select the project where the cluster will be placed.

  3. Go to Containers → Kubernetes clusters.

  4. If there are no clusters in the selected project yet, click Create cluster.

    Otherwise, click Add.

  5. In the “Configuration” step:

    1. Select the Dev environment cluster configuration with the newest version of Kubernetes.

    2. Click the Next Step button.

  6. In the “Create cluster” step, set:

    1. Cluster name: for example, vk-cloud-k8s-quickstart.

    2. Virtual machine type — Master: STD3-2-8.

    3. Availability zone: Moscow (MS1).

    4. Network: Create new network.

    5. Assign external IP: make sure this option is selected.

    6. Leave the other settings unchanged.

    7. Click the Next step button.

  7. In the “Node group” step, set:

    1. Node type: STD3-4-8.

    2. Availability zone: Moscow (MS1).

    3. Leave the other settings unchanged.

    4. Click the Create cluster button.

Wait for the cluster creation to complete; this process may take a while.

1.2. Install addons in the cluster

  1. Install the docker-registry addon.

    Write down the data for accessing the Docker registry.

  2. Install the kube-prometheus-stack addon.

    Write down the password to access the Grafana web interface.

  3. Install the ingress-nginx addon with default parameters.

    Write down the floating IP address for the load balancer.

Further, the following values will be used in the commands and configuration files for the example. Replace them with the ones that are relevant to you.

| Parameter | Value |
|---|---|
| IP address of the load balancer for the Ingress controller | 192.0.2.2 |
| URL of the Docker registry endpoint | 192.0.2.22:5000 |
| Login of the Docker registry user | registry |
| Password of the Docker registry user | registry-password-123456 |
| Password of the admin user for Grafana | grafana-password-123456 |
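To make the later commands easier to adapt, the example values above can be kept in shell variables. This is a minimal sketch; the variable names are my own, not part of the service:

```shell
# Keep the example values in variables; replace each with your own.
export INGRESS_LB_IP="192.0.2.2"                 # load balancer IP for the Ingress controller
export REGISTRY_URL="192.0.2.22:5000"            # Docker registry endpoint
export REGISTRY_USER="registry"                  # Docker registry login
export REGISTRY_PASS="registry-password-123456"  # Docker registry password
export GRAFANA_PASS="grafana-password-123456"    # Grafana admin password

echo "Registry endpoint: ${REGISTRY_URL}"
```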

1.3. Configure the environment to work with the cluster

Set up the host from which you will work with the cluster. This can be a real computer or a virtual machine.

Install the following tools on the host: kubectl and Docker Engine (both are used in the steps below).

1.4. Connect to the cluster

  1. Assign the Kubernetes Administrator role in your personal account to the user on whose behalf the connection to the cluster will be performed:

    1. Go to VK Cloud personal account.
    2. Select the project where the previously created cluster is located.
    3. Go to Manage access.
    4. Expand the menu of the desired user and select Edit.
    5. Select the Kubernetes Administrator role from the drop-down list.
    6. Save your changes.
  2. Activate API access for this user.

  3. Get kubeconfig for the cluster in VK Cloud personal account:

    1. Go to Containers → Kubernetes Clusters.
    2. Find the desired cluster in the list, then select Get Kubeconfig to access the cluster in its menu.
  4. Move kubeconfig to the ~/.kube directory, so you don't have to specify additional arguments when using kubectl.

    The commands below assume that kubeconfig has been downloaded into the ~/Downloads directory under the name mycluster_kubeconfig.yaml.

    mkdir -p ~/.kube && \
    mv ~/Downloads/mycluster_kubeconfig.yaml ~/.kube/config
  5. Check that kubectl can connect to the cluster:

    1. Run the command:

      kubectl cluster-info
    2. Enter the user password from your VK Cloud account.

    If the cluster works properly and kubectl is configured to work with it, similar information will be displayed:

    Kubernetes control plane is running at...
    CoreDNS is running at...

    To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
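This check can also be scripted by testing the output for the expected phrase. A rough sketch, where the SAMPLE line stands in for real `kubectl cluster-info` output and the address is a placeholder:

```shell
# Capture the real output with: SAMPLE="$(kubectl cluster-info)".
# Here SAMPLE stands in for it; the address is a placeholder.
SAMPLE='Kubernetes control plane is running at https://203.0.113.10:6443'
case "$SAMPLE" in
  *"is running at"*) echo "cluster reachable" ;;
  *)                 echo "unexpected cluster-info output" >&2 ;;
esac
```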

2. Get access to cluster monitoring tools

A monitoring addon based on Prometheus and Grafana was installed in the cluster. Kubernetes Dashboard is also available for all Cloud Containers clusters; it allows you not only to manage the cluster, but also to monitor it.

  1. In a separate terminal session, run the command:

    kubectl -n prometheus-monitoring port-forward service/kube-prometheus-stack-grafana 8001:80
  2. Open the Grafana web interface:

    1. In your browser, go to the URL http://127.0.0.1:8001/.
    2. Log in with the username admin and the password grafana-password-123456.
    3. If a password change is requested, change it.
  3. Select Dashboards → Browse from the side menu and open any pre-configured dashboard to get information about the cluster resources.
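The port-forward from step 1 maps local port 8001 to the Grafana Service port 80, so the UI and Grafana's HTTP API share one local URL. A small sketch (the variable names are my own; with the port-forward running, Grafana's /api/health endpoint should answer on the forwarded port):

```shell
# Build the local Grafana URLs exposed by the port-forward.
GRAFANA_LOCAL_PORT=8001
GRAFANA_URL="http://127.0.0.1:${GRAFANA_LOCAL_PORT}"

echo "Grafana UI:   ${GRAFANA_URL}/"
echo "Health check: ${GRAFANA_URL}/api/health"
# With the port-forward running, this should return {"database": "ok", ...}:
# curl -s "${GRAFANA_URL}/api/health"
```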

3. Upload the necessary images to the Docker registry

The Docker Registry addon, which will store the Docker images, was installed in the cluster.

To put your own images in the cluster's Docker registry:

  1. Add the Docker registry to the list of trusted registries:

    1. Add the following parameter to the Docker daemon.json configuration file with the URL of the Docker registry endpoint:

      {
        ...

        "insecure-registries": [
          "192.0.2.22:5000"
        ],

        ...
      }

      The location of this file for different Docker Engine installations is given in the official Docker documentation.

    2. Restart the Docker Engine.

      Do one of the following:

      • Run one of the following commands to restart it:

        sudo systemctl restart docker
        sudo service docker restart
      • Restart the Docker Engine from the Docker Desktop GUI (if installed).

  2. Build a Docker image:

    1. Create a directory for the files and navigate to it:

      mkdir ~/image-build && cd ~/image-build
    2. Place the following files in this directory:

    3. Run the build process:

      docker build . -t 192.0.2.22:5000/nginx-k8s-demo:latest

    Wait until the image build is complete.

  3. Place the built image in the Docker registry:

    1. Log in to the registry:

      docker login 192.0.2.22:5000 --username registry --password registry-password-123456
    2. Push the image to the registry:

      docker push 192.0.2.22:5000/nginx-k8s-demo:latest
    3. Check that the image is in the registry:

      curl -k -X GET -u registry:registry-password-123456 https://192.0.2.22:5000/v2/_catalog

      The output should be similar to:

      {"repositories":["nginx-k8s-demo"]}
    4. Create a Kubernetes secret so you can access the uploaded image from Kubernetes:

      kubectl create secret docker-registry k8s-registry-creds --docker-server=192.0.2.22:5000 --docker-username=registry --docker-password=registry-password-123456
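The catalog check above can also be scripted. A minimal sketch, where the CATALOG_JSON line reuses the sample response; in practice you would capture it with the curl command from the previous step:

```shell
# CATALOG_JSON stands in for the registry response; capture the real one with:
# CATALOG_JSON="$(curl -k -s -u registry:registry-password-123456 https://192.0.2.22:5000/v2/_catalog)"
CATALOG_JSON='{"repositories":["nginx-k8s-demo"]}'

if printf '%s' "$CATALOG_JSON" | grep -q '"nginx-k8s-demo"'; then
  echo "image found in registry"
else
  echo "image missing from registry" >&2
fi
```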

4. Deploy demo applications

Based on the nginx-k8s-demo image loaded in the Docker registry, two applications will be deployed: tea and coffee. For each of the applications the following will be created:

  • Persistent Volume Claim, so that data volumes can be mounted inside the application.
  • Deployment, in which will be set:
    • Number of replicas.
    • Volume to mount in pod.
  • Service to provide access to the application. The Ingress controller will forward incoming requests to this Service.

To deploy the applications:

  1. Create a directory for the files and navigate to it:

    mkdir ~/k8s-deployments && cd ~/k8s-deployments
  2. Place the following files in this directory:

  3. Deploy the applications:

    kubectl apply -f deploy-coffee.yaml -f deploy-tea.yaml
  4. Check that the deployment is correct.

    Use one of the following methods:

    • kubectl: run the command:

      kubectl get pv
    • Grafana: open the Kubernetes → Compute Resources → Persistent Volumes dashboard.

    • Kubernetes Dashboard: open the Cluster → Persistent Volumes dashboard.

    You will see that the 1 GB persistent volumes created with the Persistent Volume Claims for the tea and coffee deployments are present.
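The volume check can also be scripted. A rough sketch, where the PVC names tea-pvc and coffee-pvc and the SAMPLE text are assumptions standing in for real `kubectl get pvc` output:

```shell
# SAMPLE stands in for real output; capture it with: SAMPLE="$(kubectl get pvc)".
# The PVC names below are assumed for illustration.
SAMPLE='tea-pvc    Bound   pvc-123   1Gi
coffee-pvc Bound   pvc-456   1Gi'

for app in tea coffee; do
  if printf '%s\n' "$SAMPLE" | grep -q "^${app}.*Bound"; then
    echo "${app}: PVC bound"
  else
    echo "${app}: PVC not bound" >&2
  fi
done
```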

5. Configure Ingress for demo applications

The NGINX Ingress controller addon was installed in the cluster to route incoming user requests to the applications deployed in the cluster.

For the Ingress controller to route requests to the corresponding Service resources, through which the tea and coffee demo applications were published, do the following:

  1. Place the following file in the ~/k8s-deployments directory:

  2. Deploy the Ingress resource:

    kubectl apply -f deploy-ingress.yaml
  3. Check if the deployment is correct by running the following kubectl command:

    kubectl get ingress

    You will see that a working Ingress resource is present.

6. Check that all the created resources in the cluster are working

To verify that the example is working, send curl requests to the load balancer IP address 192.0.2.2. The Ingress controller associated with the load balancer will then deliver these requests to the appropriate applications.

Request the tea application:

curl --resolve cafe.example.com:80:192.0.2.2 http://cafe.example.com/tea

The output should be similar to:

Server address: 10.100.109.3:8080
Server name: tea-8697dc7b86-s5vgn
Date: 24/Aug/2022:09:27:34 +0000
URI: /tea
Request ID: ed83bd555afd25c103bfa05ee12cbfff
Remote address (NGINX Ingress Controller): <IP address of Ingress controller>
X-Forwarded-For (Request source): <IP address of host that sourced the request>

K8S Persistent Volume status: present

This result demonstrates that:

  1. You can run applications using Docker images from the Docker cluster registry.
  2. You can mount storage to pods using Persistent Volume Claim.
  3. The Ingress controller provided with the cluster is configured correctly because it shows the real IP address of the request source.
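Both demo applications can be checked with one loop. A hedged sketch that only prints the curl commands; LB_IP is the example load balancer address (substitute your own, then uncomment the request line to actually send the requests):

```shell
# Build the check for both demo applications.
LB_IP="192.0.2.2"          # replace with your load balancer IP
HOST="cafe.example.com"

for app in tea coffee; do
  cmd="curl --resolve ${HOST}:80:${LB_IP} http://${HOST}/${app}"
  echo "$cmd"
  # Uncomment the next line to actually send the request:
  # $cmd
done
```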

Delete unused resources

A running cluster consumes computing resources. If you no longer need it:

What's next?