Updated at March 19, 2024, 02:14 PM

Working with persistent volumes

Persistent volumes can be connected to simple demo applications in several ways. In this article, Persistent Volume Claims (PVCs) are used to connect them. An Ingress resource will be created to test the functionality of the applications and the volumes connected to them.

1. Preparatory steps

  1. Create a Kubernetes cluster of the latest available version.

    When creating the cluster:

    • Select the Assign external IP option.
    • Create one group of worker nodes with the STD3-2-8 virtual machine type in the MS1 availability zone, so that the total computing resources are at least 2 vCPU and 8 GB RAM. This is necessary so that all the required objects can be scheduled.

    Other cluster parameters are at your discretion.

  2. Make sure that the NGINX Ingress addon (ingress-nginx) is installed in the cluster with its default parameters. It is required to provide access to the demo applications.

  3. Make sure that you can connect to the cluster using kubectl.

  4. Install curl if the utility is not already installed.

2. Create demo applications and connect persistent volumes to them

The following steps demonstrate how to create several NGINX-based web applications that display web pages written to the persistent volumes connected to them. The nginxdemos/nginx-hello image is used; it serves web pages from the /usr/share/nginx/html directory, so all persistent volumes are mounted into the application pods at this path.

You can create one or more demo applications, depending on which way you want to connect the persistent volumes.

Connecting block storage

Block storage is connected to the cluster via Cinder CSI.

When using this type of storage:

  • Only one pod can access the storage (multiple pods cannot use block storage at the same time);
  • as a consequence, the ReadWriteOnce access mode must be used.
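In manifest terms, this constraint appears as the access mode declared on both the PersistentVolume and the PersistentVolumeClaim; a minimal fragment (all surrounding resource fields are omitted):

```yaml
# Both the PV and the PVC must declare the same access mode;
# for Cinder block storage this is ReadWriteOnce.
spec:
  accessModes:
    - ReadWriteOnce
```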

This example will create:

  1. Disk in the cloud compute service of the VK Cloud platform.

  2. A persistent volume corresponding to this disk.

  3. A static PVC that uses the already created persistent volume.

  4. The tea application as a single-pod Deployment, along with the corresponding Service.

    For this application there will also be an initialization container (initContainer) which will write the web page to the persistent volume.

To connect a persistent volume using static PVC:

  1. Create a network HDD.

    When creating, specify:

    • Disk name: any name, such as disk-tea.
    • Source: empty disk.
    • Disk type: network HDD (ceph-hdd).
    • Availability zone: MS1.
    • Size: 1 GB.

    Leave other options and settings unchanged.

  2. Copy the ID of the created disk, for example f6d8bf3b-aaaa-bbbb-cccc-4ece8e353246.

  3. Examine the connection features:

    1. The storage sizes specified in the parameters spec.capacity.storage for the PersistentVolume resource and spec.resources.requests.storage for the PersistentVolumeClaim resource must match the size of the corresponding disk. In this example it is 1 GB.
    2. For the PersistentVolumeClaim resource, use an empty value in the storageClassName storage class parameter.
    3. The storage access mode is specified in the spec.accessModes parameter for the PersistentVolume resource.
    4. The availability zones of the disk and the worker node on which the pod (application) will be located must match. Otherwise an attempt to mount a persistent volume corresponding to the disk on this node will fail. In this example, the pod will be placed on a group of worker nodes in the MS1 availability zone and use a disk from the same zone.
    5. The Retain reclaim policy is used for the persistent volume. The Delete policy is not used, so that you can monitor the state of the disk manually and not delete it accidentally.
  4. Create a manifest for the tea application.

    For the PersistentVolume resource, specify the ID of the created disk in the spec.cinder.volumeID parameter.
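For reference, a tea.yaml satisfying the points above could look like the following sketch. The resource names (pv-tea, pvc-tea, tea-svc), the busybox init image, and the ext4 filesystem type are illustrative assumptions; the sizes, access mode, reclaim policy, empty storage class, image, mount path, and page text come from this article:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-tea                  # assumed name
spec:
  capacity:
    storage: 1Gi                # must match the disk size
  accessModes:
    - ReadWriteOnce             # required for block storage
  persistentVolumeReclaimPolicy: Retain
  cinder:
    fsType: ext4                # assumed filesystem type
    volumeID: <disk ID>         # ID of the created disk
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-tea
spec:
  storageClassName: ""          # empty value for a static PVC
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tea
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tea
  template:
    metadata:
      labels:
        app: tea
    spec:
      initContainers:
        # Writes the web page to the persistent volume before the app starts
        - name: write-page
          image: busybox
          command: ["sh", "-c", "echo 'The tea pod says Hello World to everyone! This file is located on the statically claimed persistent volume.' > /usr/share/nginx/html/index.html"]
          volumeMounts:
            - name: tea-volume
              mountPath: /usr/share/nginx/html
      containers:
        - name: tea
          image: nginxdemos/nginx-hello
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: tea-volume
              mountPath: /usr/share/nginx/html
      volumes:
        - name: tea-volume
          persistentVolumeClaim:
            claimName: pvc-tea
---
apiVersion: v1
kind: Service
metadata:
  name: tea-svc
spec:
  selector:
    app: tea
  ports:
    - port: 80
      targetPort: 8080
```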

  5. Apply this manifest to the cluster to create all necessary resources:

    kubectl apply -f ./tea.yaml

Connecting file storage

File storage is connected to the cluster using a persistent volume configured to use the existing storage over the desired protocol, such as NFS.

When using this type of storage:

  • Multiple pods can access the storage at the same time;
  • as a consequence, the ReadWriteMany access mode must be used.

This example will create:

  1. NFS file storage in the Cloud Servers service.
  2. A persistent volume corresponding to this storage.
  3. Static PVC using an already created persistent volume.
  4. The milkshake application as a StatefulSet of two pods, along with the corresponding Services.

To connect an NFS persistent volume using a static PVC:

  1. Create file storage.

    When creating it, specify:

    • Name of file storage: any name, such as storage-milkshake.
    • Storage size: 10 GB.
    • Protocol: NFS.
    • Network: network and subnet where the Kubernetes cluster is located. This information can be found on the cluster page.
    • File storage network: existing network. If a suitable network is not on the list, select Create new network.
  2. View information about the created file storage.

    Save the value of the Connection point parameter.

  3. Examine the specifics of the connection:

    1. The storage sizes specified in the spec.capacity.storage parameter for the PersistentVolume resource and the spec.resources.requests.storage parameter for the PersistentVolumeClaim resource must match the size of the created file storage. In this example, it is 10 GB.

    2. For the PersistentVolumeClaim resource, use an empty value in the storageClassName storage class parameter.

    3. For the PersistentVolume resource:

      1. The storage access mode is specified in the spec.accessModes parameter for the PersistentVolume resource.
      2. The spec.mountOptions parameter set must contain an nfsvers=4.0 entry.
    4. Instead of an initialization container, a one-time Kubernetes Job is used to write the web page to the persistent volume. This approach works because all pods have access to the same persistent volume.

    5. The Retain reclaim policy is used for the persistent volume. The Recycle policy is not used because it does not allow the volume to be removed instantly when it is no longer needed (clearing a volume of data takes a long time). The Delete policy is not used, so that you can monitor the state of the storage manually and not delete it accidentally.

  4. Create a manifest for the milkshake application.

    For the PersistentVolume resource, specify:

    • IP address from the Connection point of the file storage as the value of the spec.nfs.server parameter.
    • Data after the IP address (/shares/...) as a value of the spec.nfs.path parameter.
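A milkshake.yaml consistent with these points could look like the following sketch. The resource names, the busybox image for the Job, and the page text are illustrative assumptions; the size, access mode, mount options, reclaim policy, empty storage class, and the use of a Job instead of an init container come from this article. Substitute the Connection point values for the placeholders:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-milkshake            # assumed name
spec:
  capacity:
    storage: 10Gi               # must match the file storage size
  accessModes:
    - ReadWriteMany             # required for file storage
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - nfsvers=4.0
  nfs:
    server: <IP address from the Connection point>
    path: <path after the IP address, /shares/...>
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-milkshake
spec:
  storageClassName: ""          # empty value for a static PVC
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
---
# Headless Service used by the StatefulSet
apiVersion: v1
kind: Service
metadata:
  name: milkshake-svc
spec:
  clusterIP: None
  selector:
    app: milkshake
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: milkshake
spec:
  serviceName: milkshake-svc
  replicas: 2
  selector:
    matchLabels:
      app: milkshake
  template:
    metadata:
      labels:
        app: milkshake
    spec:
      containers:
        - name: milkshake
          image: nginxdemos/nginx-hello
          volumeMounts:
            - name: milkshake-volume
              mountPath: /usr/share/nginx/html
      volumes:
        - name: milkshake-volume
          persistentVolumeClaim:
            claimName: pvc-milkshake
---
# One-time Job that writes the web page to the shared volume
apiVersion: batch/v1
kind: Job
metadata:
  name: milkshake-init
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: write-page
          image: busybox
          command: ["sh", "-c", "echo 'Hello from the milkshake pods!' > /usr/share/nginx/html/index.html"]
          volumeMounts:
            - name: milkshake-volume
              mountPath: /usr/share/nginx/html
      volumes:
        - name: milkshake-volume
          persistentVolumeClaim:
            claimName: pvc-milkshake
```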
  5. Apply this manifest to the cluster to create all necessary resources:

    kubectl apply -f ./milkshake.yaml

3. Check the functionality of demo applications and persistent volumes

  1. Create a manifest for the Ingress resource through which application requests will go.
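As a sketch, cafe-ingress.yaml could look like the following. The host cafe.example.com and the /tea path match the verification step below; the Service names, the /milkshake path, and the nginx ingress class are assumptions (depending on the application layout, an ingress-nginx rewrite annotation may also be needed):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-ingress
spec:
  ingressClassName: nginx       # assumes the default ingress-nginx class
  rules:
    - host: cafe.example.com
      http:
        paths:
          - path: /tea
            pathType: Prefix
            backend:
              service:
                name: tea-svc           # assumed Service name
                port:
                  number: 80
          - path: /milkshake            # assumed path
            pathType: Prefix
            backend:
              service:
                name: milkshake-svc     # assumed Service name
                port:
                  number: 80
```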

  2. Apply this manifest to the cluster to create all necessary resources:

    kubectl apply -f ./cafe-ingress.yaml
  3. Determine the public IP address of the Ingress controller.

  4. Check the availability of the applications with curl using the IP address of the Ingress controller.

    Run the command:

    curl --resolve cafe.example.com:80:<Ingress IP address> http://cafe.example.com/tea

    A response should be displayed:

    The tea pod says Hello World to everyone! This file is located on the statically claimed persistent volume.

Delete unused resources

  1. If the Kubernetes resources you created are no longer needed, delete them.

    kubectl delete -f ./cafe-ingress.yaml
    kubectl delete -f ./milkshake.yaml
    kubectl delete -f ./juice.yaml
    kubectl delete -f ./coffee.yaml
    kubectl delete -f ./tea.yaml
  2. Remove unused storage:

    1. If the disk used by the tea application is no longer needed, delete it.

    2. If the NFS file storage used by the milkshake application is no longer needed, delete it.

    All other Cinder disks created via dynamic PVCs will be deleted automatically.

  3. A running cluster consumes computing resources. If you no longer need the cluster, delete it.