Updated at April 15, 2024, 08:50 AM

Storage in cluster

Data in a Kubernetes Cloud Containers cluster can be stored in several ways: directly in a container or on volumes. Storing data directly in a container has two drawbacks:

  • If the container crashes or stops, data is lost.
  • Container data is inaccessible to other containers, even if all containers are in the same pod.

To solve these problems, Cloud Containers volumes are used. Volumes have different lifecycles depending on the usage scenario:

  • Ephemeral volumes (EVs) have the same lifecycle as a pod. When the pod using such a volume ceases to exist, the volume is deleted with it. An ephemeral volume can only be used by a single pod, so it is declared directly in the pod's manifest.

  • Persistent volumes (PVs) have their own lifecycle, independent of the pod's lifecycle. Because the lifecycles are separate, such volumes can later be reused with other pods. Pods and other workloads use Persistent Volume Claims (PVCs) to request and consume persistent volumes.
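As a minimal sketch of the PVC mechanism (all resource names, the image, and the sizes here are illustrative), a pod requests a persistent volume through a claim and mounts it by claim name:

```yaml
# Claim requesting 10 GiB of storage; the cluster binds it to a suitable PV.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
# Pod mounting the volume bound to the claim above.
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /var/lib/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data
```

Because the claim, not the pod, owns the binding, the same PV can be reattached when the pod is recreated or replaced.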

Cloud Containers clusters are tightly integrated with the VK Cloud platform to handle PVs:

Supported VK Cloud storage types

  • Block storages:

    • Based on Ceph. To ensure fault tolerance and data integrity, the storage consists of three replicas located in different server racks. The storage can use HDD or SSD disks.

    • Based on high-performance NVMe SSD disks (High-IOPS SSD). Such storage is connected via iSCSI. Hardware RAID-10 is used to provide fault tolerance and data integrity at the storage level.

  • File Storage connected via NFS and CIFS.

Working with Container Storage Interface (CSI)

Kubernetes clusters use OpenStack Cinder to integrate with block storage in VK Cloud.

The storage types available in a Kubernetes cluster via Cinder CSI correlate with VK Cloud block storage as follows:

  • Ceph HDD corresponds to ceph-hdd in Cinder.
  • Ceph SSD corresponds to ceph-ssd in Cinder.
  • High-IOPS SSD corresponds to high-iops in Cinder.
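This mapping is expressed through the `type` parameter of a Kubernetes StorageClass. Below is a sketch of such a class, assuming the standard OpenStack Cinder CSI provisioner name; the class name is illustrative, and in practice the preconfigured classes described below already exist in the cluster:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-high-iops          # illustrative name
provisioner: cinder.csi.openstack.org
parameters:
  type: high-iops                  # Cinder storage type from the mapping above
reclaimPolicy: Delete
allowVolumeExpansion: true
```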

Using Cinder CSI allows you to:

  • Statically and dynamically provision PVs backed by block storage.

  • Automatically remount persistent volumes:

    • When the pod using the volume fails, or the worker node hosting the pod fails, and the pod is restored on the same or another node.
    • When you migrate a pod using a volume from one worker node to another.
  • Manage the storage used by PVs:

    • When a volume is dynamically provisioned, a disk corresponding to that volume is automatically created in VK Cloud.
    • If the Delete reclaim policy is set for a volume, both the volume and the corresponding disk in VK Cloud are deleted when the associated PVC is deleted.

Available reclaim policies for persistent volumes

A reclaim policy can be set for a PV. The policy will be triggered when the PVC associated with that volume is deleted:

  • Keep the volume (Retain). The PV and its associated storage will not be deleted.

    This policy applies to both block and file storage. Use it for PVs with sensitive data to protect the data if the PVC is accidentally deleted. If necessary, you can manually clean up and delete PVs with this policy and their associated storage.

  • Delete the volume (Delete). The PV and its associated storage will be deleted.

    This policy is only applicable to block storage.
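The reclaim policy is a field of the PV spec. A sketch of a statically defined PV that survives PVC deletion, assuming the OpenStack Cinder CSI driver name; the PV name and disk ID placeholder are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sensitive-data-pv                   # illustrative name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain     # PV and its disk survive PVC deletion
  csi:
    driver: cinder.csi.openstack.org
    volumeHandle: <vk-cloud-disk-id>        # ID of the existing VK Cloud disk
```

For dynamically provisioned volumes, the policy is inherited from the storage class instead of being set by hand.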

Pre-configured storage classes

When using dynamic provisioning of a persistent volume, a storage class must be specified. VK Cloud Kubernetes clusters have no default storage class configured, so either set the default class manually or explicitly specify the required class when creating a PVC.
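Both options can be sketched as follows; the class name `csi-ceph-ssd-gz1` comes from the preconfigured classes listed below, while the PVC name and size are illustrative:

```yaml
# Option 1: mark an existing class as default via the standard Kubernetes annotation:
# kubectl patch storageclass csi-ceph-ssd-gz1 \
#   -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

# Option 2: specify the class explicitly in the PVC.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fast-data                  # illustrative name
spec:
  storageClassName: csi-ceph-ssd-gz1
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
```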

There are preconfigured storage classes that use Cinder CSI for block storage.

The classes provide different types of storage in multiple regions and availability zones. Each storage class has a distinct reclaim policy configured for it.

Storage class name         Cinder CSI storage type   Availability zone   Reclaim policy
csi-ceph-hdd-gz1           ceph-hdd                  GZ1                 Delete
csi-ceph-hdd-gz1-retain    ceph-hdd                  GZ1                 Retain
csi-ceph-hdd-ms1           ceph-hdd                  MS1                 Delete
csi-ceph-hdd-ms1-retain    ceph-hdd                  MS1                 Retain
csi-ceph-hdd-me1           ceph-hdd                  ME1                 Delete
csi-ceph-hdd-me1-retain    ceph-hdd                  ME1                 Retain
csi-ceph-ssd-gz1           ceph-ssd                  GZ1                 Delete
csi-ceph-ssd-gz1-retain    ceph-ssd                  GZ1                 Retain
csi-ceph-ssd-ms1           ceph-ssd                  MS1                 Delete
csi-ceph-ssd-ms1-retain    ceph-ssd                  MS1                 Retain
csi-ceph-ssd-me1           ceph-ssd                  ME1                 Delete
csi-ceph-ssd-me1-retain    ceph-ssd                  ME1                 Retain
csi-high-iops-gz1          high-iops                 GZ1                 Delete
csi-high-iops-gz1-retain   high-iops                 GZ1                 Retain
csi-high-iops-ms1          high-iops                 MS1                 Delete
csi-high-iops-ms1-retain   high-iops                 MS1                 Retain
csi-high-iops-me1          high-iops                 ME1                 Delete
csi-high-iops-me1-retain   high-iops                 ME1                 Retain

All storage classes listed:

  • Allow volume expansion (allowVolumeExpansion: true).
  • Use immediate volume provisioning and binding (volumeBindingMode: Immediate).
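Because volume expansion is allowed, a bound PVC can be grown by raising its storage request (Kubernetes does not support shrinking). A sketch, with an illustrative claim name and sizes:

```yaml
# One-liner alternative:
# kubectl patch pvc app-data -p '{"spec":{"resources":{"requests":{"storage":"40Gi"}}}}'

# Edit the claim's request; the underlying VK Cloud disk is expanded to match.
spec:
  resources:
    requests:
      storage: 40Gi    # previously 20Gi
```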
