Persistence
There are currently three types of storage:
shared file system
File volumes behave like docker host mounts and are fully managed for you, including redundancy and backups. They can be accessed by multiple pods simultaneously and support concurrent reads and writes to some degree. This makes them useful for configs and other shared data that doesn't have high performance requirements. Since the intent is to emulate docker volume behavior, this is the default choice if you don't know which volume type you need.
There are two pools:
- gfs is the default pool. It is SSD backed and three times redundant; this is a safe choice.
- red is a large magnetic disk pool intended specifically for data science. It is not backed up and will lose data on hardware failure.
block storage
Block storage uses the pools as blocks instead of files. Usually a block volume is automatically formatted and mounted as ext4, but it can also be used as a raw block device available in /dev/volumes/ by not specifying any volumeMounts. Note that block storage cannot migrate between datacenter segments; it can only be used by pods within the same segment. Block storage is pre-allocated and immediately counts towards your plan's storage limit when allocated. It can be scaled up at any time by re-applying the object with a larger size. To reduce the size of a volume you must delete and recreate it.
Magnetic disk is not available as block storage. The redundant SSD pool for blocks is called rbd.
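Since block volumes are created and resized by re-applying an object, a minimal k8s sketch could look like this. It assumes the rbd pool is exposed as a storage class of the same name (an assumption; verify with kubectl get storageclass):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fast-db
spec:
  storageClassName: rbd   # assumed name for the redundant SSD block pool
  accessModes:
    - ReadWriteOnce       # assumed; block volumes stay within one segment
  resources:
    requests:
      storage: 10Gi       # pre-allocated, counts against the plan limit immediately
```

To grow the volume, raise the storage value and re-apply the object; shrinking requires delete and recreate, as noted above.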
ephemeral nvme
The root filesystem is stored on PCIe-attached local NVMe. There is no redundancy, and all data will be lost on pod restart or migration. This is useful for caches which require very high write performance.
creating your first volume
docker supports volumes natively, and the cli works out of the box
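Since the cli works out of the box, the standard docker commands apply (the volume and image names here are arbitrary):

```sh
# create a managed file volume; without extra options it lands in the
# default gfs pool
docker volume create mydata

# use it like any docker volume
docker run --rm -v mydata:/data alpine sh -c 'echo hello > /data/hello.txt'
```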
k8s uses PersistentVolumeClaim to create volumes
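A minimal PersistentVolumeClaim sketch for a file volume (names are arbitrary; storageClassName is omitted so the default pool, gfs, applies, assuming the platform wires the default class to gfs):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-config
spec:
  accessModes:
    - ReadWriteMany   # file volumes can be accessed by multiple pods simultaneously
  resources:
    requests:
      storage: 1Gi
```

Apply it with kubectl apply -f and reference it from a pod's volumes section as usual.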
docker supports volumes natively, and the compose spec works out of the box
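A minimal docker-compose.yaml sketch (standard compose spec; the service and volume names are arbitrary):

```yaml
# docker-compose.yaml
services:
  app:
    image: alpine
    command: sh -c 'echo hello > /data/hello.txt'
    volumes:
      - mydata:/data

volumes:
  mydata: {}
```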
using a non-default pool
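A plausible sketch, assuming pools are selected in k8s via the PVC's storageClassName and that the class shares the pool's name (both are assumptions; verify with kubectl get storageclass):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: scratch
spec:
  storageClassName: red   # assumed: the magnetic pool, not backed up
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Gi
```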
WebDAV Access
Volumes can be accessed via WebDAV at https://files.kraudcloud.com or davs://files.kraudcloud.com, depending on your client.
You will find your username and password in your profile under "Files Login".
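With ordinary WebDAV clients, access could look like this (curl and davfs2 are standard tools; replace user:pass with your Files Login credentials):

```sh
# list volumes via a WebDAV PROPFIND request
curl -u user:pass -X PROPFIND -H "Depth: 1" https://files.kraudcloud.com/

# or mount the share on Linux with davfs2
sudo mount -t davfs https://files.kraudcloud.com/ /mnt/kraud
```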
GFS and Confidential Compute
GFS is available in confidential containers, but due to the lack of virtio DAX the transport is 9p instead of virtio-fs. Unix file permissions will not synchronize correctly between confidential and non-confidential containers, and IOPS are constrained to roughly half that of regular containers.