Storage
On-disk files in a container are ephemeral and cannot be shared with other containers running in the same pod. For some applications this is not an issue, but for many workloads persistent storage is a requirement.
Charmed Kubernetes® makes it easy to add and configure different types of persistent storage for your Kubernetes cluster, as outlined below. For more detail on the concept of storage volumes in Kubernetes, please see the Kubernetes documentation.
Ceph storage
Charmed Kubernetes can make use of Ceph to provide persistent storage volumes. The following sections assume you have already deployed a Charmed Kubernetes cluster and you have internet access to the Juju Charm Store.
Deploy Ceph
Check that the current Juju model is the one where you wish to deploy Ceph:
juju switch
Begin by deploying the recommended minimum of three Ceph monitor nodes:
juju deploy -n 3 ceph-mon
For the storage nodes, we will also need to specify storage volumes for the backing cloud to add. This is done with the --storage option. The ceph-osd charm defines two useful types of storage: osd-devices, for the volumes which will be formatted and used to provide storage, and osd-journals, for storage used for journalling.
The format for the --storage option is <storage pool>,<size>,<number>. The storage pools available depend on, and are defined by, the backing cloud. However, if the storage pool is omitted, the default pool for that cloud will be chosen (e.g. for AWS, the default pool is EBS storage).
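For instance, to request volumes from a named pool explicitly (a hypothetical AWS example using Juju's ebs-ssd pool; check juju storage-pools for what your cloud actually offers), the storage option would look like:
--storage osd-devices=ebs-ssd,32G,2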
So, for example, to deploy three ceph-osd storage nodes, using the default storage pool, with two 32G volumes of storage per node and one 8G journal, we would use the command:
juju deploy -n 3 ceph-osd --storage osd-devices=32G,2 --storage osd-journals=8G,1
Note: For a more detailed explanation of Juju’s storage pools and options, please see the relevant Juju documentation.
Note that actually deploying these charms with storage may take some time, but you can continue to run other Juju commands in the meantime.
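While you wait, you can keep an eye on progress with a standard watch loop (an optional convenience; this assumes the watch utility is available on your machine):
watch -c juju status --color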
The ceph-osd and ceph-mon deployments should then be connected:
juju add-relation ceph-osd ceph-mon
Relate to Charmed Kubernetes
Making Charmed Kubernetes aware of your Ceph cluster requires two Juju relations:
juju add-relation ceph-mon:admin kubernetes-master
juju add-relation ceph-mon:client kubernetes-master
Create storage pools
By default, the kubernetes-master charm will create the required pools defined in the storage class. To view the default options, run:
juju list-actions ceph-mon --schema --format json | jq '.["create-pool"]'
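If you only want the parameter names rather than the full schema, you can filter further with jq (this assumes the usual JSON Schema layout of the action output, with a properties object per action):
juju list-actions ceph-mon --schema --format json | jq '.["create-pool"].properties | keys'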
If you are happy with these defaults, you can skip the rest of this section. Otherwise, to change the pool settings, first delete the existing pools:
juju run --unit ceph-mon/0 "ceph tell mon.\* injectargs '--mon-allow-pool-delete=true'"
juju run-action ceph-mon/0 delete-pool pool-name=xfs-pool --wait
juju run-action ceph-mon/0 delete-pool pool-name=ext4-pool --wait
Then recreate them, using the options listed by the list-actions command run earlier. For example:
juju run-action ceph-mon/0 create-pool name=xfs-pool replicas=6 --wait
unit-ceph-mon-0:
  id: c12f0688-f31b-4956-8314-abacd2d6516f
  status: completed
  timing:
    completed: 2018-08-20 20:49:34 +0000 UTC
    enqueued: 2018-08-20 20:49:31 +0000 UTC
    started: 2018-08-20 20:49:31 +0000 UTC
  unit: ceph-mon/0
juju run-action ceph-mon/0 create-pool name=ext4-pool replicas=6 --wait
unit-ceph-mon-0:
  id: 4e82d93d-546f-441c-89e1-d36152c082f2
  status: completed
  timing:
    completed: 2018-08-20 20:49:45 +0000 UTC
    enqueued: 2018-08-20 20:49:41 +0000 UTC
    started: 2018-08-20 20:49:43 +0000 UTC
  unit: ceph-mon/0
Verification
Now you can look at your Charmed Kubernetes cluster to verify things are working. Running:
kubectl get sc,po
… should return output similar to:
NAME                                             PROVISIONER     AGE
storageclass.storage.k8s.io/ceph-ext4            csi-rbdplugin   7m
storageclass.storage.k8s.io/ceph-xfs (default)   csi-rbdplugin   7m

NAME                              READY   STATUS    RESTARTS   AGE
pod/csi-rbdplugin-attacher-0      1/1     Running   0          7m
pod/csi-rbdplugin-cnh9k           2/2     Running   0          7m
pod/csi-rbdplugin-lr66m           2/2     Running   0          7m
pod/csi-rbdplugin-mnn94           2/2     Running   0          7m
pod/csi-rbdplugin-provisioner-0   1/1     Running   0          7m
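As a quick usage check (a minimal sketch; the claim name ceph-test-claim and the 1Gi size are arbitrary), you can create a PersistentVolumeClaim against the default ceph-xfs class:
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-test-claim
spec:
  storageClassName: ceph-xfs
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
Running kubectl get pvc should then show the claim become Bound once a volume has been provisioned.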
Scaling out
To check existing storage allocation, use the command:
juju storage
If extra storage is required, it is possible to add extra ceph-osd units as desired:
juju add-unit ceph-osd -n 2
Once again, it is necessary to attach appropriate storage volumes as before. In this case though, the storage needs to be added on a per-unit basis.
Confirm the running units of ceph-osd:
juju status ceph-osd
Add additional storage to existing or new units with the add-storage command. For example, to add two volumes of 32G to the unit ceph-osd/2:
juju add-storage ceph-osd/2 --storage osd-devices=32G,2
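Running juju storage again afterwards should confirm the new volumes are attached to the unit:
juju storage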
Using a separate Juju model
In some circumstances it can be useful to locate the persistent storage in a different Juju model, for example to have one set of storage used by different clusters. The only change required is in adding relations between Ceph and Charmed Kubernetes.
For more information on how to achieve this, please see the Juju documentation on cross-model relations.
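As a rough sketch (the model and offer names here are placeholders, and the exact offer syntax can vary between Juju versions), the model hosting Ceph first offers the ceph-mon endpoints:
juju offer ceph-mon:admin,client
The model hosting Charmed Kubernetes then consumes the offer and adds the same relations as before:
juju consume admin/ceph-model.ceph-mon
juju add-relation ceph-mon:admin kubernetes-master
juju add-relation ceph-mon:client kubernetes-master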
NFS
It is possible to add simple storage for Kubernetes using NFS. In this case, the storage is implemented on the root disk of units running the nfs charm.
Deploy NFS
Make use of Juju constraints to allocate an instance with the required amount of storage. For example, for 200G of storage:
juju deploy nfs --constraints root-disk=200G
Relate to Charmed Kubernetes
The NFS units can be related directly to the Kubernetes workers:
juju add-relation nfs kubernetes-worker
Verification
Now you can look at your Charmed Kubernetes cluster to verify things are working. Running:
kubectl get sc,po
… should return output similar to:
NAME                                            PROVISIONER      AGE
storageclass.storage.k8s.io/default (default)   fuseim.pri/ifs   3m

NAME                                          READY   STATUS    RESTARTS   AGE
pod/nfs-client-provisioner-778dcffbc8-2725b   1/1     Running   0          3m
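As with Ceph, you can verify provisioning with a test claim (a minimal sketch; the claim name nfs-test-claim and the 1Gi size are arbitrary). NFS-backed volumes can typically be mounted read-write by many pods at once:
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test-claim
spec:
  storageClassName: default
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
EOF
Running kubectl get pvc should show the claim become Bound shortly afterwards.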
Scaling out
If extra storage is required, it is possible to add extra nfs units as desired. For example, to add three new units, each with 100G of storage:
juju add-unit nfs -n 3 --constraints root-disk=100G
There is no requirement for these additional units to have the same amount of storage as the original units.