Volume Cloning Procedure
In cases where a StatefulSet or other persistent-storage workload is unable to claim its volume due to unknown errors, this procedure shows how to replicate (clone) an existing volume and use the clone for a new workload.
This procedure can be applied to any of the following components to reclaim the PV/PVC pair and replicate it when there are problems accessing the PVC or PV:
- ActiveMQ
- Redis
- MongoDB
- MinIO
- PostgreSQL
This procedure is only valid when a CSI clone of the existing PVC can be created. If Longhorn (or whichever storage backend is in use) is down or cannot operate successfully, creating the replicated PVC will fail.
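Before attempting the clone, it is therefore worth confirming that the storage layer is healthy. A minimal sanity check, assuming Longhorn is the provisioner and is installed in its default longhorn-system namespace:
# confirm the StorageClass used for cloning exists
kubectl get storageclass longhorn
# confirm the Longhorn components are running (default namespace assumed)
kubectl -n longhorn-system get pods
If any Longhorn pods are not in the Running state, resolve that first; otherwise the cloned PVC is likely to stay in the Pending state.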
CSI-Volume Cloning
Kubernetes allows volume cloning through the CSI driver (Longhorn in the case of the Expertflow solution).
Create a PVC clone of the original PVC
Determine the source PVC by executing
kubectl -n ef-external get pvc
A sample output is given below:
# kubectl -n ef-external get pvc
NAME                        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
activemq-data-ef-amq-0      Bound    pvc-5271c1fa-5a7e-47b1-a835-2743fba1c4fe   8Gi        RWO            longhorn       44h
data-ef-postgresql-0        Bound    pvc-5b5c9b5f-bbf1-424c-9179-ca660978fc80   32Gi       RWO            longhorn       44h
datadir-mongo-mongodb-0     Bound    pvc-5e83bdf8-13f5-422e-837f-8b33c244a736   8Gi        RWO            longhorn       44h
grafana                     Bound    pvc-7d1846f9-155f-4ff1-af40-7a0ee4963175   1Gi        RWO            longhorn       44h
minio                       Bound    pvc-d2002a4c-fe4b-4ff5-88d1-54d480447d41   8Gi        RWO            longhorn       44h
redis-data-redis-master-0   Bound    pvc-c1797237-0832-4ac8-8bc7-198d8e3e62c3   4Gi        RWO            longhorn       44h
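The clone must request at least as much storage as the source PVC and should normally use the same StorageClass, so note both values before writing the manifest. A quick way to read them from the source PVC (a sketch, using datadir-mongo-mongodb-0 as the example source):
kubectl -n ef-external get pvc datadir-mongo-mongodb-0 \
  -o jsonpath='{.spec.storageClassName}{"  "}{.status.capacity.storage}{"\n"}'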
Create replica PVC
To replicate or clone a PVC, create a manifest as given below (saved as replicated-clone.yaml) and change the parameters accordingly:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-mongo-mongodb-0-clone
  namespace: ef-external
spec:
  storageClassName: longhorn
  dataSource:
    name: datadir-mongo-mongodb-0
    kind: PersistentVolumeClaim
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
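Optionally, validate the manifest before applying it; a client-side dry run (supported by recent kubectl versions) reports syntax and schema errors without creating anything in the cluster:
kubectl apply -f replicated-clone.yaml --dry-run=client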
Create the PVC clone by executing
kubectl apply -f replicated-clone.yaml
Wait for some time for the replication to complete. Depending on the dataset size of the source PVC, cloning may take a while; the new PVC will remain in the "Pending" state until it is fully cloned.
# kubectl -n ef-external get pvc
NAME                            STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
datadir-mongo-mongodb-0         Bound     pvc-dd7730b1-1436-4068-b00b-5470211efc0e   8Gi        RWO            longhorn       6m18s
datadir-mongo-mongodb-0-clone   Pending                                                                        longhorn       4s
Once cloned successfully, the PVC status will look like:
# kubectl -n ef-external get pvc
NAME                            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
datadir-mongo-mongodb-0         Bound    pvc-dd7730b1-1436-4068-b00b-5470211efc0e   8Gi        RWO            longhorn       6m46s
datadir-mongo-mongodb-0-clone   Bound    pvc-2c71c1e7-b2cc-46e3-809e-c945fbbbd70d   8Gi        RWO            longhorn       32s
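Rather than re-running kubectl get pvc manually, you can block until the clone reaches the Bound phase. A sketch, assuming kubectl v1.23 or newer (which supports --for=jsonpath):
kubectl -n ef-external wait pvc/datadir-mongo-mongodb-0-clone \
  --for=jsonpath='{.status.phase}'=Bound --timeout=30m
Adjust the timeout to the expected cloning time for the dataset size.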
Recreate the Workload
Once the clone is up and in the Bound state, create a new MongoDB deployment and configure the Helm command to use the already existing PVC:
helm upgrade --install=true --namespace=ef-external --create-namespace --set persistence.existingClaim="datadir-mongo-mongodb-0-clone" --values=./kubernetes/external/bitnami/mongodb/values-small.yaml mongo ./kubernetes/external/bitnami/mongodb/
Once the MongoDB instance comes up as ready, you can verify that its data is a complete copy of the original PVC.
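To double-check that the new instance is actually mounting the clone, inspect the rollout and the claimed PVC. A sketch, assuming the Bitnami chart names the StatefulSet mongo-mongodb (derived from the release name mongo):
# wait for the StatefulSet rollout to complete
kubectl -n ef-external rollout status statefulset/mongo-mongodb
# confirm the pod mounts the cloned claim (look for datadir-mongo-mongodb-0-clone)
kubectl -n ef-external describe pod mongo-mongodb-0 | grep ClaimName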