WARNING: This guide is for reference only; it has not been verified or tested and is currently in DRAFT mode. Any actions performed using this guide may result in data loss.
Backups using Velero require at least one available storage location where backups can be saved. The options are listed below; at least one of them must be in place before proceeding with the backup solution.
Backup Type: Minio as S3 Compatible Cloud Storage
Deployment Aspects:
- Any Kubernetes deployment with persistent storage enabled
- Helm for Kubernetes-based deployments
- Alternatively, docker-compose-based deployments can be used if Kubernetes is not possible at the DR site, with a Docker volume used to persist backups
- MinIO can also be deployed without Kubernetes or docker-compose, but this requires advanced expertise to maintain
- All MinIO instances should be deployed at the DR sites; deploying them at the same location as the primary deployment is not recommended
Distributed: Yes
DR: Yes

Backup Type: DigitalOcean S3 Object Storage
Deployment Aspects:
- A valid account with billing enabled
- The backup endpoint must be reachable from the deployment
Distributed: Yes
DR: N/A

Backup Type: hostPath
Deployment Aspects:
- Native to Kubernetes
- Backups will be available on the host file system
- The customer should regularly copy backups from this file system to a safer location
Distributed: No
DR: No
Deploy Velero
Velero is the application used to manage all backups and should be deployed on the cluster where the CIM solution is deployed.
For HA-based deployments, Velero should be deployed on any of the control-plane nodes.
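A minimal sketch of installing the Velero CLI on a control-plane node, assuming a Linux amd64 host; the version shown is a placeholder and should match the Velero server version you intend to deploy.
CODE
# download and install the Velero CLI (version is a placeholder)
VELERO_VERSION=v1.13.2
curl -LO https://github.com/vmware-tanzu/velero/releases/download/${VELERO_VERSION}/velero-${VELERO_VERSION}-linux-amd64.tar.gz
tar -xzf velero-${VELERO_VERSION}-linux-amd64.tar.gz
sudo mv velero-${VELERO_VERSION}-linux-amd64/velero /usr/local/bin/
velero version --client-only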
Option # 1 Using hostPath as Backup Location
To use a hostPath on the host as the backup location, follow the steps below. This section expects that you have completed the Velero deployment in step 1.
hostPath-based backups should be considered only for single-node deployments.
Create a placeholder Velero entry so that the local-path plugin can be added and initialized.
Make sure the backup location has been added and is available by checking the logs:
CODE
kubectl logs deployment/velero -n velero
Get the backup locations where backups will be saved:
CODE
velero backup-location get
Option # 2 Minio as S3 Compatible Storage
If S3-compatible cloud storage is already available, please use the appropriate Velero plugins for it. In the following section, we will use on-site S3 storage provided by MinIO. Further details are available here.
We will deploy MinIO locally as S3-compatible storage and direct all backups via Velero to it. MinIO can be deployed in different regions to build a complete, fault-tolerant DR backup/restore solution.
Kubernetes-based deployment
If MinIO is to be deployed on a Kubernetes cluster, use the Helm command below to deploy it.
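A minimal sketch of such a Helm deployment, assuming the upstream minio/minio chart; the namespace, credentials, and storage size below are placeholders and should be adjusted to your environment.
CODE
# add the MinIO chart repository and deploy MinIO (all values shown are placeholders)
helm repo add minio https://charts.min.io/
helm repo update
helm install minio minio/minio \
  --namespace minio --create-namespace \
  --set rootUser=<minio-access-key> \
  --set rootPassword=<minio-secret-key> \
  --set persistence.size=50Gi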
Wait for the confirmation message from the above command to confirm that the deployment on your production cluster has finished.
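Once MinIO is running, Velero needs a backup storage location that points at it. The sketch below follows the common Velero AWS-plugin pattern for MinIO; the bucket name, credentials, plugin version, and MinIO service URL are placeholders, and the bucket (velero in this sketch) must already exist in MinIO.
CODE
# credentials file with the MinIO access/secret keys (values are placeholders)
cat <<EOF > ./credentials-velero
[default]
aws_access_key_id = <minio-access-key>
aws_secret_access_key = <minio-secret-key>
EOF

# install Velero with MinIO as the S3-compatible backup location
velero install \
  --provider aws \
  --plugins velero/velero-plugin-for-aws:v1.9.0 \
  --bucket velero \
  --secret-file ./credentials-velero \
  --use-volume-snapshots=false \
  --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio.minio.svc:9000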
Verify the backup-location status:
CODE
velero get backup-locations
CODE
~# velero get backup-locations
NAME      PROVIDER   BUCKET/PREFIX   PHASE       LAST VALIDATED                  ACCESS MODE   DEFAULT
default   aws        velero          Available   2024-07-16 10:38:40 +0000 UTC   ReadWrite     true
Add the DR1 site as a backup location for Velero (Optional)
Backups are saved to the default backup location unless you specify a specific location; the DR1 site can be added as an additional backup location.
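A minimal sketch of adding an additional backup location with velero backup-location create; the location name, bucket, and MinIO endpoint are placeholders (the sample output further down uses the names primary and secondary).
CODE
# register the DR1 MinIO endpoint as an additional backup location (name and endpoint are placeholders)
velero backup-location create primary \
  --provider aws \
  --bucket velero \
  --config region=minio,s3ForcePathStyle="true",s3Url=http://<dr1-minio-endpoint>:9000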
You can verify the status of the backup locations using:
CODE
velero get backup-locations
CODE
# velero get backup-locations
NAME        PROVIDER   BUCKET/PREFIX   PHASE       LAST VALIDATED                  ACCESS MODE   DEFAULT
default     aws        velero          Available   2024-07-16 10:47:40 +0000 UTC   ReadWrite     true
primary     aws        velero          Available   2024-07-16 10:47:40 +0000 UTC   ReadWrite
secondary   aws        velero          Available   2024-07-16 10:47:51 +0000 UTC   ReadWrite
Option # 3 Using DigitalOcean Spaces Object Storage
Create a bucket by navigating to “Spaces Object Storage” and clicking “Create Spaces Bucket”, then select an appropriate region and bucket name.
Things to note:
1. Region (in short form)
2. Bucket name
Create an Access Key and Secret Key
In the left panel, click the “API” section and then click “New Spaces Key” under the “Spaces Keys” tab.
Give the new key a name.
Make note of both the Access Key and Secret Key that are displayed.
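With the bucket, region, Access Key, and Secret Key at hand, Velero can be installed against the Spaces bucket. The sketch below is an assumption-based example: the bucket name, region, plugin version, and credentials file are placeholders to be replaced with your own values.
CODE
# credentials file containing the Spaces Access Key and Secret Key (placeholders)
cat <<EOF > ./credentials-velero
[default]
aws_access_key_id = <SPACES_ACCESS_KEY>
aws_secret_access_key = <SPACES_SECRET_KEY>
EOF

# install Velero pointing at the Spaces bucket (bucket, region, and plugin version are placeholders)
velero install \
  --provider aws \
  --plugins velero/velero-plugin-for-aws:v1.9.0 \
  --bucket <bucket-name> \
  --secret-file ./credentials-velero \
  --use-volume-snapshots=false \
  --backup-location-config region=<region>,s3Url=https://<region>.digitaloceanspaces.com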
Wait for the confirmation message from the above command to confirm that the deployment on your production cluster has finished.
Verify the backup-location status:
CODE
velero get backup-locations
CODE
~# velero get backup-locations
NAME      PROVIDER   BUCKET/PREFIX   PHASE       LAST VALIDATED                  ACCESS MODE   DEFAULT
default   aws        efvelero        Available   2024-07-16 10:38:40 +0000 UTC   ReadWrite     true
Install Velero on the target cluster using the same procedure as performed on the source cluster.
Verify all the backups
Get the list of backups to verify that all the backups from the source cluster are available:
CODE
velero get backups
All the backups taken on the source cluster will be visible on the target cluster.
Sample output of velero backup-locations and backups
CODE
# velero get backup-locations   # confirm the backup location is active and available
NAME      PROVIDER   BUCKET/PREFIX   PHASE       LAST VALIDATED                  ACCESS MODE   DEFAULT
default   aws        efvelero        Available   2024-07-31 12:10:28 +0000 UTC   ReadWrite     true
#
# verify the backups are available on the target cluster
# velero get backups
NAME        STATUS      ERRORS   WARNINGS   CREATED                         EXPIRES   STORAGE LOCATION   SELECTOR
mongo-sts   Completed   0        0          2024-07-31 10:25:24 +0000 UTC   29d       default            app.kubernetes.io/name=mongodb
pod-mongo   Completed   0        0          2024-07-31 09:35:30 +0000 UTC   29d       default            app.kubernetes.io/name=mongodb
#
Restore (MongoDB, MinIO and PostgreSQL)
The procedure below is shown only for MongoDB; it also applies to MinIO and PostgreSQL unless explicitly mentioned otherwise.
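If the restore has not been triggered yet, it is created with velero restore create on the target cluster. A minimal sketch, assuming the backup name mongo-sts from the listing above; the restore name is a placeholder.
CODE
# create a restore from the mongo-sts backup (restore name is a placeholder)
velero restore create mongo-sts-restore --from-backup mongo-sts

# watch the restore status until it reports Completed
velero restore get
Once the restore has completed, the restored data can be verified from inside the MongoDB client pod: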
CODE
# kubectl -n ef-external exec -it mongo-mongodb-client -- bash
# connect to the MongoDB pod
I have no name!@mongo-mongodb-client:/$ mongosh --host mongo-mongosh.mongo-temp.svc.cluster.local
Current Mongosh Log ID: 66aa242e88fa0b646052e35a
Connecting to: mongodb://mongo-mongosh:27017/?directConnection=true&appName=mongosh+1.6.0
MongoNetworkError: getaddrinfo ENOTFOUND mongo-mongosh
I have no name!@mongo-mongodb-client:/$ mongosh --host mongo-mongodb
Current Mongosh Log ID: 66aa2438bf862f8baeab12d9
Connecting to: mongodb://mongo-mongodb:27017/?directConnection=true&appName=mongosh+1.6.0
Using MongoDB: 6.0.2
Using Mongosh: 1.6.0
For mongosh info see: https://docs.mongodb.com/mongodb-shell/
------
The server generated these startup warnings when booting
2024-07-31T11:35:22.589+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem
2024-07-31T11:35:24.858+00:00: You are running on a NUMA machine. We suggest launching mongod like this to avoid performance problems: numactl --interleave=all mongod [other options]
2024-07-31T11:35:24.859+00:00: vm.max_map_count is too low
------
------
Enable MongoDB's free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).
The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.
To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
------
test> show dbs;
admin 40.00 KiB
config 72.00 KiB
local 72.00 KiB
velero 40.00 KiB
test> use velero;
switched to db velero
velero> show collections;
backup
velero> db.backup.find();
[
{ _id: ObjectId("66a8f5ee48f503df96f8d44e"), Color: 'Yellow' },
{ _id: ObjectId("66a8f5f348f503df96f8d44f"), Color: 'green' },
{ _id: ObjectId("66a8f5f848f503df96f8d450"), Color: 'blue' }
]
velero> exit
I have no name!@mongo-mongodb-client:/$ exit
exit
#
Final steps
Once the pod is in the Running state, with the PVC bound to the correct PV, you can delete the pod while keeping the PVC/PV pair intact.
CODE
kubectl -n ef-external delete pod mongo-mongodb-0
kubectl -n ef-external get pvc
Sample output
CODE
# kubectl -n mongo-temp delete pod mongo-mongodb-0
pod "mongo-mongodb-0" deleted
# kubectl -n ef-external get pvc
NAME                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
datadir-mongo-mongodb-0   Bound    pvc-cb0f09ae-6faf-4030-923a-58bdc1e2f773   8Gi        RWO            openebs-hostpath   118m
#
Deploy MongoDB on the target cluster with its values.yaml file updated to use the existing PVC:
CODE
persistence:
  ## @param persistence.enabled Enable MongoDB(®) data persistence using PVC
  ##
  enabled: true
  ## @param persistence.medium Provide a medium for `emptyDir` volumes.
  ## Requires persistence.enabled: false
  ##
  medium: ""
  ## @param persistence.existingClaim Provide an existing `PersistentVolumeClaim` (only when `architecture=standalone`)
  ## Requires persistence.enabled: true
  ## If defined, PVC must be created manually before volume will be bound
  ## Ignored when mongodb.architecture=replicaset
  ##
  existingClaim: "datadir-mongo-mongodb-0"
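After updating values.yaml, redeploy the chart. A minimal sketch, assuming the Bitnami MongoDB chart and a release name of mongo; both are assumptions, so use the chart and release name of your own deployment.
CODE
# redeploy MongoDB with the updated values.yaml (chart and release name are assumptions)
helm upgrade --install mongo bitnami/mongodb -n ef-external -f values.yaml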
Create Backup
You can create a backup scoped to any of the following (see the examples after this list):
- cluster
- namespace
- deployment
- pod
- label
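The sketches below illustrate some of these scopes; the backup names and the label selector are placeholders chosen for this example.
CODE
# full-cluster backup (backup name is a placeholder)
velero backup create full-cluster-backup

# backup limited to resources matching a label selector, e.g. a single application's pods
velero backup create mongo-label-backup --selector app.kubernetes.io/name=mongodb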
Namespace Level Backup
Configure Velero to take a backup of the ef-external namespace, as shown below. This backup always goes to the default backup location.
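A minimal sketch of a namespace-level backup; the backup name ef-external-backup is a placeholder.
CODE
# back up everything in the ef-external namespace; it is stored in the default backup location
velero backup create ef-external-backup --include-namespaces ef-external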