Back Up and Restore NetQ
The following sections describe how to back up and restore your NetQ data and VMs for on-premises deployments. Most NetQ data for cloud deployments is backed up automatically, but the backup and restore process is required for cloud VMs to retain site configuration key data.
- You must run backup and restore scripts with sudo privileges.
- When you upgrade from NetQ 4.11 to 4.13, any pre-existing validation data will be lost.
Back Up Your NetQ Data
Follow the steps below to back up your NetQ data. The process is the same for on-premises and cloud deployments, except for the final step, which deactivates the premises and applies to cloud deployments only:
- Retrieve the vm-backuprestore.sh script:
a. Log in to the NVIDIA Application Hub.
b. Select NVIDIA Licensing Portal.
c. Select Software Downloads from the menu.
d. In the search field, enter NetQ.
e. Locate the latest NetQ Upgrade Backup Restore file and select Download.
f. If prompted, read the license agreement and proceed with the download.
- Copy the vm-backuprestore.sh script to your NetQ server in standalone deployments, or to each node in cluster deployments:
username@hostname:~$ scp ./vm-backuprestore.sh cumulus@10.10.10.10:/home/cumulus/
cumulus@10.10.10.10's password:
vm-backuprestore.sh
Then copy the vm-backuprestore.sh script to the /usr/sbin/ directory on your NetQ servers:
cumulus@netq-server:~$ sudo cp ./vm-backuprestore.sh /usr/sbin/
- Log in to your NetQ server and set the script to executable. Do this for each node in your deployment:
cumulus@netq-appliance:/home/cumulus# chmod +x /usr/sbin/vm-backuprestore.sh
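As an optional sanity check on each node, you can list the script and confirm that the mode string now includes execute (x) bits:
cumulus@netq-appliance:~$ ls -l /usr/sbin/vm-backuprestore.sh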
- On your NetQ server (or the master node in cluster deployments), run the /usr/sbin/vm-backuprestore.sh --backup command. This command backs up each node in your deployment and combines the data into a single .tar file. Take note of the config key in the output of this command; you will enter it when you restore your data:
cumulus@netq-appliance:~$ sudo /usr/sbin/vm-backuprestore.sh --backup
[sudo] password for cumulus:
Fri Jan 17 05:44:13 2025 - Please find detailed logs at: /var/log/vm-backuprestore.log
Stopping pods...
Fri Jan 17 05:44:13 2025 - Stopping pods in namespace default
Fri Jan 17 05:44:19 2025 - Scaling all pods to replica 0
Fri Jan 17 05:44:38 2025 - Waiting for all pods to go down in namespace: default
Fri Jan 17 05:45:39 2025 - Stopping pods in namespace ingress-nginx
Fri Jan 17 05:45:43 2025 - Scaling all pods to replica 0
Fri Jan 17 05:45:57 2025 - Waiting for all pods to go down in namespace: ingress-nginx
Fri Jan 17 05:45:57 2025 - Stopping pods in namespace monitoring
Fri Jan 17 05:46:01 2025 - Scaling all pods to replica 0
Fri Jan 17 05:46:14 2025 - Waiting for all pods to go down in namespace: monitoring
Fri Jan 17 05:46:14 2025 - All pods are down
Fetching master and worker IPs...
Running backup on all nodes...
Running backup on master node (10.188.46.221)...
Fri Jan 17 05:46:14 2025 - Starting backup of data, the backup might take time based on the size of the data
Fri Jan 17 05:46:15 2025 - Creating backup tar /opt/backuprestore/backup-netq-cluster-onprem-4.12.0-2025-01-17_05_46_15_UTC-a1ad8571-2184-42e2-b9a3-0fe7be8e1043.tar
Backup is successful
Running backup on worker node (10.188.46.193)...
Fri Jan 17 05:46:19 2025 - Please find detailed logs at: /var/log/vm-backuprestore.log
Fri Jan 17 05:46:19 2025 - Starting backup of data, the backup might take time based on the size of the data
Fri Jan 17 05:46:19 2025 - Creating backup tar /opt/backuprestore/backup-netq-cluster-onprem-4.12.0-2025-01-17_05_46_19_UTC-0309b675-2359-48e9-83d8-cfeac9585ba2.tar
Backup is successful
Running backup on worker node (10.188.44.55)...
Fri Jan 17 05:46:44 2025 - Please find detailed logs at: /var/log/vm-backuprestore.log
Fri Jan 17 05:46:44 2025 - Starting backup of data, the backup might take time based on the size of the data
Fri Jan 17 05:46:45 2025 - Creating backup tar /opt/backuprestore/backup-netq-cluster-onprem-4.12.0-2025-01-17_05_46_45_UTC-ecb3b55a-f660-42ee-8dc6-c16b22b6584e.tar
Backup is successful
Combining tars from all nodes...
Adding the latest master tar...
Fetching the latest tar from worker node (10.188.46.193)...
Fetching the latest tar from worker node (10.188.44.55)...
Creating combined tar at /opt/backuprestore/combined_backup_20250117054718.tar...
Cleaning up temporary files...
Combined tar created at /opt/backuprestore/combined_backup_20250117054718.tar
The config key is EhVuZXRxLWVuZHBvaW50LWdhdGV3YXkYsagDIixWMnkyRVRwbkxVVXBTVDFsSXUzM3NzRlNkMFE5S0Y3OFlVRVdBWUU5K244PQ==, alternately the config key is available in file /tmp/config-key
Starting pods on master node...
Fri Jan 17 05:48:25 2025 - Scaling all pods to replica 1
Fri Jan 17 05:50:01 2025 - Waiting for all pods to come up
Fri Jan 17 05:58:14 2025 - All pods are up
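As noted in the output above, the config key is also written to /tmp/config-key on the master node. If you did not record the key, you can read it back before you redeploy the VM (keep in mind that /tmp is not preserved across reboots):
cumulus@netq-appliance:~$ cat /tmp/config-key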
- Copy the newly created tarball off the server so that you can restore the data on your new VM:
cumulus@netq-appliance:~$ sudo scp /opt/backuprestore/combined_backup_20250117054718.tar <username>@<destination>:<path>
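Optionally, verify that the tarball arrived intact by comparing checksums; sha256sum is part of standard coreutils, and this example reuses the combined tarball path from the output above:
cumulus@netq-appliance:~$ sudo sha256sum /opt/backuprestore/combined_backup_20250117054718.tar
Run the same command against the copied file on the destination host and confirm that both hashes match.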
- Cloud deployments only: on your NetQ server (or the master node in cluster deployments), run the netq bootstrap reset purge-db command to deactivate the current premises. Then use the netq config show cli premises command to verify that the status of the premises is inactive, as shown in the sketch below.
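A minimal sketch of this step, assuming the same appliance prompt as the earlier examples; any confirmation prompts and command output are omitted here because they vary by release:
cumulus@netq-appliance:~$ netq bootstrap reset purge-db
cumulus@netq-appliance:~$ netq config show cli premises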
Restore Your NetQ Data
To restore your NetQ data, perform a new NetQ VM installation and follow the steps to restore your data when you run the netq install command.
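The exact netq install arguments depend on your deployment type and NetQ version; as an illustration only, a standalone on-premises installation takes a form like the following, where the interface name and bundle path are placeholders you must adjust for your environment. Per the paragraph above, the restore steps occur during this installation:
cumulus@netq-appliance:~$ netq install standalone full interface eth0 bundle /mnt/installables/NetQ-4.13.0.tgz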