
NVIDIA NetQ 4.12 Release Notes

Download all 4.12 release notes as .xls

4.12.0 Release Notes

Open Issues in 4.12.0

Issue ID: 4466349 | Affects: 4.8.0-4.13.0
When you upgrade an HA cluster deployment from a version that is not part of the supported upgrade path, the upgrade might fail and the UI might not load due to expired control plane certificates on the worker nodes.
To check whether the certificates have expired, run sudo su followed by kubeadm certs check-expiration. If the output displays a date in the past, your certificates are expired. To renew the certificates, run kubeadm certs renew all on each worker node in the cluster. Next, restart the control plane components with crictl stop CONTAINER_ID, followed by systemctl restart kubelet.

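A minimal sketch of that workaround, run on each worker node; the crictl ps step and the CONTAINER_ID placeholder are assumptions added for illustration, since the issue description does not list the control plane container IDs:

    sudo su                            # run the remaining commands as root
    kubeadm certs check-expiration     # dates in the past indicate expired certificates
    kubeadm certs renew all            # renew all control plane certificates
    crictl ps                          # assumption: list running containers to find the control plane container IDs
    crictl stop CONTAINER_ID           # stop each control plane container so it restarts with the renewed certificates
    systemctl restart kubelet          # restart the kubelet
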
Issue ID: 4371014 | Affects: 4.12.0-4.13.0
In the full-screen switch card, the interface charts display incorrect values for transmit (Tx) and receive (Rx) byte rates. The actual values are slightly higher than the displayed values.

Issue ID: 4280023 | Affects: 4.12.0-4.13.0
After backing up and restoring your NetQ data, any modifications to default suppression rules will be lost.

Issue ID: 4181296 | Affects: 4.11.0-4.12.0 | Fixed in: 4.13.0
NetQ might become unresponsive when someone with a non-admin (user) role attempts to create or clone workbenches, add cards to a workbench, create validations, or run a flow analysis.

Issue ID: 4162383 | Affects: 4.12.0 | Fixed in: 4.13.0
When you upgrade a NetQ VM with devices in the inventory that have been rotten for 7 or more days, NetQ’s global search field might fail to return results for individual devices. To work around this issue, decommission the rotten devices and ensure they are running the appropriate NetQ agent version.

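A brief sketch of that workaround from the NetQ CLI; the netq decommission command and the hostname leaf01 are assumptions used for illustration:

    netq decommission leaf01           # assumption: remove the rotten device (example hostname) from the inventory
    netq show agents                   # after the device rejoins, confirm it reports the expected agent version
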
Issue ID: 4157785 | Affects: 4.12.0 | Fixed in: 4.13.0
When you add a new switch to the NetQ inventory, the NetQ UI might not display interface statistics or interface validation data for the new switch for up to one hour.
To work around this issue, adjust the poll period to 60 seconds on the new switch with the netq config add agent command service-key ports poll-period 60 command. After interface data appears in the NetQ UI, change the poll period back to the default value of 3600 seconds with the netq config add agent command service-key ports poll-period 3600 command.

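The workaround commands in sequence, exactly as described above; run them on the new switch:

    netq config add agent command service-key ports poll-period 60      # temporarily poll interface data every 60 seconds
    # ...wait until interface statistics appear in the NetQ UI...
    netq config add agent command service-key ports poll-period 3600    # restore the default one-hour poll period
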
Issue ID: 4155900 | Affects: 4.12.0 | Fixed in: 4.13.0
When a fan’s sensor state is “high”, NetQ correctly displays the count information on the sensor health card. However, when the card is expanded to the detailed view, fans with a “high” sensor state are not included among the fans with problematic states.

Issue ID: 4131550 | Affects: 4.12.0-4.13.0
When you run a topology validation, the full-screen topology validation view might not display the latest results. To work around this issue, refresh the page.

Issue ID: 4124724 | Affects: 4.12.0 | Fixed in: 4.13.0
External notifications for DPU RoCE threshold-crossing events are not supported. To work around this issue, use the UI or CLI to view DPU RoCE threshold-crossing events.

Issue ID: 4100882 | Affects: 4.12.0-4.13.0
When you attempt to export a file that is larger than 200 MB, your browser might crash or otherwise prevent you from exporting the file. To work around this issue, use filters in the UI to decrease the size of the dataset that you intend to export.

Issue ID: 3985598 | Affects: 4.11.0-4.13.0
When you configure multiple threshold-crossing events for the same TCA event ID on the same device, NetQ will only display one TCA event for each hostname per TCA event ID, even if both thresholds are crossed or status events are triggered.

Issue ID: 3800434 | Affects: 4.9.0-4.13.0
When you upgrade NetQ from a version prior to 4.9.0, What Just Happened data that was collected before the upgrade is no longer present.

Issue ID: 3772274 | Affects: 4.9.0-4.13.0
After you upgrade NetQ, snapshots taken prior to the upgrade contain unreliable data and should not be compared to snapshots taken after the upgrade. In cluster deployments, snapshots from prior NetQ versions are not visible in the UI.

Issue ID: 3769936 | Affects: 4.9.0-4.13.0
When there is a NetQ interface validation failure for admin state mismatch, the validation failure might clear unexpectedly while one side of the link is still administratively down.

Issue ID: 3613811 | Affects: 4.8.0-4.13.0
LCM operations using in-band management are unsupported on switches that use eth0 connected to an out-of-band network. To work around this issue, configure NetQ to use out-of-band management in the mgmt VRF on Cumulus Linux switches when interface eth0 is in use.

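One possible sketch of that workaround on a Cumulus Linux switch; the netq config add agent vrf command shown here is an assumption based on the standard NetQ agent CLI and is not spelled out in the issue description:

    netq config add agent vrf mgmt     # assumption: have the NetQ agent communicate over the mgmt (out-of-band) VRF
    netq config restart agent          # restart the agent so the change takes effect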

Fixed Issues in 4.12.0

Issue ID: 4001098 | Affects: 4.11.0
When you use NetQ LCM to upgrade a Cumulus Linux switch from version 5.9 to 5.10 and the upgrade fails, NetQ rolls back to version 5.9 and reverts the cumulus user password to the default password. After rollback, reconfigure the password with the nv set system aaa user cumulus password <password> command.

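A short NVUE sketch of that recovery step on the rolled-back switch; <password> is a placeholder, and the nv config apply step is an assumption (the usual way to commit NVUE changes) rather than part of the issue description:

    nv set system aaa user cumulus password <password>   # set a new password for the cumulus user
    nv config apply                                       # assumption: apply the pending NVUE change
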
Issue ID: 4000939 | Affects: 4.11.0
When you upgrade a NetQ VM with devices in the inventory that have been rotten for 7 or more days, NetQ inventory cards in the UI and table output might show inconsistent results and might not display the rotten devices. To work around this issue, decommission the rotten device and ensure it’s running the appropriate NetQ agent version.

Issue ID: 3995266 | Affects: 4.11.0
When you use NetQ LCM to upgrade a Cumulus Linux switch with NTP configured using NVUE in a VRF other than mgmt, the upgrade fails to complete. To work around this issue, first unset the NTP configuration with the nv unset service ntp and nv config apply commands, then reconfigure NTP after the upgrade completes.

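Sketched below with the NVUE commands from the workaround; the reconfiguration line at the end is an assumption with placeholder values (VRF default, server pool.ntp.org), shown only to illustrate restoring NTP after the upgrade:

    nv unset service ntp                              # remove the NVUE NTP configuration before the upgrade
    nv config apply                                   # apply the change
    # ...run the NetQ LCM upgrade...
    nv set service ntp default server pool.ntp.org    # assumption: example NTP reconfiguration after the upgrade
    nv config apply
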
Issue ID: 3981655 | Affects: 4.11.0
When you upgrade your NetQ VM, some devices in the NetQ inventory might appear as rotten. To work around this issue, restart the NetQ agents on the affected devices or upgrade them to the latest agent version after the NetQ VM upgrade completes.

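For the agent restart, a minimal sketch; the netq config restart agent command is an assumption (the standard way to restart the NetQ agent on a switch), since the issue description only says to restart the agents:

    netq config restart agent          # run on each affected device to restart the NetQ agent
    netq show agents                   # from the NetQ server, confirm the devices report as Fresh
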
Issue ID: 3858210 | Affects: 4.10.0-4.11.0
When you upgrade your NetQ VM, DPUs in the inventory are not shown. To work around this issue, restart the DTS container on the DPUs in your network.

Issue ID: 3854467 | Affects: 4.10.0-4.11.0
When a single NetQ cluster VM is offline, the NetQ kafka-connect pods are brought down on the other cluster nodes, preventing NetQ from collecting data. To work around this issue, bring all cluster nodes back into service.