# Before You Install
This overview is designed to help you understand the various NetQ deployment and installation options.
## Installation Overview
Consider the following deployment options and requirements before you install the NetQ system.
| Single Server | Cluster | Scale Cluster |
|---|---|---|
| On-premises only | On-premises only | On-premises only |
| Network size: small | Network size: medium | Network size: large |
| KVM or VMware hypervisor | KVM or VMware hypervisor | KVM or VMware hypervisor |
| No high-availability option | High availability | High availability |
| System requirements (on-premises): 16 virtual CPUs, 64GB RAM, 500GB SSD disk | System requirements (on-premises, per node): 16 virtual CPUs, 64GB RAM, 500GB SSD disk | System requirements (on-premises, per node): 48 virtual CPUs, 512GB RAM, 3.2TB SSD disk |
| Not supported: | Not supported: | Not supported: |
*When switches are configured with both OpenTelemetry (OTLP) and the NetQ agent, switch support per deployment model is reduced by half.
## Server Arrangement: Single or Cluster
In all deployment models, NetQ agents reside on the switches and hosts they monitor in your network.
### Single Server
A standalone server is easier to set up, configure, and manage, but limits your ability to scale your network monitoring. Deploying multiple servers allows you to limit potential downtime and increase availability by having more than one server that can run the software and store the data. Select the standalone, single-server arrangement for smaller, simpler deployments.
### Cluster of Servers
NVIDIA offers two types of cluster deployments: cluster and scale cluster. Both deployments are available on-premises and offer high availability to provide redundancy in case of node failure.
The cluster implementation comprises three servers: one master node and two worker nodes. NetQ supports high availability using a virtual IP address: even if the master node fails, NetQ services remain operational. This deployment supports networks with up to 100 switches.
The scale cluster deployment supports large networks and lets you adjust NetQ's monitoring capacity by adding nodes to your cluster as your network expands. For example, you can deploy a three-node scale cluster that accommodates up to 1,000 switches, then add nodes to the extensible framework to support a greater number of switches as your network grows. NVIDIA recommends this option for networks comprising 100 or more switches with 100 or more interfaces per switch.
In both cluster deployments, a majority of nodes must be operational for NetQ to function. For example, a three-node cluster can tolerate a one-node failure, but not a two-node failure. Similarly, a five-node cluster can tolerate a two-node failure, but not a three-node failure. If a majority of the Kubernetes control plane nodes fail, NetQ no longer functions. For more information, refer to the etcd documentation.
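The failure tolerance described above follows standard majority-quorum arithmetic. This short sketch (not part of NetQ; for illustration only) shows how many node failures a cluster of a given size can survive:

```python
# Majority-quorum arithmetic: a cluster of n nodes remains functional
# while a majority (n // 2 + 1) of nodes is operational, so it can
# tolerate at most (n - 1) // 2 simultaneous node failures.

def tolerated_failures(n_nodes: int) -> int:
    """Maximum node failures a majority-quorum cluster survives."""
    return (n_nodes - 1) // 2

for n in (3, 5):
    print(f"{n}-node cluster tolerates {tolerated_failures(n)} failure(s)")
    # 3-node cluster tolerates 1 failure(s)
    # 5-node cluster tolerates 2 failure(s)
```

This is why a three-node cluster survives one failure but not two, and a five-node cluster survives two failures but not three.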
Large networks can generate a large amount of data. For large networks, NVIDIA does not recommend using the NetQ CLI; additionally, tabular data in the UI is limited to 10,000 rows. If you need to review a large amount of data, NVIDIA recommends exporting the tabular data to a CSV or JSON file and analyzing it in a spreadsheet program.
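As an alternative to a spreadsheet, an exported CSV file can also be analyzed with a short script. The file name and column names below are illustrative assumptions, not a documented NetQ export schema:

```python
# Hypothetical sketch: summarizing one column of a NetQ tabular CSV
# export. The file name and column names used in the example call are
# assumptions for illustration, not a documented NetQ export format.
import csv
from collections import Counter

def summarize(path: str, column: str) -> Counter:
    """Count the occurrences of each value in one column of a CSV file."""
    with open(path, newline="") as f:
        return Counter(row[column] for row in csv.DictReader(f))

# Example (hypothetical file and column):
#   summarize("netq-interfaces-export.csv", "state")
```

This approach avoids the UI's 10,000-row display limit, since the script reads the full exported file.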
## Base Command Manager
NetQ is also available through NVIDIA’s cluster management software, Base Command Manager. Refer to the Base Command Manager administrator and containerization manuals for instructions on how to launch and configure NetQ using Base Command Manager.
## Next Steps
After you’ve decided on your deployment type, you’re ready to install NetQ.