NVIDIA® NetQ™ is a network operations tool set that provides visibility into your overlay and underlay networks, enabling troubleshooting in real-time. NetQ delivers data and statistics about the health of your data center—from the container, virtual machine, or host, all the way to the switch and port. NetQ correlates configuration and operational status, and tracks state changes while simplifying management for the entire Linux-based data center. With NetQ, network operations change from a manual, reactive, node-by-node approach to an automated, informed, and agile one. Visit Network Operations with NetQ to learn more.
This user guide provides documentation for network administrators who are responsible for deploying, configuring, monitoring, and troubleshooting the network in their data center or campus environment.
For a list of the new features in this release, see What's New. For bug fixes and known issues, refer to the release notes.
What's New
This page summarizes new features and improvements for the NetQ 4.12 release. For a complete list of open and fixed issues, see the release notes.
What’s New in NetQ 4.12
NetQ 4.12.0 includes the following new features and improvements:
The NetQ 4.12 server is compatible with the NetQ 4.12 agent. You can install NetQ agents on switches and servers running:
Cumulus Linux 5.0.0 or later (Spectrum switches)
Ubuntu 22.04, 20.04
Release Considerations
NetQ 4.12 is not backward compatible with previous NetQ agent versions. You must install NetQ agent version 4.12 after upgrading your NetQ server to 4.12.
When you upgrade to NetQ 4.12, any pre-existing event and validation data will be lost.
If you upgrade a NetQ server with scheduled OSPF validations, they might still appear in the UI but will display results from previous validations.
NetQ Overview
This section describes NetQ components and deployment models. It also outlines how to get started with the NetQ user interface and command line.
NetQ Basics
This section provides an overview of the NetQ hardware, software, and deployment models.
NetQ Components
NetQ contains the following applications and key components:
Telemetry data collection and aggregation via:
NetQ switch agents
NetQ host agents
Database
Data streaming
Network services
User interfaces
NetQ Agents
NetQ Agents are software installed and running on every monitored node in the network—including Cumulus® Linux® switches, Linux bare metal hosts, and virtual machines. The NetQ Agents push network data regularly, and event information immediately, to the NetQ Platform.
Switch Agents
The NetQ Agents running on Cumulus Linux switches gather the following network data via Netlink:
Interfaces
IP addresses (v4 and v6)
IP routes (v4 and v6)
IP nexthops (v4 and v6)
Links
Bridge FDB (MAC address table)
ARP Entries/Neighbors (IPv4 and IPv6)
The agents also gather data for the following protocols:
Bridging protocols: LLDP, STP, MLAG
Routing protocols: BGP
Network virtualization: EVPN, VXLAN
Host Agents
The NetQ Agents running on hosts gather the same information as the switch agents, plus the following network data:
Network IP and MAC addresses
Container IP and MAC addresses
The NetQ Agent obtains container information by listening to the Kubernetes orchestration tool.
NetQ Core
The NetQ core performs the data collection, storage, and processing for delivery to various user interfaces. It consists of a collection of scalable components running entirely within a single server. The NetQ software queries this server, rather than individual devices, enabling greater system scalability.
Data Aggregation
The data aggregation component collects data coming from all of the NetQ Agents. It then filters, compresses, and forwards the data to the streaming component. The server monitors for missing messages and also monitors the NetQ Agents themselves, sending notifications about events when appropriate. In addition to the telemetry data collected from the NetQ Agents, the aggregation component collects information from the switches and hosts, such as vendor, model, version, and basic operational state.
Data Stores
NetQ uses two types of data stores. The first stores the raw data, data aggregations, and discrete events needed for quick response to data requests. The second stores data based on correlations, transformations, and raw-data processing.
Real-time Streaming
The streaming component processes the incoming raw data from the aggregation server in real time. It reads the metrics and stores them as a time series, and triggers alarms based on anomaly detection, thresholds, and events.
Network Services
The network services component monitors the operation of protocols and services, both individually and network-wide, and stores status details.
User Interfaces
NetQ data is available through several interfaces:
NetQ CLI (command-line interface)
NetQ GUI (graphical user interface)
NetQ RESTful API (representational state transfer application programming interface)
The CLI and UI query the RESTful API to present data. NetQ can integrate with event notification applications and third-party analytics tools.
Data Center Network Deployments
This section describes three common data center deployment types for network management:
Out-of-band management (recommended)
In-band management
Server cluster with high availability
NetQ operates over layer 3, and can operate in both layer-2 bridged and layer-3 routed environments. NVIDIA recommends a layer-3 routed environment whenever possible.
Out-of-band Management Deployment
NVIDIA recommends deploying NetQ on an out-of-band (OOB) management network to separate network management traffic from standard network data traffic.
The physical network hardware includes:
Spine switches: aggregate and distribute data; also known as an aggregation switch, end-of-row (EOR) switch or distribution switch
Leaf switches: where servers connect to the network; also known as a top-of-rack (TOR) or access switch
Server hosts: host applications and data served to the user through the network
Exit switch: where connections to outside the data center occur, also known as a border leaf or service leaf
Edge server (optional): where the firewall is the demarcation point; peering can occur through the exit switch layer to Internet (PE) devices
Internet device: where provider edge (PE) equipment communicates at layer 3 with the network fabric
The following figure shows an example of a Clos network fabric design for a data center using an OOB management network overlaid on top, where NetQ resides. The physical connections are displayed as gray lines, connecting Spine01 to four leaf and two exit devices; Spine02 is connected to the same leaf and exit devices. Leaf01 and Leaf02 connect to each other over a peerlink and act as an MLAG pair for Server01 and Server02, as do Leaf03 and Leaf04 for Server03 and Server04. The edge connects to both exit devices, and the Internet node connects to Exit01.
The physical management hardware includes:
OOB management switch: aggregation switch that connects to all network devices through communications with the NetQ Agent on each node
NetQ Platform: hosts the telemetry software, database, and user interfaces
These switches connect to each physical network device through a virtual network overlay, as shown below.
In-band Management Deployment
While not recommended, you can implement NetQ within your data network. In this scenario, there is no overlay and all traffic to and from the NetQ Agents and the NetQ Platform traverses the data paths along with your regular network traffic. The roles of the switches in the Clos network are the same, except that the NetQ Platform performs the aggregation function that the OOB management switch performed. If your network goes down, you might not have access to the NetQ Platform for troubleshooting. Certain features—such as lifecycle management—require additional configurations for in-band deployments.
Server Cluster Deployments
NetQ supports a server cluster deployment for users who prefer a solution with increased scalability and availability; the data collected by NetQ remains available through additional servers should one fail. In this configuration, three NetQ servers are deployed—one master and two workers (or replicas). NetQ Agents send data to all three servers so that if the master server fails, one of the replicas automatically becomes the master and continues to store the telemetry data. Both on-premises and cloud (OPTA) cluster deployments support high availability through a virtual IP address that is allocated in the same subnet as the master and worker nodes. This allows for UI access in the case of a master node failure.
The following example is based on an OOB-management configuration, and modified to support higher scalability for NetQ.
NetQ Operation
In either in-band or out-of-band deployments, NetQ offers networkwide configuration and device management, proactive monitoring capabilities, and network performance diagnostics.
The NetQ Agent
From a software perspective, a network switch has software associated with the hardware platform, the operating system, and communications. For data centers, the software on a network switch is similar to the following diagram:
The NetQ Agent interacts with the various components and software on switches and hosts and provides the gathered information to the NetQ Platform. You can view the data using the NetQ CLI or UI.
The NetQ Agent polls the user space applications for information about the performance of the various routing protocols and services that are running on the switch. Cumulus Linux supports BGP and OSPF routing protocols as well as static addressing through FRRouting (FRR). Cumulus Linux also supports LLDP and MSTP among other protocols, and a variety of services such as systemd and sensors.
For hosts, the NetQ Agent also polls for performance of containers managed with Kubernetes. This information is used to calculate the network’s health and check if the network is configured and operating correctly.
The NetQ Agent interacts with the Netlink communications between the Linux kernel and the user space, listening for changes to the network state, configurations, routes, and MAC addresses. NetQ sends notifications about these changes so that network operators and administrators can respond quickly when changes are not expected or favorable.
The NetQ Agent also interacts with the hardware platform to obtain performance information about various physical components, such as fans and power supplies, on the switch. The agent measures operational states and temperatures, along with cabling information to allow for proactive maintenance.
The NetQ Platform
After the collected data is sent to and stored in the NetQ database, you can:
Validate configurations and identify misconfigurations in your current network or in a previous deployment.
Monitor communication paths throughout the network.
Notify users of network issues.
Anticipate the impact of connectivity changes.
Validate Configurations
You can monitor and validate your network's health in the UI or through two sets of commands: netq check and netq show. They extract information from the network service component and the event service. The network service component continually validates the connectivity and configuration of the devices and protocols running on the network. The netq check and netq show commands display the status of the various components and services network-wide and across the complete software stack. See the command line reference for an exhaustive list of netq check and netq show commands.
Monitor Communication Paths
The trace engine validates the available communication paths between two network devices. The corresponding netq trace command enables you to view all of the paths between the two devices and if there are any breaks in the paths. For more information about trace requests, refer to Verify Network Connectivity.
View Historical State and Configuration Info
You can run all check, show, and trace commands for current and past statuses. To investigate past issues, use the netq check command and look for configuration or operational issues around the time that NetQ timestamped event messages. Then use the netq show commands to view information about device configurations. You can also use the netq trace command to see what the connectivity looked like between any problematic nodes at a particular time.
For example, the following diagram shows issues on spine01, leaf04, and server03:
An administrator can run the following commands from any switch in the network to determine the cause of a BGP error on spine01:
cumulus@switch:~$ netq check bgp around 30m
Total Nodes: 25, Failed Nodes: 3, Total Sessions: 220 , Failed Sessions: 24,
Hostname VRF Peer Name Peer Hostname Reason Last Changed
----------------- --------------- ----------------- ----------------- --------------------------------------------- -------------------------
exit-1 DataVrf1080 swp6.2 firewall-1 BGP session with peer firewall-1 swp6.2: AFI/ 1d:2h:6m:21s
SAFI evpn not activated on peer
exit-1 DataVrf1080 swp7.2 firewall-2 BGP session with peer firewall-2 (swp7.2 vrf 1d:1h:59m:43s
DataVrf1080) failed,
reason: Peer not configured
exit-1 DataVrf1081 swp6.3 firewall-1 BGP session with peer firewall-1 swp6.3: AFI/ 1d:2h:6m:21s
SAFI evpn not activated on peer
exit-1 DataVrf1081 swp7.3 firewall-2 BGP session with peer firewall-2 (swp7.3 vrf 1d:1h:59m:43s
DataVrf1081) failed,
reason: Peer not configured
exit-1 DataVrf1082 swp6.4 firewall-1 BGP session with peer firewall-1 swp6.4: AFI/ 1d:2h:6m:21s
SAFI evpn not activated on peer
exit-1 DataVrf1082 swp7.4 firewall-2 BGP session with peer firewall-2 (swp7.4 vrf 1d:1h:59m:43s
DataVrf1082) failed,
reason: Peer not configured
exit-1 default swp6 firewall-1 BGP session with peer firewall-1 swp6: AFI/SA 1d:2h:6m:21s
FI evpn not activated on peer
exit-1 default swp7 firewall-2 BGP session with peer firewall-2 (swp7 vrf de 1d:1h:59m:43s
...
cumulus@switch:~$ netq exit-1 show bgp
Matching bgp records:
Hostname Neighbor VRF ASN Peer ASN PfxRx Last Changed
----------------- ---------------------------- --------------- ---------- ---------- ------------ -------------------------
exit-1 swp3(spine-1) default 655537 655435 27/24/412 Fri Feb 15 17:20:00 2019
exit-1 swp3.2(spine-1) DataVrf1080 655537 655435 14/12/0 Fri Feb 15 17:20:00 2019
exit-1 swp3.3(spine-1) DataVrf1081 655537 655435 14/12/0 Fri Feb 15 17:20:00 2019
exit-1 swp3.4(spine-1) DataVrf1082 655537 655435 14/12/0 Fri Feb 15 17:20:00 2019
exit-1 swp4(spine-2) default 655537 655435 27/24/412 Fri Feb 15 17:20:00 2019
exit-1 swp4.2(spine-2) DataVrf1080 655537 655435 14/12/0 Fri Feb 15 17:20:00 2019
exit-1 swp4.3(spine-2) DataVrf1081 655537 655435 14/12/0 Fri Feb 15 17:20:00 2019
exit-1 swp4.4(spine-2) DataVrf1082 655537 655435 13/12/0 Fri Feb 15 17:20:00 2019
exit-1 swp5(spine-3) default 655537 655435 28/24/412 Fri Feb 15 17:20:00 2019
exit-1 swp5.2(spine-3) DataVrf1080 655537 655435 14/12/0 Fri Feb 15 17:20:00 2019
exit-1 swp5.3(spine-3) DataVrf1081 655537 655435 14/12/0 Fri Feb 15 17:20:00 2019
exit-1 swp5.4(spine-3) DataVrf1082 655537 655435 14/12/0 Fri Feb 15 17:20:00 2019
exit-1 swp6(firewall-1) default 655537 655539 73/69/- Fri Feb 15 17:22:10 2019
exit-1 swp6.2(firewall-1) DataVrf1080 655537 655539 73/69/- Fri Feb 15 17:22:10 2019
exit-1 swp6.3(firewall-1) DataVrf1081 655537 655539 73/69/- Fri Feb 15 17:22:10 2019
exit-1 swp6.4(firewall-1) DataVrf1082 655537 655539 73/69/- Fri Feb 15 17:22:10 2019
exit-1 swp7 default 655537 - NotEstd Fri Feb 15 17:28:48 2019
exit-1 swp7.2 DataVrf1080 655537 - NotEstd Fri Feb 15 17:28:48 2019
exit-1 swp7.3 DataVrf1081 655537 - NotEstd Fri Feb 15 17:28:48 2019
exit-1 swp7.4 DataVrf1082 655537 - NotEstd Fri Feb 15 17:28:48 2019
Manage Network Events
The NetQ notifier lets you capture and filter events for devices, components, protocols, and services. This is especially useful when an interface or routing protocol goes down and you want to get them back up and running as quickly as possible. You can improve resolution time significantly by creating filters that focus on topics appropriate for a particular group of users. You can create filters for events related to BGP and MLAG session states, interfaces, links, NTP and other services, fans, power supplies, and physical sensor measurements.
The following is an example of a Slack message received on a netq-notifier channel indicating that the BGP session on switch leaf04 interface swp2 has gone down:
Every event or entry in the NetQ database is stored with a timestamp that reports when the NetQ Agent captured an event on the switch or server. This timestamp is based on the switch or server time where the NetQ Agent is running, and is pushed in UTC format.
Interface state, IP addresses, routes, ARP/ND table (IP neighbor) entries and MAC table entries carry a timestamp that represents the time an event occurred (such as when a route is deleted or an interface comes up).
Data that is captured and saved based on polling has a timestamp according to when the information was captured rather than when the event actually happened, though NetQ compensates for this if the data extracted provides additional information to compute a more precise time of the event. For example, BGP uptime can be used to determine when the event actually happened in conjunction with the timestamp.
Restarting a NetQ Agent on a device does not update the timestamps for existing objects to reflect the new restart time. NetQ preserves their timestamps relative to the original start time of the Agent. A rare exception is when you reboot the device during the brief window while the Agent stops and restarts; in this case, the time is still relative to the original start time of the Agent.
Exporting NetQ Data
You can export data from the NetQ Platform in the CLI or UI:
In the CLI, append the json option to output command results in JSON format for parsing in other applications (see the example after this list)
In the UI, expand the cards to a full-screen, tabular view and select Export.
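For example, appending the json option to a show command returns its results as JSON. The following is a minimal illustration using the agents command described later in this guide:
cumulus@switch:~$ netq show agents json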
Important File Locations
The following configuration and log files can help with troubleshooting. See Troubleshoot NetQ for more information.
/etc/netq/netq.yml: The NetQ configuration file. This file appears only if you installed either the netq-apps package or the NetQ Agent on the system.
/var/log/netqd.log: The NetQ daemon log file for the NetQ CLI. This log file appears only if you installed the netq-apps package on the system.
/var/log/netq-agent.log: The NetQ Agent log file. This log file appears only if you installed the NetQ Agent on the system.
NetQ User Interface Overview
The NetQ user interface (UI) lets you access NetQ through a web browser, where you can visualize your network and interact with the display using a keyboard and mouse.
The NetQ UI is supported on Google Chrome and Mozilla Firefox. It is designed to be viewed on a display with a minimum resolution of 1920 × 1080 pixels.
The following are the default usernames and passwords for UI access:
NetQ on-premises: admin, admin
NetQ cloud: Use the credentials you created during setup. You should receive an email from NVIDIA titled NetQ Access Link.
Enter your username and password to log in. You can also log in with SSO if your company has enabled it.
Username and Password
Locate the email you received from NVIDIA titled NetQ Access Link. Select Create Password.
Enter a new password, then enter it again to confirm it.
Log in using your email address and new password.
Accept the Terms of Use after reading them.
The default workbench opens, with your username and premises shown in the top-right corner of NetQ.
SSO
Follow the steps above until you reach the NetQ login screen.
Select Sign up for SSO and enter your organization’s name.
Enter your username and password.
Create a new password and enter the new password again to confirm it.
Click Update and Accept after reading the Terms of Use.
The default workbench opens, with your username shown in the top-right corner of NetQ.
Enter your username.
Enter your password.
The user-specified home workbench is displayed. If a home workbench is not specified, then the default workbench is displayed.
Any workbench can be set as the home workbench. Select User Settings > Profiles and Preferences, then on the Workbenches card select the workbench you'd like to designate as your home workbench.
Log Out of NetQ
Select User Settings in the top-right corner of NetQ, then select Log out.
The NetQ interface includes the following main areas:
Application Header: Contains the main menu, NetQ version, global search field, count of reachable devices, premises list, and account information.
Workbench: Contains a task bar and cards that display operative status and network configuration information.
Main Menu
Select the Menu in the top-left corner to navigate to the following:
Search: searches items listed under the main menu
Inventory: lists network’s inventory of devices
Fabric: lists various network elements which you can select to monitor your network’s state
Spectrum-X: lists network monitoring tools exclusive to Spectrum switches
Tools: lists tools to visualize and validate network operations
Alerts: lets you set up notification channels and create rules for threshold-crossing events
Admin: lets administrators manage NetQ itself and access lifecycle management
Search
You can search for devices or cards in the global search field in the header. Right-click the hostname of any switch in your network to open a dashboard in a new tab that displays a comprehensive overview of platform information, events, and interfaces for that switch.
NVIDIA Logo
Selecting the NVIDIA logo takes you to your favorite workbench. For details about specifying your favorite workbench, refer to Set User Preferences.
Workbenches
A workbench is a dashboard that displays a set of cards. Two pre-configured workbenches—NetQ Workbench and Fabric Dashboard—are available to get you started. You can create multiple workbenches and customize them by adding or removing cards. For more detail about managing your data using workbenches, refer to Focus Your Monitoring Using Workbenches.
Cards
Cards display information about your network. Each card describes a particular aspect of the network and some cards can be expanded to display information and statistics at increasingly granular levels. You can add or remove cards from a workbench, move between cards and card sizes, and make copies of cards that display different levels of data for a given time period. For details about working with cards, refer to Access Data with Cards.
User Settings
Each user can customize the NetQ display, time zone, and date format; change their account password; and manage their workbenches. Navigate to User Settings> Profile & Preferences. For details, refer to Set User Preferences.
Focus Your Monitoring Using Workbenches
Workbenches are dashboards where you can visualize and curate data representing different aspects of your network. For example, you might create a workbench that:
Shows network statistics for the past week alongside network statistics for the past 24 hours.
Only displays data about virtual overlays.
Displays switches that you are troubleshooting.
Is focused on application or account management.
Get Started with Workbenches
NVIDIA includes two example workbenches—NetQ Workbench and Fabric Dashboard—to help get you started with NetQ. To access these workbenches or switch between them, select the workbench’s name in the header. The menu displays recently accessed workbenches and the date at which they were created.
NetQ Workbench includes cards displaying your network’s device inventory, switch inventory, validation summary, What Just Happened events, host inventory, DPU inventory, and system events. This workbench is visible to all users within an organization and any changes to it will not be saved.
Fabric Dashboard includes cards displaying link status and events, sensor health, queue lengths, What Just Happened and system events, BGP and EVPN sessions, and device inventory. You can modify this workbench by adding or deleting cards and NetQ saves the changes automatically. This workbench is local to each premises, meaning that changes made to the workbench on one premises will not be reflected when you switch to a different premises.
Create a Custom Workbench
You can create an unlimited number of custom workbenches. These workbenches are only visible to the user who created them and changes are saved automatically.
To create a new workbench:
Select Workbench in the header, then select New.
Enter a name for the workbench and choose whether to restrict access to this workbench to a single premises (local) or make it available across all premises (global). You can modify this setting later if you change your mind.
(Optional) Set the workbench as your home workbench, which opens when you log in to NetQ from the premises where the workbench was created.
Select the cards you want to display on your new workbench, then click Create.
(Optional) To add cards that display data for individual switches, select Add card in the header, then Device card. Select a device and card size. Repeat these steps for each device you’d like to add to the workbench.
You can clone a workbench to quickly create a new workbench with the same cards as the one you're viewing. In the header, select Clone, modify the workbench settings, then click Clone.
Manage Workbenches
The changes you make to a custom workbench are saved automatically. To change a workbench from local to global (or global to local) availability, select the workbench’s name in the header and select Manage my WB. From the Workbenches card, locate the workbench whose availability you’d like to change and select Local or Global.
You can also change your home workbench (the workbench that loads when you log in to NetQ) from this view. Select the house to the left of the workbench that you want to set as your home workbench. The next time you log in from this premises, the workbench you selected will be displayed.
Delete a Workbench
You can only delete workbenches that you created. The NVIDIA-supplied NetQ Workbench and Fabric Dashboard workbenches cannot be deleted. When you delete a workbench that you have designated as your home workbench, NetQ Workbench will replace it as the home workbench. To delete a workbench:
Select User Settings> Profile & Preferences.
Locate the Workbenches card.
Hover over the name of the workbench you want to remove, and click Delete.
Manage Auto-refresh
You can specify how often to update the data displayed on a workbench. By default, NetQ updates the data every five minutes. To pause or resume auto-refresh, select Pause or Play. Each workbench refreshes according to its respective refresh interval.
To modify the refresh rate:
In the header, select the dropdown next to Refresh.
Select the refresh rate. A check mark indicates the current selection. The new refresh rate is applied immediately.
To disable auto-refresh on individual cards, select the card’s three-dot menu and click Manual refresh.
Access Data with Cards
Cards present information about your network for monitoring and troubleshooting; each card describes a particular aspect of the network. Cards are collected onto a workbench where all data relevant to a task or set of tasks is visible. You can add and remove cards from a workbench, increase or decrease their sizes, change the time period of the data shown on a card, and make copies of cards to show different levels of data for the same time period.
Available Cards
Each card focuses on a particular aspect of your network. They include:
Validation summary: overview of your network’s health
Events cards: system anomalies and threshold-crossing events (Events card), network issues and packet drops (What Just Happened card), and link events (Link events card)
Link cards: overview of links at the fabric level (Switch link status card)
Sensor health: overview of fan, temperature, and PSU states
Queue status: ports experiencing most packet buffer congestion
Device groups: distribution of device components
Trace request: discovery workflow for paths between two devices in the network fabric
MAC move commentary: info about changes to a MAC address on a specific VLAN
Network services cards: BGP, MLAG, EVPN, and LLDP
Inventory cards: Devices, Switches, DPUs, NICs, and Hosts
Card Sizes
You can increase or decrease the size of certain cards. The granularity of the content on a card varies with the size of the card, with the highest level of information on the smallest card to the most detailed information on the full-screen card.
Card Size Summary
Small: Quick view of status, typically at the level of good or bad.
Medium: View key performance parameters or statistics, perform quick actions, and monitor for potential issues.
Large: View detailed performance and statistics, perform actions, and compare and review related information.
Full screen: View all attributes for a given network aspect, analyze and visualize detailed data, and export and filter data.
Card Actions
Add Cards to Your Workbench
Click Add card in the header.
Select the card(s) you want to add to your workbench.
When you have selected the cards you want to add to your workbench, select Open cards.
The cards are placed at the end of the set of cards currently on the workbench. You might need to scroll down to see them. Drag and drop the cards on the workbench to rearrange them.
Add Switch Cards to Your Workbench
To add switch cards to a workbench:
Select Add card > Device card.
Select the device from the suggestions that appear:
Choose the card’s size, then select Add.
For a comprehensive overview of performance metrics and data for an individual switch, search for its hostname in the global search field and right-click the switch to open the overview in a new tab.
Remove Cards from Your Workbench
To remove all the cards from your workbench, click Workbench, then Clear in the header. To remove an individual card:
Hover over the top section of the card you want to remove.
Click the three-dot menu.
Select Remove.
The card is removed from the workbench, but not from the application.
Change the Size of the Card
Hover over the top portion of the card until you see a rectangular box divided into four segments. If you do not see the box, the size of the card cannot be adjusted.
Move your cursor over the box until the desired size option is highlighted.
One-quarter width opens a small card. One-half width opens a medium card. Three-quarters width opens a large card. Full width opens a full-screen card.
Select the size. When the card changes to the selected size, it might move to a different area on the workbench.
Change the Time Period for the Card Data
All cards have a default time period for the data shown on the card, typically the last 24 hours. You can change the time period to view the data during a different time range to better understand issues and events.
To change the time period for a card:
Hover over the top portion of the card and select the clock icon.
Select a time period from the dropdown menu.
Changing the time period in this manner only changes the time period for the given card.
Table Settings
You can manipulate the tabular data displayed in a full-screen card by filtering and sorting the columns. Hover over the column header and select it to sort the column. The data is sorted in ascending or descending order: A-Z, Z-A, 1-n, or n-1. The number of rows that can be sorted via the UI is limited to 10,000. To reposition the columns, drag and drop them using your mouse.
Select Export to download and export the tabular data. You can sort and filter tables that exceed 10,000 rows by exporting the data as a CSV file and opening it in a spreadsheet program.
The following icons and actions are common in the full-screen card view:
Select all: Selects all items in the list.
Clear all: Clears all existing selections in the list.
Add item: Adds item to the list.
Edit: Edits the selected item.
Delete: Removes the selected items.
Filter: Filters the list using available parameters.
Generate/Delete AuthKeys: Creates or removes NetQ CLI authorization keys.
Open cards: Opens the corresponding validation or trace card(s).
Assign role: Opens role assignment options for switches.
Export: Exports selected data into either a .csv or JSON-formatted file.
When there are many items in a table, NetQ loads up to 20 rows by default and provides the rest in additional table pages, accessible through the pagination controls under the table.
Set User Preferences
This section describes how to customize your NetQ display, change your password, and manage your workbenches.
Configure Display Settings
The Display card contains the options for setting the application theme (light or dark), language, time zone, and date formats.
To configure the display settings:
Select User Settings in the top-right corner.
Select Profile & Preferences.
Locate the Display card:
Select the Theme field and choose either dark or light. The following figure shows the light theme:
Select the Time zone field to adjust the time zone.
By default, the time zone is set to the user’s local time zone. If a time zone has not been selected, NetQ defaults to the current local time zone where NetQ is installed. All time values are based on this setting. If your deployment is not local to you (for example, you want to view the data from the perspective of a data center in another time zone) you can change the display to a different time zone.
In the Date format field, select the date and time format you want displayed on the cards.
Change Your Password
Click User Settings in the top-right corner.
Click Profile & Preferences.
In the Basic Account Info card, select Change password.
Enter your current password, followed by your new password. Then select Save.
NetQ Command Line Overview
The NetQ CLI provides access to all network state and event information collected by NetQ Agents. It behaves similarly to typical CLIs, with groups of commands that display related information, and help commands that provide additional information. See the command line reference for a comprehensive list of NetQ commands, including examples, options, and definitions.
The NetQ command line interface only runs on switches and server hosts implemented with Intel x86 or ARM-based architectures.
CLI Access
When you install or upgrade NetQ, you can also install and enable the CLI on your NetQ server or appliance and hosts.
To access the CLI from a switch or server:
Log in to the device. The following example uses the default username of cumulus and a hostname of switch:
<computer>:~<username>$ ssh cumulus@switch
Enter your password to reach the command prompt. The default password is CumulusLinux!
You can now run commands:
cumulus@switch:~$ netq show agents
Command Line Basics
This section describes the core structure and behavior of the NetQ CLI.
Command Line Structure
The NetQ command line has a flat structure as opposed to a modal structure: you can run all commands at the same level from the standard command prompt, rather than only within a specific mode.
Command Syntax
All NetQ CLI commands begin with netq. The commands you use to monitor your network fall into one of four syntax categories: validation (check), monitoring (show), configuration, and trace.
netq check <network-protocol-or-service> [options]
netq show <network-protocol-or-service> [options]
netq config <action> <object> [options]
netq trace <destination> from <source> [options]
Symbols
Parentheses ( ): Grouping of required parameters. Choose one.
Square brackets [ ]: Single or group of optional parameters. If more than one object or keyword is available, choose one.
Angle brackets < >: Required variable. Value for a keyword or option; enter according to your deployment nomenclature.
Pipe |: Separates object and keyword options, and also separates value options; enter one object or keyword and zero or one value.
Command Output
The command output presents results in color for many commands. Results with errors appear in red, and warnings appear in yellow. Results without errors or warnings appear in either black or green. VTEPs appear in blue. A node in the pretty output appears in bold, and angle brackets (< >) wrap around a router interface. To view the output with only black text, run the netq config del color command. You can view output with colors again by running netq config add color.
All check and show commands have a default timeframe of now to one hour ago, unless you specify an approximate time using the around keyword or a range using the between keyword. For example, running netq check bgp shows the status of BGP over the last hour. Running netq show bgp around 3h shows the status of BGP three hours ago.
When entering a time value, you must include a numeric value and the unit of measure:
w: weeks
d: days
h: hours
m: minutes
s: seconds
now
When using the between option, you can enter the start time (text-time) and end time (text-endtime) values as most recent first and least recent second, or vice versa. The values do not have to have the same unit of measure. Use the around option to view information for a particular time.
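The following commands are a sketch of these time options, using illustrative values; the around form matches the BGP example mentioned earlier, and the between form follows the rules above:
cumulus@switch:~$ netq check bgp around 30m
cumulus@switch:~$ netq show bgp between now and 24h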
Command Prompts
NetQ code examples use the following prompts:
cumulus@switch:~$ indicates the user cumulus is logged in to a switch to run the example command
cumulus@host:~$ indicates the user cumulus is logged in to a host to run the example command
cumulus@netq-appliance:~$ indicates the user cumulus is logged in to the NetQ appliance to run the command
cumulus@hostname:~$ indicates the user cumulus is logged in to a switch, host or appliance to run the example command
To use the NetQ CLI, the switches must be running the Cumulus Linux operating system, the NetQ software, the NetQ Agent, and the NetQ CLI. The hosts must be running the Ubuntu OS, the NetQ Agent, and the NetQ CLI. Refer to Install NetQ for additional information.
Command Completion
As you enter commands, you can get help with the valid keywords or options using the tab key. For example, using tab completion with netq check displays the possible objects for the command, and returns you to the command prompt to complete the command:
cumulus@switch:~$ netq check <<press Tab>>
addresses : IPv4/v6 addresses
agents : Netq agent
bgp : BGP info
cl-version : Cumulus Linux version
evpn : EVPN
interfaces : network interface port
mlag : Multi-chassis LAG (alias of clag)
mtu : Link MTU
ntp : NTP
roce : RoCE
sensors : Temperature/Fan/PSU sensors
topology : Topology
vlan : VLAN
vxlan : VxLAN
cumulus@switch:~$ netq check
Command Help
As you enter commands, you can get help with command syntax by entering help as part of the command. For example, to find out which options are available for an IP addresses check, enter the netq check addresses command followed by help:
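For example (the resulting list of options is omitted here):
cumulus@switch:~$ netq check addresses help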
To see an exhaustive list of commands and their definitions, run:
cumulus@switch:~$ netq help list
To display NetQ command formatting rules, run:
cumulus@switch:~$ netq help verbose
Command History
The CLI stores commands issued within a session, which lets you review and rerun commands that you already ran. At the command prompt, press the Up Arrow and Down Arrow keys to move back and forth through the list of commands previously entered. When you have found a given command, you can run the command by pressing Enter, just as you would if you had entered it manually. You can also modify the command before you run it.
Command Categories
While the CLI has a flat structure, NetQ commands are conceptually grouped into the following functional categories:
The netq check commands validate the current or historical state of the network by looking for errors and misconfigurations in the network. The commands run fabric-wide validations against various configured protocols and services to determine how well the network is operating. You can perform validation checks for the following:
addresses: duplicate IPv4 and IPv6 addresses across devices
agents: NetQ Agent operation on all switches and hosts
bgp: BGP (Border Gateway Protocol) operation across the network fabric
clag: Cumulus Linux MLAG (multi-chassis LAG/link aggregation) operation
The netq show commands let you view details about the current or historical configuration and status of various protocols and services.
The commands take the form of netq [<hostname>] show <network-protocol-or-service> [options], where the options vary according to the protocol or service. You can restrict the commands from showing the information for all devices to showing information only for a selected device using the hostname option.
Example show command
The following example shows the standard output for the netq show agents command:
cumulus@switch:~$ netq show agents
Matching agents records:
Hostname Status NTP Sync Version Sys Uptime Agent Uptime Reinitialize Time Last Changed
----------------- ---------------- -------- ------------------------------------ ------------------------- ------------------------- -------------------------- -------------------------
hostd-11 Fresh yes 4.10.0-ub18.04u47~1717071980.7db4cf1 Thu May 30 12:48:38 2024 Thu May 30 12:54:54 2024 Thu May 30 12:54:54 2024 Mon Jun 3 16:54:39 2024
hostd-12 Rotten yes 4.10.0-rh7u47~1713946570.f7bc2d7 Thu May 30 12:13:57 2024 Thu May 30 12:55:07 2024 Thu May 30 12:55:07 2024 Thu May 30 13:00:12 2024
hostd-21 Fresh yes 4.10.0-ub18.04u47~1717071980.7db4cf1 Thu May 30 12:49:48 2024 Thu May 30 12:55:20 2024 Thu May 30 12:55:20 2024 Mon Jun 3 16:54:53 2024
hostd-22 Rotten yes 4.10.0-rh7u47~1713946570.f7bc2d7 Thu May 30 12:13:58 2024 Thu May 30 12:55:27 2024 Thu May 30 12:55:27 2024 Thu May 30 13:00:12 2024
hosts-11 Fresh yes 4.10.0-ub18.04u47~1717071980.7db4cf1 Thu May 30 12:50:48 2024 Thu May 30 12:55:40 2024 Thu May 30 12:55:40 2024 Mon Jun 3 16:54:52 2024
hosts-12 Rotten yes 4.10.0-rh7u47~1713946570.f7bc2d7 Thu May 30 12:13:58 2024 Thu May 30 12:55:53 2024 Thu May 30 12:55:53 2024 Thu May 30 13:00:43 2024
hosts-13 Fresh yes 4.10.0-ub18.04u47~1717071980.7db4cf1 Thu May 30 12:51:48 2024 Thu May 30 12:56:06 2024 Thu May 30 12:56:06 2024 Mon Jun 3 16:54:32 2024
hosts-21 Fresh yes 4.10.0-ub18.04u47~1717071980.7db4cf1 Thu May 30 12:52:46 2024 Thu May 30 12:56:19 2024 Thu May 30 12:56:19 2024 Mon Jun 3 16:54:34 2024
hosts-22 Rotten yes 4.10.0-rh7u47~1713946570.f7bc2d7 Thu May 30 12:13:59 2024 Thu May 30 12:56:32 2024 Thu May 30 12:56:32 2024 Thu May 30 13:01:13 2024
hosts-23 Fresh yes 4.10.0-ub18.04u47~1717071980.7db4cf1 Thu May 30 12:53:46 2024 Thu May 30 12:56:44 2024 Thu May 30 12:56:44 2024 Mon Jun 3 16:54:57 2024
spine-1 Fresh no 4.10.0-cld12u46~1713949601.127fb0c1b Thu May 30 12:13:39 2024 Thu May 30 13:19:36 2024 Thu May 30 13:19:36 2024 Mon Jun 3 16:54:38 2024
spine-2 Fresh no 4.10.0-cld12u46~1713949601.127fb0c1b Thu May 30 12:13:39 2024 Thu May 30 13:20:46 2024 Thu May 30 13:20:46 2024 Mon Jun 3 16:54:53 2024
spine-3 Fresh no 4.10.0-cld12u46~1713949601.127fb0c1b Thu May 30 12:13:39 2024 Thu May 30 13:21:58 2024 Thu May 30 13:21:58 2024 Mon Jun 3 16:54:47 2024
torc-11 Fresh no 4.10.0-cld12u46~1713949601.127fb0c1b Thu May 30 12:13:37 2024 Thu May 30 13:13:20 2024 Thu May 30 13:13:20 2024 Mon Jun 3 16:54:51 2024
torc-12 Fresh no 4.10.0-cld12u46~1713949601.127fb0c1b Thu May 30 12:13:38 2024 Thu May 30 13:18:23 2024 Thu May 30 13:18:23 2024 Mon Jun 3 16:54:53 2024
torc-21 Fresh no 4.10.0-cld12u46~1713949601.127fb0c1b Thu May 30 12:13:38 2024 Thu May 30 13:10:20 2024 Thu May 30 13:10:20 2024 Mon Jun 3 16:54:36 2024
torc-22 Fresh no 4.10.0-cld12u46~1713949601.127fb0c1b Thu May 30 12:13:38 2024 Thu May 30 13:11:42 2024 Thu May 30 13:11:42 2024 Mon Jun 3 16:54:40 2024
Example show command with filtered output
The following example shows the filtered output for the netq show agents command:
cumulus@switch:~$ netq spine-3 show agents
Matching agents records:
Hostname Status NTP Sync Version Sys Uptime Agent Uptime Reinitialize Time Last Changed
----------------- ---------------- -------- ------------------------------------ ------------------------- ------------------------- -------------------------- -------------------------
spine-3 Fresh no 4.10.0-cld12u46~1713949601.127fb0c1b Thu May 30 12:13:39 2024 Thu May 30 13:21:58 2024 Thu May 30 13:21:58 2024 Mon Jun 3 16:55:50 2024
Configuration Commands
Various commands—netq config, netq lcm, netq add, and netq del—allow you to manage NetQ Agent and CLI server configurations, configure lifecycle management, set up container monitoring, and manage notifications.
NetQ Agent Configuration
The agent commands configure individual NetQ Agents.
The agent configuration commands can add and remove agents from switches and hosts, start and stop agent operations, debug the agent, specify default commands, and enable or disable a variety of monitoring features (including sensors, FRR (FRRouting), CPU usage limit, and What Just Happened).
Commands apply to one agent at a time. Run them from the switch or host where the NetQ Agent resides. You must run the netq config commands with sudo privileges.
After making configuration changes to your agents, you must restart the agent for the changes to take effect. Use the netq config restart agent command.
The netq config cli commands configure and manage the CLI component. You can add or remove the CLI (essentially enabling/disabling the service), start and restart it, and view the configuration of the service.
Commands apply to one device at a time, and you run them from the switch or host where you run the CLI.
The CLI configuration commands include:
netq config add cli server
netq config del cli server
netq config show cli premises [json]
netq config show (cli|all) [json]
netq config (status|restart) cli
netq config select cli premise
The following example shows how to restart the CLI instance:
cumulus@switch:~$ netq config restart cli
The following example shows how to enable the CLI on a NetQ on-premises appliance or virtual machine:
cumulus@switch:~$ netq config add cli server 10.1.3.101
NetQ System Configuration Commands
Use the following commands to manage the NetQ system itself:
bootstrap: Loads the installation program onto the network switches and hosts in either a single server or server cluster arrangement.
decommission: Decommissions a switch or host.
install: Installs NetQ in standalone or cluster deployments; also used to install patch software.
upgrade bundle: Upgrades NetQ on NetQ on-premises appliances or VMs.
The following example shows how to decommission a switch named leaf01:
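A minimal sketch, run from the NetQ server or appliance, assuming leaf01 is the hostname reported by netq show agents:
cumulus@netq-appliance:~$ netq decommission leaf01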
For information and examples on installing and upgrading the NetQ system, see Install NetQ and Upgrade NetQ.
Event Notification Commands
The notification configuration commands can add, remove, and show notification via third-party integrations. These commands create the channels, filters, and rules that display event messages. Refer to Configure System Event Notifications for step-by-step instructions and examples.
Threshold-based Event Notification Commands
NetQ supports threshold-crossing (TCA) events, a set of events that are triggered by crossing a user-defined threshold. Configure and manage TCA events using the following commands:
netq add tca
netq del tca tca_id <text-tca-id-anchor>
netq show tca
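The following is a sketch of the general form of a TCA rule; the event ID, scope, threshold, and channel values are hypothetical, and the exact keywords supported in your release are listed in the command line reference and in Configure System Event Notifications:
cumulus@switch:~$ netq add tca event_id TCA_SENSOR_TEMPERATURE_UPPER scope leaf01 threshold 70 channel netq-events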
Lifecycle Management Commands
The lifecycle management commands help you efficiently manage the deployment of NVIDIA product software onto your network devices (servers, appliances, and switches).
LCM commands allow you to:
Manage network OS and NetQ images in a local repository
Configure switch access credentials for installations and upgrades
Manage switch inventory and roles
Install or upgrade NetQ Agents and CLI on switches
Upgrade the network OS on switches with NetQ Agents
View a result history of upgrade attempts
Add or delete NetQ configuration profiles
The following example shows the NetQ configuration profiles:
cumulus@switch:~$ netq lcm show netq-config
ID Name Default Profile VRF WJH CPU Limit Log Level Last Changed
------------------------- --------------- ------------------------------ --------------- --------- --------- --------- -------------------------
config_profile_3289efda36 NetQ default co Yes mgmt Disable Disable info Tue Apr 27 22:42:05 2021
db4065d56f91ebbd34a523b45 nfig
944fbfd10c5d75f9134d42023
eb2b
The following example shows how to add a Cumulus Linux installation image to the NetQ repository on the switch:
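A sketch, assuming the Cumulus Linux image has already been downloaded to the path shown (the path and filename are placeholders):
cumulus@switch:~$ netq lcm add cl-image /path/to/cumulus-linux-5.9.0-mlx-amd64.bin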
The netq trace commands let you view the available paths between two nodes on the network. You can perform a layer 2 or layer 3 trace, and view the output in one of three formats: JSON, pretty, and detail. JSON output provides the output in a JSON file format for ease of importing to other applications or software. Pretty output lines up the paths in a pseudo-graphical manner to help visualize multiple paths. Detail output is useful for traces with higher hop counts where the pretty output wraps lines, making it harder to interpret the results. The detail output displays a table with a row for each path.
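The following is a sketch of a layer 3 trace that follows the syntax shown earlier, using placeholder source and destination IP addresses and the pretty output format:
cumulus@switch:~$ netq trace 10.0.0.2 from 10.0.0.1 pretty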
Install NetQ
This section describes how to install, configure, and upgrade NetQ.
Before you begin, review the release notes for this version.
Before You Install
This overview is designed to help you understand the various NetQ deployment and installation options.
Installation Overview
Consider the following deployment options and requirements before you install the NetQ system:
Single Server: on-premises or cloud; network size: small; KVM or VMware hypervisor; system requirements: 16 virtual CPUs, 64GB RAM, and 500GB SSD disk (on-premises) or 4 virtual CPUs, 8GB RAM, and 64GB SSD disk (cloud); all features supported.
High-Availability Cluster: on-premises or cloud; network size: medium, supporting up to 100 switches and 128 interfaces per switch*; KVM or VMware hypervisor; system requirements (per node): 16 virtual CPUs, 64GB RAM, and 500GB SSD disk (on-premises) or 4 virtual CPUs, 8GB RAM, and 64GB SSD disk (cloud); all features supported.
High-Availability Scale Cluster: on-premises only; network size: large, supporting up to 1,000 switches and 125,000 interfaces*; KVM or VMware hypervisor; system requirements (per node): 48 virtual CPUs, 512GB RAM, and 3.2TB SSD disk; no support for network snapshots, trace requests, flow analysis, duplicate IP address validations, MAC commentary, or the link health view.
*Exact device support counts can vary based on multiple factors, such as the number of links, routes, and IP addresses in your network. Contact NVIDIA for assistance in selecting the appropriate deployment model for your network.
Deployment Type: On-Premises or Cloud
On-premises deployments are hosted at your location and require the in-house skill set to install, configure, back up, and maintain NetQ. This model is a good choice if you want very limited or no access to the internet from switches and hosts in your network.
In the cloud deployment, you host only a small, local server on your premises that connects to the NetQ cloud service over selected ports or through a proxy server. NetQ cloud supports local data aggregation and forwarding—the majority of the NetQ applications use a hosted deployment strategy, storing data in the cloud. NVIDIA handles the backups and maintenance of the application and storage.
In all deployment models, the NetQ Agents reside on the switches and hosts they monitor in your network.
Server Arrangement: Single or Cluster
A single server is easier to set up, configure, and manage, but limits your ability to scale your network monitoring. Deploying multiple servers allows you to limit potential downtime and increase availability by having more than one server that can run the software and store the data. Select the standalone, single-server arrangement for smaller, simpler deployments.
The high-availability cluster deployment supports a greater number of switches and provides high availability for your network. The clustering implementation comprises three servers: one master and two worker nodes. NetQ supports high availability server-cluster deployments using a virtual IP address. Even if the master node fails, NetQ services remain operational. However, keep in mind that the master hosts the Kubernetes control plane, so anything that requires connectivity with the Kubernetes cluster—such as upgrading NetQ or rescheduling pods to other workers if a worker goes down—will not work.
During the installation process, you configure a virtual IP address that enables redundancy for the Kubernetes control plane. In this configuration, the majority of nodes must be operational for NetQ to function. For example, a three-node cluster can tolerate a one-node failure, but not a two-node failure. For more information, refer to the etcd documentation.
The high-availability scale cluster deployment provides the same benefits as the high-availability cluster deployment, but supports larger networks of up to 1,000 switches. NVIDIA recommends this option for networks that have over 100 switches and at least 100 interfaces per switch. It offers the highest level of scalability, allowing you to adjust NetQ’s network monitoring capacity as your network expands.
Tabular data in the UI is limited to 10,000 rows. For large networks, NVIDIA recommends downloading and exporting the tabular data as a CSV or JSON file and opening it in a spreadsheet program for further analysis. Refer to the installation overview table at the beginning of this section for additional HA scale cluster deployment support information.
Cluster Deployments and Load Balancers
As an alternative to the three-node cluster deployment with a virtual IP address, you can use an external load balancer to provide high availability for the NetQ API and the NetQ UI.
However, you need to be mindful of where you install the certificates for the NetQ UI (port 443); otherwise, you cannot access the NetQ UI. If you are using a load balancer in your deployment, NVIDIA recommends that you install the certificates directly on the load balancer for SSL offloading. However, if you install the certificates on the master node, then configure the load balancer to allow for SSL passthrough.
You can install NetQ either on your premises or as a remote, cloud solution. If you are unsure which option is best for your network, refer to Before You Install.
After installing the NetQ software, you should install the NetQ Agents on each switch you want to monitor. You can install NetQ Agents on switches and servers running:
Cumulus Linux 5.0.0 or later (Spectrum switches)
Ubuntu 22.04, 20.04
Prepare for NetQ Agent Installation
For switches running Cumulus Linux, you need to:
Install and configure NTP or PTP, if needed
Obtain NetQ software packages
For servers running Ubuntu, you need to:
Verify you installed the minimum package versions
Verify the server is running lldpd
Install and configure NTP or PTP, if needed
Obtain NetQ software packages
If your network uses a proxy server for external connections, you should first configure a global proxy so apt-get can access the software packages in the NVIDIA networking repository.
Verify NTP Is Installed and Configured
Verify that NTP is running on the switch as outlined in the steps below. The switch system clock must be synchronized with the NetQ appliance to enable useful statistical analysis. Alternatively, you can configure PTP for time synchronization.
cumulus@switch:~$ sudo systemctl status ntp
[sudo] password for cumulus:
● ntp.service - LSB: Start NTP daemon
Loaded: loaded (/etc/init.d/ntp; bad; vendor preset: enabled)
Active: active (running) since Fri 2018-06-01 13:49:11 EDT; 2 weeks 6 days ago
Docs: man:systemd-sysv-generator(8)
CGroup: /system.slice/ntp.service
└─2873 /usr/sbin/ntpd -p /var/run/ntpd.pid -g -c /var/lib/ntp/ntp.conf.dhcp -u 109:114
If NTP is not installed, install and configure it before continuing.
If NTP is not running:
Verify the IP address or hostname of the NTP server in the /etc/ntp.conf file, and then
Reenable and start the NTP service using the systemctl [enable|start] ntp commands
If you are running NTP in your out-of-band management network with VRF, specify the VRF (ntp@<vrf-name> versus just ntp) in the above commands.
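For example, if NTP runs in the management VRF named mgmt:
cumulus@switch:~$ sudo systemctl enable ntp@mgmt
cumulus@switch:~$ sudo systemctl start ntp@mgmt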
Obtain NetQ Agent Software Package
Cumulus Linux 4.4 and later includes the netq-agent package by default. To upgrade the NetQ Agent to the latest version:
Add the repository by uncommenting or adding the following line in /etc/apt/sources.list:
cumulus@switch:~$ sudo nano /etc/apt/sources.list
...
deb https://apps3.cumulusnetworks.com/repos/deb CumulusLinux-4 netq-latest
...
You can specify a NetQ Agent version in the repository configuration. The following example shows the repository configuration to retrieve NetQ Agent 4.12:
deb https://apps3.cumulusnetworks.com/repos/deb CumulusLinux-4 netq-4.12
Add the apps3.cumulusnetworks.com authentication key to Cumulus Linux:
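The following command is commonly used to add the key; verify the key URL against the documentation for your release:
cumulus@switch:~$ wget -qO- https://apps3.cumulusnetworks.com/setup/cumulus-apps-deb.pubkey | sudo apt-key add -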
Create the file /etc/apt/sources.list.d/cumulus-host-ubuntu-jammy.list and add the following line:
root@ubuntu:~# vi /etc/apt/sources.list.d/cumulus-apps-deb-jammy.list
...
deb [arch=amd64] https://apps3.cumulusnetworks.com/repos/deb jammy netq-latest
...
Create the file /etc/apt/sources.list.d/cumulus-apps-deb-focal.list (Ubuntu 20.04) and add the following line:
root@ubuntu:~# vi /etc/apt/sources.list.d/cumulus-apps-deb-focal.list
...
deb [arch=amd64] https://apps3.cumulusnetworks.com/repos/deb focal netq-latest
...
The use of netq-latest in these examples means that apt always retrieves the latest version of NetQ from the repository, even for a major version update. If you want to keep the repository on a specific version, such as netq-4.4, use that instead.
Install NetQ Agent
After completing the preparation steps, install the agent on your switch or host.
Cumulus Linux 4.4 and later includes the netq-agent package by default. To install the NetQ Agent on earlier versions of Cumulus Linux:
Update the local apt repository, then install the NetQ software on the switch.
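A typical sequence for this step:
cumulus@switch:~$ sudo apt-get update
cumulus@switch:~$ sudo apt-get install netq-agent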
Configure the NetQ Agent, as described in the next section.
Configure NetQ Agent
After you install the NetQ Agents on the switches you want to monitor, you must configure them to obtain useful and relevant data.
The NetQ Agent is aware of and communicates through the designated VRF. If you do not specify one, it uses the default VRF (named default). If you later change the VRF configured for the NetQ Agent (using a lifecycle management configuration profile, for example), you might cause the NetQ Agent to lose communication.
If you configure the NetQ Agent to communicate in a VRF that is not default or mgmt, the following line must be added to /etc/netq/netq.yml in the netq-agent section:
netq-agent:
  netq_stream_address: 0.0.0.0
Two methods are available for configuring a NetQ Agent:
Edit the configuration file on the switch, or
Use the NetQ CLI
Configure NetQ Agents Using a Configuration File
You can configure the NetQ Agent in the netq.yml configuration file contained in the /etc/netq/ directory.
Open the netq.yml file using your text editor of choice. For example:
sudo nano /etc/netq/netq.yml
Locate the netq-agent section, or add it.
Set the parameters for the agent as follows; a sample netq-agent section follows this list:
port: 31980 (default configuration)
server: IP address of the NetQ server where the agent should send its collected data
vrf: default (or one that you specify)
inband-interface: the interface used to reach your NetQ server and used by lifecycle management to connect to the switch (for deployments where switches are managed through an in-band interface)
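A minimal sketch of the resulting netq-agent section, assuming the parameters above map directly to YAML keys; 192.168.1.254 is a placeholder server address and mgmt a placeholder VRF:
netq-agent:
  port: 31980
  server: 192.168.1.254
  vrf: mgmt
For a switch managed through an in-band interface, you would also set the inband-interface parameter (for example, inband-interface: swp1). After saving the file, restart the agent with sudo netq config restart agent.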
If you configured the NetQ CLI, you can use it to configure the NetQ Agent to send telemetry data to the NetQ appliance or VM. To configure the NetQ CLI, refer to Install NetQ CLI.
This example uses a NetQ server IP address of 192.168.1.254, the default port, and the mgmt VRF for a switch managed through an out-of-band connection:
This example uses a NetQ server IP address of 192.168.1.254, the default port, and the default VRF for a switch managed through an in-band connection on interface swp1:
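Sketches of these two configurations, assuming the CLI accepts keywords that mirror the configuration-file parameters described above (vrf, inband-interface):
Out-of-band management (mgmt VRF):
cumulus@switch:~$ sudo netq config add agent server 192.168.1.254 vrf mgmt
cumulus@switch:~$ sudo netq config restart agent
In-band management (default VRF, interface swp1):
cumulus@switch:~$ sudo netq config add agent server 192.168.1.254 vrf default inband-interface swp1
cumulus@switch:~$ sudo netq config restart agent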
A couple of additional options are available for configuring the NetQ Agent. If you are using VRFs, you can configure the agent to communicate over a specific VRF. You can also configure the agent to use a particular port.
Configure the Agent to Use a VRF
By default, NetQ uses the default VRF for communication between the NetQ appliance or VM and NetQ Agents. While optional, NVIDIA strongly recommends that you configure NetQ Agents to communicate with the NetQ appliance or VM only via a VRF, including a management VRF. To do so, you need to specify the VRF name when configuring the NetQ Agent. For example, if you configured the management VRF and you want the agent to communicate with the NetQ appliance or VM over it, configure the agent like this:
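A sketch of the command, reusing the placeholder server address and a management VRF named mgmt:
cumulus@switch:~$ sudo netq config add agent server 192.168.1.254 vrf mgmt
cumulus@switch:~$ sudo netq config restart agent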
If you later change the VRF configured for the NetQ Agent (using a lifecycle management configuration profile, for example), you might cause the NetQ Agent to lose communication.
Configure the Agent to Communicate over a Specific Port
By default, NetQ uses port 31980 for communication between the NetQ server and NetQ Agents for on-premises deployments and port 443 for cloud deployments. If you want the NetQ Agent to communicate with the NetQ server via a different port, you need to specify the port number when configuring the NetQ Agent, like this:
sudo netq config add agent server 192.168.1.254 port 7379
sudo netq config restart agent
Installing the NetQ CLI on your NetQ VMs, switches, or hosts gives you access to new features and bug fixes, and allows you to manage your network from multiple points in the network.
After installing the NetQ software and agent on each switch you want to monitor, you can also install the NetQ CLI on switches running:
Cumulus Linux 5.0.0 or later (Spectrum switches)
Ubuntu 20.04, 22.04
If your network uses a proxy server for external connections, you should first configure a global proxy so apt-get can access the software package in the NetQ repository.
Prepare for NetQ CLI Installation on an Ubuntu Server
For servers running the Ubuntu OS, you need to:
Verify you installed the minimum service package versions
Verify the server is running lldpd
Install and configure NTP or PTP, if needed
Obtain NetQ software packages
These steps are not required for Cumulus Linux.
Verify Service Package Versions
Verify that the server is running at least the following package versions:
Package     Minimum Version               Architecture
iproute     1:4.3.0-1ubuntu3.16.04.1      all
iproute2    4.3.0-1ubuntu3                amd64
lldpd       0.7.19-1                      amd64
ntp         1:4.2.8p4+dfsg-3ubuntu5.6     amd64
Verify Ubuntu is Running lldpd
For Ubuntu, make sure you are running lldpd, not lldpad. Ubuntu does not include lldpd by default, even though the installation requires it.
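One way to install and enable lldpd on Ubuntu, using the standard Ubuntu package and service names, is:
root@ubuntu:~# sudo apt-get update
root@ubuntu:~# sudo apt-get install lldpd
root@ubuntu:~# sudo systemctl enable lldpd
root@ubuntu:~# sudo systemctl start lldpd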
If NTP is not already installed and configured, follow these steps:
Install
NTP on the server, if not already installed. Servers must be in time synchronization with the NetQ appliance to enable useful statistical analysis.
root@ubuntu:~# sudo apt-get install ntp
Configure the network time server.
Open the /etc/ntp.conf file in your text editor of choice.
Under the Server section, specify the NTP server IP address or hostname.
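For example, a server entry might look like the following, where the hostname is a placeholder for your own NTP source; restart the service afterward so the change takes effect:
server ntp.example.com iburst
root@ubuntu:~# sudo systemctl restart ntp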
Create the file /etc/apt/sources.list.d/cumulus-apps-deb-jammy.list (Ubuntu 22.04) and add the following line:
root@ubuntu:~# vi /etc/apt/sources.list.d/cumulus-apps-deb-jammy.list
...
deb [arch=amd64] https://apps3.cumulusnetworks.com/repos/deb jammy netq-latest
...
Create the file /etc/apt/sources.list.d/cumulus-apps-deb-focal.list (Ubuntu 20.04) and add the following line:
root@ubuntu:~# vi /etc/apt/sources.list.d/cumulus-apps-deb-focal.list
...
deb [arch=amd64] https://apps3.cumulusnetworks.com/repos/deb focal netq-latest
...
The use of netq-latest in these examples means that apt always retrieves the latest version of NetQ from the repository, even for a major version update. If you want to keep the repository on a specific version, such as netq-4.4, use that instead.
Install NetQ CLI
Follow these steps to install the NetQ CLI on a switch or host.
Cumulus Linux 4.4 and later includes the netq-apps package by default. To upgrade the NetQ CLI to the latest version:
Add the repository by uncommenting or adding the following line in /etc/apt/sources.list:
cumulus@switch:~$ sudo nano /etc/apt/sources.list
...
deb https://apps3.cumulusnetworks.com/repos/deb CumulusLinux-4 netq-latest
...
You can specify a NetQ CLI version in the repository configuration. The following example shows the repository configuration to retrieve NetQ CLI v4.12:
deb https://apps3.cumulusnetworks.com/repos/deb CumulusLinux-4 netq-4.12
Update the local apt repository and install the software on the switch.
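A typical sequence for this step:
cumulus@switch:~$ sudo apt-get update
cumulus@switch:~$ sudo apt-get install netq-apps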
Continue with NetQ CLI configuration in the next section.
Configure the NetQ CLI
By default, you do not configure the NetQ CLI during the NetQ installation. The configuration resides in the /etc/netq/netq.yml file. Until the CLI is configured on a device, you can only run netq config and netq help commands, and you must use sudo to run them.
At minimum, you need to configure the NetQ CLI and NetQ Agent to communicate with the telemetry server. To do so, configure the NetQ Agent and the NetQ CLI so that they are running in the VRF where the routing tables have connectivity to the telemetry server (typically the management VRF).
To access and configure the CLI for your on-premises NetQ deployment, you must generate AuthKeys. You’ll need your username and password to generate them. These keys provide authorized access (access key) and user authentication (secret key).
To generate AuthKeys:
Enter your on-premises NetQ appliance hostname or IP address into your browser to open the NetQ UI login page.
Enter your username and password.
Expand the Menu, then select Management.
On the User Accounts card, select Manage.
Select your user and click Generate keys above the table.
Copy these keys to a safe place. Select Copy to obtain the CLI configuration command to use on your devices.
The secret key is only shown once. If you do not copy these, you will need to regenerate them and reconfigure CLI access.
You can also save these keys to a YAML file for easy reference, and to avoid having to type or copy the key values. You can:
store the file wherever you like, for example in /home/cumulus/ or /etc/netq
name the file whatever you like, for example credentials.yml, creds.yml, or keys.yml
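A keys file might look like the following sketch; the key names shown are an assumption, so confirm them against the configuration command you copied from the UI:
access-key: <your-access-key-value>
secret-key: <your-secret-key-value>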
The following example uses the individual access key, a premises of datacenterwest, and the default address, port, and VRF. Replace the key values with your generated keys if you are using this example on your server.
This example uses an optional keys file. Replace the keys filename and path with the full path and name of your keys file, and the datacenterwest premises name with your premises name if you are using this example on your server.
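As a sketch only, assuming the on-premises command form mirrors the cloud example later in this section and using 192.168.1.254 as a placeholder appliance address, the two variants might look like:
sudo netq config add cli server 192.168.1.254 access-key <your-access-key> secret-key <your-secret-key> premises datacenterwest
sudo netq config restart cli
sudo netq config add cli server 192.168.1.254 cli-keys-file /home/cumulus/credentials.yml premises datacenterwest
sudo netq config restart cli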
If you have multiple premises and want to query data from a different premises than you originally configured, rerun the netq config add cli server command with the desired premises name. You can only view the data for one premises at a time with the CLI.
To access and configure the CLI for your NetQ cloud deployment, you must generate AuthKeys. You’ll need your username and password to generate them. These keys provide authorized access (access key) and user authentication (secret key). Your credentials and NetQ cloud addresses were obtained during your initial login to the NetQ cloud and premises activation.
To generate AuthKeys:
Enter netq.nvidia.com into your browser to open the NetQ UI login page.
Enter your username and password.
Expand the Menu, then select Management.
On the User Accounts card, select Manage.
Select your user and click Generate keys above the table.
Copy these keys to a safe place. Select Copy to obtain the CLI configuration command to use on your devices.
The secret key is only shown once. If you do not copy these, you will need to regenerate them and reconfigure CLI access.
You can also save these keys to a YAML file for easy reference, and to avoid having to type or copy the key values. You can:
store the file wherever you like, for example in /home/cumulus/ or /etc/netq
name the file whatever you like, for example credentials.yml, creds.yml, or keys.yml
The following example uses the individual access key, a premises of datacenterwest, and the default Cloud address, port and VRF. Replace the key values with your generated keys if you are using this example on your server.
sudo netq config add cli server api.netq.cumulusnetworks.com access-key 123452d9bc2850a1726f55534279dd3c8b3ec55e8b25144d4739dfddabe8149e secret-key /vAGywae2E4xVZg8F+HtS6h6yHliZbBP6HXU3J98765= premises datacenterwest
Successfully logged into NetQ cloud at api.netq.cumulusnetworks.com:443
Updated cli server api.netq.cumulusnetworks.com vrf default port 443. Please restart netqd (netq config restart cli)
sudo netq config restart cli
Restarting NetQ CLI... Success!
The following example uses an optional keys file. Replace the keys filename and path with the full path and name of your keys file, and the datacenterwest premises name with your premises name if you are using this example on your server.
sudo netq config add cli server api.netq.cumulusnetworks.com cli-keys-file /home/netq/nq-cld-creds.yml premises datacenterwest
Successfully logged into NetQ cloud at api.netq.cumulusnetworks.com:443
Updated cli server api.netq.cumulusnetworks.com vrf default port 443. Please restart netqd (netq config restart cli)
sudo netq config restart cli
Restarting NetQ CLI... Success!
If you have multiple premises and want to query data from a different premises than you originally configured, rerun the netq config add cli server command with the desired premises name. You can only view the data for one premises at a time with the CLI.
Set Up Your VMware Virtual Machine for a Single On-premises Server
Follow these steps to set up and configure your VM on a single server in an on-premises deployment:
Verify that your system meets the VM requirements.
Resource                  Minimum Requirements
Processor                 Sixteen (16) virtual CPUs
Memory                    64 GB RAM
Local disk storage        500 GB SSD with minimum disk IOPS of 1000 for a standard 4kb block size (Note: This must be an SSD; use of other storage options can lead to system instability and is not supported.)
Network interface speed   1 Gb NIC
Hypervisor                VMware ESXi™ 6.5 or later (OVA image) for servers running Cumulus Linux, CentOS, Ubuntu, and RedHat operating systems
Confirm that the required ports are open for communications.
You must open the following ports on your NetQ on-premises server:
VMware Example Configuration
This example shows the VM setup process using an OVA file with VMware ESXi.
Enter the address of the hardware in your browser.
Log in to VMware using credentials with root access.
Click Storage in the Navigator to verify you have an SSD installed.
Click Create/Register VM at the top of the right pane.
Select Deploy a virtual machine from an OVF or OVA file, and click Next.
Provide a name for the VM, for example NetQ.
Tip: Make note of the name used during install as this is needed in a later step.
Drag and drop the NetQ Platform image file you downloaded in Step 1 above.
Click Next.
Select the storage type and data store for the image to use, then click Next. In this example, only one is available.
Accept the default deployment options or modify them according to your network needs. Click Next when you are finished.
Review the configuration summary. Click Back to change any of the settings, or click Finish to continue with the creation of the VM.
The progress of the request is shown in the Recent Tasks window at the bottom of the application. This may take some time, so continue with your other work until the upload finishes.
Once completed, view the full details of the VM and hardware.
Log in to the VM and change the password.
Use the default credentials to log in the first time:
Username: cumulus
Password: cumulus
$ ssh cumulus@<ipaddr>
Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
Ubuntu 20.04 LTS
cumulus@<ipaddr>'s password:
You are required to change your password immediately (root enforced)
System information as of Thu Dec 3 21:35:42 UTC 2020
System load: 0.09 Processes: 120
Usage of /: 8.1% of 61.86GB Users logged in: 0
Memory usage: 5% IP address for eth0: <ipaddr>
Swap usage: 0%
WARNING: Your password has expired.
You must change your password now and login again!
Changing password for cumulus.
(current) UNIX password: cumulus
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Connection to <ipaddr> closed.
Log in again with your new password.
$ ssh cumulus@<ipaddr>
Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
Ubuntu 20.04 LTS
cumulus@<ipaddr>'s password:
System information as of Thu Dec 3 21:35:59 UTC 2020
System load: 0.07 Processes: 121
Usage of /: 8.1% of 61.86GB Users logged in: 0
Memory usage: 5% IP address for eth0: <ipaddr>
Swap usage: 0%
Last login: Thu Dec 3 21:35:43 2020 from <local-ipaddr>
cumulus@ubuntu:~$
Verify the platform is ready for installation. Fix any errors indicated before installing the NetQ software.
cumulus@hostname:~$ sudo opta-check
Change the hostname for the VM from the default value.
The default hostname for the NetQ Virtual Machines is ubuntu. Change the hostname to fit your naming conventions while meeting Internet and Kubernetes naming standards.
Kubernetes requires that hostnames are composed of a sequence of labels concatenated with dots. For example, “en.wikipedia.org” is a hostname. Each label must be from 1 to 63 characters long. The entire hostname, including the delimiting dots, has a maximum of 253 ASCII characters.
The Internet standards (RFCs) for protocols specify that labels may contain only the ASCII letters a through z (in lower case), the digits 0 through 9, and the hyphen-minus character ('-').
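One way to change the hostname is with hostnamectl, where NEW_HOSTNAME is the name you chose:
cumulus@hostname:~$ sudo hostnamectl set-hostname NEW_HOSTNAME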
Add the same NEW_HOSTNAME value to /etc/hosts on your VM for the localhost entry. Example:
127.0.0.1 localhost NEW_HOSTNAME
Install and activate the NetQ software using the CLI:
cumulus@hostname:~$ netq install standalone full interface eth0 bundle /mnt/installables/NetQ-4.12.0.tgz
NetQ uses the 10.244.0.0/16 (pod-ip-range) and 10.96.0.0/16 (service-ip-range) networks for internal communication by default. If you are using these networks, you must override each range by specifying new subnets for these parameters in the install command:
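For example, a sketch of the install command with both ranges overridden; the replacement subnets shown are placeholders, so choose ranges that do not conflict with your network:
cumulus@hostname:~$ netq install standalone full interface eth0 bundle /mnt/installables/NetQ-4.12.0.tgz pod-ip-range 10.255.0.0/16 service-ip-range 172.31.0.0/16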
You can specify the IP address of the server instead of the interface name using the ip-addr <ip-address> argument:
cumulus@hostname:~$ netq install standalone full ip-addr <ip-address> bundle /mnt/installables/NetQ-4.12.0.tgz
If you change the server IP address or hostname after installing NetQ, you must reset the server with the netq bootstrap reset keep-db command and rerun the install command.
If this step fails for any reason, you can run netq bootstrap reset and then try again.
Verify Installation Status
To view the status of the installation, use the netq show status [verbose] command. The following example shows a successful on-premises installation:
State: Active
Version: 4.12.0
Installer Version: 4.12.0
Installation Type: Cluster
Activation Key: PKrgipMGEhVuZXRxLWVuZHBvaW50LWdhdGV3YXkYsagDIixUQmFLTUhzZU80RUdTL3pOT01uQ2lnRnrrUhTbXNPUGRXdnUwTVo5SEpBPTIHZGVmYXVsdDoHbmV0cWRldgz=
Master SSH Public Key: a3NoLXJzYSBBQUFBQjNOemFDMXljMkVBQUFBREFRQUJBQUFCQVFEazliekZDblJUajkvQVhOZ0hteXByTzZIb3Y2cVZBWFdsNVNtKzVrTXo3dmMrcFNZTGlOdWl1bEhZeUZZVDhSNmU3bFdqS3NrSE10bzArNFJsQVd6cnRvbVVzLzlLMzQ4M3pUMjVZQXpIU2N1ZVhBSE1TdTZHZ0JyUkpXYUpTNjJ2RTkzcHBDVjBxWWJvUFo3aGpCY3ozb0VVWnRsU1lqQlZVdjhsVjBNN3JEWW52TXNGSURWLzJ2eks3K0x2N01XTG5aT054S09hdWZKZnVOT0R4YjFLbk1mN0JWK3hURUpLWW1mbTY1ckoyS1ArOEtFUllrr5TkF3bFVRTUdmT3daVHF2RWNoZnpQajMwQ29CWDZZMzVST2hDNmhVVnN5OEkwdjVSV0tCbktrWk81MWlMSDAyZUpJbXJHUGdQa2s1SzhJdGRrQXZISVlTZ0RwRlpRb3Igcm9vdEBucXRzLTEwLTE4OC00NC0xNDc=
Is Cloud: False
Cluster Status:
IP Address Hostname Role Status
------------- ------------- ------ --------
10.188.44.147 10.188.44.147 Role Ready
NetQ... Active
Run the netq show opta-health command to verify all applications are operating properly. Allow 10-15 minutes for all applications to come up and report their status.
If any of the applications or services display Status as DOWN after 30 minutes, open a support ticket and attach the output of the opta-support command.
After NetQ is installed, you can log in to NetQ from your browser.
Set Up Your VMware Virtual Machine for a Single Cloud Server
Follow these steps to set up and configure your VM for a cloud deployment:
Verify that your system meets the VM requirements.
Resource                  Minimum Requirements
Processor                 Four (4) virtual CPUs
Memory                    8 GB RAM
Local disk storage        64 GB
Network interface speed   1 Gb NIC
Hypervisor                VMware ESXi™ 6.5 or later (OVA image) for servers running Cumulus Linux, CentOS, Ubuntu, and RedHat operating systems
Confirm that the required ports are open for communications. The OPTA must be able to initiate HTTPS connections (destination TCP port 443) to the netq.nvidia.com domain (*.netq.nvidia.com). You must also open the following ports on your NetQ OPTA:
VMware Example Configuration
This example shows the VM setup process using an OVA file with VMware ESXi.
Enter the address of the hardware in your browser.
Log in to VMware using credentials with root access.
Click Storage in the Navigator to verify you have an SSD installed.
Click Create/Register VM at the top of the right pane.
Select Deploy a virtual machine from an OVF or OVA file, and click Next.
Provide a name for the VM, for example NetQ.
Tip: Make note of the name used during install as this is needed in a later step.
Drag and drop the NetQ Platform image file you downloaded in Step 1 above.
Click Next.
Select the storage type and data store for the image to use, then click Next. In this example, only one is available.
Accept the default deployment options or modify them according to your network needs. Click Next when you are finished.
Review the configuration summary. Click Back to change any of the settings, or click Finish to continue with the creation of the VM.
The progress of the request is shown in the Recent Tasks window at the bottom of the application. This may take some time, so continue with your other work until the upload finishes.
Once completed, view the full details of the VM and hardware.
Log in to the VM and change the password.
Use the default credentials to log in the first time:
Username: cumulus
Password: cumulus
$ ssh cumulus@<ipaddr>
Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
Ubuntu 20.04 LTS
cumulus@<ipaddr>'s password:
You are required to change your password immediately (root enforced)
System information as of Thu Dec 3 21:35:42 UTC 2020
System load: 0.09 Processes: 120
Usage of /: 8.1% of 61.86GB Users logged in: 0
Memory usage: 5% IP address for eth0: <ipaddr>
Swap usage: 0%
WARNING: Your password has expired.
You must change your password now and login again!
Changing password for cumulus.
(current) UNIX password: cumulus
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Connection to <ipaddr> closed.
Log in again with your new password.
$ ssh cumulus@<ipaddr>
Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
Ubuntu 20.04 LTS
cumulus@<ipaddr>'s password:
System information as of Thu Dec 3 21:35:59 UTC 2020
System load: 0.07 Processes: 121
Usage of /: 8.1% of 61.86GB Users logged in: 0
Memory usage: 5% IP address for eth0: <ipaddr>
Swap usage: 0%
Last login: Thu Dec 3 21:35:43 2020 from <local-ipaddr>
cumulus@ubuntu:~$
Verify the platform is ready for installation. Fix any errors indicated before installing the NetQ software.
cumulus@hostname:~$ sudo opta-check-cloud
Change the hostname for the VM from the default value.
The default hostname for the NetQ Virtual Machines is ubuntu. Change the hostname to fit your naming conventions while meeting Internet and Kubernetes naming standards.
Kubernetes requires that hostnames are composed of a sequence of labels concatenated with dots. For example, “en.wikipedia.org” is a hostname. Each label must be from 1 to 63 characters long. The entire hostname, including the delimiting dots, has a maximum of 253 ASCII characters.
The Internet standards (RFCs) for protocols specify that labels may contain only the ASCII letters a through z (in lower case), the digits 0 through 9, and the hyphen-minus character ('-').
Add the same NEW_HOSTNAME value to /etc/hosts on your VM for the localhost entry. Example:
127.0.0.1 localhost NEW_HOSTNAME
Install and activate the NetQ software using the CLI:
Run the following command on your NetQ cloud appliance with the config-key obtained from the email you received from NVIDIA titled NetQ Access Link. You can also obtain the configuration key through the NetQ UI.
NetQ uses the 10.244.0.0/16 (pod-ip-range) and 10.96.0.0/16 (service-ip-range) networks for internal communication by default. If you are using these networks, you must override each range by specifying new subnets for these parameters in the install command.
If you change the server IP address or hostname after installing NetQ, you must reset the server with the netq bootstrap reset keep-db command and rerun the install command.
If this step fails for any reason, you can run netq bootstrap reset and then try again.
Verify Installation Status
To view the status of the installation, use the netq show status [verbose] command. The following example shows a successful installation:
State: Active
Version: 4.12.0
Installer Version: 4.12.0
Installation Type: Cluster
Activation Key: PKrgipMGEhVuZXRxLWVuZHBvaW50LWdhdGV3YXkYsagDIixUQmFLTUhzZU80RUdTL3pOT01uQ2lnRnrrUhTbXNPUGRXdnUwTVo5SEpBPTIHZGVmYXVsdDoHbmV0cWRldgz=
Master SSH Public Key: a3NoLXJzYSBBQUFBQjNOemFDMXljMkVBQUFBREFRQUJBQUFCQVFEazliekZDblJUajkvQVhOZ0hteXByTzZIb3Y2cVZBWFdsNVNtKzVrTXo3dmMrcFNZTGlOdWl1bEhZeUZZVDhSNmU3bFdqS3NrSE10bzArNFJsQVd6cnRvbVVzLzlLMzQ4M3pUMjVZQXpIU2N1ZVhBSE1TdTZHZ0JyUkpXYUpTNjJ2RTkzcHBDVjBxWWJvUFo3aGpCY3ozb0VVWnRsU1lqQlZVdjhsVjBNN3JEWW52TXNGSURWLzJ2eks3K0x2N01XTG5aT054S09hdWZKZnVOT0R4YjFLbk1mN0JWK3hURUpLWW1mbTY1ckoyS1ArOEtFUllrr5TkF3bFVRTUdmT3daVHF2RWNoZnpQajMwQ29CWDZZMzVST2hDNmhVVnN5OEkwdjVSV0tCbktrWk81MWlMSDAyZUpJbXJHUGdQa2s1SzhJdGRrQXZISVlTZ0RwRlpRb3Igcm9vdEBucXRzLTEwLTE4OC00NC0xNDc=
Is Cloud: False
Cluster Status:
IP Address Hostname Role Status
------------- ------------- ------ --------
10.188.44.147 10.188.44.147 Role Ready
NetQ... Active
Run the netq show opta-health command to verify all applications are operating properly. Allow 10-15 minutes for all applications to come up and report their status.
If any of the applications or services display Status as DOWN after 30 minutes, open a support ticket and attach the output of the opta-support command.
After NetQ is installed, you can log in to NetQ from your browser.
Set Up Your VMware Virtual Machine for an On-premises HA Server Cluster
First configure the VM on the master node, and then configure the VM on each worker node.
Follow these steps to set up and configure your VM cluster for an on-premises deployment:
Verify that each node in your cluster—the master node and two worker nodes—meets the VM requirements.
Resource                  Minimum Requirements
Processor                 Sixteen (16) virtual CPUs
Memory                    64 GB RAM
Local disk storage        500 GB SSD with minimum disk IOPS of 1000 for a standard 4kb block size (Note: This must be an SSD; use of other storage options can lead to system instability and is not supported.)
Network interface speed   1 Gb NIC
Hypervisor                VMware ESXi™ 6.5 or later (OVA image) for servers running Cumulus Linux, CentOS, Ubuntu, and RedHat operating systems
Confirm that the required ports are open for communications.
You must open the following ports on your NetQ on-premises servers:
Port or Protocol Number   Protocol      Component Access
4                         IP Protocol   Calico networking (IP-in-IP Protocol)
22                        TCP           SSH
80                        TCP           Nginx
179                       TCP           Calico networking (BGP)
443                       TCP           NetQ UI
2379                      TCP           etcd datastore
4789                      UDP           Calico networking (VxLAN)
5000                      TCP           Docker registry
6443                      TCP           kube-apiserver
30001                     TCP           DPU communication
31980                     TCP           NetQ Agent communication
31982                     TCP           NetQ Agent SSL communication
32708                     TCP           API Gateway
Additionally, for internal cluster communication, you must open these ports:
VMware Example Configuration
This example shows the VM setup process using an OVA file with VMware ESXi.
Enter the address of the hardware in your browser.
Log in to VMware using credentials with root access.
Click Storage in the Navigator to verify you have an SSD installed.
Click Create/Register VM at the top of the right pane.
Select Deploy a virtual machine from an OVF or OVA file, and click Next.
Provide a name for the VM, for example NetQ.
Tip: Make note of the name used during install as this is needed in a later step.
Drag and drop the NetQ Platform image file you downloaded in Step 1 above.
Click Next.
Select the storage type and data store for the image to use, then click Next. In this example, only one is available.
Accept the default deployment options or modify them according to your network needs. Click Next when you are finished.
Review the configuration summary. Click Back to change any of the settings, or click Finish to continue with the creation of the VM.
The progress of the request is shown in the Recent Tasks window at the bottom of the application. This may take some time, so continue with your other work until the upload finishes.
Once completed, view the full details of the VM and hardware.
Log in to the VM and change the password.
Use the default credentials to log in the first time:
Username: cumulus
Password: cumulus
$ ssh cumulus@<ipaddr>
Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
Ubuntu 20.04 LTS
cumulus@<ipaddr>'s password:
You are required to change your password immediately (root enforced)
System information as of Thu Dec 3 21:35:42 UTC 2020
System load: 0.09 Processes: 120
Usage of /: 8.1% of 61.86GB Users logged in: 0
Memory usage: 5% IP address for eth0: <ipaddr>
Swap usage: 0%
WARNING: Your password has expired.
You must change your password now and login again!
Changing password for cumulus.
(current) UNIX password: cumulus
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Connection to <ipaddr> closed.
Log in again with your new password.
$ ssh cumulus@<ipaddr>
Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
Ubuntu 20.04 LTS
cumulus@<ipaddr>'s password:
System information as of Thu Dec 3 21:35:59 UTC 2020
System load: 0.07 Processes: 121
Usage of /: 8.1% of 61.86GB Users logged in: 0
Memory usage: 5% IP address for eth0: <ipaddr>
Swap usage: 0%
Last login: Thu Dec 3 21:35:43 2020 from <local-ipaddr>
cumulus@ubuntu:~$
Verify the master node is ready for installation. Fix any errors indicated before installing the NetQ software.
cumulus@hostname:~$ sudo opta-check
Change the hostname for the VM from the default value.
The default hostname for the NetQ Virtual Machines is ubuntu. Change the hostname to fit your naming conventions while meeting Internet and Kubernetes naming standards.
Kubernetes requires that hostnames are composed of a sequence of labels concatenated with dots. For example, “en.wikipedia.org” is a hostname. Each label must be from 1 to 63 characters long. The entire hostname, including the delimiting dots, has a maximum of 253 ASCII characters.
The Internet standards (RFCs) for protocols specify that labels may contain only the ASCII letters a through z (in lower case), the digits 0 through 9, and the hyphen-minus character ('-').
Add the same NEW_HOSTNAME value to /etc/hosts on your VM for the localhost entry. Example:
127.0.0.1 localhost NEW_HOSTNAME
Verify that your first worker node meets the VM requirements, as described in step 1.
Confirm that the required ports are open for communications, as described in step 2.
Open your hypervisor and set up the VM in the same manner as the master node.
Make a note of the private IP address you assign to the worker node. You need it for later installation steps.
Verify the worker node is ready for installation. Fix any errors indicated before installing the NetQ software.
cumulus@hostname:~$ sudo opta-check-cloud
Repeat steps 8 through 11 for each additional worker node in your cluster.
Install and activate the NetQ software using the CLI:
Run the following command on your master node to initialize the cluster. Copy the output of the command to use on your worker nodes:
cumulus@<hostname>:~$ netq install cluster master-init
Please run the following command on all worker nodes:
netq install cluster worker-init c3NoLXJzYSBBQUFBQjNOemFDMXljMkVBQUFBREFRQUJBQUFCQVFDM2NjTTZPdVVUWWJ5c2Q3NlJ4SHdseHBsOHQ4N2VMRWVGR05LSWFWVnVNcy94OEE4RFNMQVhKOHVKRjVLUXBnVjdKM2lnMGJpL2hDMVhmSVVjU3l3ZmhvVDVZM3dQN1oySVZVT29ZTi8vR1lOek5nVlNocWZQMDNDRW0xNnNmSzVvUWRQTzQzRFhxQ3NjbndIT3dwZmhRYy9MWTU1a
Run the netq install cluster worker-init <ssh-key> command on each of your worker nodes.
Run the following commands on your master node, using the IP addresses of your worker nodes and the HA cluster virtual IP address (VIP):
The HA cluster virtual IP must be:
An unused IP address allocated from the same subnet assigned to the default interface for your master and worker nodes. The default interface is the interface used in the netq install command.
A different IP address than the primary IP assigned to the default interface.
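A rough sketch of this install command; the keywords for supplying the worker addresses and the virtual IP (shown here as workers and cluster-vip) are assumptions, so confirm them against the NetQ CLI help before running:
cumulus@<hostname>:~$ netq install cluster full interface eth0 bundle /mnt/installables/NetQ-4.12.0.tgz workers <worker-1-ip> <worker-2-ip> cluster-vip <vip-address>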
NetQ uses the 10.244.0.0/16 (pod-ip-range) and 10.96.0.0/16 (service-ip-range) networks for internal communication by default. If you are using these networks, you must override each range by specifying new subnets for these parameters in the install command.
If you change the server IP address or hostname after installing NetQ, you must reset the server with the netq bootstrap reset keep-db command and rerun the install command.
If this step fails for any reason, you can run netq bootstrap reset and then try again.
Verify Installation Status
To view the status of the installation, use the netq show status [verbose] command. The following example shows a successful on-premises installation:
State: Active
NetQ Live State: Active
Installation Status: FINISHED
Version: 4.12.0
Installer Version: 4.12.0
Installation Type: Cluster
Activation Key: EhVuZXRxLWVuZHBvaW50LWdhdGV3YXkYsagDIixPSUJCOHBPWUFnWXI2dGlGY2hTRzExR2E5aSt6ZnpjOUvpVVTaDdpZEhFPQ==
Master SSH Public Key: c3NoLXJzYSBBQUFBQjNOemFDMXljMkVBQUFBREFRQUJBQUFCZ1FDNW9iVXB6RkczNkRC
Is Cloud: False
Kubernetes Cluster Nodes Status:
IP Address Hostname Role NodeStatus Virtual IP
------------ ----------- ------ ------------ ------------
10.213.7.52 10.213.7.52 Worker Ready 10.213.7.53
10.213.7.51 10.213.7.51 Worker Ready 10.213.7.53
10.213.7.49 10.213.7.49 Master Ready 10.213.7.53
In Summary, Live state of the NetQ is... Active
Run the netq show opta-health command to verify all applications are operating properly. Allow 10-15 minutes for all applications to come up and report their status.
If any of the applications or services display Status as DOWN after 30 minutes, open a support ticket and attach the output of the opta-support command.
After NetQ is installed, you can log in to NetQ from your browser.
Set Up Your Virtual Machine for a Cloud HA Server Cluster
Follow these steps to set up and configure your VM on a cluster of servers in a cloud deployment. First configure the VM on the master node, and then configure the VM on each worker node. NVIDIA recommends installing the virtual machines on different physical servers to increase redundancy in the event of a hardware failure.
System Requirements
Verify that each node in your cluster—the master node and two worker nodes—meets the VM requirements.
Resource                  Minimum Requirements
Processor                 4 virtual CPUs
Memory                    8 GB RAM
Local disk storage        64 GB
Network interface speed   1 Gb NIC
Hypervisor                KVM/QCOW (QEMU Copy on Write) image for servers running Ubuntu; VMware ESXi™ 6.5 or later (OVA image) for servers running Cumulus Linux or Ubuntu
Port Requirements
Confirm that the required ports are open for communications. The OPTA must be able to initiate HTTPS connections (destination TCP port 443) to the netq.nvidia.com domain (*.netq.nvidia.com). You must also open the following ports on your NetQ OPTA:
Port or Protocol Number   Protocol      Component Access
4                         IP Protocol   Calico networking (IP-in-IP Protocol)
22                        TCP           SSH
80                        TCP           nginx
179                       TCP           Calico networking (BGP)
443                       TCP           NetQ UI
2379                      TCP           etcd datastore
4789                      UDP           Calico networking (VxLAN)
5000                      TCP           Docker registry
6443                      TCP           kube-apiserver
30001                     TCP           DPU communication
31980                     TCP           NetQ Agent communication
31982                     TCP           NetQ Agent SSL communication
32708                     TCP           API Gateway
The following ports are used for internal cluster communication and must also be open between servers in your cluster:
Port or Protocol Number   Protocol   Component Access
8080                      TCP        Admin API
5000                      TCP        Docker registry
6443                      TCP        Kubernetes API server
10250                     TCP        kubelet health probe
2379                      TCP        etcd
2380                      TCP        etcd
36443                     TCP        Kubernetes control plane
Installation and Configuration
Download the NetQ image.
a. Log in to your NVIDIA Application Hub account.
b. Select NVIDIA Licensing Portal.
c. Select Software Downloads from the menu.
d. Click Product Family and select NetQ.
e. For deployments using KVM, download the NetQ SW 4.12 KVM Cloud image. For deployments using VMware, download the NetQ SW 4.12 VMware Cloud image.
f. If prompted, read the license agreement and proceed with the download.
Copy the QCOW2 image to a directory where you want to run it.
Tip: Copy, rather than move, the downloaded QCOW2 image so that you do not have to download it again if you need to repeat this process.
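A representative virt-install invocation for the directly attached case is sketched below; the disk path is a placeholder, and the sizing follows the cloud VM requirements above:
$ virt-install --name netq_ts --memory 8192 --vcpus 4 \
    --os-variant generic --import --noautoconsole \
    --disk path=<qcow2-image-path>,format=qcow2,bus=virtio \
    --network type=direct,source=eth0,model=virtio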
Replace the disk path value with the location where the QCOW2 image is to reside. Replace the network model value (eth0 in the above example) with the name of the interface where the VM is connected to the external network.
Or, for a bridged VM, where the VM attaches to a bridge that has already been set up to allow for external access:
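For the bridged case, the same sketch with the network option swapped; br0 is a placeholder bridge name:
$ virt-install --name netq_ts --memory 8192 --vcpus 4 \
    --os-variant generic --import --noautoconsole \
    --disk path=<qcow2-image-path>,format=qcow2,bus=virtio \
    --network bridge=br0,model=virtio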
Replace the network bridge value (br0 in the above example) with the name of the (pre-existing) bridge interface where the VM is connected to the external network.
Make note of the name used during install as this is needed in a later step.
Watch the boot process in another terminal window.
$ virsh console netq_ts
VMware Example Configuration
This example shows the VM setup process using an OVA file with VMware ESXi.
Enter the address of the hardware in your browser.
Log in to VMware using credentials with root access.
Click Storage in the Navigator to verify you have an SSD installed.
Click Create/Register VM at the top of the right pane.
Select Deploy a virtual machine from an OVF or OVA file, and click Next.
Provide a name for the VM, for example NetQ.
Tip: Make note of the name used during install as this is needed in a later step.
Drag and drop the NetQ Platform image file you downloaded in Step 1 above.
Click Next.
Select the storage type and data store for the image to use, then click Next. In this example, only one is available.
Accept the default deployment options or modify them according to your network needs. Click Next when you are finished.
Review the configuration summary. Click Back to change any of the settings, or click Finish to continue with the creation of the VM.
The progress of the request is shown in the Recent Tasks window at the bottom of the application. This may take some time, so continue with your other work until the upload finishes.
Once completed, view the full details of the VM and hardware.
Log in to the VM and change the password.
Use the default credentials to log in the first time:
Username: cumulus
Password: cumulus
$ ssh cumulus@<ipaddr>
Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
Ubuntu 20.04 LTS
cumulus@<ipaddr>'s password:
You are required to change your password immediately (root enforced)
System information as of Thu Dec 3 21:35:42 UTC 2020
System load: 0.09 Processes: 120
Usage of /: 8.1% of 61.86GB Users logged in: 0
Memory usage: 5% IP address for eth0: <ipaddr>
Swap usage: 0%
WARNING: Your password has expired.
You must change your password now and login again!
Changing password for cumulus.
(current) UNIX password: cumulus
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Connection to <ipaddr> closed.
Log in again with your new password.
$ ssh cumulus@<ipaddr>
Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
Ubuntu 20.04 LTS
cumulus@<ipaddr>'s password:
System information as of Thu Dec 3 21:35:59 UTC 2020
System load: 0.07 Processes: 121
Usage of /: 8.1% of 61.86GB Users logged in: 0
Memory usage: 5% IP address for eth0: <ipaddr>
Swap usage: 0%
Last login: Thu Dec 3 21:35:43 2020 from <local-ipaddr>
cumulus@ubuntu:~$
Verify that the master node is ready for installation. Fix any errors before installing the NetQ software.
cumulus@hostname:~$ sudo opta-check-cloud
Change the hostname for the VM from the default value.
The default hostname for the NetQ virtual machines is ubuntu. Change the hostname to fit your naming conventions while meeting Internet and Kubernetes naming standards.
Kubernetes requires hostnames to be composed of a sequence of labels concatenated with dots. For example, “en.wikipedia.org” is a hostname. Each label must be from 1 to 63 characters long. The entire hostname, including the delimiting dots, has a maximum of 253 ASCII characters.
The Internet standards (RFCs) for protocols specify that labels may contain only the ASCII letters a through z (in lower case), the digits 0 through 9, and the hyphen-minus character ('-').
Add the same NEW_HOSTNAME value to /etc/hosts on your VM for the localhost entry. For example:
127.0.0.1 localhost NEW_HOSTNAME
Open your hypervisor and set up the VM in the same manner as the master node.
Make a note of the private IP address you assign to the worker node. You will need it at a later point in the installation process.
Verify that the worker node is ready for installation. Fix any errors before installing the NetQ software.
cumulus@hostname:~$ sudo opta-check-cloud
Repeat steps 6 and 7 for each additional worker node in your cluster.
Install and activate the NetQ software using the CLI. Run the following command on your master node to initialize the cluster. Copy the output of the command to use on your worker nodes:
cumulus@<hostname>:~$ netq install cluster master-init
Please run the following command on all worker nodes:
netq install cluster worker-init c3NoLXJzYSBBQUFBQjNOemFDMXljMkVBQUFBREFRQUJBQUFCQVFDM2NjTTZPdVVUWWJ
Run the netq install cluster worker-init <ssh-key> command on each of your worker nodes.
Run the following command on your NetQ cloud appliance with the config-key obtained from the email you received from NVIDIA titled NetQ Access Link. You can also obtain the configuration key through the NetQ UI. Use the IP addresses of your worker nodes and the HA cluster virtual IP address (VIP).
The HA cluster virtual IP must be:
An unused IP address allocated from the same subnet assigned to the default interface for your master and worker nodes. The default interface is the interface used in the netq install command.
A different IP address than the primary IP assigned to the default interface.
NetQ uses the 10.244.0.0/16 (pod-ip-range) and 10.96.0.0/16 (service-ip-range) networks for internal communication by default. If you are using these networks, you must override each range by specifying new subnets for these parameters in the install command.
If you change the server IP address or hostname after installing NetQ, you must reset the server with the netq bootstrap reset keep-db command and rerun the install command.
If this step fails for any reason, run netq bootstrap reset and then try again.
Verify Installation Status
To view the status of the installation, use the netq show status [verbose] command. The following example shows a successful installation:
State: Active
NetQ Live State: Active
Installation Status: FINISHED
Version: 4.12.0
Installer Version: 4.12.0
Installation Type: Cluster
Activation Key: EhVuZXRxLWVuZHBvaW50LWdhdGV3YXkYsagDIixPSUJCOHBPWUFnWXI2dGlGY2hTRzExR2E5aSt6ZnpjOUvpVVTaDdpZEhFPQ==
Master SSH Public Key: c3NoLXJzYSBBQUFBQjNOemFDMXljMkVBQUFBREFRQUJBQUFCZ1FDNW9iVXB6RkczNkRC
Is Cloud: True
Kubernetes Cluster Nodes Status:
IP Address Hostname Role NodeStatus Virtual IP
------------ ----------- ------ ------------ ------------
10.213.7.52 10.213.7.52 Worker Ready 10.213.7.53
10.213.7.51 10.213.7.51 Worker Ready 10.213.7.53
10.213.7.49 10.213.7.49 Master Ready 10.213.7.53
In Summary, Live state of the NetQ is... Active
Run the netq show opta-health command to verify that all applications are operating properly. Allow at least 15 minutes for all applications to come up and report their status.
If any of the applications or services display a DOWN status after 30 minutes, open a support ticket and attach the output of the opta-support command.
After NetQ is installed, you can log in to NetQ from your browser.
Set Up Your VMware Virtual Machine for a Cloud HA Server Cluster
First configure the VM on the master node, and then configure the VM on each worker node.
Follow these steps to set up and configure your VM on a cluster of servers in a cloud deployment:
Verify that each node in your cluster—the master node and two worker nodes—meets the VM requirements.
Resource                  Minimum Requirements
Processor                 Four (4) virtual CPUs
Memory                    8 GB RAM
Local disk storage        64 GB
Network interface speed   1 Gb NIC
Hypervisor                VMware ESXi™ 6.5 or later (OVA image) for servers running Cumulus Linux, CentOS, Ubuntu, and RedHat operating systems
Confirm that the required ports are open for communications.
The OPTA must be able to initiate HTTPS connections (destination TCP port 443) to the netq.nvidia.com domain (*.netq.nvidia.com). You must also open the following ports on your NetQ OPTA:
Port or Protocol Number   Protocol      Component Access
4                         IP Protocol   Calico networking (IP-in-IP Protocol)
22                        TCP           SSH
80                        TCP           Nginx
179                       TCP           Calico networking (BGP)
443                       TCP           Nginx
2379                      TCP           etcd datastore
4789                      UDP           Calico networking (VxLAN)
5000                      TCP           Docker registry
6443                      TCP           kube-apiserver
31980                     TCP           NetQ Agent communication
31982                     TCP           NetQ Agent SSL communication
32708                     TCP           API Gateway
The following ports are used for internal cluster communication and must also be open between servers in your cluster:
VMware Example Configuration
This example shows the VM setup process using an OVA file with VMware ESXi.
Enter the address of the hardware in your browser.
Log in to VMware using credentials with root access.
Click Storage in the Navigator to verify you have an SSD installed.
Click Create/Register VM at the top of the right pane.
Select Deploy a virtual machine from an OVF or OVA file, and click Next.
Provide a name for the VM, for example NetQ.
Tip: Make note of the name used during install as this is needed in a later step.
Drag and drop the NetQ Platform image file you downloaded in Step 1 above.
Click Next.
Select the storage type and data store for the image to use, then click Next. In this example, only one is available.
Accept the default deployment options or modify them according to your network needs. Click Next when you are finished.
Review the configuration summary. Click Back to change any of the settings, or click Finish to continue with the creation of the VM.
The progress of the request is shown in the Recent Tasks window at the bottom of the application. This may take some time, so continue with your other work until the upload finishes.
Once completed, view the full details of the VM and hardware.
Log in to the VM and change the password.
Use the default credentials to log in the first time:
Username: cumulus
Password: cumulus
$ ssh cumulus@<ipaddr>
Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
Ubuntu 20.04 LTS
cumulus@<ipaddr>'s password:
You are required to change your password immediately (root enforced)
System information as of Thu Dec 3 21:35:42 UTC 2020
System load: 0.09 Processes: 120
Usage of /: 8.1% of 61.86GB Users logged in: 0
Memory usage: 5% IP address for eth0: <ipaddr>
Swap usage: 0%
WARNING: Your password has expired.
You must change your password now and login again!
Changing password for cumulus.
(current) UNIX password: cumulus
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Connection to <ipaddr> closed.
Log in again with your new password.
$ ssh cumulus@<ipaddr>
Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
Ubuntu 20.04 LTS
cumulus@<ipaddr>'s password:
System information as of Thu Dec 3 21:35:59 UTC 2020
System load: 0.07 Processes: 121
Usage of /: 8.1% of 61.86GB Users logged in: 0
Memory usage: 5% IP address for eth0: <ipaddr>
Swap usage: 0%
Last login: Thu Dec 3 21:35:43 2020 from <local-ipaddr>
cumulus@ubuntu:~$
Verify the master node is ready for installation. Fix any errors indicated before installing the NetQ software.
cumulus@hostname:~$ sudo opta-check-cloud
Change the hostname for the VM from the default value.
The default hostname for the NetQ Virtual Machines is ubuntu. Change the hostname to fit your naming conventions while meeting Internet and Kubernetes naming standards.
Kubernetes requires that hostnames are composed of a sequence of labels concatenated with dots. For example, “en.wikipedia.org” is a hostname. Each label must be from 1 to 63 characters long. The entire hostname, including the delimiting dots, has a maximum of 253 ASCII characters.
The Internet standards (RFCs) for protocols specify that labels may contain only the ASCII letters a through z (in lower case), the digits 0 through 9, and the hyphen-minus character ('-').
Add the same NEW_HOSTNAME value to /etc/hosts on your VM for the localhost entry. Example:
127.0.0.1 localhost NEW_HOSTNAME
Verify that your first worker node meets the VM requirements, as described in step 1.
Confirm that the required ports are open for communications, as described in step 2.
Open your hypervisor and set up the VM in the same manner as the master node.
Make a note of the private IP address you assign to the worker node. You will need it at a later point in the installation process.
Verify the worker node is ready for installation. Fix any errors indicated before installing the NetQ software.
cumulus@hostname:~$ sudo opta-check-cloud
Repeat steps 8 through 11 for each additional worker node in your cluster.
Install and activate the NetQ software using the CLI:
Run the following command on your master node to initialize the cluster. Copy the output of the command to use on your worker nodes:
cumulus@<hostname>:~$ netq install cluster master-init
Please run the following command on all worker nodes:
netq install cluster worker-init c3NoLXJzYSBBQUFBQjNOemFDMXljMkVBQUFBREFRQUJBQUFCQVFDM2NjTTZPdVVUWWJ5c2Q3NlJ4SHdseHBsOHQ4N2VMRWVGR05LSWFWVnVNcy94OEE4RFNMQVhKOHVKRjVLUXBnVjdKM2lnMGJpL2hDMVhmSVVjU3l3ZmhvVDVZM3dQN1oySVZVT29ZTi8vR1lOek5nVlNocWZQMDNDRW0xNnNmSzVvUWRQTzQzRFhxQ3NjbndIT3dwZmhRYy9MWTU1a
Run the netq install cluster worker-init <ssh-key> command on each of your worker nodes.
Run the following command on your NetQ cloud appliance with the config-key obtained from the email you received from NVIDIA titled NetQ Access Link. You can also obtain the configuration key through the NetQ UI. Use the IP addresses of your worker nodes and the HA cluster virtual IP address (VIP).
The HA cluster virtual IP must be:
An unused IP address allocated from the same subnet assigned to the default interface for your master and worker nodes. The default interface is the interface used in the netq install command.
A different IP address than the primary IP assigned to the default interface.
NetQ uses the 10.244.0.0/16 (pod-ip-range) and 10.96.0.0/16 (service-ip-range) networks for internal communication by default. If you are using these networks, you must override each range by specifying new subnets for these parameters in the install command.
If you change the server IP address or hostname after installing NetQ, you must reset the server with the netq bootstrap reset keep-db command and rerun the install command.
If this step fails for any reason, you can run netq bootstrap reset and then try again.
Verify Installation Status
To view the status of the installation, use the netq show status [verbose] command. The following example shows a successful installation:
State: Active
NetQ Live State: Active
Installation Status: FINISHED
Version: 4.12.0
Installer Version: 4.12.0
Installation Type: Cluster
Activation Key: EhVuZXRxLWVuZHBvaW50LWdhdGV3YXkYsagDIixPSUJCOHBPWUFnWXI2dGlGY2hTRzExR2E5aSt6ZnpjOUvpVVTaDdpZEhFPQ==
Master SSH Public Key: c3NoLXJzYSBBQUFBQjNOemFDMXljMkVBQUFBREFRQUJBQUFCZ1FDNW9iVXB6RkczNkRC
Is Cloud: False
Kubernetes Cluster Nodes Status:
IP Address Hostname Role NodeStatus Virtual IP
------------ ----------- ------ ------------ ------------
10.213.7.52 10.213.7.52 Worker Ready 10.213.7.53
10.213.7.51 10.213.7.51 Worker Ready 10.213.7.53
10.213.7.49 10.213.7.49 Master Ready 10.213.7.53
In Summary, Live state of the NetQ is... Active
Run the netq show opta-health command to verify all applications are operating properly. Allow 10-15 minutes for all applications to come up and report their status.
If any of the applications or services display Status as DOWN after 30 minutes, open a support ticket and attach the output of the opta-support command.
After NetQ is installed, you can log in to NetQ from your browser.
Set Up Your KVM Virtual Machine for a Single On-premises Server
Follow these steps to set up and configure your VM on a single server in an on-premises deployment:
Verify that your system meets the VM requirements.
Resource                  Minimum Requirements
Processor                 Sixteen (16) virtual CPUs
Memory                    64 GB RAM
Local disk storage        500 GB SSD with minimum disk IOPS of 1000 for a standard 4kb block size (Note: This must be an SSD; use of other storage options can lead to system instability and is not supported.)
Network interface speed   1 Gb NIC
Hypervisor                KVM/QCOW (QEMU Copy on Write) image for servers running CentOS, Ubuntu, and RedHat operating systems
Confirm that the required ports are open for communications.
You must open the following ports on your NetQ on-premises server:
Copy the QCOW2 image to a directory where you want to run it.
Tip: Copy, rather than move, the downloaded QCOW2 image so that you do not have to download it again if you need to repeat this process.
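A representative virt-install invocation for the directly attached case is sketched below; the disk path is a placeholder, and the sizing follows the on-premises VM requirements above:
$ virt-install --name netq_ts --memory 65536 --vcpus 16 \
    --os-variant generic --import --noautoconsole \
    --disk path=<qcow2-image-path>,format=qcow2,bus=virtio \
    --network type=direct,source=eth0,model=virtio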
Replace the disk path value with the location where the QCOW2 image is to reside. Replace the network model value (eth0 in the above example) with the name of the interface where the VM is connected to the external network.
Or, for a bridged VM, where the VM attaches to a bridge that has already been set up to allow for external access:
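For the bridged case, the same sketch with the network option swapped; br0 is a placeholder bridge name:
$ virt-install --name netq_ts --memory 65536 --vcpus 16 \
    --os-variant generic --import --noautoconsole \
    --disk path=<qcow2-image-path>,format=qcow2,bus=virtio \
    --network bridge=br0,model=virtio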
Replace the network bridge value (br0 in the above example) with the name of the (pre-existing) bridge interface where the VM is connected to the external network.
Make note of the name used during install as this is needed in a later step.
Watch the boot process in another terminal window.
$ virsh console netq_ts
Log in to the VM and change the password.
Use the default credentials to log in the first time:
Username: cumulus
Password: cumulus
$ ssh cumulus@<ipaddr>
Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
Ubuntu 20.04 LTS
cumulus@<ipaddr>'s password:
You are required to change your password immediately (root enforced)
System information as of Thu Dec 3 21:35:42 UTC 2020
System load: 0.09 Processes: 120
Usage of /: 8.1% of 61.86GB Users logged in: 0
Memory usage: 5% IP address for eth0: <ipaddr>
Swap usage: 0%
WARNING: Your password has expired.
You must change your password now and login again!
Changing password for cumulus.
(current) UNIX password: cumulus
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Connection to <ipaddr> closed.
Log in again with your new password.
$ ssh cumulus@<ipaddr>
Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
Ubuntu 20.04 LTS
cumulus@<ipaddr>'s password:
System information as of Thu Dec 3 21:35:59 UTC 2020
System load: 0.07 Processes: 121
Usage of /: 8.1% of 61.86GB Users logged in: 0
Memory usage: 5% IP address for eth0: <ipaddr>
Swap usage: 0%
Last login: Thu Dec 3 21:35:43 2020 from <local-ipaddr>
cumulus@ubuntu:~$
Verify the platform is ready for installation. Fix any errors indicated before installing the NetQ software.
cumulus@hostname:~$ sudo opta-check
Change the hostname for the VM from the default value.
The default hostname for the NetQ Virtual Machines is ubuntu. Change the hostname to fit your naming conventions while meeting Internet and Kubernetes naming standards.
Kubernetes requires that hostnames are composed of a sequence of labels concatenated with dots. For example, “en.wikipedia.org” is a hostname. Each label must be from 1 to 63 characters long. The entire hostname, including the delimiting dots, has a maximum of 253 ASCII characters.
The Internet standards (RFCs) for protocols specify that labels may contain only the ASCII letters a through z (in lower case), the digits 0 through 9, and the hyphen-minus character ('-').
Add the same NEW_HOSTNAME value to /etc/hosts on your VM for the localhost entry. Example:
127.0.0.1 localhost NEW_HOSTNAME
Run the install command on your NetQ server:
cumulus@hostname:~$ netq install standalone full interface eth0 bundle /mnt/installables/NetQ-4.12.0.tgz
NetQ uses the 10.244.0.0/16 (pod-ip-range) and 10.96.0.0/16 (service-ip-range) networks for internal communication by default. If you are using these networks, you must override each range by specifying new subnets for these parameters in the install command.
You can specify the IP address of the server instead of the interface name using the ip-addr <ip-address> argument:
cumulus@hostname:~$ netq install standalone full ip-addr <ip-address> bundle /mnt/installables/NetQ-4.12.0.tgz
If you change the server IP address or hostname after installing NetQ, you must reset the server with the netq bootstrap reset keep-db command and rerun the install command.
If this step fails for any reason, you can run netq bootstrap reset and then try again.
Verify Installation Status
To view the status of the installation, use the netq show status [verbose] command. The following example shows a successful on-premises installation:
State: Active
Version: 4.12.0
Installer Version: 4.12.0
Installation Type: Standalone
Activation Key: PKrgipMGEhVuZXRxLWVuZHBvaW50LWdhdGV3YXkYsagDIixUQmFLTUhzZU80RUdTL3pOT01uQ2lnRnrrUhTbXNPUGRXdnUwTVo5SEpBPTIHZGVmYXVsdDoHbmV0cWRldgz=
Master SSH Public Key: a3NoLXJzYSBBQUFBQjNOemFDMXljMkVBQUFBREFRQUJBQUFCQVFEazliekZDblJUajkvQVhOZ0hteXByTzZIb3Y2cVZBWFdsNVNtKzVrTXo3dmMrcFNZTGlOdWl1bEhZeUZZVDhSNmU3bFdqS3NrSE10bzArNFJsQVd6cnRvbVVzLzlLMzQ4M3pUMjVZQXpIU2N1ZVhBSE1TdTZHZ0JyUkpXYUpTNjJ2RTkzcHBDVjBxWWJvUFo3aGpCY3ozb0VVWnRsU1lqQlZVdjhsVjBNN3JEWW52TXNGSURWLzJ2eks3K0x2N01XTG5aT054S09hdWZKZnVOT0R4YjFLbk1mN0JWK3hURUpLWW1mbTY1ckoyS1ArOEtFUllrr5TkF3bFVRTUdmT3daVHF2RWNoZnpQajMwQ29CWDZZMzVST2hDNmhVVnN5OEkwdjVSV0tCbktrWk81MWlMSDAyZUpJbXJHUGdQa2s1SzhJdGRrQXZISVlTZ0RwRlpRb3Igcm9vdEBucXRzLTEwLTE4OC00NC0xNDc=
Is Cloud: False
Standalone Status:
IP Address Hostname Role Status
------------- ------------- ------ --------
10.188.44.147 10.188.44.147 Role Ready
NetQ... Active
Run the netq show opta-health command to verify all applications are operating properly. Allow 10-15 minutes for all applications to come up and report their status.
If any of the applications or services display Status as DOWN after 30 minutes, open a support ticket and attach the output of the opta-support command.
After NetQ is installed, you can log in to NetQ from your browser.
Set Up Your Virtual Machine for a Single On-premises Server
Follow these steps to set up and configure your VM on a single server in an on-premises deployment.
System Requirements
Verify that your system meets the VM requirements.
Resource                   Minimum Requirements
Processor                  16 virtual CPUs
Memory                     64 GB RAM
Local disk storage         500 GB SSD with minimum disk IOPS of 1000 for a standard 4kb block size (Note: This must be an SSD; other storage options can lead to system instability and are not supported.)
Network interface speed    1 Gb NIC
Hypervisor                 KVM/QCOW (QEMU Copy on Write) image for servers running Ubuntu; VMware ESXi™ 6.5 or later (OVA image) for servers running Cumulus Linux or Ubuntu
Port Requirements
Confirm that the required ports are open for communications.
Port or Protocol Number    Protocol       Component Access
4                          IP Protocol    Calico networking (IP-in-IP Protocol)
22                         TCP            SSH
80                         TCP            nginx
179                        TCP            Calico networking (BGP)
443                        TCP            NetQ UI
2379                       TCP            etcd datastore
4789                       UDP            Calico networking (VxLAN)
5000                       TCP            Docker registry
6443                       TCP            kube-apiserver
30001                      TCP            DPU communication
31980                      TCP            NetQ Agent communication
31982                      TCP            NetQ Agent SSL communication
32708                      TCP            API Gateway
Installation and Configuration
Download the NetQ image.
a. Log in to your NVIDIA Application Hub account.
b. Select NVIDIA Licensing Portal.
c. Select Software Downloads from the menu.
d. Click Product Family and select NetQ.
e. For deployments using KVM, download the NetQ SW 4.12 KVM image. For deployments using VMware, download the NetQ SW 4.12 VMware image.
f. If prompted, read the license agreement and proceed with the download.
Copy the QCOW2 image to a directory where you want to run it.
Tip: Copy the original QCOW2 image rather than moving it, so you do not have to download it again if you need to repeat this process.
Replace the disk path value with the location where the QCOW2 image resides. Replace the network model value (eth0 in the example) with the name of the interface that connects the VM to the external network.
Or, for a bridged VM, where the VM attaches to a bridge that has already been set up to allow external access:
Replace the network bridge value (br0 in the example) with the name of the existing bridge interface that connects the VM to the external network.
Make a note of the name used during installation; you will need it in a later step.
Watch the boot process in another terminal window.
$ virsh console netq_ts
VMware Example Configuration
This example shows the VM setup process using an OVA file with VMware ESXi.
Enter the address of the hardware in your browser.
Log in to VMware using credentials with root access.
Click Storage in the Navigator to verify you have an SSD installed.
Click Create/Register VM at the top of the right pane.
Select Deploy a virtual machine from an OVF or OVA file, and click Next.
Provide a name for the VM, for example NetQ.
Tip: Make note of the name used during install as this is needed in a later step.
Drag and drop the NetQ Platform image file you downloaded in Step 1 above.
Click Next.
Select the storage type and data store for the image to use, then click Next. In this example, only one is available.
Accept the default deployment options or modify them according to your network needs. Click Next when you are finished.
Review the configuration summary. Click Back to change any of the settings, or click Finish to continue with the creation of the VM.
The progress of the request is shown in the Recent Tasks window at the bottom of the application. This may take some time, so continue with your other work until the upload finishes.
Once completed, view the full details of the VM and hardware.
Log in to the VM and change the password.
Use the default credentials to log in the first time:
Username: cumulus
Password: cumulus
$ ssh cumulus@<ipaddr>
Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
Ubuntu 20.04 LTS
cumulus@<ipaddr>'s password:
You are required to change your password immediately (root enforced)
System information as of Thu Dec 3 21:35:42 UTC 2020
System load: 0.09 Processes: 120
Usage of /: 8.1% of 61.86GB Users logged in: 0
Memory usage: 5% IP address for eth0: <ipaddr>
Swap usage: 0%
WARNING: Your password has expired.
You must change your password now and login again!
Changing password for cumulus.
(current) UNIX password: cumulus
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Connection to <ipaddr> closed.
Log in again with your new password.
$ ssh cumulus@<ipaddr>
Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
Ubuntu 20.04 LTS
cumulus@<ipaddr>'s password:
System information as of Thu Dec 3 21:35:59 UTC 2020
System load: 0.07 Processes: 121
Usage of /: 8.1% of 61.86GB Users logged in: 0
Memory usage: 5% IP address for eth0: <ipaddr>
Swap usage: 0%
Last login: Thu Dec 3 21:35:43 2020 from <local-ipaddr>
cumulus@ubuntu:~$
Verify that the platform is ready for installation. Fix any errors before installing the NetQ software.
cumulus@hostname:~$ sudo opta-check
Change the hostname for the VM from the default value.
The default hostname for the NetQ virtual machines is ubuntu. Change the hostname to fit your naming conventions while meeting Internet and Kubernetes naming standards.
Kubernetes requires hostnames to be composed of a sequence of labels concatenated with dots. For example, “en.wikipedia.org” is a hostname. Each label must be from 1 to 63 characters long. The entire hostname, including the delimiting dots, has a maximum of 253 ASCII characters.
The Internet standards (RFCs) for protocols specify that labels may contain only the ASCII letters a through z (in lower case), the digits 0 through 9, and the hyphen-minus character ('-').
Add the same NEW_HOSTNAME value to /etc/hosts on your VM for the localhost entry. For example:
127.0.0.1 localhost NEW_HOSTNAME
Run the installation command on your NetQ server:
cumulus@hostname:~$ netq install standalone full interface eth0 bundle /mnt/installables/NetQ-4.12.0.tgz
NetQ uses the 10.244.0.0/16 (pod-ip-range) and 10.96.0.0/16 (service-ip-range) networks for internal communication by default. If you are using these networks, you must override each range by specifying new subnets for these parameters in the install command:
You can specify the IP address of the server instead of the interface name using the ip-addr <ip-address> argument:
cumulus@hostname:~$ netq install standalone full ip-addr <ip-address> bundle /mnt/installables/NetQ-4.12.0.tgz
If you change the server IP address or hostname after installing NetQ, you must reset the server with the netq bootstrap reset keep-db command and rerun the install command.
If this step fails for any reason, run netq bootstrap reset and then try again.
Verify Installation Status
To view the status of the installation, use the netq show status [verbose] command. The following example shows a successful on-premises installation:
State: Active
Version: 4.12.0
Installer Version: 4.12.0
Installation Type: Standalone
Activation Key: PKrgipMGEhVuZXRxLWVuZHBvaW50LWdhdGV3YXkYsagDIixUQmFLTUhzZU80RUdTL3pOT01uQ2lnRnrrUhTbXNPUGRXdnUwTVo5SEpBPTIHZGVmYXVsdDoHbmV0cWRldgz=
Master SSH Public Key: a3NoLXJzYSBBQUFNOemFDMXljMkVBQUFBREFRQUJBQUFCQVFEazliekZDblJUajkvQVhOZ0hteXByTzZIb3Y2cVZBWFdsNVNtKzVrTXo3dmMrcFNZTGlOdWl1bEhZeUZZVDhSNmU3bFdqS3NrSE10bzArNFJsQVd6cnRvbVVzLzlLMzQ4M3pUMjVZQXpIU2N1ZVhBSE1TdZ0JyUkpXYUpTNjJ2RTkzcHBDVjBxWWJvUFo3aGpCY3ozb0VVWnRsU1lqQlZVdjhsVjBNN3JEWW52TXNGSURWLzJ2eks3K0x2N01XTG5aT054S09hdWZKZnVOT0R4YjFLbk1mN0JWK3hURUpLWW1mbTY1ckoyS1ArOEtFUllrr5TkF3bFVRTUdmT3daVNoZnpQajMwQ29CWDZZMzVST2hDNmhVVnN5OEkwdjVSV0tCbktrWk81MWlMSDAyZUpJbXJHUGdQa2s1SzhJdGRrQXZISVlTZ0RwRlpRb3Igcm9vdEBucXRzLTEwLTE4OC00NC0xNDc=
Is Cloud: False
Standalone Status:
IP Address Hostname Role Status
------------- ------------- ------ --------
10.188.44.147 10.188.44.147 Role Ready
NetQ... Active
Run the netq show opta-health command to verify that all applications are operating properly. Allow at least 15 minutes for all applications to come up and report their status.
If any of the applications or services display a DOWN status after 30 minutes, open a support ticket and attach the output of the opta-support command.
After NetQ is installed, you can log in to NetQ from your browser.
Set Up Your KVM Virtual Machine for a Single Cloud Server
Follow these steps to set up and configure your VM on a single server in a cloud deployment:
Verify that your system meets the VM requirements.
Resource                   Minimum Requirements
Processor                  4 virtual CPUs
Memory                     8 GB RAM
Local disk storage         64 GB
Network interface speed    1 Gb NIC
Hypervisor                 KVM/QCOW (QEMU Copy on Write) image for servers running CentOS, Ubuntu, and RedHat operating systems
Confirm that the required ports are open for communications.
The OPTA must be able to initiate HTTPS connections (destination TCP port 443) to the netq.nvidia.com domain (*.netq.nvidia.com). You must also open the following ports on your NetQ OPTA:
Copy the QCOW2 image to a directory where you want to run it.
Tip: Copy the original QCOW2 image rather than moving it, so you do not have to download it again if you need to repeat this process.
Replace the disk path value with the location where the QCOW2 image resides. Replace the network model value (eth0 in the example) with the name of the interface that connects the VM to the external network.
Or, for a bridged VM, where the VM attaches to a bridge that has already been set up to allow external access:
Replace the network bridge value (br0 in the example) with the name of the existing bridge interface that connects the VM to the external network.
Make a note of the name used during installation; you will need it in a later step.
Watch the boot process in another terminal window.
$ virsh console netq_ts
Log in to the VM and change the password.
Use the default credentials to log in the first time:
Username: cumulus
Password: cumulus
$ ssh cumulus@<ipaddr>
Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
Ubuntu 20.04 LTS
cumulus@<ipaddr>'s password:
You are required to change your password immediately (root enforced)
System information as of Thu Dec 3 21:35:42 UTC 2020
System load: 0.09 Processes: 120
Usage of /: 8.1% of 61.86GB Users logged in: 0
Memory usage: 5% IP address for eth0: <ipaddr>
Swap usage: 0%
WARNING: Your password has expired.
You must change your password now and login again!
Changing password for cumulus.
(current) UNIX password: cumulus
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Connection to <ipaddr> closed.
Log in again with your new password.
$ ssh cumulus@<ipaddr>
Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
Ubuntu 20.04 LTS
cumulus@<ipaddr>'s password:
System information as of Thu Dec 3 21:35:59 UTC 2020
System load: 0.07 Processes: 121
Usage of /: 8.1% of 61.86GB Users logged in: 0
Memory usage: 5% IP address for eth0: <ipaddr>
Swap usage: 0%
Last login: Thu Dec 3 21:35:43 2020 from <local-ipaddr>
cumulus@ubuntu:~$
Verify that the platform is ready for installation. Fix any errors before installing the NetQ software.
cumulus@hostname:~$ sudo opta-check-cloud
Change the hostname for the VM from the default value.
The default hostname for the NetQ virtual machines is ubuntu. Change the hostname to fit your naming conventions while meeting Internet and Kubernetes naming standards.
Kubernetes requires hostnames to be composed of a sequence of labels concatenated with dots. For example, “en.wikipedia.org” is a hostname. Each label must be from 1 to 63 characters long. The entire hostname, including the delimiting dots, has a maximum of 253 ASCII characters.
The Internet standards (RFCs) for protocols specify that labels may contain only the ASCII letters a through z (in lower case), the digits 0 through 9, and the hyphen-minus character ('-').
Add the same NEW_HOSTNAME value to /etc/hosts on your VM for the localhost entry. Example:
127.0.0.1 localhost NEW_HOSTNAME
Install and activate the NetQ software using the CLI:
Run the following command on your NetQ cloud appliance with the config-key obtained from the email you received from NVIDIA titled NetQ Access Link. You can also obtain the configuration key through the NetQ UI.
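The command itself is not reproduced in this excerpt. As a hedged sketch only (the opta keyword and argument order are assumptions to verify against the NetQ CLI reference), the activation step generally takes the form:
cumulus@hostname:~$ netq install opta standalone full interface eth0 config-key <config-key>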
NetQ uses the 10.244.0.0/16 (pod-ip-range) and 10.96.0.0/16 (service-ip-range) networks for internal communication by default. If you are using these networks, you must override each range by specifying new subnets for these parameters in the install command:
If you change the server IP address or hostname after installing NetQ, you must reset the server with the netq bootstrap reset keep-db command and rerun the install command.
If this step fails for any reason, you can run netq bootstrap reset and then try again.
Verify Installation Status
To view the status of the installation, use the netq show status [verbose] command. The following example shows a successful installation:
State: Active
Version: 4.12.0
Installer Version: 4.12.0
Installation Type: Standalone
Activation Key: PKrgipMGEhVuZXRxLWVuZHBvaW50LWdhdGV3YXkYsagDIixUQmFLTUhzZU80RUdTL3pOT01uQ2lnRnrrUhTbXNPUGRXdnUwTVo5SEpBPTIHZGVmYXVsdDoHbmV0cWRldgz=
Master SSH Public Key: a3NoLXJzYSBBQUFBQjNOemFDMXljMkVBQUFBREFRQUJBQUFCQVFEazliekZDblJUajkvQVhOZ0hteXByTzZIb3Y2cVZBWFdsNVNtKzVrTXo3dmMrcFNZTGlOdWl1bEhZeUZZVDhSNmU3bFdqS3NrSE10bzArNFJsQVd6cnRvbVVzLzlLMzQ4M3pUMjVZQXpIU2N1ZVhBSE1TdTZHZ0JyUkpXYUpTNjJ2RTkzcHBDVjBxWWJvUFo3aGpCY3ozb0VVWnRsU1lqQlZVdjhsVjBNN3JEWW52TXNGSURWLzJ2eks3K0x2N01XTG5aT054S09hdWZKZnVOT0R4YjFLbk1mN0JWK3hURUpLWW1mbTY1ckoyS1ArOEtFUllrr5TkF3bFVRTUdmT3daVHF2RWNoZnpQajMwQ29CWDZZMzVST2hDNmhVVnN5OEkwdjVSV0tCbktrWk81MWlMSDAyZUpJbXJHUGdQa2s1SzhJdGRrQXZISVlTZ0RwRlpRb3Igcm9vdEBucXRzLTEwLTE4OC00NC0xNDc=
Is Cloud: True
Standalone Status:
IP Address Hostname Role Status
------------- ------------- ------ --------
10.188.44.147 10.188.44.147 Role Ready
NetQ... Active
Run the netq show opta-health command to verify all applications are operating properly. Allow 10-15 minutes for all applications to come up and report their status.
If any of the applications or services display Status as DOWN after 30 minutes, open a support ticket and attach the output of the opta-support command.
After NetQ is installed, you can log in to NetQ from your browser.
Set Up Your Virtual Machine for a Single Cloud Server
Follow these steps to set up and configure your VM on a single server in a cloud deployment.
System Requirements
Verify that your system meets the VM requirements.
Resource                   Minimum Requirements
Processor                  4 virtual CPUs
Memory                     8 GB RAM
Local disk storage         64 GB
Network interface speed    1 Gb NIC
Hypervisor                 KVM/QCOW (QEMU Copy on Write) image for servers running Ubuntu; VMware ESXi™ 6.5 or later (OVA image) for servers running Cumulus Linux or Ubuntu
Port Requirements
Confirm that the required ports are open for communications. The OPTA must be able to initiate HTTPS connections (destination TCP port 443) to the netq.nvidia.com domain (*.netq.nvidia.com). You must also open the following ports on your NetQ OPTA:
Port or Protocol Number    Protocol       Component Access
4                          IP Protocol    Calico networking (IP-in-IP Protocol)
22                         TCP            SSH
80                         TCP            nginx
179                        TCP            Calico networking (BGP)
443                        TCP            NetQ UI
2379                       TCP            etcd datastore
4789                       UDP            Calico networking (VxLAN)
5000                       TCP            Docker registry
6443                       TCP            kube-apiserver
30001                      TCP            DPU communication
31980                      TCP            NetQ Agent communication
31982                      TCP            NetQ Agent SSL communication
32708                      TCP            API Gateway
Installation and Configuration
Download the NetQ image.
a. Log in to your NVIDIA Application Hub account.
b. Select NVIDIA Licensing Portal.
c. Select Software Downloads from the menu.
d. Click Product Family and select NetQ.
e. For deployments using KVM, download the NetQ SW 4.12 KVM Cloud image. For deployments using VMware, download the NetQ SW 4.12 VMware Cloud image.
f. If prompted, read the license agreement and proceed with the download.
Copy the QCOW2 image to a directory where you want to run it.
Tip: Copy the original QCOW2 image rather than moving it, so you do not have to download it again if you need to repeat this process.
Replace the disk path value with the location where the QCOW2 image resides. Replace the network model value (eth0 in the example) with the name of the interface that connects the VM to the external network.
Or, for a bridged VM, where the VM attaches to a bridge that has already been set up to allow external access:
Replace the network bridge value (br0 in the example) with the name of the existing bridge interface that connects the VM to the external network.
Make a note of the name used during installation; you will need it in a later step.
Watch the boot process in another terminal window.
$ virsh console netq_ts
VMware Example Configuration
This example shows the VM setup process using an OVA file with VMware ESXi.
Enter the address of the hardware in your browser.
Log in to VMware using credentials with root access.
Click Storage in the Navigator to verify you have an SSD installed.
Click Create/Register VM at the top of the right pane.
Select Deploy a virtual machine from an OVF or OVA file, and click Next.
Provide a name for the VM, for example NetQ.
Tip: Make note of the name used during install as this is needed in a later step.
Drag and drop the NetQ Platform image file you downloaded in Step 1 above.
Click Next.
Select the storage type and data store for the image to use, then click Next. In this example, only one is available.
Accept the default deployment options or modify them according to your network needs. Click Next when you are finished.
Review the configuration summary. Click Back to change any of the settings, or click Finish to continue with the creation of the VM.
The progress of the request is shown in the Recent Tasks window at the bottom of the application. This may take some time, so continue with your other work until the upload finishes.
Once completed, view the full details of the VM and hardware.
Log in to the VM and change the password.
Use the default credentials to log in the first time:
Username: cumulus
Password: cumulus
$ ssh cumulus@<ipaddr>
Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
Ubuntu 20.04 LTS
cumulus@<ipaddr>'s password:
You are required to change your password immediately (root enforced)
System information as of Thu Dec 3 21:35:42 UTC 2020
System load: 0.09 Processes: 120
Usage of /: 8.1% of 61.86GB Users logged in: 0
Memory usage: 5% IP address for eth0: <ipaddr>
Swap usage: 0%
WARNING: Your password has expired.
You must change your password now and login again!
Changing password for cumulus.
(current) UNIX password: cumulus
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Connection to <ipaddr> closed.
Log in again with your new password.
$ ssh cumulus@<ipaddr>
Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
Ubuntu 20.04 LTS
cumulus@<ipaddr>'s password:
System information as of Thu Dec 3 21:35:59 UTC 2020
System load: 0.07 Processes: 121
Usage of /: 8.1% of 61.86GB Users logged in: 0
Memory usage: 5% IP address for eth0: <ipaddr>
Swap usage: 0%
Last login: Thu Dec 3 21:35:43 2020 from <local-ipaddr>
cumulus@ubuntu:~$
Verify that the platform is ready for installation. Fix any errors before installing the NetQ software.
cumulus@hostname:~$ sudo opta-check-cloud
Change the hostname for the VM from the default value.
The default hostname for the NetQ virtual machines is ubuntu. Change the hostname to fit your naming conventions while meeting Internet and Kubernetes naming standards.
Kubernetes requires hostnames to be composed of a sequence of labels concatenated with dots. For example, “en.wikipedia.org” is a hostname. Each label must be from 1 to 63 characters long. The entire hostname, including the delimiting dots, has a maximum of 253 ASCII characters.
The Internet standards (RFCs) for protocols specify that labels may contain only the ASCII letters a through z (in lower case), the digits 0 through 9, and the hyphen-minus character ('-').
Add the same NEW_HOSTNAME value to /etc/hosts on your VM for the localhost entry. For example:
127.0.0.1 localhost NEW_HOSTNAME
Install and activate the NetQ software using the CLI:
Run the following command on your NetQ cloud appliance with the config-key obtained from the email you received from NVIDIA titled NetQ Access Link. You can also obtain the configuration key through the NetQ UI.
NetQ uses the 10.244.0.0/16 (pod-ip-range) and 10.96.0.0/16 (service-ip-range) networks for internal communication by default. If you are using these networks, you must override each range by specifying new subnets for these parameters in the install command:
If you change the server IP address or hostname after installing NetQ, you must reset the server with the netq bootstrap reset keep-db command and rerun the install command.
If this step fails for any reason, run netq bootstrap reset and then try again.
Verify Installation Status
To view the status of the installation, use the netq show status [verbose] command. The following example shows a successful installation:
State: Active
Version: 4.12.0
Installer Version: 4.12.0
Installation Type: Standalone
Activation Key: PKrgipMGEhVuZXRxLWVuZHBvaW50LWdhdGV3YXkYsagDIixUQmFLTUhzZU80RUdTL3pOT01uQ2lnRnrrUhTbXNPUGRXdnUwTVo5SEpBPTIHZGVmYXVsdDoHbmV0cWRldgz=
Master SSH Public Key: a3NoLXJzYSBBQUFNOemFDMXljMkVBQUFBREFRQUJBQUFCQVFEazliekZDblJUajkvOZ0hteXByTzZIb3Y2cVZBWFdsNVNtKzVrTXo3dmMrcFNZTGlOdWl1bEhZeUZZVDhSNmU3bFdqS3NrSE10bzArNFJsQVd6cnRvbVVzLzlLMzQ4M3pUMjVZQXpIU2N1ZVhBSE1TdZ0JyUkpXYUpTNjJ2RTkzcHBDVjBxWWJvUFo3aGpCY3ozb0VVWnRsU1lqQlZVdjhsVjBNN3JEWW52TXNGSURWLzJ2eks3K0x2N01XTG5aT054S09hdWZKZnVOT0R4YjFLbk1mN0JWK3hURUpLWW1mbTY1ckoyS1ArOEtFUllrr5TkF3bFVRTUdmT3daVNoZnpQajMwQ29CWDZZMzVST2hDNmhVVnN5OEkwdjVSV0tCbktrWk81MWlMSDAyZUpJbXJHUGdQa2s1SzhJdGRrQXZISVlTZ0RwRlpRb3Igcm9vdEBucXRzLTEwLTE4OC00NC0xNDc=
Is Cloud: True
Standalone Status:
IP Address Hostname Role Status
------------- ------------- ------ --------
10.188.44.147 10.188.44.147 Role Ready
NetQ... Active
Run the netq show opta-health command to verify that all applications are operating properly. Allow at least 15 minutes for all applications to come up and report their status.
If any of the applications or services display Status as DOWN after 30 minutes, open a support ticket and attach the output of the opta-support command.
After NetQ is installed, you can log in to NetQ from your browser.
Set Up Your KVM Virtual Machine for an On-premises HA Server Cluster
First configure the VM on the master node, and then configure the VM on each worker node.
Follow these steps to set up and configure your VM on a cluster of servers in an on-premises deployment:
Verify that each node in your cluster—the master node and two worker nodes—meets the VM requirements.
Resource                   Minimum Requirements
Processor                  16 virtual CPUs
Memory                     64 GB RAM
Local disk storage         500 GB SSD with minimum disk IOPS of 1000 for a standard 4kb block size (Note: This must be an SSD; other storage options can lead to system instability and are not supported.)
Network interface speed    1 Gb NIC
Hypervisor                 KVM/QCOW (QEMU Copy on Write) image for servers running CentOS, Ubuntu, and RedHat operating systems
Confirm that the required ports are open for communications.
You must open the following ports on your NetQ on-premises servers:
Port or Protocol Number    Protocol       Component Access
4                          IP Protocol    Calico networking (IP-in-IP Protocol)
22                         TCP            SSH
80                         TCP            Nginx
179                        TCP            Calico networking (BGP)
443                        TCP            NetQ UI
2379                       TCP            etcd datastore
4789                       UDP            Calico networking (VxLAN)
5000                       TCP            Docker registry
6443                       TCP            kube-apiserver
30001                      TCP            DPU communication
31980                      TCP            NetQ Agent communication
31982                      TCP            NetQ Agent SSL communication
32708                      TCP            API Gateway
Additionally, for internal cluster communication, you must open these ports:
Copy the QCOW2 image to a directory where you want to run it.
Tip: Copy the original QCOW2 image rather than moving it, so you do not have to download it again if you need to repeat this process.
Replace the disk path value with the location where the QCOW2 image resides. Replace the network model value (eth0 in the example) with the name of the interface that connects the VM to the external network.
Or, for a bridged VM, where the VM attaches to a bridge that has already been set up to allow external access:
Replace the network bridge value (br0 in the example) with the name of the existing bridge interface that connects the VM to the external network.
Make a note of the name used during installation; you will need it in a later step.
Watch the boot process in another terminal window.
$ virsh console netq_ts
Log in to the VM and change the password.
Use the default credentials to log in the first time:
Username: cumulus
Password: cumulus
$ ssh cumulus@<ipaddr>
Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
Ubuntu 20.04 LTS
cumulus@<ipaddr>'s password:
You are required to change your password immediately (root enforced)
System information as of Thu Dec 3 21:35:42 UTC 2020
System load: 0.09 Processes: 120
Usage of /: 8.1% of 61.86GB Users logged in: 0
Memory usage: 5% IP address for eth0: <ipaddr>
Swap usage: 0%
WARNING: Your password has expired.
You must change your password now and login again!
Changing password for cumulus.
(current) UNIX password: cumulus
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Connection to <ipaddr> closed.
Log in again with your new password.
$ ssh cumulus@<ipaddr>
Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
Ubuntu 20.04 LTS
cumulus@<ipaddr>'s password:
System information as of Thu Dec 3 21:35:59 UTC 2020
System load: 0.07 Processes: 121
Usage of /: 8.1% of 61.86GB Users logged in: 0
Memory usage: 5% IP address for eth0: <ipaddr>
Swap usage: 0%
Last login: Thu Dec 3 21:35:43 2020 from <local-ipaddr>
cumulus@ubuntu:~$
Verify that the master node is ready for installation. Fix any errors before installing the NetQ software.
cumulus@hostname:~$ sudo opta-check
Change the hostname for the VM from the default value.
The default hostname for the NetQ virtual machines is ubuntu. Change the hostname to fit your naming conventions while meeting Internet and Kubernetes naming standards.
Kubernetes requires hostnames to be composed of a sequence of labels concatenated with dots. For example, “en.wikipedia.org” is a hostname. Each label must be from 1 to 63 characters long. The entire hostname, including the delimiting dots, has a maximum of 253 ASCII characters.
The Internet standards (RFCs) for protocols specify that labels may contain only the ASCII letters a through z (in lower case), the digits 0 through 9, and the hyphen-minus character ('-').
Add the same NEW_HOSTNAME value to /etc/hosts on your VM for the localhost entry. Example:
127.0.0.1 localhost NEW_HOSTNAME
Verify that your first worker node meets the VM requirements, as described in step 1.
Confirm that the required ports are open for communications, as described in step 2.
Open your hypervisor and set up the VM in the same manner as for the master node.
Make a note of the private IP address you assign to the worker node. You need it for later installation steps.
Verify that the worker node is ready for installation. Fix any errors before installing the NetQ software.
cumulus@hostname:~$ sudo opta-check
Repeat steps 8 through 11 for each additional worker node in your cluster.
Install and activate the NetQ software using the CLI:
Run the following command on your master node to initialize the cluster. Copy the output of the command to use on your worker nodes:
cumulus@<hostname>:~$ netq install cluster master-init
Please run the following command on all worker nodes:
netq install cluster worker-init c3NoLXJzYSBBQUFBQjNOemFDMXljMkVBQUFBREFRQUJBQUFCQVFDM2NjTTZPdVVUWWJ5c2Q3NlJ4SHdseHBsOHQ4N2VMRWVGR05LSWFWVnVNcy94OEE4RFNMQVhKOHVKRjVLUXBnVjdKM2lnMGJpL2hDMVhmSVVjU3l3ZmhvVDVZM3dQN1oySVZVT29ZTi8vR1lOek5nVlNocWZQMDNDRW0xNnNmSzVvUWRQTzQzRFhxQ3NjbndIT3dwZmhRYy9MWTU1a
Run the netq install cluster worker-init <ssh-key> command on each of your worker nodes.
Run the following commands on your master node, using the IP addresses of your worker nodes and the HA cluster virtual IP address (VIP):
The HA cluster virtual IP must be:
An unused IP address allocated from the same subnet assigned to the default interface for your master and worker nodes. The default interface is the interface used in the netq install command.
A different IP address than the primary IP assigned to the default interface.
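The command itself is not reproduced in this excerpt. As a hedged sketch (the workers and cluster-vip keywords and their ordering are assumptions to verify against the NetQ CLI reference), the cluster installation step generally takes the form:
cumulus@hostname:~$ netq install cluster full interface eth0 bundle /mnt/installables/NetQ-4.12.0.tgz workers <worker-1-ip> <worker-2-ip> cluster-vip <vip-ip>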
NetQ uses the 10.244.0.0/16 (pod-ip-range) and 10.96.0.0/16 (service-ip-range) networks for internal communication by default. If you are using these networks, you must override each range by specifying new subnets for these parameters in the install command:
If you change the server IP address or hostname after installing NetQ, you must reset the server with the netq bootstrap reset keep-db command and rerun the install command.
If this step fails for any reason, you can run netq bootstrap reset and then try again.
Verify Installation Status
To view the status of the installation, use the netq show status [verbose] command. The following example shows a successful on-premises installation:
State: Active
NetQ Live State: Active
Installation Status: FINISHED
Version: 4.12.0
Installer Version: 4.12.0
Installation Type: Cluster
Activation Key: EhVuZXRxLWVuZHBvaW50LWdhdGV3YXkYsagDIixPSUJCOHBPWUFnWXI2dGlGY2hTRzExR2E5aSt6ZnpjOUvpVVTaDdpZEhFPQ==
Master SSH Public Key: c3NoLXJzYSBBQUFBQjNOemFDMXljMkVBQUFBREFRQUJBQUFCZ1FDNW9iVXB6RkczNkRC
Is Cloud: False
Kubernetes Cluster Nodes Status:
IP Address Hostname Role NodeStatus Virtual IP
------------ ----------- ------ ------------ ------------
10.213.7.52 10.213.7.52 Worker Ready 10.213.7.53
10.213.7.51 10.213.7.51 Worker Ready 10.213.7.53
10.213.7.49 10.213.7.49 Master Ready 10.213.7.53
In Summary, Live state of the NetQ is... Active
Run the netq show opta-health command to verify all applications are operating properly. Allow 10-15 minutes for all applications to come up and report their status.
If any of the applications or services display Status as DOWN after 30 minutes, open a support ticket and attach the output of the opta-support command.
After NetQ is installed, you can log in to NetQ from your browser.
Set Up Your Virtual Machine for an On-premises HA Scale Cluster
Follow these steps to set up and configure your VM on a cluster of servers in an on-premises deployment. First configure the VM on the master node, and then configure the VM on each additional node. NVIDIA recommends installing the virtual machines on different servers to increase redundancy in the event of a hardware failure.
NetQ 4.12.0 supports a 3-node HA scale cluster consisting of 1 master and 2 additional HA worker nodes.
System Requirements
Verify that each node in your cluster meets the VM requirements.
Resource                   Minimum Requirements
Processor                  48 virtual CPUs
Memory                     512 GB RAM
Local disk storage         3.2 TB SSD with minimum disk IOPS of 1000 for a standard 4kb block size (Note: This must be an SSD; other storage options can lead to system instability and are not supported.)
Network interface speed    10 Gbps NIC
Hypervisor                 KVM/QCOW (QEMU Copy on Write) image for servers running Ubuntu; VMware ESXi™ 6.5 or later (OVA image) for servers running Cumulus Linux or Ubuntu
Port Requirements
Confirm that the required ports are open for communications.
Port or Protocol Number    Protocol       Component Access
4                          IP Protocol    Calico networking (IP-in-IP Protocol)
22                         TCP            SSH
80                         TCP            nginx
179                        TCP            Calico networking (BGP)
443                        TCP            NetQ UI
2379                       TCP            etcd datastore
4789                       UDP            Calico networking (VxLAN)
5000                       TCP            Docker registry
6443                       TCP            kube-apiserver
30001                      TCP            DPU communication
31980                      TCP            NetQ Agent communication
31982                      TCP            NetQ Agent SSL communication
32708                      TCP            API Gateway
Additionally, for internal cluster communication, you must open these ports:
Port or Protocol Number    Protocol       Component Access
8080                       TCP            Admin API
5000                       TCP            Docker registry
6443                       TCP            Kubernetes API server
10250                      TCP            kubelet health probe
2379                       TCP            etcd
2380                       TCP            etcd
7072                       TCP            Kafka JMX monitoring
9092                       TCP            Kafka client
7071                       TCP            Cassandra JMX monitoring
7000                       TCP            Cassandra cluster communication
9042                       TCP            Cassandra client
7073                       TCP            Zookeeper JMX monitoring
2888                       TCP            Zookeeper cluster communication
3888                       TCP            Zookeeper cluster communication
2181                       TCP            Zookeeper client
36443                      TCP            Kubernetes control plane
Installation and Configuration
Download the NetQ image.
a. Log in to your NVIDIA Application Hub account.
b. Select NVIDIA Licensing Portal.
c. Select Software Downloads from the menu.
d. Click Product Family and select NetQ.
e. For deployments using KVM, download the NetQ SW 4.12.0 KVM Scale image. For deployments using VMware, download the NetQ SW 4.12.0 VMware Scale image.
f. If prompted, read the license agreement and proceed with the download.
Copy the QCOW2 image to a directory where you want to run it.
Tip: Copy the original QCOW2 image rather than moving it, so you do not have to download it again if you need to repeat this process.
Replace the disk path value with the location where the QCOW2 image resides. Replace the network model value (eth0 in the example) with the name of the interface that connects the VM to the external network.
Or, for a bridged VM, where the VM attaches to a bridge that has already been set up to allow external access:
Replace the network bridge value (br0 in the example) with the name of the existing bridge interface that connects the VM to the external network.
Make a note of the name used during installation; you will need it in a later step.
Watch the boot process in another terminal window.
$ virsh console netq_ts
VMware Example Configuration
This example shows the VM setup process using an OVA file with VMware ESXi.
Enter the address of the hardware in your browser.
Log in to VMware using credentials with root access.
Click Storage in the Navigator to verify you have an SSD installed.
Click Create/Register VM at the top of the right pane.
Select Deploy a virtual machine from an OVF or OVA file, and click Next.
Provide a name for the VM, for example NetQ.
Tip: Make note of the name used during install as this is needed in a later step.
Drag and drop the NetQ Platform image file you downloaded in Step 1 above.
Click Next.
Select the storage type and data store for the image to use, then click Next. In this example, only one is available.
Accept the default deployment options or modify them according to your network needs. Click Next when you are finished.
Review the configuration summary. Click Back to change any of the settings, or click Finish to continue with the creation of the VM.
The progress of the request is shown in the Recent Tasks window at the bottom of the application. This may take some time, so continue with your other work until the upload finishes.
Once completed, view the full details of the VM and hardware.
Log in to the VM and change the password.
Use the default credentials to log in the first time:
Username: cumulus
Password: cumulus
$ ssh cumulus@<ipaddr>
Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
Ubuntu 20.04 LTS
cumulus@<ipaddr>'s password:
You are required to change your password immediately (root enforced)
System information as of Thu Dec 3 21:35:42 UTC 2020
System load: 0.09 Processes: 120
Usage of /: 8.1% of 61.86GB Users logged in: 0
Memory usage: 5% IP address for eth0: <ipaddr>
Swap usage: 0%
WARNING: Your password has expired.
You must change your password now and login again!
Changing password for cumulus.
(current) UNIX password: cumulus
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Connection to <ipaddr> closed.
Log in again with your new password.
$ ssh cumulus@<ipaddr>
Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
Ubuntu 20.04 LTS
cumulus@<ipaddr>'s password:
System information as of Thu Dec 3 21:35:59 UTC 2020
System load: 0.07 Processes: 121
Usage of /: 8.1% of 61.86GB Users logged in: 0
Memory usage: 5% IP address for eth0: <ipaddr>
Swap usage: 0%
Last login: Thu Dec 3 21:35:43 2020 from <local-ipaddr>
cumulus@ubuntu:~$
Verify that the master node is ready for installation. Fix any errors before installing the NetQ software.
cumulus@hostname:~$ sudo opta-check scale
Change the hostname for the VM from the default value.
The default hostname for the NetQ virtual machines is ubuntu. Change the hostname to fit your naming conventions while meeting Internet and Kubernetes naming standards.
Kubernetes requires hostnames to be composed of a sequence of labels concatenated with dots. For example, “en.wikipedia.org” is a hostname. Each label must be from 1 to 63 characters long. The entire hostname, including the delimiting dots, has a maximum of 253 ASCII characters.
The Internet standards (RFCs) for protocols specify that labels may contain only the ASCII letters a through z (in lower case), the digits 0 through 9, and the hyphen-minus character ('-').
Add the same NEW_HOSTNAME value to /etc/hosts on your VM for the localhost entry. For example:
127.0.0.1 localhost NEW_HOSTNAME
Open your hypervisor and set up the VM for the additional nodes in the same manner as for the master node.
Run the following command on each node to verify that the node is ready for a NetQ software installation. Fix any errors indicated before installing the software.
cumulus@hostname:~$ sudo opta-check scale
Install and activate the NetQ software using the CLI.
Run the following command on your master node to initialize the cluster. Copy the output of the command to use on your additional HA and worker nodes:
cumulus@<hostname>:~$ netq install cluster master-init
Please run the following command on all worker nodes:
netq install cluster worker-init c3NoLXJzYSBBQUFBQjNOemFDMXljMkVBQUFBREFRQUJBQUFCQVFDM2NjTTZPdVM3dQN9MWTU1a
Run the netq install cluster worker-init <ssh-key> command on each of your worker nodes.
Run the netq install cluster config generate command on your master node to generate a template for the cluster configuration JSON file:
interface: The local network interface on your master node used for NetQ connectivity.
cluster-vip: The cluster virtual IP address must be an unused IP address allocated from the same subnet assigned to the default interface for your master and worker nodes.
master-ip: The IP address assigned to the interface on your master node used for NetQ connectivity.
is-ipv6: Set the value to true if your network connectivity and node address assignments are IPv6.
ha-nodes: The IP addresses of each of the HA nodes in your cluster.
NetQ uses the 10.244.0.0/16 (pod-ip-range) and 10.96.0.0/16 (service-ip-range) networks for internal communication by default. If you are using these networks, you must override each range by specifying new subnets for these parameters in the cluster configuration JSON file:
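As a reference only, here is a sketch of what the filled-in cluster configuration JSON might look like, using the field names described above and the example addresses from the status output later in this section; the file layout, field spelling, and the pod-ip-range/service-ip-range override keys are assumptions to verify against the template that netq install cluster config generate produces:
{
  "interface": "eth0",
  "cluster-vip": "10.213.7.53",
  "master-ip": "10.213.7.49",
  "is-ipv6": false,
  "ha-nodes": ["10.213.7.51", "10.213.7.52"],
  "pod-ip-range": "10.100.0.0/16",
  "service-ip-range": "10.200.0.0/16"
}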
Run the following command on your master HA node, using the JSON configuration file created in step 11:
If this step fails for any reason, run netq bootstrap reset and then try again.
Verify Installation Status
To view the status of the installation, use the netq show status [verbose] command. The following example shows a successful on-premises installation:
State: Active
NetQ Live State: Active
Installation Status: FINISHED
Version: 4.12.0
Installer Version: 4.12.0
Installation Type: Cluster
Activation Key: EhVuZXRxLWVuZHBvaW50LWdhdGV3YXkYsagDIixPSUJCOHBPWUFnWXI2dGlGY2hTRzExR2E5aSt6ZnpjOUvpVVTaDdpZEhFPQ==
Master SSH Public Key: c3NoLXJzYSBBQUFBQjNOemFDMXljMkVBQUFBREFRQUJBQUFCZ1FDNW9iVXB6RkczNkRC
Is Cloud: False
Kubernetes Cluster Nodes Status:
IP Address Hostname Role NodeStatus Virtual IP
------------ ----------- ------ ------------ ------------
10.213.7.52 10.213.7.52 Worker Ready 10.213.7.53
10.213.7.51 10.213.7.51 Worker Ready 10.213.7.53
10.213.7.49 10.213.7.49 Master Ready 10.213.7.53
In Summary, Live state of the NetQ is... Active
Run the netq show opta-health command to verify that all applications are operating properly. Allow at least 15 minutes for all applications to come up and report their status.
If any of the applications or services display a DOWN status after 30 minutes, open a support ticket and attach the output of the opta-support command.
After NetQ is installed, you can log in to NetQ from your browser.
Set Up Your Virtual Machine for an On-premises HA Server Cluster
Follow these steps to set up and configure your VM on a cluster of servers in an on-premises deployment. First configure the VM on the master node, and then configure the VM on each worker node. NVIDIA recommends installing the virtual machines on different physical servers to increase redundancy in the event of a hardware failure.
System Requirements
Verify that each node in your cluster—the master node and two worker nodes—meets the VM requirements.
Resource                   Minimum Requirements
Processor                  16 virtual CPUs
Memory                     64 GB RAM
Local disk storage         500 GB SSD with minimum disk IOPS of 1000 for a standard 4kb block size (Note: This must be an SSD; other storage options can lead to system instability and are not supported.)
Network interface speed    1 Gb NIC
Hypervisor                 KVM/QCOW (QEMU Copy on Write) image for servers running Ubuntu; VMware ESXi™ 6.5 or later (OVA image) for servers running Cumulus Linux or Ubuntu
Port Requirements
Confirm that the required ports are open for communications.
Port or Protocol Number    Protocol       Component Access
4                          IP Protocol    Calico networking (IP-in-IP Protocol)
22                         TCP            SSH
80                         TCP            nginx
179                        TCP            Calico networking (BGP)
443                        TCP            NetQ UI
2379                       TCP            etcd datastore
4789                       UDP            Calico networking (VxLAN)
5000                       TCP            Docker registry
6443                       TCP            kube-apiserver
30001                      TCP            DPU communication
31980                      TCP            NetQ Agent communication
31982                      TCP            NetQ Agent SSL communication
32708                      TCP            API Gateway
Additionally, for internal cluster communication, you must open these ports:
Port or Protocol Number    Protocol       Component Access
8080                       TCP            Admin API
5000                       TCP            Docker registry
6443                       TCP            Kubernetes API server
10250                      TCP            kubelet health probe
2379                       TCP            etcd
2380                       TCP            etcd
7072                       TCP            Kafka JMX monitoring
9092                       TCP            Kafka client
7071                       TCP            Cassandra JMX monitoring
7000                       TCP            Cassandra cluster communication
9042                       TCP            Cassandra client
7073                       TCP            Zookeeper JMX monitoring
2888                       TCP            Zookeeper cluster communication
3888                       TCP            Zookeeper cluster communication
2181                       TCP            Zookeeper client
36443                      TCP            Kubernetes control plane
Installation and Configuration
Download the NetQ image.
a. Log in to your NVIDIA Application Hub account.
b. Select NVIDIA Licensing Portal.
c. Select Software Downloads from the menu.
d. Click Product Family and select NetQ.
e. For deployments using KVM, download the NetQ SW 4.12 KVM image. For deployments using VMware, download the NetQ SW 4.12 VMware image.
f. If prompted, read the license agreement and proceed with the download.
Copy the QCOW2 image to a directory where you want to run it.
Tip: Copy the original QCOW2 image rather than moving it, so you do not have to download it again if you need to repeat this process.
Replace the disk path value with the location where the QCOW2 image resides. Replace the network model value (eth0 in the example) with the name of the interface that connects the VM to the external network.
Or, for a bridged VM, where the VM attaches to a bridge that has already been set up to allow external access:
Replace the network bridge value (br0 in the example) with the name of the existing bridge interface that connects the VM to the external network.
Make a note of the name used during installation; you will need it in a later step.
Watch the boot process in another terminal window.
$ virsh console netq_ts
VMware Example Configuration
This example shows the VM setup process using an OVA file with VMware ESXi.
Enter the address of the hardware in your browser.
Log in to VMware using credentials with root access.
Click Storage in the Navigator to verify you have an SSD installed.
Click Create/Register VM at the top of the right pane.
Select Deploy a virtual machine from an OVF or OVA file, and click Next.
Provide a name for the VM, for example NetQ.
Tip: Make note of the name used during install as this is needed in a later step.
Drag and drop the NetQ Platform image file you downloaded in Step 1 above.
Click Next.
Select the storage type and data store for the image to use, then click Next. In this example, only one is available.
Accept the default deployment options or modify them according to your network needs. Click Next when you are finished.
Review the configuration summary. Click Back to change any of the settings, or click Finish to continue with the creation of the VM.
The progress of the request is shown in the Recent Tasks window at the bottom of the application. This may take some time, so continue with your other work until the upload finishes.
Once completed, view the full details of the VM and hardware.
Log in to the VM and change the password.
Use the default credentials to log in the first time:
Username: cumulus
Password: cumulus
$ ssh cumulus@<ipaddr>
Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
Ubuntu 20.04 LTS
cumulus@<ipaddr>'s password:
You are required to change your password immediately (root enforced)
System information as of Thu Dec 3 21:35:42 UTC 2020
System load: 0.09 Processes: 120
Usage of /: 8.1% of 61.86GB Users logged in: 0
Memory usage: 5% IP address for eth0: <ipaddr>
Swap usage: 0%
WARNING: Your password has expired.
You must change your password now and login again!
Changing password for cumulus.
(current) UNIX password: cumulus
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Connection to <ipaddr> closed.
Log in again with your new password.
$ ssh cumulus@<ipaddr>
Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
Ubuntu 20.04 LTS
cumulus@<ipaddr>'s password:
System information as of Thu Dec 3 21:35:59 UTC 2020
System load: 0.07 Processes: 121
Usage of /: 8.1% of 61.86GB Users logged in: 0
Memory usage: 5% IP address for eth0: <ipaddr>
Swap usage: 0%
Last login: Thu Dec 3 21:35:43 2020 from <local-ipaddr>
cumulus@ubuntu:~$
Verify that the master node is ready for installation. Fix any errors before installing the NetQ software.
cumulus@hostname:~$ sudo opta-check
Change the hostname for the VM from the default value.
The default hostname for the NetQ virtual machines is ubuntu. Change the hostname to fit your naming conventions while meeting Internet and Kubernetes naming standards.
Kubernetes requires hostnames to be composed of a sequence of labels concatenated with dots. For example, “en.wikipedia.org” is a hostname. Each label must be from 1 to 63 characters long. The entire hostname, including the delimiting dots, has a maximum of 253 ASCII characters.
The Internet standards (RFCs) for protocols specify that labels may contain only the ASCII letters a through z (in lower case), the digits 0 through 9, and the hyphen-minus character ('-').
Add the same NEW_HOSTNAME value to /etc/hosts on your VM for the localhost entry. For example:
127.0.0.1 localhost NEW_HOSTNAME
Open your hypervisor and set up the VM in the same manner as for the master node.
Make a note of the private IP address you assign to the worker node. You will need it to complete the installation.
Verify that the worker node is ready for installation. Fix any errors indicated before installing the NetQ software.
cumulus@hostname:~$ sudo opta-check
Repeat steps 6 and 7 for each additional worker node in your cluster.
Install and activate the NetQ software using the CLI.
Run the following command on your master node to initialize the cluster. Copy the output of the command to use on your worker nodes:
cumulus@<hostname>:~$ netq install cluster master-init
Please run the following command on all worker nodes:
netq install cluster worker-init c3NoLXJzYSBBQUFBQjNOemFDMXljMkVBQUFBREFRQUJBQUFCQVFDM2NjTTZPdVM3dQN9MWTU1a
Run the netq install cluster worker-init <ssh-key> command on each of your worker nodes.
Run the following commands on your master node, using the IP addresses of your worker nodes and the HA cluster virtual IP address (VIP):
The HA cluster virtual IP must be:
An unused IP address allocated from the same subnet assigned to the default interface for your master and worker nodes. The default interface is the interface used in the netq install command.
A different IP address than the primary IP assigned to the default interface.
NetQ uses the 10.244.0.0/16 (pod-ip-range) and 10.96.0.0/16 (service-ip-range) networks for internal communication by default. If you are using these networks, you must override each range by specifying new subnets for these parameters in the install command:
If you change the server IP address or hostname after installing NetQ, you must reset the server with the netq bootstrap reset keep-db command and rerun the install command.
If this step fails for any reason, run netq bootstrap reset and then try again.
Verify Installation Status
To view the status of the installation, use the netq show status [verbose] command. The following example shows a successful on-premises installation:
State: Active
NetQ Live State: Active
Installation Status: FINISHED
Version: 4.12.0
Installer Version: 4.12.0
Installation Type: Cluster
Activation Key: EhVuZXRxLWVuZHBvaW50LWdhdGV3YXkYsagDIixPSUJCOHBPWUFnWXI2dGlGY2hTRzExR2E5aSt6ZnpjOUvpVVTaDdpZEhFPQ==
Master SSH Public Key: c3NoLXJzYSBBQUFBQjNOemFDMXljMkVBQUFBREFRQUJBQUFCZ1FDNW9iVXB6RkczNkRC
Is Cloud: False
Kubernetes Cluster Nodes Status:
IP Address Hostname Role NodeStatus Virtual IP
------------ ----------- ------ ------------ ------------
10.213.7.52 10.213.7.52 Worker Ready 10.213.7.53
10.213.7.51 10.213.7.51 Worker Ready 10.213.7.53
10.213.7.49 10.213.7.49 Master Ready 10.213.7.53
In Summary, Live state of the NetQ is... Active
Run the netq show opta-health command to verify that all applications are operating properly. Allow at least 15 minutes for all applications to come up and report their status.
If any of the applications or services display a DOWN status after 30 minutes, open a support ticket and attach the output of the opta-support command.
After NetQ is installed, you can log in to NetQ from your browser.
Set Up Your KVM Virtual Machine for a Cloud HA Server Cluster
First configure the VM on the master node, and then configure the VM on each worker node.
Follow these steps to set up and configure your VM on a cluster of servers in a cloud deployment:
Verify that each node in your cluster—the master node and two worker nodes—meets the VM requirements.
Resource                   Minimum Requirements
Processor                  4 virtual CPUs
Memory                     8 GB RAM
Local disk storage         64 GB
Network interface speed    1 Gb NIC
Hypervisor                 KVM/QCOW (QEMU Copy on Write) image for servers running CentOS, Ubuntu, and RedHat operating systems
Confirm that the required ports are open for communications. The OPTA must be able to initiate HTTPS connections (destination TCP port 443) to the netq.nvidia.com domain (*.netq.nvidia.com). You must also open the following ports on your NetQ OPTA:
Port or Protocol Number    Protocol       Component Access
4                          IP Protocol    Calico networking (IP-in-IP Protocol)
22                         TCP            SSH
80                         TCP            Nginx
179                        TCP            Calico networking (BGP)
443                        TCP            Nginx
2379                       TCP            etcd datastore
4789                       UDP            Calico networking (VxLAN)
5000                       TCP            Docker registry
6443                       TCP            kube-apiserver
31980                      TCP            NetQ Agent communication
31982                      TCP            NetQ Agent SSL communication
32708                      TCP            API Gateway
The following ports are used for internal cluster communication and must also be open between servers in your cluster:
Copy the QCOW2 image to a directory where you want to run it.
Tip: Copy the original QCOW2 image rather than moving it, so you do not have to download it again if you need to repeat this process.
Replace the disk path value with the location where the QCOW2 image resides. Replace the network model value (eth0 in the example) with the name of the interface that connects the VM to the external network.
Or, for a bridged VM, where the VM attaches to a bridge that has already been set up to allow external access:
Replace the network bridge value (br0 in the example) with the name of the existing bridge interface that connects the VM to the external network.
Make a note of the name used during installation; you will need it in a later step.
Watch the boot process in another terminal window.
$ virsh console netq_ts
Log in to the VM and change the password.
Use the default credentials to log in the first time:
Username: cumulus
Password: cumulus
$ ssh cumulus@<ipaddr>
Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
Ubuntu 20.04 LTS
cumulus@<ipaddr>'s password:
You are required to change your password immediately (root enforced)
System information as of Thu Dec 3 21:35:42 UTC 2020
System load: 0.09 Processes: 120
Usage of /: 8.1% of 61.86GB Users logged in: 0
Memory usage: 5% IP address for eth0: <ipaddr>
Swap usage: 0%
WARNING: Your password has expired.
You must change your password now and login again!
Changing password for cumulus.
(current) UNIX password: cumulus
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Connection to <ipaddr> closed.
Log in again with your new password.
$ ssh cumulus@<ipaddr>
Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
Ubuntu 20.04 LTS
cumulus@<ipaddr>'s password:
System information as of Thu Dec 3 21:35:59 UTC 2020
System load: 0.07 Processes: 121
Usage of /: 8.1% of 61.86GB Users logged in: 0
Memory usage: 5% IP address for eth0: <ipaddr>
Swap usage: 0%
Last login: Thu Dec 3 21:35:43 2020 from <local-ipaddr>
cumulus@ubuntu:~$
Verify that the master node is ready for installation. Fix any errors before installing the NetQ software.
cumulus@hostname:~$ sudo opta-check-cloud
Change the hostname for the VM from the default value.
The default hostname for the NetQ Virtual Machines is ubuntu. Change the hostname to fit your naming conventions while meeting Internet and Kubernetes naming standards.
Kubernetes requires that hostnames are composed of a sequence of labels concatenated with dots. For example, “en.wikipedia.org” is a hostname. Each label must be from 1 to 63 characters long. The entire hostname, including the delimiting dots, has a maximum of 253 ASCII characters.
The Internet standards (RFCs) for protocols specify that labels may contain only the ASCII letters a through z (in lower case), the digits 0 through 9, and the hyphen-minus character ('-').
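For example, assuming NEW_HOSTNAME is the hostname you have chosen, you can set it with hostnamectl:
cumulus@ubuntu:~$ sudo hostnamectl set-hostname NEW_HOSTNAME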
Add the same NEW_HOSTNAME value to /etc/hosts on your VM for the localhost entry. Example:
127.0.0.1 localhost NEW_HOSTNAME
Verify that your first worker node meets the VM requirements, as described in step 1.
Confirm that the required ports are open for communications, as described in step 2.
Open your hypervisor and set up the VM in the same manner as for the master node.
Make a note of the private IP address you assign to the worker node. You need it for later installation steps.
Verify the worker node is ready for installation. Fix any errors indicated before installing the NetQ software.
cumulus@hostname:~$ sudo opta-check-cloud
Repeat steps 8 through 11 for each additional worker node in your cluster.
Install and activate the NetQ software using the CLI:
Run the following command on your master node to initialize the cluster. Copy the output of the command to use on your worker nodes:
cumulus@<hostname>:~$ netq install cluster master-init
Please run the following command on all worker nodes:
netq install cluster worker-init c3NoLXJzYSBBQUFBQjNOemFDMXljMkVBQUFBREFRQUJBQUFCQVFDM2NjTTZPdVVUWWJ5c2Q3NlJ4SHdseHBsOHQ4N2VMRWVGR05LSWFWVnVNcy94OEE4RFNMQVhKOHVKRjVLUXBnVjdKM2lnMGJpL2hDMVhmSVVjU3l3ZmhvVDVZM3dQN1oySVZVT29ZTi8vR1lOek5nVlNocWZQMDNDRW0xNnNmSzVvUWRQTzQzRFhxQ3NjbndIT3dwZmhRYy9MWTU1a
Run the netq install cluster worker-init <ssh-key> command on each of your worker nodes.
Run the following command on your NetQ cloud appliance with the config-key obtained from the email you received from NVIDIA titled NetQ Access Link. You can also obtain the configuration key through the NetQ UI. Use the IP addresses of your worker nodes and the HA cluster virtual IP address (VIP).
The HA cluster virtual IP must be:
An unused IP address allocated from the same subnet assigned to the default interface for your master and worker nodes. The default interface is the interface used in the netq install command.
A different IP address than the primary IP assigned to the default interface.
NetQ uses the 10.244.0.0/16 (pod-ip-range) and 10.96.0.0/16 (service-ip-range) networks for internal communication by default. If you are using these networks, you must override each range by specifying new subnets for these parameters in the install command:
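The full install command is not reproduced here. As a hedged sketch only, it combines the options referenced in this step with placeholder values (confirm the exact option order and syntax for your deployment before running it):
netq install opta cluster full interface eth0 bundle /mnt/installables/NetQ-4.12.0-opta.tgz config-key <your-config-key> workers <worker-1-ip> <worker-2-ip> cluster-vip <vip-address> pod-ip-range <custom-pod-subnet> service-ip-range <custom-service-subnet>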
If you change the server IP address or hostname after installing NetQ, you must reset the server with the netq bootstrap reset keep-db command and rerun the install command.
If this step fails for any reason, you can run netq bootstrap reset and then try again.
Verify Installation Status
To view the status of the installation, use the netq show status [verbose] command. The following example shows a successful on-premises installation:
State: Active
NetQ Live State: Active
Installation Status: FINISHED
Version: 4.12.0
Installer Version: 4.12.0
Installation Type: Cluster
Activation Key: EhVuZXRxLWVuZHBvaW50LWdhdGV3YXkYsagDIixPSUJCOHBPWUFnWXI2dGlGY2hTRzExR2E5aSt6ZnpjOUvpVVTaDdpZEhFPQ==
Master SSH Public Key: c3NoLXJzYSBBQUFBQjNOemFDMXljMkVBQUFBREFRQUJBQUFCZ1FDNW9iVXB6RkczNkRC
Is Cloud: False
Kubernetes Cluster Nodes Status:
IP Address Hostname Role NodeStatus Virtual IP
------------ ----------- ------ ------------ ------------
10.213.7.52 10.213.7.52 Worker Ready 10.213.7.53
10.213.7.51 10.213.7.51 Worker Ready 10.213.7.53
10.213.7.49 10.213.7.49 Master Ready 10.213.7.53
In Summary, Live state of the NetQ is... Active
Run the netq show opta-health command to verify all applications are operating properly. Allow 10-15 minutes for all applications to come up and report their status.
If any of the applications or services display Status as DOWN after 30 minutes, open a support ticket and attach the output of the opta-support command.
After NetQ is installed, you can log in to NetQ from your browser.
Install NIC and DPU Agents
Installing NetQ telemetry agents on your hosts with NVIDIA ConnectX adapters and NVIDIA BlueField data processing units (DPUs) allows you to track inventory data and statistics across devices. The DOCA Telemetry Service (DTS) is the agent that runs on hosts and DPUs to collect data.
Install DTS on ConnectX Hosts
To install and configure the DOCA Telemetry Service container on a host with ConnectX adapters, perform the following steps:
Obtain the DTS container image path from the NGC catalog. Select Get Container, then View all tags. Copy the 1.18.2-doca2.8.0-host image path.
Initialize the DTS container with Docker on the host. Use the image path obtained in the previous step for the DTS_IMAGE variable and configure the IP address of your NetQ server for the -i option:
The Prometheus adapter pod in NetQ collects statistics from ConnectX adapters in your network. The default scrape interval is every minute. If you want to change the frequency of the scrape interval, make your adjustments, then restart the netq-prom-adapter pod to begin collecting data with the updated parameters:
Log in to your NetQ VM via SSH.
Edit the Prometheus ConfigMap with the kubectl edit cm prometheus-config command.
Edit the scrape_interval parameter.
Retrieve the current pod name with the kubectl get pods | grep netq-prom command:
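For example (the full pod name in your deployment includes a unique suffix), after saving your ConfigMap changes, delete the pod so that Kubernetes restarts it with the new scrape interval:
cumulus@netq-ts:~$ kubectl get pods | grep netq-prom
cumulus@netq-ts:~$ kubectl delete pod <netq-prom-adapter-pod-name>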
Retrieve the container yaml configuration file onto the host. Use the path specified in the Adjusting the .yaml Configuration section in the NGC instructions. Copy it to /etc/kubelet.d/doca_telemetry_standalone.yaml:
Edit the image in both the containers and initContainers sections of the /etc/kubelet.d/doca_telemetry_standalone.yaml file to set the container image path retrieved in step 1.
Edit the command in the initContainers section of the /etc/kubelet.d/doca_telemetry_standalone.yaml file to set the DTS_CONFIG_DIR parameter to inventory_netq. Configure the fluent forwarding -i option to your NetQ server IP address and the -p option to 30001:
When you first log in to the NetQ UI as part of an on-premises deployment, your browser will display a warning indicating that the default certificate is not trusted. You can avoid this warning by installing your own, custom-signed certificate using the steps outlined on this page. The self-signed certificate is sufficient for non-production environments or cloud deployments.
If you already have a certificate installed and want to change or update it, run the kubectl delete secret netq-gui-ingress-tls [name] --namespace default command before following the steps outlined in this section. After making your updates, restart nginx with the kubectl delete pod -l app.kubernetes.io/name=ingress-nginx --namespace ingress-nginx command.
You need the following items to perform the certificate installation:
A valid X509 certificate, containing a Subject Alternative Name (SAN) attribute.
A private key file for the certificate.
A DNS record name configured to access the NetQ UI.
The FQDN should match the common name of the certificate. If you use a wild card in the common name — for example, if the common name of the certificate is *.example.com — then the NetQ telemetry server should reside on a subdomain of that domain, accessible via a URL like netq.example.com.
A functioning and healthy NetQ instance.
You can verify this by running the netq show opta-health command.
Install a Certificate using the NetQ CLI
Log in to the NetQ VM via SSH and copy your certificate and key file there.
Generate a Kubernetes secret called netq-gui-ingress-tls:
cumulus@netq-ts:~$ kubectl create secret tls netq-gui-ingress-tls \
--namespace default \
--key <name of your key file>.key \
--cert <name of your cert file>.crt
Verify that you created the secret successfully:
cumulus@netq-ts:~$ kubectl get secret
NAME TYPE DATA AGE
netq-gui-ingress-tls kubernetes.io/tls 2 5s
Update the ingress rule file so that it references the new certificate.
Your custom certificate should now be working. Verify this by opening the NetQ UI at https://<your-hostname-or-ipaddr> in your browser.
Update Cloud Activation Key
NVIDIA provides a cloud activation key when you set up your premises. You use the cloud activation key (called the config-key) to access the cloud services. Note that these authorization keys are different from the ones you use to configure the CLI.
On occasion, you might want to update your cloud service activation key—for example, if you mistyped the key during installation and now your existing key does not work, or you received a new key for your premises from NVIDIA.
Update the activation key using the NetQ CLI:
Run the following command on your master NetQ VM, replacing text-opta-key with your new key.
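The command is not reproduced here. As a sketch, it follows the activate-job form shown in the restore output later in this guide; substitute the variant that matches your deployment type:
netq install standalone activate-job config-key <text-opta-key>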
This section describes how to upgrade from your current installation to NetQ 4.12. Refer to the release notes before you upgrade.
Follow these steps to upgrade your on-premises or cloud deployment. Note that these steps are sequential; you must upgrade your NetQ virtual machine before you upgrade the NetQ Agents.
This page describes how to upgrade your NetQ virtual machines. Note that the upgrade instructions vary depending on the NetQ version you’re currently running.
If the output of this command displays errors or returns an empty response, you will not be able to upgrade NetQ. Try waiting and then re-run the command. If after several attempts the command continues to fail, reset the NetQ server with netq bootstrap reset keep-db and perform a fresh installation of the tarball with the appropriate netq install command for your deployment type. For more information, refer to Troubleshoot NetQ Installation and Upgrade Issues.
Back up your NetQ data. This is an optional step for on-premises deployments. NVIDIA automatically creates backups for NetQ cloud deployments.
Update NetQ Debian Packages
Update /etc/apt/sources.list.d/cumulus-netq.list to netq-4.12:
cat /etc/apt/sources.list.d/cumulus-netq.list
deb [arch=amd64] https://apps3.cumulusnetworks.com/repos/deb focal netq-4.12
Update the NetQ Debian packages. In cluster deployments, update the packages on the master and all worker nodes:
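For example, assuming the standard netq-agent and netq-apps package names:
sudo apt-get update
sudo apt-get install -y netq-agent netq-apps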
Select the relevant software for your hypervisor:
If you are upgrading NetQ software for a NetQ on-premises VM, select NetQ SW 4.12.0 Appliance to download the NetQ-4.12.0.tgz file.
If you are upgrading NetQ software for a NetQ cloud VM, select NetQ SW 4.12.0 Appliance Cloud to download the NetQ-4.12.0-opta.tgz file.
If prompted, read the license agreement and proceed with the download.
For enterprise customers, if you do not see a link to the NVIDIA Licensing Portal on the NVIDIA Application Hub, contact NVIDIA support.
Copy the tarball to the /mnt/installables/ directory on your NetQ VM.
Run the Upgrade
Perform the following steps using the cumulus user account.
Pre-installation Checks
Verify the following items before upgrading NetQ.
Confirm your VM is configured with 16 vCPUs. If your VM is configured with fewer than 16 vCPUs, power off your VM, reconfigure your hypervisor to allocate 16 vCPUs, then power the VM on before proceeding. For cluster deployments, verify these requirements on each node in the cluster.
Check if there is sufficient disk space:
cumulus@<hostname>:~$ df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 248G 70G 179G 28% /
cumulus@netq-appliance:~$
NVIDIA recommends proceeding with the installation only if the Use% is less than 70%. You can delete previous software tarballs in the /mnt/installables/ directory to regain some space. If you cannot decrease disk usage to under 70%, contact the NVIDIA support team.
Confirm that the NetQ CLI is properly configured. The netq show agents command should complete successfully and display agent status.
If this step fails for any reason, run the netq bootstrap reset keep-db command and perform a fresh installation of the tarball with the netq install standalone full command.
If this step fails for any reason, run the netq bootstrap reset keep-db command and perform a fresh installation of the tarball with the netq install cluster full command.
If this step fails for any reason, run the netq bootstrap reset keep-db command and perform a fresh installation of the tarball with the netq install opta standalone full command.
Run the netq upgrade command, specifying the current version’s tarball and your cluster’s virtual IP address. The virtual IP address must be:
An unused IP address allocated from the same subnet assigned to the default interface for your master and worker nodes. The default interface is the interface you specified in the netq install command.
A different IP address than the primary IP assigned to the default interface.
If you are upgrading from a NetQ 4.8 or later high availability, cloud cluster with a virtual IP address, you do not need to include the cluster-vip option in the upgrade command. Specifying a virtual IP address that is different from the virtual IP address used during the installation process will cause the upgrade to fail.
If this step fails for any reason, run the netq bootstrap reset keep-db command and perform a fresh installation of the tarball with the netq install opta cluster full command.
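A sketch of the upgrade command, using the bundle path from the steps above and a placeholder virtual IP address (omit cluster-vip where the note above indicates it is not needed):
netq upgrade bundle /mnt/installables/NetQ-4.12.0.tgz cluster-vip <virtual-ip-address>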
NetQ accounts are assigned one of two roles: admin or user. Accounts with admin privileges can perform the same actions as user accounts. Additionally, admins can access a management dashboard in the UI by expanding the Menu on the NetQ dashboard, then selecting Management.
From this dashboard, admins can:
Create, edit, and delete NetQ accounts
Manage login policies, including SSO and LDAP authentication
Review account activity
Create, edit, and delete system events, channels, and notifications
Manage premises
Schedule network traces and validations
Manage switches' lifecycles
The following image displays the management dashboard. Accounts with user privileges cannot perform the functions described above and do not have access to the management dashboard.
Sign in to NetQ as an admin to view and manage accounts. If you want to change individual preferences, visit Set User Preferences.
Navigate to the NetQ management dashboard to complete the tasks outlined in this section. To get there, expand the Menu on the NetQ dashboard and select Management.
Add an Account
This section outlines the steps to add a local user account. To add an LDAP account, refer to LDAP Authentication.
To create a new account:
On the User Accounts card, select Manage to open a table listing all accounts.
Above the table, select Add to add an account.
Complete the fields and select Save.
Be especially careful entering the email address; you cannot change it once you save the account. If you save a mistyped email address, you must delete the account and create a new one.
Edit an Account
As an admin, you can:
Edit the first or last name associated with an account
Reset an account’s password
Change an account’s role (user or admin)
You cannot edit the email address associated with an account, because this is the identifier the system uses for authentication. If you need to change an email address, delete the account and create a new one.
To edit an account:
On the User Accounts card, select Manage to open a table listing all accounts.
Select the account you’d like to edit. Above the table, click Edit to edit the account’s information.
Reset an Admin Password
If your account is assigned an admin role, reset your password by restoring the default password, then changing the password:
Run the following command on your on-premises server’s CLI:
Click Forgot Password? and enter an email address. Look for a message with the subject NetQ Password Reset Link from netq-sre@cumulusnetworks.com.
Select the link in the email and follow the instructions to create a new password.
Delete an Account
To delete one or more accounts:
On the User Accounts card, select Manage to open a table listing all accounts.
Select one or more accounts. Above the table, click Delete to delete the selected account(s).
View Account Activity
Administrators can view account activity in the activity log. To get there, expand the Menu and select Activity log. Use the controls above the table to filter or export the data.
Manage Login Policies
Administrators can configure a session expiration time and the number of times users can refresh before requiring them to log in again to NetQ.
To configure these login policies:
On the Login Management card, select Manage.
Select how long an account can be logged in before requiring a user to log in again:
Click Update to save the changes.
The Login Management card reflects the updated configuration.
Configure Premises
The NetQ management dashboard lets you configure a single NetQ UI and CLI for monitoring data from multiple premises. This means you do not need to log in to each premises individually to view the data.
Configure Multiple Premises
There are two ways to implement a multi-site, on-premises deployment: (1) as a full deployment at the primary premises and each of the external premises or (2) as a full deployment at the primary premises with smaller deployments at the secondary premises.
The primary premises is called OPID0 by default in the UI.
Full NetQ Deployment at Each Premises
In this implementation, there is a NetQ appliance or VM running the NetQ software with a database. Each premises operates independently as an external premises, with its own NetQ UI and CLI. The NetQ appliance or VM at one of the deployments acts as the primary premises. A list of external premises is stored with the primary deployment.
To configure a single UI to monitor multiple premises:
From the UI of the primary premises (OPID0), select the Premises dropdown in the top-right corner of the screen.
Select Manage premises, then select the External premises tab.
Select Add external premises.
Enter the IP address for the external server, your username, and password. The username and password are the same credentials used to log in to the UI for the external server. Select Next.
Select the premises you want to connect, then click Finish.
You can also reduce the number of premises that can be displayed in the UI by hovering over a deployment and selecting Delete.
To view the premises you just added, return to the home workbench and select the Premises dropdown in the top-right corner of the screen. Alternately, run the netq config show cli premises command.
Full NetQ Deployment at Primary Premises and Smaller Deployments at Secondary Premises
In this implementation, there is a NetQ appliance or VM at one of the deployments acting as the primary premises for the other deployments. The primary premises runs the NetQ software (including the NetQ UI and CLI) and houses the database. All other deployments are secondary premises; they run the NetQ cloud software and send their data to the primary premises for storage and processing. A list of these secondary premises is stored with the primary deployment.
After the multiple premises are configured, you can view this list of premises in the NetQ UI at the primary premises, change the name of premises on the list, and delete premises from the list.
In this deployment model, the data is stored and can be viewed only from the NetQ UI at the primary premises.
The primary NetQ premises must be installed and operational before the secondary premises can be added.
To create and add secondary premises:
In the workbench header, select the Premises dropdown.
Click Manage premises. Your primary premises (OPID0) is shown by default.
Click Add premises.
Enter the name of a secondary premises you’d like to add, then click Done.
From the confirmation dialog, select View config key.
Click the copy icon, then save the key to a safe place, or click e-mail to send it to yourself or others. Then click Confirm activation.
To view the premises you just added, return to the home workbench and select the Premises dropdown at the top-right corner of the screen. Alternately, run the netq config show cli premises command.
Rename a Premises
To rename an existing premises:
In the workbench header, select the Premises dropdown, then Manage premises.
Select a premises to rename, then click Edit.
Enter the new name for the premises, then click Done.
Reconfigure the NetQ CLI by generating new AuthKeys. You must complete this step after renaming a premises for the CLI to remain functional.
The following sections describe how to back up and restore your NetQ data and VMs for on-premises deployments. Cloud deployments are backed up automatically.
You must run backup and restore scripts with sudo privileges.
In the directory where you copied the vm-backuprestore.sh script, run:
cumulus@netq-appliance:~$ sudo ./vm-backuprestore.sh --backup
[sudo] password for cumulus:
Mon Feb 6 12:37:18 2024 - Please find detailed logs at: /var/log/vm-backuprestore.log
Mon Feb 6 12:37:18 2024 - Starting backup of data, the backup might take time based on the size of the data
Mon Feb 6 12:37:19 2024 - Scaling static pods to replica 0
Mon Feb 6 12:37:19 2024 - Scaling all pods to replica 0
Mon Feb 6 12:37:28 2024 - Scaling all daemonsets to replica 0
Mon Feb 6 12:37:29 2024 - Waiting for all pods to go down
Mon Feb 6 12:37:29 2024 - All pods are down
Mon Feb 6 12:37:29 2024 - Creating backup tar /opt/backuprestore/backup-netq-standalone-onprem-4.9.0-2024-02-06_12_37_29_UTC.tar
Backup is successful, please scp it to the master node the below command:
sudo scp /opt/backuprestore/backup-netq-standalone-onprem-4.9.0-2024-02-06_12_37_29_UTC.tar cumulus@<ip_addr>:/home/cumulus
Restore the backup file using the below command:
./vm-backuprestore.sh --restore --backupfile /opt/backuprestore/backup-netq-standalone-onprem-4.9.0-2024-02-06_12_37_29_UTC.tar
cumulus@netq-appliance:~$
Verify the backup file creation was successful:
cumulus@netq-appliance:~$ cd /opt/backuprestore/
cumulus@netq-appliance:~/opt/backuprestore$ ls
backup-netq-standalone-onprem-4.9.0-2024-02-06_12_37_29_UTC.tar
Restore Your NetQ Data
Restore NetQ data with the backup file you created in the steps above. The restore option of the backup script copies the data from the backup file to the database, decompresses it, verifies the restoration, and starts all necessary services. You should not see any data loss as a result of a restore operation.
Run the restore script, referencing the directory where the backup file resides.
If you restore NetQ data to a server with an IP address that is different from the one used to back up the data, you must reconfigure the agents on each switch as a final step.
cumulus@netq-appliance:~$ sudo vm-backuprestore.sh --restore --backupfile /home/cumulus/backup-netq-standalone-onprem-4.10.0-2024-02-06_12_37_29_UTC.tar
Mon Feb 6 12:39:57 2024 - Please find detailed logs at: /var/log/vm-backuprestore.log
Mon Feb 6 12:39:57 2024 - Starting restore of data
Mon Feb 6 12:39:57 2024 - Extracting release file from backup tar
Mon Feb 6 12:39:57 2024 - Cleaning the system
Mon Feb 6 12:39:57 2024 - Restoring data from tarball /home/cumulus/backup-netq-standalone-onprem-4.10.0-2024-02-06_12_37_29_UTC.tar
Data restored successfully
Please follow the below instructions to bootstrap the cluster
The config key restored is EhVuZXRxLWVuZHBvaW50LWdhdGVfYXkYsagDIix2OUJhMUpyekMwSHBBaitUdTVDaTRvbVJDR3F6Qlo4VHhZRytjUUhLZGJRPQ==, alternately the config key is available in file /tmp/config-key
Pass the config key while bootstrapping:
Example(standalone): netq install standalone full interface eth0 bundle /mnt/installables/NetQ-4.12.0.tgz config-key EhVuZXRxLWVuZHBvaW50LWdhdGV3YXkYsagDIix2OUJhMUpyekMwSHBbaitUdTVDaTRvbVJDR3F6Qlo4VHhZRytjUUhLZGJRPQ==
Example(cluster): netq install cluster full interface eth0 bundle /mnt/installables/NetQ-4.12.0.tgz config-key EhVuZXRxLWVuZHBvaW50LWdhdGV3YXkYsagDIix2OUJhMUpyekMwSHBbaitUdTVDaTRvbVJDR3F6Qlo4VHhZRytjUUhLZGJRPQ==
Alternately you can setup config-key post bootstrap in case you missed to pass it during bootstrap
Example(standalone): netq install standalone activate-job config-key EhVuZXRxLWVuZHBvaW50LWdhdGV3YXkYsagDIix2OUJhMUpyekMwSHBbaitUdTVDaTRvbVJDR3F6Qlo4VHhZRytjUUhLZGJRPQ==
Example(cluster): netq install cluster activate-job config-key EhVuZXRxLWVuZHBvaW50LWdhdGV3YXkYsagDIix2OUJhMUpyekMwSHBbaitUdTVDaTRvbVJDR3F6Qlo4VHhZRytjUUhLZGJRPQ==
In case the IP of the restore machine is different from the backup machine, please reconfigure the agents using: https://docs.nvidia.com/networking-ethernet-software/cumulus-netq/Installation-Management/Install-NetQ/Install-NetQ-Agents/#configure-netq-agents-using-a-configuration-file
cumulus@netq-appliance:~$
Post-installation Configurations
This section describes the various integrations you can configure after installing NetQ.
As an administrator, you can integrate the NetQ role-based access control (RBAC) with your lightweight directory access protocol (LDAP) server in on-premises deployments. NetQ maintains control over role-based permissions for the NetQ application. With the RBAC integration, LDAP and your directory service (such as Microsoft Active Directory, Kerberos, OpenLDAP, or Red Hat Directory Service) handle account authentication. A copy of each account from LDAP is stored in the local NetQ database.
Integrating with an LDAP server does not prevent you from configuring local accounts (stored and managed in the NetQ database) as well.
Get Started
LDAP integration requires information about how to connect to your LDAP server, the type of authentication you plan to use, bind credentials, and, optionally, search attributes.
Provide Your LDAP Server Information
To connect to your LDAP server, you need the URI and bind credentials. The URI identifies the location of the LDAP server. It comprises an FQDN (fully qualified domain name) or IP address and the port of the LDAP server where the LDAP client can connect. For example: myldap.mycompany.com or 192.168.10.2. Typically you use port 389 for connection over TCP or UDP. In production environments, you deploy a secure connection with SSL. In this case, the port used is typically 636. Setting the Enable SSL toggle automatically sets the server port to 636.
Specify Your Authentication Method
There are two types of user authentication: anonymous and basic.
Anonymous: LDAP client does not require any authentication. The user can access all resources anonymously. This is not commonly used for production environments.
Basic (also called Simple): LDAP client must provide a bind DN and password to authenticate the connection. When selected, the Admin credentials appear: Bind DN and Bind Password. You define the distinguished name (DN) using a string of variables. Some common variables include:
Syntax    Description or Usage
cn        Common name
ou        Organizational unit or group
dc        Domain name
dc        Domain extension
Bind DN: DN of user with administrator access to query the LDAP server; used for binding with the server. For example, uid=admin,ou=ntwkops,dc=mycompany,dc=com.
Bind Password: Password associated with Bind DN.
The Bind DN and password get sent as clear text. Only users with these credentials can perform LDAP operations.
If you are unfamiliar with the configuration of your LDAP server, contact your administrator to ensure you select the appropriate authentication method and credentials.
Define User Attributes
You need the following two attributes to define a user entry in a directory:
Base DN: Location in directory structure where search begins. For example, dc=mycompany,dc=com.
User ID: Type of identifier used to specify an LDAP user. This can vary depending on the authentication service you are using. For example, you can use the user ID (UID) or email address with OpenLDAP, whereas you might use the sAMAccountName with Active Directory.
Optionally, you can specify the first name, last name, and email address of the user.
Set Search Attributes
While optional, specifying search scope indicates where to start and how deep a given user can search within the directory. You specify the data to search for in the search query.
Search scope options include:
Subtree: Search for users from base, subordinates at any depth (default)
Base: Search for users at the base level only; no subordinates
One level: Search for immediate children of user; not at base or for any descendants
Subordinate: Search for subordinates at any depth of user; but not at base
A typical search query for users could be {userIdAttribute}={userId}.
Create an LDAP Configuration
You can configure one LDAP server per bind DN (distinguished name). After you configure LDAP, you can verify the connectivity and save the configuration.
To create an LDAP configuration:
Expand the Menu and select Management.
Locate the LDAP Server Info card, and click Configure LDAP.
Fill out the LDAP server configuration form according to your particular configuration.
Click Save to complete the configuration, or click Cancel to discard the configuration.
The LDAP configuration cannot be changed after it is configured. If you need to change the configuration, you must delete the current LDAP configuration and create a new one. Note that if you change the LDAP server configuration, all users created against that LDAP server remain in the NetQ database and continue to be visible, but are no longer viable. You must manually delete those users if you do not want to see them.
Example LDAP Configurations
This section lists a variety of example configurations. Scenarios 1-3 are based on using an OpenLDAP or similar authentication service. Scenario 4 is based on using the Active Directory service for authentication.
Scenario 1: Base Configuration
In this scenario, we are configuring the LDAP server with anonymous authentication, a user ID based on an email address, and a search scope of base.
Parameter           Value
Host Server URL     ldap1.mycompany.com
Host Server Port    389
Authentication      Anonymous
Base DN             dc=mycompany,dc=com
User ID             email
Search Scope        Base
Search Query        {userIdAttribute}={userId}
Scenario 2: Basic Authentication and Subset of Users
In this scenario, we are configuring the LDAP server with basic authentication, accessible only to users in the network operators group, and with a limited search scope.
Parameter              Value
Host Server URL        ldap1.mycompany.com
Host Server Port       389
Authentication         Basic
Admin Bind DN          uid=admin,ou=netops,dc=mycompany,dc=com
Admin Bind Password    nqldap!
Base DN                dc=mycompany,dc=com
User ID                UID
Search Scope           One Level
Search Query           {userIdAttribute}={userId}
Scenario 3: Scenario 2 with Widest Search Capability
In this scenario, we are configuring the LDAP server with basic authentication, accessible only to users in the network administrators group, and with an unlimited search scope.
Parameter              Value
Host Server URL        192.168.10.2
Host Server Port       389
Authentication         Basic
Admin Bind DN          uid=admin,ou=netadmin,dc=mycompany,dc=com
Admin Bind Password    1dap*netq
Base DN                dc=mycompany,dc=net
User ID                UID
Search Scope           Subtree
Search Query           {userIdAttribute}={userId}
Scenario 4: Scenario 3 with Active Directory Service
In this scenario, we are configuring the LDAP server with basic authentication, accessible only to users in the given Active Directory group, and with an unlimited search scope.
Parameter              Value
Host Server URL        192.168.10.2
Host Server Port       389
Authentication         Basic
Admin Bind DN          cn=netq,ou=45,dc=mycompany,dc=com
Admin Bind Password    nq&4mAd!
Base DN                dc=mycompany,dc=net
User ID                sAMAccountName
Search Scope           Subtree
Search Query           {userIdAttribute}={userId}
Add LDAP Users to NetQ
Click Menu and select Management.
Locate the User Accounts card, and click Manage.
From the User accounts tab, select Add user above the table.
Select LDAP User, then enter the user’s ID.
Enter your administrator password, then select Search.
If the user is found, the email address, first, and last name fields are automatically populated. If searching is not enabled on the LDAP server, you must enter the information manually.
If the fields are not automatically filled in, and searching is enabled on the LDAP server, you might need to edit the mapping file.
LDAP user passwords are not stored in the NetQ database and are always authenticated against LDAP.
Repeat these steps to add additional LDAP users.
Remove LDAP Users from NetQ
You can remove LDAP users in the same manner as local users.
Expand the Menu and select Management.
Locate the User Accounts card, and click Manage.
Select the user(s) you want to remove, then select Delete.
If you delete an LDAP user in LDAP, it is not automatically deleted from NetQ; however, the login credentials for these LDAP users stop working immediately.
Integrate NetQ with Grafana
Switches collect statistics about the performance of their interfaces. The NetQ Agent on each switch collects these statistics every 15 seconds and then sends them to your NetQ appliance or virtual machine.
NetQ collects statistics for physical interfaces; it does not collect statistics for virtual interfaces, such as bonds, bridges, and VXLANs.
NetQ displays:
Transmit with tx_ prefix: bytes, carrier, colls, drop, errs, packets
Receive with rx_ prefix: bytes, drop, errs, frame, multicast, packets
You can use Grafana, an open source analytics and monitoring tool, to view these statistics. The fastest way to achieve this is by installing Grafana on an application server or locally per user, and then installing the NetQ plugin.
If you do not have Grafana installed already, refer to grafana.com for instructions on installing and configuring the Grafana tool.
Install NetQ Plugin for Grafana
Use the Grafana CLI to install the NetQ plugin. For more detail about this command, refer to the Grafana CLI documentation.
The Grafana plugin comes unsigned. Before you can install it, you need to update the grafana.ini file then restart the Grafana service:
Edit the /etc/grafana/grafana.ini file and add allow_loading_unsigned_plugins = netq-dashboard under plugins:
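The resulting section of /etc/grafana/grafana.ini would look like the following (the grafana-server service name assumes a standard systemd-based Grafana installation):
[plugins]
allow_loading_unsigned_plugins = netq-dashboard
Then restart the Grafana service:
sudo systemctl restart grafana-server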
Cumulus in the Cloud (CITC): plugin.air.netq.nvidia.com
From the Module dropdown, select procdevstats.
Enter your credentials (the ones used to log in).
For NetQ cloud deployments only, if you have more than one premises configured, you can select the premises you want to view, as follows:
If you leave the Premises field blank, the first premises name is selected by default.
If you enter a premises name, that premises is selected for viewing.
If multiple premises are configured with the same name, then the first listed premises is displayed.
Select Save & Test.
Create Your NetQ Dashboard
After you configure the data source, you can create a customizable dashboard with transmit and receive statistics.
Create a Dashboard
Open a blank dashboard.
Click Dashboard Settings at the top of the dashboard.
Add Variables
Click Variables.
In the Name field, enter hostname.
In the Label field, enter hostname.
From the Data source list, select Net-Q.
From the Refresh list, select On Dashboard Load.
In the Query field, enter hostname.
Click Add.
You should see a preview at the bottom of the hostname values.
Click Variables to add another variable for the interface name.
In the Name field, enter ifname.
In the Label field, enter ifname.
From the Data source list, select Net-Q.
From the Refresh list, select On Dashboard Load.
In the Query field, enter ifname.
Click Add.
You should see a preview at the bottom of the ifname values.
Click Variables to add a variable for metrics.
In the Name field, enter metrics.
In the Label field, enter metrics.
From the Data source list, select Net-Q.
From the Refresh list, select On Dashboard Load.
In the Query field, enter metrics.
Click Add.
You should see a preview at the bottom of the metrics values.
Add Charts
Now that the variables are defined, return to the new dashboard.
Click Add Query.
From the Query source list, select Net-Q.
Select the interface statistic you want to view from the Metric list.
Click the General icon.
From the Repeat list, select hostname.
Set any other parameters around how to display the data.
Return to the dashboard.
Select one or more hostnames from the hostname list.
Select one or more interface names from the ifname list.
Select one or more metrics to display for these hostnames and interfaces from the metrics list.
The following example shows a dashboard with two hostnames, two interfaces, and one metric selected. The more values you select from the variable options, the more charts appear on your dashboard.
Analyze the Data
After you have configured the dashboard, you can start analyzing the data. You can explore the data by modifying the viewing parameters in one of several ways using the dashboard tool set:
Select a different time period for the data by clicking the forward or back arrows. The default time range is dependent on the width of your browser window.
Zoom in on the dashboard by clicking the magnifying glass.
Manually refresh the dashboard data, or set an automatic refresh rate for the dashboard from the down arrow.
Add additional panels.
Click any chart title to edit or remove it from the dashboard.
Rename the dashboard by clicking the cog wheel and entering the new name.
SSO Authentication
You can integrate your NetQ cloud deployment with a Microsoft Azure Active Directory (AD) or Google Cloud authentication server to support single sign-on (SSO) to NetQ. NetQ supports integration with SAML (Security Assertion Markup Language), OAuth (Open Authorization), and multi-factor authentication (MFA). Only one SSO configuration can be configured at a time.
You can create local accounts with default access roles by enabling SSO. After enabling SSO, users logging in for the first time can sign up for SSO through the NetQ login screen or with a link provided by an admin.
Add SSO Configuration and Accounts
To integrate your authentication server:
Expand the Menu and select Management.
Locate the SSO Configuration card and select Manage.
Select either SAML or OpenID (which uses OAuth with OpenID Connect).
Specify the parameters:
You need several pieces of data from your Microsoft Azure or Google account and authentication server to complete the integration.
SSO Organization is typically a company’s name or a department. The name entered in this field will appear in the SSO signup URL.
Role (either user or admin) is automatically assigned when the account is initialized via SSO login.
Name is a unique name for the SSO configuration.
Client ID is the identifier for your resource server.
Client Secret is the secret key for your resource server.
Authorization Endpoint is the URL of the authorization application.
Token Endpoint is the URL of the authorization token.
Select Test to verify the configuration and ensure that you can log in. If it is not working, you are logged out. Check your specification and retest the configuration until it is working properly.
Select Close. The card reflects the configuration:
To require users to log in using this SSO configuration, select Change under the “Disabled” status and confirm. The card updates to reflect that SSO is enabled.
After an admin has configured and enabled SSO, users logging in for the first time can sign up for SSO.
Select Submit to enable the configuration. The SSO card reflects the “enabled” status.
The SSO organization you entered during the configuration will replace SSO_Organization in the URL.
Modify Configuration
You can change the specifications for SSO integration with your authentication server at any time, including changing to an alternate SSO type, disabling the existing configuration, or reconfiguring SSO.
Change SSO Type
From the SSO Configuration card:
Select Disable, then Yes.
Select Manage then select the desired SSO type and complete the form.
Copy the redirect URL on the success dialog into your identity provider configuration.
Select Test to verify that the login is working. Modify your specification and retest the configuration until it is working properly.
Select Update.
Disable SSO Configuration
From the SSO Configuration card:
Select Disable.
Select Yes to disable the configuration, or Cancel to keep it enabled.
Uninstall NetQ
This page outlines how to remove the NetQ software from your system server and switches.
Remove the NetQ Agent and CLI
Use the apt-get purge command to remove the NetQ Agent or CLI package from a Cumulus Linux switch or an Ubuntu host:
cumulus@switch:~$ sudo apt-get update
cumulus@switch:~$ sudo apt-get purge netq-agent netq-apps
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be REMOVED:
netq-agent* netq-apps*
0 upgraded, 0 newly installed, 2 to remove and 0 not upgraded.
After this operation, 310 MB disk space will be freed.
Do you want to continue? [Y/n] Y
...
If you only want to remove the agent or the CLI, but not both, specify just the relevant package in the apt-get purge command.
To verify the removal of the packages from the switch, run:
cumulus@switch:~$ dpkg-query -l netq-agent
dpkg-query: no packages found matching netq-agent
cumulus@switch:~$ dpkg-query -l netq-apps
dpkg-query: no packages found matching netq-apps
Use the yum remove command to remove the NetQ agent or CLI package from a RHEL7 or CentOS host:
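For example, to remove both packages:
sudo yum remove netq-agent netq-apps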
Verify the removal of the packages from the host:
cumulus@host:~$ rpm -q netq-agent
package netq-agent is not installed
cumulus@host:~$ rpm -q netq-apps
package netq-apps is not installed
Delete the virtual machine according to the usual VMware or KVM practice.
Delete a virtual machine from the host computer using one of the following methods:
Right-click the name of the virtual machine in the Favorites list, then select Delete from Disk.
Select the virtual machine and choose VM > Delete from disk.
Delete a virtual machine from the host computer using one of the following methods:
Run virsh undefine <vm-domain> --remove-all-storage
Run virsh undefine <vm-domain> --wipe-storage
Manage Users
As an admin, you can manage users and authentication settings from the NetQ management dashboard.
Lifecycle management displays an inventory of switches that are available for software installation or upgrade through NetQ. From the inventory list, you can assign access profiles and roles to switches, and select switches for software installation and upgrades. You can also decommission switches, which removes them from the NetQ database.
If you manage a switch using an in-band network interface, ensure that you have configured the agent before performing any LCM operations on the switch.
View the LCM Switch Inventory
Expand the Menu, then select Manage switches. From the dashboard, select the Switch management tab. The Switches card displays the number of switches that NetQ discovered and the network OS versions that are running on those switches:
To view a table of all discovered switches and their attributes, select Manage on the Switches card.
If you have more than one network OS version running on your switches, you can click a version segment on the Switches card chart to open a list of switches filtered by that version.
To view a list of all switches discovered by lifecycle management, run the netq lcm show switches command:
netq lcm show switches
[cl-version <text-cumulus-linux-version>]
[netq-version <text-netq-version>]
[json]
The table of switches is the starting point for network OS upgrades or NetQ installations and upgrades. If the switches you want to upgrade are not present in the list, you can:
Verify the missing switches are reachable using ping
Run a switch discovery, which locates all switches running Cumulus Linux in your network’s fabric
Verify that the NetQ Agent is fresh and running version 4.1.0 or later for switches that already have the agent installed (click Menu, then click Agents or run netq show agents)
A switch discovery searches your network for all Cumulus Linux switches (with and without NetQ currently installed) and determines the versions of Cumulus Linux and NetQ installed. These results can be used to install or upgrade Cumulus Linux and NetQ on all discovered switches in a single procedure.
If you intend to upgrade your switches, generate AuthKeys using the UI. Copy the access key and secret key to an accessible location. You will enter the AuthKeys later on in this process.
To discover switches running Cumulus Linux:
Expand the Menu, then select Manage switches:
On the Switches card, click Discover.
Enter a name for the scan.
Choose whether you want to look for switches by entering IP address ranges or import switches using a comma-separated values (CSV) file.
If you do not have a switch listing, then you can manually add the address ranges where your switches are located in the network. This has the advantage of catching switches that might have been missed in a file.
A maximum of 50 addresses can be included in an address range. If necessary, break the range into smaller ranges.
To discover switches using address ranges:
Enter an IP address range in the IP Range field.
Ranges can be contiguous, for example 192.168.0.24-64, or non-contiguous, for example 192.168.0.24-64,128-190,235, but they must be contained within a single subnet.
Optionally, enter another IP address range (in a different subnet).
For example, 198.51.100.0-128 or 198.51.100.0-128,190,200-253.
Add additional ranges as needed, or remove a range you no longer need.
If you decide to use a CSV file instead, the ranges you entered will remain if you return to using IP ranges again.
To import switches through a CSV file:
Click Browse.
Select the CSV file containing the list of switches.
The CSV file must include a header containing hostname, ip, and port. They can be in any order you like, but the data must match that order. For example, a CSV file that represents the Cumulus reference topology could look like this:
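For example, with placeholder management addresses (only the header names come from the requirement above):
hostname,ip,port
leaf01,192.168.0.11,22
leaf02,192.168.0.12,22
spine01,192.168.0.21,22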
or this:
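Or, with the columns reordered and the optional fields left blank for some entries (addresses are again placeholders):
ip,port,hostname
192.168.0.11,22,leaf01
192.168.0.12,,
192.168.0.21,22,spine01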
You must have an IP address in your file, but the hostname is optional. If the port is blank, NetQ uses switch port 22 by default.
Click Remove if you decide to use a different file or want to use IP address ranges instead. If you entered ranges before selecting the CSV file option, they remain.
Select an access profile from the dropdown menu. If you use Netq-Default, you will see a message requesting that you create or update your credentials.
Click Next.
When the network discovery is complete, NetQ presents the number of Cumulus Linux switches it found. Each switch can be in one of the following categories:
Discovered without NetQ: Switches found without NetQ installed
Discovered with NetQ: Switches found with some version of NetQ installed
Discovered but rotten: Switches found that are unreachable
Incorrect credentials: Switches found that are unreachable because the provided access credentials do not match those for the switches
OS not supported: Switches found that are running a Cumulus Linux version not supported by LCM upgrades
Not discovered: IP addresses which did not have an associated Cumulus Linux switch
After performing a switch discovery, you can install or upgrade Cumulus Linux and NetQ.
Use the netq lcm discover command, specifying a single IP address, a range of IP addresses where your switches are located in the network, or a CSV file containing the IP address.
You must also specify the access profile ID, which you can obtain with the netq lcm show credentials command.
cumulus@switch:~$ netq lcm discover ip-range 192.168.0.15 profile_id credential_profile_d9e875bd2e6784617b304c20090ce28ff2bb46a4b9bf23cda98f1bdf911285c9
NetQ Discovery Started with job id: job_scan_9ea69d30-e86e-11ee-b32d-71890ec96f40
When the network discovery is complete, run netq lcm show discovery-job and include the job ID which you obtained from the previous command. If you do not specify a job ID, the output includes a list of all job IDs from all discovery jobs.
cumulus@switch:~$ netq lcm show discovery-job job_scan_9ea69d30-e86e-11ee-b32d-71890ec96f40
Scan COMPLETED
Summary
-------
Start Time: 2024-03-22 17:10:56.363000
End Time: 1970-01-01 00:00:00.000000
Total IPs: 1
Completed IPs: 1
Discovered without NetQ: 0
Discovered with NetQ: 0
Incorrect Credentials: 0
OS Not Supported: 0
Not Discovered: 0
Hostname IP Address MAC Address CPU CL Version NetQ Version Config Profile Discovery Status
----------------- ------------------------- ------------------ -------- ----------- ------------- ---------------------------- ----------------
noc-pr 192.168.0.15 00:01:00:00:11:00 x86_64 5.8.0 4.8.0 [] WITH_NETQ_ROTTEN
NetQ presents the number of Cumulus Linux switches it has found. The output displays their discovery status, which can be one of the following:
Discovered without NetQ: Switches found without NetQ installed
Discovered with NetQ: Switches found with some version of NetQ installed
Discovered but Rotten: Switches found that are unreachable
Incorrect Credentials: Switches found that are unreachable because the provided access credentials do not match those for the switches
OS not Supported: Switches found that are running Cumulus Linux version not supported by the LCM upgrade feature
Not Discovered: IP addresses which did not have an associated Cumulus Linux switch
After performing a switch discovery, you can install or upgrade Cumulus Linux and NetQ using the appropriate netq lcm command.
Role Management
You can assign switches one of four roles: superspine, spine, leaf, and exit.
Switch roles identify switch dependencies and determine the order in which switches are upgraded. The upgrade process begins with switches assigned the superspine role, then continues with the spine switches, leaf switches, exit switches, and finally, switches with no role assigned. Upgrades for all switches with a given role must be successful before the upgrade proceeds to the switches with the closest dependent role.
Role assignment is optional, but recommended. Assigning roles can prevent switches from becoming unreachable due to dependencies between switches or single attachments. Additionally, when you deploy MLAG pairs, assigned roles avoid upgrade conflicts.
Assign Roles to Switches
On the Switches card, click Manage.
Select one switch or multiple switches to assign to the same role. For large networks with many devices, you can assign roles in batches by selecting Bulk assign role and creating rules based on device hostnames.
Above the table, select Assign role.
Select the role (superspine, leaf, spine, or exit) that applies to the selected switch(es).
Click Assign.
Note that the Role column is updated with the role assigned to the selected switch(es). To return to the full list of switches, click All.
Continue selecting switches and assigning roles until most or all switches have roles assigned.
To assign multiple switches to the same role, separate the hostnames with commas (no spaces). This example configures leaf01 through leaf04 switches with the leaf role:
netq lcm add role leaf switches leaf01,leaf02,leaf03,leaf04
To view all switch roles, run:
netq lcm show switches [version <text-cumulus-linux-version>] [json]
Use the version option to only show switches with a given network OS version, X.Y.Z.
The Role column displays assigned roles:
cumulus@switch:~$ netq lcm show switches
Hostname Role IP Address MAC Address CPU CL Version NetQ Version Config Profile Credential Profile Last Changed
----------------- ---------- ------------------------- ------------------ -------- ----------- ------------- ---------------------------- ------------------------------------ -------------------------
noc-pr 192.168.0.15 00:01:00:00:11:00 x86_64 5.8.0 4.8.0-cl4u44~ [] Netq-Default Thu Mar 14 05:34:57 2024
1699073372.80
e664937
noc-se 192.168.0.15 00:01:00:00:12:00 x86_64 5.8.0 4.8.0-cl4u44~ [] Netq-Default Thu Mar 14 05:35:27 2024
1699073372.80
e664937
spine-1 spine 192.168.0.15 00:01:00:00:13:00 x86_64 5.8.0 4.8.0-cl4u44~ [] Netq-Default Thu Mar 14 05:35:27 2024
1699073372.80
e664937
spine-2 spine 192.168.0.15 00:01:00:00:14:00 x86_64 5.8.0 4.8.0-cl4u44~ [] Netq-Default Thu Mar 14 05:35:27 2024
1699073372.80
e664937
spine-3 spine 192.168.0.15 00:01:00:00:15:00 x86_64 5.8.0 4.8.0-cl4u44~ [] Netq-Default Thu Mar 14 05:35:27 2024
1699073372.80
e664937
tor-2 192.168.0.15 00:01:00:00:17:00 x86_64 5.8.0 4.8.0-cl4u44~ [] Netq-Default Thu Mar 14 05:35:27 2024
1699073372.80
e664937
exit-1 exit 192.168.0.15 00:01:00:00:01:00 x86_64 5.8.0 4.8.0-cl4u44~ [] Netq-Default Thu Mar 14 05:35:27 2024
1699073372.80
e664937
exit-2 exit 192.168.0.15 00:01:00:00:02:00 x86_64 5.8.0 4.8.0-cl4u44~ [] CL-auth-profile Wed Mar 20 16:03:15 2024
Reassign Roles to Switches
On the Switches card, click Manage.
Select the switches with the incorrect role from the list.
Click Assign role.
Select the correct role. To leave a switch unassigned, select No Role.
Click Assign.
You use the same command to both assign a role and change a role.
For a single switch, run:
netq lcm add role exit switches border01
To assign multiple switches to the same role, separate the hostnames with commas (no spaces). For example:
cumulus@switch:~$ netq lcm add role exit switches border01,border02
Host a ZTP Script with NetQ
You can host a Zero Touch Provisioning (ZTP) script on your NetQ VM to provision switches running Cumulus Linux. To host a ZTP script, copy the script to your NetQ server and reference the path you copied to in the netq lcm add ztp-script CLI command:
cumulus@netq-server:~$ netq lcm add ztp-script /home/cumulus/ztp.sh
ZTP script ztp.sh uploaded successfully and can be downloaded from http://10.10.10.10/lcm/asset/ztp.sh
cumulus@netq-server:~$
The output of the command will provide the URL to use in the DHCP server option 239 configuration to instruct switches to retrieve the script. If you would like to use your NetQ VM as a DHCP server, you can use the Kea DHCP server package, which is installed by default.
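As a rough sketch only: in a Kea DHCPv4 configuration, option 239 could be defined and populated with the URL returned above. The option name cumulus-ztp is an arbitrary label chosen for this example, and the snippet belongs inside your existing Dhcp4 configuration block:
"option-def": [
    { "name": "cumulus-ztp", "code": 239, "type": "string", "space": "dhcp4" }
],
"option-data": [
    { "name": "cumulus-ztp", "space": "dhcp4", "data": "http://10.10.10.10/lcm/asset/ztp.sh" }
]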
To list scripts that are currently added to NetQ along with their download URLs and script identification numbers, use the netq lcm show ztp-scripts command. You can remove ZTP scripts from NetQ with the netq lcm del ztp-script <text-ztp-script-id> command.
Decommissioning the switch or host removes information about the switch or host from the NetQ database. When the NetQ Agent restarts at a later date, it sends a connection request back to the database, so NetQ can monitor the switch or host again.
From the LCM dashboard, navigate to the Switch management tab.
On the Switches card, select Manage.
Select the devices to decommission, then select Decommission device above the table:
If you attempt to decommission a switch that is assigned a default, unmodified access profile, the process will fail. Create a unique access profile (or update the default with unique credentials), then attach the profile to the switch you want to decommission.
Confirm the devices you want to decommission.
Wait for the decommission process to complete, then select Done.
To decommission a switch or host:
On the given switch or host, stop and disable the NetQ Agent service:
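For example, assuming the agent runs as the netq-agent systemd service:
cumulus@switch:~$ sudo systemctl stop netq-agent
cumulus@switch:~$ sudo systemctl disable netq-agent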
You must have switch access credentials to install and upgrade software on a switch. These user authentication credentials are stored in NetQ as access profiles. The profiles must be applied to a switch before you can upgrade or install software.
Access Profiles
Authentication credentials are stored in access profiles which can be assigned to individual switches. You can create credentials with either basic (SSH username/password) or SSH (public/private key) authentication. This section describes how to create, edit, and delete access profiles. After you create a profile, attach it to individual switches so that you can perform upgrades on those switches.
By default, NVIDIA supplies an access profile called Netq-Default. You must create a new access profile or update the default profile with unique credentials to perform upgrades and other lifecycle management tasks.
You cannot delete the default profile.
Create Access Profiles
Expand the Menu and select Manage switches.
On the Access Profiles card, select Add profile.
Enter a name for the profile, then select the authentication method you want to use: SSH or Basic
The SSH user must have sudoer permission to configure switches when using the SSH key method. To provide sudo access to the SSH user on a switch, create a file in the /etc/sudoers.d/ directory with the following content. Replace <USER> with the SSH access profile username:
<USER> ALL=(ALL) NOPASSWD: ALL
Create a pair of SSH private and public keys on the NetQ appliance:
ssh-keygen -t rsa -C "<USER>"
When prompted, hit the enter/return key.
Copy the SSH public key to each switch that you want to upgrade using one of the following methods:
Manually copy the SSH public key to the /home/<USER>/.ssh/authorized_keys file on each switch, or
Run ssh-copy-id USER@<switch_ip> on the server where you generated the SSH key pair for each switch
Copy the SSH private key into the entry field:
For security, your private key is stored in an encrypted format, and only provided to internal processes while encrypted.
(Optional) To verify that the new profile is listed among available profiles, select View profiles from the Access Profiles card.
(Optional) Attach the profile to a switch so that you can perform upgrades.
Enter a username and password.
Click Create, then confirm.
(Optional) To verify that the new profile is listed among available profiles, select View profiles from the Access Profiles card.
(Optional) Attach the profile to a switch so that you can perform upgrades.
Specify a unique name for the configuration after profile_name.
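A sketch of the command, assuming the username and password keywords of the netq lcm add credentials command; substitute your own profile name and switch credentials:
cumulus@netq-server:~$ netq lcm add credentials profile_name CL-auth-basic username cumulus password <password>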
The default credentials for Cumulus Linux have changed from cumulus/CumulusLinux! to cumulus/cumulus for releases 4.2 and later. For details, read Cumulus Linux User Accounts.
To configure SSH authentication using a public/private key:
You must have sudoer permission to properly configure switches when using the SSH key method.
If the keys do not yet exist, create a pair of SSH private and public keys on the NetQ appliance.
ssh-keygen -t rsa -C "<USER>"
When prompted, hit the enter/return key.
Copy the SSH public key to each switch that you want to upgrade using one of the following methods:
Manually copy the SSH public key to the /home/<USER>/.ssh/authorized_keys file on each switch, or
Run ssh-copy-id USER@<switch_ip> on the server where you generated the SSH key pair for each switch
Add these credentials to the switch. Specify a unique name for the configuration after profile_name.
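A sketch of the command, assuming an ssh-key keyword that accepts the private key contents; the profile name is a placeholder:
cumulus@netq-server:~$ netq lcm add credentials profile_name CL-auth-ssh ssh-key "$(cat ~/.ssh/id_rsa)"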
You cannot delete a profile that is currently attached to a switch. You must attach a different profile to the switch first. Note that you cannot delete the Netq-Default profile (but you can edit it).
On the Access Profiles card, select View profiles.
From the list of profiles, select Delete in the profile’s row.
The delete icon only appears next to custom profiles that are not attached to a switch.
Select Remove.
Run netq lcm show credentials. Identify the profiles you’d like to delete and copy their identifiers from the Profile ID column. The following example deletes the n-1000 profile:
cumulus@netq-server:~$ netq lcm show credentials
Profile ID Profile Name Type SSH Key Username Password Number of switches Last Changed
-------------------- ------------------------ ---------------- -------------- ---------------- ---------------- ------------------------------------ -------------------------
credential_profile_d Netq-Default BASIC cumulus ************** 11 Fri Feb 3 18:20:33 2023
9e875bd2e6784617b304
c20090ce28ff2bb46a4b
9bf23cda98f1bdf91128
5c9
credential_profile_3 n-1000 BASIC admin ************** 0 Fri Feb 3 21:49:10 2023
eddab251bddea9653df7
cd1be0fc123c5d7a42f8
18b68134e42858e54a9c
289
Run netq lcm del credentials profile_ids <text-credential-profile-ids>:
cumulus@netq-server:~$ netq lcm del credentials profile_ids credential_profile_3eddab251bddea9653df7cd1be0fc123c5d7a42f818b68134e42858e54a9c289
Verify that the profile is deleted with netq lcm show credentials.
View Access Profiles
You can view the type of credentials used to access your switches in the NetQ UI. You can view the details of the credentials using the NetQ CLI.
Open the LCM dashboard.
On the Access Profiles card, select View profiles.
To view a list of access profiles and their associated credentials, run netq lcm show credentials.
If you use an SSH key for the credentials, the public key appears in the command output.
If you use a username and password for the credentials, the username appears in the command output with the password masked.
Attach an Access Profile to a Switch
NetQ uses access profiles to store user authentication credentials. After creating an access profile from your credentials, you can attach a profile to one or multiple switches.
Expand the Menu and select Manage switches. On the Switches card, select Manage.
The table displays a list of switches. The Access type column specifies whether the type of authentication is basic or SSH. The Profile name column displays the access profile that is assigned to the switch.
Select the switches to which you’d like to assign access profiles, then select Manage access profile above the table:
Select the profile from the list, then click Apply. If the profile you want to use isn’t listed, select Add new profile and follow the steps to create an access profile.
Select Ok on the confirmation dialog. The updated access profiles are now reflected in the Profile name column.
The command syntax to attach a profile to a switch is:
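A sketch of the syntax, assuming the profile is referenced by the Profile ID shown in the netq lcm show credentials output:
netq lcm attach credentials profile_id <text-credential-profile-id> hostnames <text-switch-hostnames>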
Run netq lcm show switches and verify the change in the credential profile column.
NetQ and Network OS Images
NetQ and network operating system images are managed with LCM. This section explains how to check for missing images, upgrade images, and specify default images.
The network OS and NetQ images are available in several variants based on the software version, CPU architecture, platform, and SHA checksum. Download both the netq-apps and netq-agent packages that correspond to the version of Cumulus Linux you are running. The package file names include version 4.12.0 and update 49, for example:
Cumulus Linux 5.9.0 or later: netq-agent_4.12.0-cld12u49~1731404238.ffa541ea6_amd64.deb
Cumulus Linux 5.8.0 or earlier, ARM platforms: netq-agent_4.12.0-cl4u49~1731403923.ffa541ea6_armel.deb
Cumulus Linux 5.8.0 or earlier, amd64 platforms: netq-agent_4.12.0-cl4u49~1731404368.ffa541ea6_amd64.deb
View and Upload Missing Images
You should upload images for each network OS and NetQ version currently installed in your inventory so you can support rolling back to a known good version should an installation or upgrade fail. If you have specified a default network OS and/or NetQ version, the NetQ UI also verifies that the necessary versions of the default image are available based on the known switch inventory, and if not, lists those that are missing.
To upload missing network OS images:
Expand the Menu and select Manage switches. Select the Image management tab.
On the Cumulus Linux Images card, select View # missing CL images to see which images you need.
If you have already specified a default image, you must click Manage and then Missing to see the missing images.
Select one or more of the missing images and take note of the version, ASIC vendor, and CPU architecture for each.
Download the network OS disk images (.bin files) from the NVIDIA Enterprise Support Portal. Log in to the portal and from the Downloads tab, select Switches and Gateways. Under Switch Software, click All downloads next to Cumulus Linux for Mellanox Switches. Select the current version and the target version, then click Show Downloads Path. Download the file.
In the UI, select Add image above the table.
Provide the .bin file from an external drive that matches the criteria for the selected image(s).
Click Import.
If the upload was unsuccessful, an Image Import Failed message appears. Close the dialog and try uploading the file again.
Click Done.
(Optional) Click the Uploaded tab to verify the image is in the repository.
Click Close to return to the LCM dashboard.
The Cumulus Linux Images card reflects the number of images you uploaded.
(Optional) Display a summary of Cumulus Linux images uploaded to the LCM repo on the NetQ appliance or VM:
netq lcm show cl-images
Download the network OS disk images (.bin files) from the NVIDIA Enterprise Support Portal. Log in to the portal and from the Downloads tab, select Switches and Gateways. Under Switch Software, click All downloads next to Cumulus Linux for Mellanox Switches. Select the current version and the target version, then click Show Downloads Path. Download the file.
Upload the images to the LCM repository. The following example uses a Cumulus Linux 5.9.1 disk image.
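A sketch using the netq lcm add cl-image command shown later in this section; the file name and path are placeholders:
cumulus@netq-server:~$ netq lcm add cl-image /path/to/cumulus-linux-5.9.1-mlx-amd64.bin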
Repeat step 2 for each image you need to upload to the LCM repository.
To upload missing NetQ images:
Expand the Menu and select Manage switches. Select the Image management tab.
On the NetQ Images card, select View # missing NetQ images to see which images you need.
If you have already specified a default image, you must click Manage and then Missing to see the missing images.
Select one or all of the missing images and make note of the OS version, CPU architecture, and image type. Remember that you need both netq-apps and netq-agent for NetQ to perform the installation or upgrade.
Download the NetQ Debian packages needed for upgrade from the NetQ repository, selecting the appropriate OS version and architecture. Place the files in an accessible part of your local network.
In the UI, click Add image above the table.
Provide the .deb file(s) from an external drive that matches the criteria for the selected image.
Click Import.
If the upload was unsuccessful, an Image Import Failed message appears. Close the dialog and try uploading the file again.
Click Done.
(Optional) Click the Uploaded tab to verify that the image is in the repository.
Click Close to return to the LCM dashboard.
The NetQ Images card reflects the number of images you uploaded.
(Optional) Display a summary of NetQ images uploaded to the LCM repo on the NetQ appliance or VM:
netq lcm show netq-images
Download the NetQ Debian packages needed for upgrade from the NetQ repository, selecting the appropriate version and hypervisor/platform. Place them in an accessible part of your local network.
Upload the images to the LCM repository. This example uploads the two packages (netq-agent and netq-apps) required for NetQ version 4.12.0 for a NetQ appliance or VM running Ubuntu 20.04 with an AMD 64 architecture.
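A sketch using the netq lcm add netq-image command shown later in this section; the timestamps and hashes in the file names are placeholders:
cumulus@netq-server:~$ netq lcm add netq-image /path/to/netq-agent_4.12.0-ub20.04u49~<timestamp>.<hash>_amd64.deb
cumulus@netq-server:~$ netq lcm add netq-image /path/to/netq-apps_4.12.0-ub20.04u49~<timestamp>.<hash>_amd64.deb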
To upload the network OS or NetQ images that you want to use for upgrade, first download the Cumulus Linux disk images (.bin files) and NetQ Debian packages from the NVIDIA Enterprise Support Portal and NetQ repository, respectively. Place them in an accessible part of your local network.
If you are upgrading the network OS on switches with different ASIC vendors or CPU architectures, you need more than one image. For NetQ, you need both the netq-apps and netq-agent packages for each variant.
After obtaining the images, upload them to NetQ with the UI or CLI:
From the LCM dashboard, select the Image management tab.
Select Add image on the appropriate card:
Provide one or more images from an external drive.
Click Import.
Monitor the progress until it completes. Click Done.
Use the netq lcm add cl-image <text-cl-image-path> and netq lcm add netq-image <text-image-path> commands to upload the images. Run the relevant command for each image that needs to be uploaded.
Specifying a default upgrade version is optional, but recommended. You can assign a specific OS or NetQ version as the default version to use when installing or upgrading switches. The default is typically the newest version that you intend to install or upgrade on all, or the majority, of your switches. If necessary, you can override the default selection during the installation or upgrade process if an alternate version is needed for a given set of switches.
To specify a default version in the NetQ UI:
From the LCM dashboard, select the Image management tab.
Select Click here to set default x version on the relevant card.
Select the version you want to use as the default for switch upgrades.
Click Save. The default version is now displayed on the relevant Images card.
To view the default version with the CLI, run:
cumulus@switch:~$ netq lcm show default-version netq-images
Remove Images from Local Repository
After you upgrade all your switches beyond a particular release, you can remove images from the LCM repository to save space on the server. To remove images:
From the LCM dashboard, select the Image management tab.
Click Manage on the Cumulus Linux Images or NetQ Images card.
On the Uploaded tab, select the images you want to remove.
Click Delete.
To remove Cumulus Linux images, run:
netq lcm show cl-images [json]
netq lcm del cl-image <text-cl-image-id>
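NetQ images can be removed in the same way; the del netq-image form below is an assumption paralleling the cl-image command:
netq lcm show netq-images [json]
netq lcm del netq-image <text-image-id>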
Lifecycle management lets you upgrade to the latest agent version on switches with an existing NetQ Agent. You can upgrade only the NetQ Agent or both the NetQ Agent and NetQ CLI simultaneously. You can run up to five jobs at the same time; however, a given switch can only appear in one running job at a time.
Prepare for a NetQ Agent Upgrade
Before you upgrade, make sure you have the appropriate files and credentials:
(Optional) Create an agent configuration profile, as described in the next section.
Agent Configuration Profiles
You can set up a configuration profile to indicate how you want NetQ configured when it is installed or upgraded on your Cumulus Linux switches. When you create a configuration profile, you can adjust the following agent settings:
The VRF the NetQ agent uses to communicate with the NetQ server
Whether WJH is enabled or disabled
The agent log level
The agent CPU limit
The default configuration profile, NetQ default config, is set up to run in the management VRF and provide info-level logging. Both WJH and CPU limiting are disabled.
Create a NetQ agent configuration profile with the netq lcm add netq-config command. If you manage the switch using an in-band interface, you must specify the interface name using the inband-interface option:
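A minimal sketch of creating a profile; apart from inband-interface, which the text above names, the keyword names shown here are assumptions and might differ in your release:
cumulus@netq-server:~$ netq lcm add netq-config config_profile_name inband-profile inband-interface swp1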
The steps below assume that a version of NetQ is already installed on your switches. If NetQ is not installed, run a switch discovery to find all Cumulus Linux switches with and without NetQ currently installed and perform the upgrade as part of the discovery workflow.
Expand the Menu and select Manage switches.
Locate the Switches card and click Manage. Select the switches you want to upgrade.
Click Upgrade NetQ above the table and follow the steps in the UI.
Verify that the number of switches selected for upgrade matches your expectation.
Enter a name for the upgrade job. The name can contain a maximum of 22 characters (including spaces).
Review each switch. If you’d like to change the agent configuration profile, click Change config, then select an alternate profile to apply to all selected switches. Alternately, you can apply different profiles to each switch by clicking the current profile and selecting an alternate profile.
Review the summary indicating the number of switches and the configuration profile to be used. If either is incorrect, click Back and review your selections.
Select the version of NetQ Agent for upgrade. If you have designated a default version, keep the Default selection. Otherwise, select an alternate version by clicking Custom and selecting it from the list.
By default, the NetQ Agent and CLI are upgraded on the selected switches. If you do not want to upgrade the NetQ CLI, click Advanced and change the selection to No.
NetQ performs several checks to eliminate preventable problems during the upgrade process. When all of the pre-checks pass, click Upgrade to initiate the upgrade.
To upgrade the NetQ Agent on one or more switches, run:
The following example creates a NetQ Agent upgrade job called upgrade-example. It upgrades the spine01 and spine02 switches to NetQ Agent version 4.12.0.
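A sketch of the command, assuming a netq-image form parallel to the netq lcm upgrade cl-image command shown later in this guide:
cumulus@switch:~$ netq lcm upgrade netq-image name upgrade-example netq-version 4.12.0 hostnames spine01,spine02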
After starting the upgrade, you can monitor the progress in the NetQ UI. Successful upgrades are indicated by a green check mark. Failed upgrades display error messages indicating the cause of failure.
To view the progress of upgrade jobs using the CLI, run:
netq lcm show upgrade-jobs netq-image [json]
netq lcm show status <text-lcm-job-id> [json]
Example netq lcm show upgrade-jobs
You can view the progress of one upgrade job at a time. This requires the job identifier.
The following example shows all upgrade jobs that are currently running or have completed, and then shows the status of the job with a job identifier of job_netq_install_7152a03a8c63c906631c3fb340d8f51e70c3ab508d69f3fdf5032eebad118cc7.
cumulus@switch:~$ netq lcm show upgrade-jobs netq-image json
[
{
"jobId": "job_netq_install_7152a03a8c63c906631c3fb340d8f51e70c3ab508d69f3fdf5032eebad118cc7",
"name": "Leaf01-02 to NetQ330",
"netqVersion": "4.1.0",
"overallStatus": "FAILED",
"pre-checkStatus": "COMPLETED",
"warnings": [],
"errors": [],
"startTime": 1611863290557.0
}
]
cumulus@switch:~$ netq lcm show status netq-image job_netq_install_7152a03a8c63c906631c3fb340d8f51e70c3ab508d69f3fdf5032eebad118cc7
NetQ Upgrade FAILED
Upgrade Summary
---------------
Start Time: 2021-01-28 19:48:10.557000
End Time: 2021-01-28 19:48:17.972000
Upgrade CLI: True
NetQ Version: 4.1.0
Pre Check Status COMPLETED
Precheck Task switch_precheck COMPLETED
Warnings: []
Errors: []
Precheck Task version_precheck COMPLETED
Warnings: []
Errors: []
Precheck Task config_precheck COMPLETED
Warnings: []
Errors: []
Hostname CL Version NetQ Version Prev NetQ Ver Config Profile Status Warnings Errors Start Time
sion
----------------- ----------- ------------- ------------- ---------------------------- ---------------- ---------------- ------------ --------------------------
leaf01 4.2.1 4.1.0 3.2.1 ['NetQ default config'] FAILED [] ["Unreachabl Thu Jan 28 19:48:10 2021
e at Invalid
/incorrect u
sername/pass
word. Skippi
ng remaining
10 retries t
o prevent ac
count lockou
t: Warning:
Permanently
added '192.1
68.200.11' (
ECDSA) to th
e list of kn
own hosts.\r
\nPermission
denied,
please try a
gain."]
leaf02 4.2.1 4.1.0 3.2.1 ['NetQ default config'] FAILED [] ["Unreachabl Thu Jan 28 19:48:10 2021
e at Invalid
/incorrect u
sername/pass
word. Skippi
ng remaining
10 retries t
o prevent ac
count lockou
t: Warning:
Permanently
added '192.1
68.200.12' (
ECDSA) to th
e list of kn
own hosts.\r
\nPermission
denied,
please try a
gain."]
Upgrade Cumulus Linux
Lifecycle management (LCM) lets you upgrade Cumulus Linux on one or more switches in your network by scheduling upgrade jobs with the NetQ UI or the CLI. Each job can upgrade Cumulus Linux on up to 50 switches. NetQ upgrades five switches at a time until all switches in the job are upgraded. You can schedule up to five upgrade jobs to run simultaneously.
For deployments running Cumulus Linux versions:
5.6.0 to 5.8.0: you can upgrade to Cumulus Linux version 5.9 or later if your environment is running NetQ 4.10.1 or later. If you are running an earlier NetQ version, you must upgrade NetQ before you upgrade Cumulus Linux.
5.0.1 to 5.7.0: you can upgrade up to Cumulus Linux version 5.8.
Cloud deployments must be running NetQ v4.11 or later to perform the steps outlined on this page.
Save all configurations with the nv config save command.
When you upgrade a switch that has not been configured using NVUE (which is only supported for upgrades to Cumulus Linux versions 5.8 and earlier), LCM backs up and restores flat file configurations in Cumulus Linux. After you upgrade a switch that has been managed with flat files and subsequently run NVUE configuration commands, NVUE will overwrite the configuration restored by NetQ LCM. See Upgrading Cumulus Linux and System Configuration with the NVUE CLI for additional information.
During the Cumulus Linux upgrade process, NetQ does not upgrade or reinstall packages that are not part of the Cumulus Linux image. For example, if you installed node_exporter packages on a switch, you must reinstall these packages after the upgrade is complete.
Prepare for a Cumulus Linux Upgrade
Before you upgrade, make sure you have the appropriate files and credentials:
If you are upgrading to Cumulus Linux 5.9 or later and select the option to roll back to a previous Cumulus Linux version (for unsuccessful upgrade attempts), you must upload a total of four netq-apps and netq-agent packages to NetQ. Package file names for Cumulus Linux 5.9 or later include cld12; file names for prior Cumulus Linux versions include cl4u.
For example, you must upload the following packages for amd64 architecture:
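An illustrative amd64 upload set following the file name patterns above; the timestamps and commit hashes are placeholders:
netq-agent_4.12.0-cld12u49~<timestamp>.<hash>_amd64.deb
netq-apps_4.12.0-cld12u49~<timestamp>.<hash>_amd64.deb
netq-agent_4.12.0-cl4u49~<timestamp>.<hash>_amd64.deb
netq-apps_4.12.0-cl4u49~<timestamp>.<hash>_amd64.deb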
(Optional) Assign a role to each switch to identify switch dependencies and avoid potential upgrade issues.
Upgrade Cumulus Linux
If the NetQ Agent is already installed on the switches you’d like to upgrade, follow the steps below. If the NetQ Agent is not installed on the switches you’d like to upgrade, run a switch discovery to find all Cumulus Linux switches with and without NetQ currently installed and perform the CL upgrade as part of the discovery workflow.
Expand the Menu, then select Manage switches.
From the Switch management tab, locate the Switches card and click Manage.
Select the switches you want to upgrade.
Click Upgrade OS above the table.
Follow the steps in the UI. Enter a name for the upgrade and review the switches that you selected to upgrade:
If you accidentally included a switch that you do not want to upgrade, hover over the switch information card and click Delete to remove it from the upgrade.
Click Next.
Specify which Cumulus Linux version NetQ should use during the upgrade. If you previously uploaded NetQ images, you can also upgrade NetQ at this time.
By default, NetQ rolls back to the original Cumulus Linux version on any switch that fails to upgrade. It also takes network snapshots before and after the upgrade.
You can exclude selected services and protocols from the snapshots by clicking them. Nodes and services are always included.
Click Next.
NetQ performs several checks to eliminate preventable problems during the upgrade process. When all of the pre-checks pass, click Preview.
NetQ directs you to a screen where you can review the upgrade. After reviewing, select Start upgrade and confirm.
Perform the upgrade using the netq lcm upgrade cl-image command, providing a name for the upgrade job, the Cumulus Linux and NetQ version, and a comma-separated list of the hostname(s) to be upgraded:
(Recommended) You can restore the previous version of Cumulus Linux if the upgrade job fails by adding the run-restore-on-failure option to the command.
cumulus@switch:~$ netq lcm upgrade cl-image name upgrade-example cl-version 5.9.1 netq-version 4.12.0 hostnames spine01,spine02,leaf01,leaf02 order spine,leaf run-restore-on-failure
Pre-check Failures
If one or more of the pre-checks fail, resolve the related issue and start the upgrade again. In the NetQ UI these failures appear on the Upgrade Preview page. In the NetQ CLI, it appears in the form of error messages in the netq lcm show upgrade-jobs cl-image command output.
Analyze Results
After starting the upgrade, you can monitor the progress in the NetQ UI. Successful upgrades are indicated by a green check mark. Failed upgrades display error messages indicating the cause of failure.
To view the progress of current upgrade jobs and the history of previous upgrade jobs using the CLI, run netq lcm show upgrade-jobs cl-image.
To see details of a particular upgrade job, run netq lcm show status <job-ID>.
To see details of a Cumulus Linux upgrade job specifically, run netq lcm show status cl-image <job-ID>.
To download details about the upgrade in a JSON-formatted file, click Download report.
Post-check Failures
A successful upgrade can still have post-check warnings. For example, you updated the OS, but not all services are fully up and running after the upgrade. If one or more of the post-checks fail, warning messages appear in the Post-Upgrade Tasks section of the preview. Click the warning category to view the detailed messages.
Network Snapshots
Snapshots capture a network’s state—including the services running on the network—at a particular point in time. Comparing snapshots lets you check what (if anything) changed in the network, which can be helpful when upgrading a switch or modifying its configuration. This section outlines how to create, compare, and interpret snapshots.
Create a Network Snapshot
To create a snapshot:
Expand the Menu, then select Snapshot.
Next, enter the snapshot’s name, time frame, and the elements you’d like included in the snapshot:
To capture the network’s current state, click Now. To capture the network’s state at a previous date and time, click Past, then in the Start Time field, select the calendar icon.
The Choose options field includes all the elements and services that may run on the network. All are selected by default. Click any element to remove it from the snapshot. Nodes and services are included in all snapshots.
The Notes field is optional. You can add a note as a reminder of the snapshot’s purpose.
Select Finish. The card now appears on your workbench.
When you are finished viewing the snapshot, click Dismiss to remove it from your workbench. You can add it back by selecting Snapshot in the header and navigating to the option to view snapshots.
Compare Network Snapshots
You can compare the state of your network before and after an upgrade or other configuration change to help avoid unwanted changes to your network’s state.
To compare network snapshots:
Expand the Menu, then select Snapshot.
Select Compare snapshots, then select the two snapshots you want to compare.
Click Finish.
If the snapshot cards are already on your workbench, place the cards side-by-side for a high-level comparison. For a more detailed comparison, click Compare on one of the cards and select a snapshot for comparison from the list.
Interpreting the Comparison Data
For each network element with changes, a visualization displays the differences between the two snapshots. Green represents additions, red represents subtractions, and orange represents updates.
In the following example, Snapshot 3 and Snapshot 4 are being compared. Snapshot 3 has a BGP count of 212 and Snapshot 4 has a BGP count of 186. The comparison also shows 98 BGP updates.
From this view, you can dismiss the snapshots or select View Details for additional information and to filter and export the data as a JSON file.
The following table describes the information provided for each element type when changes are present:
Element
Data Descriptions
BGP
Hostname: Name of the host running the BGP session
VRF: Virtual route forwarding interface if used
BGP session: Session that was removed or added
ASN: Autonomous system number
Config
Hostname: Name of the host where the configuration file was added or removed
Configuration file: File that was added or removed
Interface
Hostname: Name of the host where the interface resides
Interface name: Name of the interface that was removed or added
IP Address
Hostname: Name of the host where address was removed or added
Prefix: IP address prefix
Mask: IP address mask
Interface name: Name of the interface that owns the address
Links
Hostname: Name of the host where the link was removed or added
Events provide information about how a network and its devices are operating during a given time period. They help with troubleshooting and alert network administrators to potential network problems before they become critical. You can view events in the UI or CLI, or receive notifications about events via Slack, PagerDuty, syslog, email, or a generic webhook channel.
NetQ captures three types of events:
System events: a wide range of events generated by the system about network protocols and services operation, hardware and software status, and system services
Threshold-crossing events: a user-specified set of system-related events based on threshold values
What Just Happened events: network hardware events for NVIDIA Spectrum™ switches
You can track events in the NetQ UI by selecting the menu and navigating to Events or What Just Happened. Alternately, use the CLI to monitor events using the netq show events and netq show wjh-drop commands.
Manage NetQ Agents
Run the following commands to view the status of an agent, disable an agent, manage logging, and configure the events the agent collects.
View NetQ Agent Status
The syntax for the NetQ Agent status command is:
netq [<hostname>] show agents
[fresh | dead | rotten | opta]
[around <text-time>]
[json]
You can view the status for a given switch, host or NetQ appliance or virtual machine. You can also filter by the status and view the status at a time in the past.
To view the current status of all NetQ Agents, run:
cumulus@switch~:$ netq show agents
To view NetQ Agents that are not communicating, run:
cumulus@switch~:$ netq show agents rotten
No matching agents records found
To view NetQ Agent status on the NetQ appliance or VM, run:
cumulus@switch~:$ netq show agents opta
Matching agents records:
Hostname Status NTP Sync Version Sys Uptime Agent Uptime Reinitialize Time Last Changed
----------------- ---------------- -------- ------------------------------------ ------------------------- ------------------------- -------------------------- -------------------------
netq-appliance Fresh yes 4.12.0-ub20.04u49~1728015156.144ca36 Tue Oct 8 12:06:08 2024 Tue Nov 12 17:03:14 2024 Tue Nov 12 17:04:09 2024 Tue Dec 3 15:43:47 2024
View NetQ Agent Configuration
You can view the current configuration of a NetQ Agent to determine what data it collects and where it sends that data. The syntax for this command is:
sudo netq config show agent
[cpu-limit|frr-monitor|loglevel|ssl|stats|wjh|wjh-threshold]
[json]
The following example shows a NetQ Agent in an on-premises deployment, talking to an appliance or VM at 127.0.0.1 using the default ports and VRF.
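For example, run the base form of the command; the output lists the server address, ports, and VRF that the agent uses:
cumulus@switch:~$ sudo netq config show agent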
To view the configuration of a particular aspect of a NetQ Agent, use the various options.
This example shows a NetQ Agent configured with a CPU limit of 60%.
cumulus@switch:~$ sudo netq config show agent cpu-limit
CPU Quota
-----------
60%
()
Modify the Configuration of the NetQ Agent on a Node
The agent configuration commands let you:
Add, disable, and remove a NetQ Agent
Start and stop a NetQ Agent
Configure a NetQ Agent to collect selected data (CPU usage limit, FRR, WJH)
Configure a NetQ Agent to send data to a server cluster
Troubleshoot the NetQ Agent
Commands apply to one agent at a time, and you run them on the switch or host where the NetQ Agent resides.
Add or Remove a NetQ Agent
To add or remove a NetQ Agent, you must add or remove the IP address (as well as the port and VRF, if specified) from the NetQ configuration file, /etc/netq/netq.yml. This adds or removes the information about the server where the agent sends the data it collects.
To use the NetQ CLI to add or remove a NetQ Agent on a switch or host, run:
If you want to use a specific port on the server, use the port option. If you want the data sent over a particular virtual route interface, use the vrf option.
This example shows how to add a NetQ Agent and tell it to send the data it collects to the NetQ server at the IPv4 address of 10.0.0.23 using the default port (port 31980 for on-premises and port 443 for cloud deployments) and the default VRF (mgmt). The port and VRF are not specified, so NetQ assumes default settings.
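A sketch of this example, assuming the server keyword of the netq config add agent command, followed by an agent restart to apply the change:
cumulus@switch:~$ sudo netq config add agent server 10.0.0.23
cumulus@switch:~$ sudo netq config restart agent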
This example shows how to add a NetQ Agent and tell it to send the data it collects to the NetQ server at the IPv4 address of 10.0.0.23 using the default port (port 31980 for on-premises and port 443 for cloud deployments) and the default VRF for a switch managed through an in-band connection on interface swp1:
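A sketch of the in-band variant; the inband-interface keyword is an assumption and might differ in your release:
cumulus@switch:~$ sudo netq config add agent server 10.0.0.23 inband-interface swp1
cumulus@switch:~$ sudo netq config restart agent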
You can temporarily disable the NetQ Agent on a node. Disabling the NetQ Agent maintains the data already collected in the NetQ database, but stops the NetQ Agent from collecting new data until you reenable it.
To disable a NetQ Agent, run:
cumulus@switch:~$ sudo netq config stop agent
To reenable a NetQ Agent, run:
cumulus@switch:~$ sudo netq config restart agent
Configure a NetQ Agent to Limit Switch CPU Usage
You can limit the NetQ Agent to use only a certain percentage of CPU resources on a switch. This setting requires a switch running Cumulus Linux versions 3.7, 4.1, or later.
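A sketch of the commands, assuming the cpu-limit keyword shown in the configuration output earlier in this section, with a restart to apply the setting:
cumulus@switch:~$ sudo netq config add agent cpu-limit 60
cumulus@switch:~$ sudo netq config restart agent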
Configure a NetQ Agent to Send Data to a Server Cluster
If you have a high-availability server cluster arrangement, you should configure the HA servers to distribute the data that the NetQ Agent collects across the servers.
To configure the agent to send data to the servers in your cluster, run:
You must separate the list of IP addresses by commas (not spaces). You can optionally specify a port or VRF.
This example configures the NetQ Agent on a switch to send the data to three servers located at 10.0.0.21, 10.0.0.22, and 10.0.0.23 using the rocket VRF.
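A sketch of this example, assuming the cluster-servers keyword shown in the del command below and a vrf option:
cumulus@switch:~$ sudo netq config add agent cluster-servers 10.0.0.21,10.0.0.22,10.0.0.23 vrf rocket
cumulus@switch:~$ sudo netq config restart agent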
To stop a NetQ Agent from sending data to a server cluster, run:
cumulus@switch:~$ sudo netq config del agent cluster-servers
Configure Logging to Troubleshoot a NetQ Agent
The logging level used for a NetQ Agent determines what types of events get logged about the NetQ Agent on the switch or host.
First, you need to decide what level of logging you want to configure. You can configure the logging level to be the same for every NetQ Agent, or selectively increase or decrease the logging level for a NetQ Agent on a problematic node.
Logging Level
Description
debug
Sends notifications for all debug, info, warning, and error messages.
info
Sends notifications for info, warning, and error messages (default).
warning
Sends notifications for warning and error messages.
error
Sends notifications for error messages.
You can view the NetQ Agent log directly. Messages have the following structure:
(Optional) Verify the connection to the NetQ appliance or VM by viewing the netq-agent.log messages.
Disable Agent Logging
If you set the logging level to debug for troubleshooting, NVIDIA recommends that you either change the logging level to a less verbose mode or disable agent logging when you finish troubleshooting.
To change the logging level from debug to another level, run:
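A sketch of the commands, assuming the loglevel keyword shown in the netq config show agent options, followed by a restart:
cumulus@switch:~$ sudo netq config add agent loglevel info
cumulus@switch:~$ sudo netq config restart agent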
The NetQ Agent contains a pre-configured set of modular commands that run periodically and send event and resource data to the NetQ appliance or VM. You can fine-tune which events the agent polls for and adjust the polling frequency using the NetQ CLI.
For example, if your network is not running EVPN, you can disable the command that polls for EVPN events. Or you can reduce the polling frequency for LLDP by changing its interval from the default of 60 seconds to 120 seconds. By not polling for selected data or by polling less frequently, you can reduce the NetQ Agent's CPU usage on the switch.
Depending on the switch platform, the NetQ Agent might not execute some supported protocol commands. For example, if a switch has no VXLAN capability, then the agent skips all VXLAN-related commands.
Supported Commands
To see the list of supported modular commands, run:
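A sketch of the command; the commands keyword is an assumption and might differ in your release:
cumulus@switch:~$ sudo netq config show agent commands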
You can change the polling frequency (in seconds) of a modular command. For example, to change the polling frequency of the ntp command to 60 seconds from its default of 30 seconds, run:
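A sketch of the command, assuming a poll-period option alongside the service-key keyword used in the disable example below:
cumulus@switch:~$ sudo netq config add agent command service-key ntp poll-period 60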
You can disable unnecessary commands. This can help reduce the compute resources the NetQ Agent consumes on the switch. For example, if your network does not run EVPN, you can disable the EVPN command:
cumulus@switch:~$ sudo netq config add agent command service-key evpn-vni enable False
Command Service evpn-vni is disabled
Use the UI or CLI to monitor your network’s inventory of switches, hosts, NICs, and DPUs. The inventory includes a count for each device and information about the hardware and software components on individual switches, such as the operating system, motherboard, ASIC, microprocessor, disk, memory, fan, and power supply information.
Networkwide Inventory Commands
Several forms of this command are available based on the inventory component you’d like to view. See the command line reference for additional options, definitions, and examples.
netq show inventory (brief | asic | board | cpu | disk | memory | os)
View Networkwide Inventory in the UI
The NetQ header displays the number of devices in your network that are 'fresh' or reachable.
To view the quantity of devices in your network, search for the “Inventory | Devices” card in the global search field. The medium-sized card displays the total number of devices in the network. Hover your cursor over the chart to view the number and percentage of switches, hosts, NICs, and DPUs that comprise your network.
Expand to the large card to view the distribution of ASIC vendors, OS versions, NetQ Agent versions, and platforms deployed across all switches in your network. You can hover over and select any of the segments in the distribution chart to highlight and filter data, including:
Name or value of the component type, such as the version number or status
Total number of switches with a particular type of component deployed compared to the total number of switches
Percentage of the selected type compared to all component types
Expand the Inventory/Devices card to full-screen to view comprehensive inventory information for all switches, hosts, DPUs, and NICs in your network in a table where you can filter and export data by selecting the icons above the table:
You can right-click the hostname of a given switch to open a monitoring dashboard for that switch in a new tab.
Switch Inventory
With the NetQ UI and NetQ CLI, you can monitor your inventory of switches across the network or individually. A user can view operating system, motherboard, ASIC, microprocessor, disk, memory, fan, and power supply information.
You can access switch performance data for a given switch in the UI by right-clicking a switch and opening the dashboard in a new tab.
The NetQ header displays the number of switches in your network that are 'fresh' or reachable.
To view the hardware and software component inventory for switches running NetQ in your network, search for the “Inventory | Switches” card in the global search field. The card displays the total number of switches in your network, divided into the number of fresh and rotten switches.
View Distribution and Component Counts
Open the large Inventory/Switches card to display more granular information about software and hardware distribution. By default, the card displays data for fresh switches. Select Rotten switches from the dropdown to display information for switches that are in a down state. Hover over the top of the card and select a category to restrict the view to ASICs, platform, or software.
Expand the Inventory/Switches card to full-screen to view, filter or export information about ASICs, motherboards, CPUs, memory, disks, and operating system.
You can right-click the hostname of a given switch to open a monitoring dashboard for that switch in a new tab.
Decommission a Switch
Decommissioning a switch or host removes information about the device from the NetQ database. When the NetQ Agent restarts at a later date, it sends a connection request back to the database, so NetQ can monitor the switch or host again.
Locate the Inventory/Switches card on your workbench and expand it to full-screen.
Select the switches to decommission, then select Decommission device above the table.
If you attempt to decommission a switch that is assigned a default, unmodified access profile, the process will fail. Create a unique access profile (or update the default with unique credentials), then attach the profile to the switch you want to decommission.
Confirm the devices you want to decommission.
Wait for the decommission process to complete, then select Done.
To decommission a switch or host:
On the given switch or host, stop and disable the NetQ Agent service:
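A minimal sketch, assuming the netq-agent systemd service and the netq decommission command on the NetQ server:
# On the switch or host: stop and disable the agent
cumulus@switch:~$ sudo systemctl stop netq-agent
cumulus@switch:~$ sudo systemctl disable netq-agent
# On the NetQ appliance or VM: decommission the device
cumulus@netq-server:~$ netq decommission <hostname>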
In the UI, you can view your inventory of hosts across the network or individually, including a host’s operating system, ASIC, CPU model, disk, platform, and memory information.
Access and View Host Inventory Data
The Inventory/Hosts card monitors the hardware- and software-component inventory on hosts running NetQ in your network. To add this card to your workbench, search for “Inventory | Hosts” in the global search field.
Hover over the chart in the default card view to view component details. To view the distribution of components, hover over the card header and increase the card’s size. Select the corresponding icon to view a detailed chart for ASIC, platform, or software components:
To display detailed information as a table, expand the card to its largest size:
Decommission a Host
Decommissioning hosts removes information about the host from the NetQ database. The NetQ Agent must be disabled and in a ‘rotten’ state to complete the decommissioning process.
Locate the Inventory/Devices card on your workbench and expand it to full-screen.
From the Hosts tab, locate the Agent state column.
If the NetQ Agent is in a ‘fresh’ state, you must stop and disable the NetQ Agent and wait until it reflects a ‘rotten’ state. To disable the agent, run the following commands on the host you want to decommission:
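A sketch of the commands, assuming the agent runs as the netq-agent systemd service on the host:
sudo systemctl stop netq-agent
sudo systemctl disable netq-agent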
It may take a few minutes for the agent’s new state to be reflected in the UI.
After you have confirmed that the agent is in a ‘rotten’ state, select the host you’d like to decommission, then select Decommission device above the table.
To decommission a host:
Stop and disable the NetQ Agent service on the host:
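A sketch of the CLI flow, assuming the netq-agent systemd service and the netq decommission command:
# On the host
sudo systemctl stop netq-agent
sudo systemctl disable netq-agent
# On the NetQ appliance or VM
netq decommission <hostname>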
NetQ periodically runs default validations to verify whether devices, hosts, network protocols, and services are operating as expected. These validations measure what NetQ expects from a healthy network against the data it receives from the network it is monitoring. When NetQ detects an anomaly or inconsistency, the system will broadcast an event.
NetQ excludes certain checks from running by default. You can run an on-demand validation or schedule a validation to view validation results for those protocols and services.
The following table displays the validation categories. Refer to the Validation Reference for a comprehensive breakdown of each test included in each category.
Item                    NetQ UI   NetQ CLI   Run by Default   Frequency
----------------------  --------  ---------  ---------------  -----------------------
Addresses               Yes       Yes        No               on-demand, as scheduled
Agents                  Yes       Yes        Yes              60 mins
BGP                     Yes       Yes        Yes              60 mins
Cumulus Linux version   No        Yes        No               on-demand, as scheduled
EVPN                    Yes       Yes        Yes              60 mins
Interfaces              Yes       Yes        Yes              60 mins
MLAG (CLAG)             Yes       Yes        Yes              60 mins
MTU                     Yes       Yes        Yes              60 mins
NTP                     Yes       Yes        Yes              60 mins
RoCE                    Yes       Yes        No               on-demand, as scheduled
Sensors                 Yes       Yes        Yes              60 mins
Topology                Yes       Yes        No               on-demand, as scheduled
VLAN                    Yes       Yes        Yes              60 mins
VXLAN                   Yes       Yes        Yes              60 mins
After logging in, it can take up to an hour for NetQ to display accurate validation data.
View and Run Validations in the UI
The Validation Summary card displays the results from the subset of hourly validation checks that NetQ runs by default. Select Validation in the header to create or schedule new validation checks, as well as view previous checks.
Validation with the NetQ CLI
The NetQ CLI uses the netq check commands to validate the various elements of your network fabric, looking for inconsistencies in configurations across your fabric, connectivity faults, missing configurations, and so forth. You can run commands from any node in the network.
DPU Inventory
Use the UI or CLI to view your data processing unit (DPU) inventory. For DPU performance information, refer to DPU Monitoring.
Several forms of this command are available based on the inventory component you’d like to view. See the command line reference for additional options, definitions, and examples.
netq show inventory (brief | asic | board | cpu | disk | memory | os)
View DPU Inventory in the UI
The NetQ header displays the number of DPUs in your network that are 'fresh' or reachable.
The Inventory/DPU card displays the hardware- and software-component inventory on DPUs running NetQ in your network, including operating system, ASIC, CPU model, disk, platform, and memory information.
To add this card to your workbench, search for “Inventory | DPUs” in the global search field.
Hover over the chart to view component details. To view the distribution of components, hover over the card header and increase the card’s size. Select the corresponding icon to view a detailed chart for ASIC, platform, or software components:
Expand the card to its largest size to view, filter, and export detailed information:
Decommission a DPU
Decommissioning DPUs removes information about the DPU from the NetQ database. The NetQ Agent must be disabled and in a ‘rotten’ state to complete the decommissioning process.
Locate the Inventory/Devices card on your workbench and expand it to full-screen.
From the DPUs tab, locate the Agent state column.
If the NetQ Agent is in a ‘fresh’ state, you must stop and disable the NetQ Agent and wait until it reflects a ‘rotten’ state. To disable the agent, run the following command on the DPU you want to decommission. Replace <netq_server> with the IP address of your NetQ VM:
sed -i s'/<netq_server>/127.0.0.1/g' /etc/kubelet.d/doca_telemetry_standalone.yaml
After you have confirmed that the agent is in a ‘rotten’ state, select the DPU you’d like to decommission, then select Decommission device above the table.
To decommission a DPU:
Stop and disable the NetQ Agent service on the DPU. Replace <netq_server> with the IP address of your NetQ VM:
sed -i s'/<netq_server>/127.0.0.1/g' /etc/kubelet.d/doca_telemetry_standalone.yaml
On the NetQ appliance or VM, decommission the DPU:
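A sketch, assuming the netq decommission command referenced in the NIC section below:
cumulus@netq-server:~$ netq decommission <dpu-hostname>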
Run the netq show inventory brief command to display an inventory summary, including a list of NICs.
netq show inventory brief
View NIC Inventory in the UI
The Inventory/NIC card displays the hardware- and software-component inventory on NICs running NetQ in your network, including connection adapters and firmware versions.
To add this card to your workbench, search for “Inventory | NICs” in the global search field.
Expand the card to full-screen to view a list of hosts and their associated NICs:
To view data from an individual NIC, select it from the table, then select Add card above the table. An individual NIC monitoring card opens on your workbench, displaying ports, packets, and bytes information:
You can expand this card to large or full-screen to view detailed interface statistics, including frame and carrier errors.
Decommission a NIC
Decommissioning removes information about the NIC from the NetQ database.
Stop the DTS container on the NIC’s host with the following command:
docker stop doca_telemetry
Locate the Inventory/Devices card on your workbench and expand it to full-screen.
Navigate to the NICs tab.
Select the NIC you’d like to decommission, then select Decommission device above the table.
To decommission a NIC:
Stop the DTS container on the NIC’s host with the following command:
docker stop doca_telemetry
On the NetQ appliance or VM, decommission the NIC:
Either obtain the NIC guid from the NetQ UI in the full-screen NIC Inventory card, or use tab completion with the netq decommission <hostname> command to view the NIC guids.
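A possible form, assuming the command accepts the NIC guid shown by tab completion; verify the exact syntax with tab completion on your server:
cumulus@netq-server:~$ netq decommission <hostname> <nic-guid>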
Device groups allow you to create a label for a subset of devices in the inventory. You can configure validation checks to run on select devices by referencing group names.
Create a Device Group
To create a device group, add the Device Groups card to your workbench by searching for “Device groups” in the global search field. From the card, select Create new group and follow the instructions in the UI:
Enter a name for the group.
Create a hostname-based rule to define which devices in the inventory should be added to the group.
Confirm the expected matched devices appear in the inventory, and click Create device group.
The following example shows a group name of “exit group” matching any device in the inventory with “exit” in the hostname:
Update a Device Group
When new devices that match existing group rules are added to the inventory, NetQ flags the matching devices for review. The following example shows the switch “exit-2” detected in the inventory after the group was configured:
To add the new device to the group inventory, click Add device and then click Update device group.
Delete a Device Group
To delete a device group:
Expand the Device Groups card:
Click Menu on the desired group and select Delete.
Monitor Events
Use the UI or CLI to monitor events: you can view all events across the entire network or all events on a device, then filter events according to their type, severity, or time frame. Event querying is supported for a 72-hour window within the past 30 days.
It can take several minutes for the NetQ UI to process and accurately display network events. The delay is caused by events with multiple network dependencies. It takes between 5 and 10 minutes for NetQ to consolidate and display these events.
Monitor system events with the following command. See the command line reference for additional options, definitions, and examples.
netq show events
Monitor Events in the UI
Expand the Menu, then select Events.
The dashboard presents a timeline of events alongside the devices that are causing the most events. You can select the controls above the summary to filter events by time, device (hostname), type, severity, or state. Select the tabs below the controls to display all events networkwide, interface events, network services events, system events, or threshold-crossing events. The charts and tables update according to the tab you’ve selected.
Events are also generated when streaming validation checks detect a failure. If an event is generated from a failed validation check, it will be marked resolved automatically the next time the check runs successfully.
Suppress Events
If you are receiving too many event notifications or do not want NetQ to display known issues or false alarms, you can suppress these events. NetQ does not display suppressed events in the event summary dashboard, which effectively allows you to ignore them. In addition to the rules you create to suppress events, NetQ suppresses some events by default.
You can suppress events for the following types of messages:
Adaptive routing
BGP
BTRFS information (events related to the BTRFS file system in Cumulus Linux)
Cable
CL support (events generated when creating the cl-support script)
Config diff (events generated when a configuration file has changed)
EVPN
Installed packages
Lifecycle management
Link (events related to links, including state and interface name)
LLDP
MLAG
MTU
NetQ agent
NTP
PTM
PTP
RoCE configuration
Running config diff (events related to the difference between two configurations)
Sensor
Services (including whether a service is active or inactive)
SSD utilization (events related to the storage on a switch)
Topology
NetQ suppresses BGP, EVPN, link, cable, and sensor-related events with a severity level of 'info' by default in the UI. You can disable this rule if you'd prefer to receive these notifications.
Create an Event Suppression Configuration
If you see a type of event displayed in the events dashboard that you’d like to suppress, navigate to the Event suppression column and select Suppress events. The wizard described below will be pre-filled with your suppression criteria.
To suppress events using the NetQ UI:
Click Menu, then Events.
In the top-right corner, select Show suppression rules.
Select Add rule. You can configure individual suppression rules or you can create a group rule that suppresses events for all message types.
Give your rule a name and fill out the fields. Then select Create.
When you add a new configuration using the CLI, you can specify a scope, which limits the suppression in the following order:
Hostname.
Severity.
Message type-specific filters. For example, the target VNI for EVPN messages, or the interface name for a link message.
NetQ has a predefined set of filter conditions. To see these conditions, run netq show events-config show-filter-conditions:
cumulus@switch:~$ netq show events-config show-filter-conditions
Matching config_events records:
Message Name Filter Condition Name Filter Condition Hierarchy Filter Condition Description
------------------------ ------------------------------------------ ---------------------------------------------------- --------------------------------------------------------
evpn vni 3 Target VNI
evpn severity 2 Severity error/info
evpn hostname 1 Target Hostname
clsupport fileAbsName 3 Target File Absolute Name
clsupport severity 2 Severity error/info
clsupport hostname 1 Target Hostname
link new_state 4 up / down
link ifname 3 Target Ifname
link severity 2 Severity error/info
link hostname 1 Target Hostname
ospf ifname 3 Target Ifname
ospf severity 2 Severity error/info
ospf hostname 1 Target Hostname
sensor new_s_state 4 New Sensor State Eg. ok
sensor sensor 3 Target Sensor Name Eg. Fan, Temp
sensor severity 2 Severity error/info
sensor hostname 1 Target Hostname
configdiff old_state 5 Old State
configdiff new_state 4 New State
configdiff type 3 File Name
configdiff severity 2 Severity error/info
configdiff hostname 1 Target Hostname
ssdutil info 3 low health / significant health drop
ssdutil severity 2 Severity error/info
ssdutil hostname 1 Target Hostname
agent db_state 3 Database State
agent severity 2 Severity error/info
agent hostname 1 Target Hostname
ntp new_state 3 yes / no
ntp severity 2 Severity error/info
ntp hostname 1 Target Hostname
bgp vrf 4 Target VRF
bgp peer 3 Target Peer
bgp severity 2 Severity error/info
bgp hostname 1 Target Hostname
services new_status 4 active / inactive
services name 3 Target Service Name Eg.netqd, mstpd, zebra
services severity 2 Severity error/info
services hostname 1 Target Hostname
btrfsinfo info 3 high btrfs allocation space / data storage efficiency
btrfsinfo severity 2 Severity error/info
btrfsinfo hostname 1 Target Hostname
clag severity 2 Severity error/info
clag hostname 1 Target Hostname
For example, to create a configuration called mybtrfs that suppresses BTRFS-related events on leaf01 for the next 10 minutes, run:
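A sketch of the command; the keyword names and scope format are assumptions based on the filter conditions shown above, and the suppression duration is expressed in seconds:
cumulus@switch:~$ netq add events-config events_config_name mybtrfs message_type btrfsinfo scope '[{"scope_name":"hostname","scope_value":"leaf01"}]' suppress_until 600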
You can delete or disable suppression rules. After you delete a rule, event notifications will resume. Disabling suppression rules pauses those rules, allowing you to receive event notifications temporarily.
To remove suppressed event configurations:
Click Menu, then Events.
Select Show suppression rules at the top of the page.
Toggle between the Single and All tabs to alternately view one suppression rule or a group of rules. Navigate to the rule you want to delete or disable.
For a single rule, click the three-dot menu and select Delete. To pause the rule instead of deleting it, click Disable. To delete a group of rules, click the three-dot menu and select Delete. To disable individual rules within the group, select View all, then Disable.
To remove an event suppression configuration, run the netq del events-config command and include the identifier of the suppression configuration.
Select Show suppression rules at the top of the page.
Toggle between the Single and All tabs to view individual and groups of rules, respectively.
You can view all event suppression configurations, or you can filter by a specific configuration or message type using the netq show events-config command.
When you filter for a message type, you must include the show-filter-conditions keyword to show the conditions associated with that message type and the hierarchy in which they get processed.
This section describes how to configure NetQ to send event notifications through a third-party application, such as syslog, PagerDuty, Slack, email, or a generic webhook channel. NetQ can generate notifications for system and threshold-crossing events or according to a predefined set of rule keys.
You can implement a proxy server (that sits between the NetQ appliance or VM and the integration channels) that receives, processes, and distributes the notifications rather than having them sent directly to the integration channel. If you use such a proxy, you must configure NetQ with the proxy information.
Event Message Format
Messages have the following structure:
<message-type><timestamp><opid><hostname><severity><message>
Element
Description
message type
Category of event
timestamp
Date and time event occurred
opid
Identifier of the service or process that generated the event
hostname
Hostname of network device where event occurred
severity
Severity classification: error or info
message
Text description of event
For example:
To set up the integrations, you must configure NetQ with at least one channel, one rule, and one filter. To refine what messages you want to view and where to send them, you can add additional rules and filters and set thresholds on supported event types. You can also configure a proxy server to receive, process, and forward the messages. This is accomplished in the following order:
Configure Basic NetQ Event Notifications
The simplest configuration you can create is one that sends all events generated by all interfaces to a single notification application. A notification configuration must contain one channel, one rule, and one filter. Creation of the configuration follows this same path:
Create a channel.
Create a rule that accepts a selected set of events.
Create a filter that associates this rule with the newly created channel.
Create a Channel
The first step is to create a Slack, PagerDuty, syslog, email, or generic channel to receive the notifications.
Expand the Menu and select Notification channels.
The Slack tab is displayed by default.
Add a channel.
When no channels have been specified, click Add Slack channel.
When at least one channel has been specified, click Add above the table.
Provide a unique name for the channel. Note that spaces are not allowed—use dashes or camelCase instead.
Create an incoming webhook as described in the Slack documentation. Then copy and paste it in the Webhook URL field.
(Optional) Select the toggle to send all notifications to this channel.
Click Add.
(Optional) To verify the channel configuration, click Test.
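If you prefer the CLI, a hedged sketch of the Slack channel command, assembled from the Slack examples later in this section and the options listed below (the bracketed options are assumptions based on that list):
netq add notification channel slack <text-channel-name> webhook <text-webhook-url> [severity info | severity error] [tag <text-slack-tag>] [default]
netq show notification channel [json]
The following options apply:
Option
Description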
webhook <text-webhook-url>
WebHook URL for the desired channel. For example: https://hooks.slack.com/services/text/moretext/evenmoretext
severity <level>
The log level, either info or error. The severity defaults to info if unspecified.
tag <text-slack-tag>
Optional tag appended to the Slack notification to highlight particular channels or people. An @ sign must precede the tag value. For example, @netq-info.
default
Set the channel as default and send all notifications to this channel
The following example shows the creation of a slk-netq-events channel and verifies the configuration.
Create an incoming webhook as described in the documentation for your version of Slack.
cumulus@switch:~$ netq show notification channel
Matching config_notify records:
Name Type Severity Channel Info Is Default
--------------- ---------------- -------- -------------------------- ------------
slk-netq-events slack info webhook:https://hooks.s False
lack.com/services/text/
moretext/evenmoretext.
Expand the Menu and select Notification channels.
Click PagerDuty.
Add a channel.
When no channels have been specified, click Add PagerDuty channel.
When at least one channel has been specified, click Add above the table.
Provide a unique name for the channel. Note that spaces are not allowed. Use dashes or camelCase instead.
Obtain and enter an integration key (also called a service key or routing key).
(Optional) Select the toggle to send all notifications to this channel.
Click Add.
(Optional) To verify the channel configuration, click Test.
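To create and verify a PagerDuty channel from the CLI, a hedged sketch based on the PagerDuty examples later in this section (the bracketed options mirror the other channel types and are assumptions):
netq add notification channel pagerduty <text-channel-name> integration-key <text-integration-key> [severity info | severity error] [default]
netq show notification channel [json]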
cumulus@switch:~$ netq show notification channel
Matching config_notify records:
Name Type Severity Channel Info Is Default
--------------- ---------------- ---------------- ------------------------ ------------
pd-netq-events pagerduty info integration-key: c6d666e False
210a8425298ef7abde0d1998
Expand the Menu and select Notification channels.
Click Syslog.
Add a channel.
When no channels have been specified, click Add syslog channel.
When at least one channel has been specified, click Add above the table.
Provide a unique name for the channel. Note that spaces are not allowed. Use dashes or camelCase instead.
Enter the IP address and port of the syslog server.
(Optional) Select the toggle to send all notifications to this channel.
Click Add.
(Optional) To verify the channel configuration, click Test.
To create and verify a syslog channel, run:
netq add notification channel syslog <text-channel-name> hostname <text-syslog-hostname> port <text-syslog-port> [severity info | severity error] [default]
netq show notification channel [json]
Option
Description
<text-channel-name>
User-specified syslog channel name
hostname <text-syslog-hostname>
Hostname or IP address of the syslog server to receive notifications
port <text-syslog-port>
Port on the syslog server to receive notifications
severity <level>
The log level, either info or error. The severity defaults to info if unspecified.
default
Set the channel as default and send all notifications to this channel
The following example shows the creation of a syslog-netq-events channel and verifies the configuration.
Obtain the syslog server hostname (or IP address) and port.
cumulus@switch:~$ netq show notification channel
Matching config_notify records:
Name Type Severity Channel Info Is Default
--------------- ---------------- -------- ---------------------- ------------
syslog-netq-eve syslog info host:syslog-server False
nts port: 514
Expand the Menu and select Notification channels.
Click Email.
Add a channel.
When no channels have been specified, click Add email channel.
When at least one channel has been specified, click Add above the table.
Provide a unique name for the channel. Note that spaces are not allowed. Use dashes or camelCase instead.
Enter a list of recipient email addresses, separated by commas, and no spaces. For example: user1@domain.com,user2@domain.com,user3@domain.com
(Optional) Select the toggle to send all notifications to this channel.
The first time you configure an email channel, you must also specify the SMTP server information:
Host: hostname or IP address of the SMTP server
Port: port of the SMTP server (typically 587)
User ID/Password: your administrative credentials
From: email address that indicates who sent the notifications
After the initial setup, any additional email channels you create can use this configuration by clicking Existing.
Click Add.
(Optional) To verify the channel configuration, click Test.
To create and verify the specification of an email channel, run:
netq add notification channel email <text-channel-name> to <text-email-toids> [smtpserver <text-email-hostname>] [smtpport <text-email-port>] [login <text-email-id>] [password <text-email-password>] [severity info | severity error] [default]
netq add notification channel email <text-channel-name> to <text-email-toids> [default]
netq show notification channel [json]
The configuration is different depending on whether you are using the on-premises or cloud version of NetQ. Do not configure SMTP for cloud deployments as the NetQ cloud service uses the NetQ SMTP server to push email notifications.
For an on-premises deployment:
Set up an SMTP server. The server can be internal or public.
Create a user account (login and password) on the SMTP server. NetQ sends notifications to this address.
Create the notification channel using the following command:
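For example, using the on-premises syntax shown above (the channel name, addresses, server, login, and password below are placeholder values):
cumulus@switch:~$ netq add notification channel email onprem-email to netq-notifications@example.com smtpserver smtp.example.com smtpport 587 login smtplogin@example.com password MyPassword123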
To delete a notification rule, run:
netq del notification rule <text-rule-name-anchor>
Example rules
This example creates a rule named all-interfaces, using the key ifname and the value ALL, which sends all events from all interfaces to any channel with this rule.
cumulus@switch:~$ netq add notification rule all-interfaces key ifname value ALL
Successfully added/updated rule all-interfaces
cumulus@switch:~$ netq show notification rule
Matching config_notify records:
Name Rule Key Rule Value
--------------- ---------------- --------------------
all-interfaces ifname ALL
These examples use the channels and rules created in the previous sections. After creating this filter, NetQ will send all interface events to your designated channel.
cumulus@switch:~$ netq add notification filter notify-all-ifs rule all-interfaces channel pd-netq-events
Successfully added/updated filter notify-all-ifs
cumulus@switch:~$ netq show notification filter
Matching config_notify records:
Name Order Severity Channels Rules
--------------- ---------- ---------------- ---------------- ----------
notify-all-ifs 1 info pd-netq-events all-interfaces
cumulus@switch:~$ netq add notification filter notify-all-ifs rule all-interfaces channel slk-netq-events
Successfully added/updated filter notify-all-ifs
cumulus@switch:~$ netq show notification filter
Matching config_notify records:
Name Order Severity Channels Rules
--------------- ---------- ---------------- ---------------- ----------
notify-all-ifs 1 info slk-netq-events all-interfaces
cumulus@switch:~$ netq add notification filter notify-all-ifs rule all-interfaces channel syslog-netq-events
Successfully added/updated filter notify-all-ifs
cumulus@switch:~$ netq show notification filter
Matching config_notify records:
Name Order Severity Channels Rules
--------------- ---------- ---------------- ---------------- ----------
notify-all-ifs 1 info syslog-netq-events all-interfaces
cumulus@switch:~$ netq add notification filter notify-all-ifs rule all-interfaces channel onprem-email
Successfully added/updated filter notify-all-ifs
cumulus@switch:~$ netq show notification filter
Matching config_notify records:
Name Order Severity Channels Rules
--------------- ---------- ---------------- ---------------- ----------
notify-all-ifs 1 info onprem-email all-interfaces
Additional filter examples
Create a filter for BGP events on a particular device:
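For example, a sketch that reuses the bgpHostname rule and pd-netq-events channel created in the advanced examples later in this section:
cumulus@switch:~$ netq add notification filter bgpSpine rule bgpHostname channel pd-netq-events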
Create a filter to drop messages from a given interface, and match against this filter before any other filters. To create a drop-style filter, do not specify a channel. To list the filter first, use the before option.
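For example, a sketch that reuses the swp52 rule and bgpSpine filter from the advanced examples later in this section:
cumulus@switch:~$ netq add notification filter swp52Drop severity error rule swp52 before bgpSpine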
Filter names can contain spaces, but you must enclose them in single quotes in commands. For better readability, use dashes in place of spaces or use mixed case; for example, use bgpSessionChanges, BGP-session-changes, or BGPsessions instead of 'BGP Session Changes'. Filter names are also case sensitive.
As you create filters, they are added to the bottom of a list of filters. By default, NetQ processes event messages against filters starting at the top of the filter list and works its way down until it finds a match. NetQ applies the first filter that matches an event message, ignoring the other filters. Then it moves to the next event message and reruns the process, starting at the top of the list of filters. NetQ ignores events that do not match any filter.
You might have to change the order of filters in the list to ensure you capture the events you want and drop the events you do not want. This is possible using the before or after keywords to ensure one filter is processed before or after another.
To delete notification filters, run:
netq del notification filter <text-filter-name-anchor>
Delete a Channel
You can remove channels if they are not part of an existing notification configuration.
To remove notification channels:
Expand the Menu and select Notification channels.
Select the tab for the type of channel you want to remove.
Select one or more channels.
Click Delete.
To remove notification channels, run:
netq del notification channel <text-channel-name-anchor>
This example removes a Slack integration and verifies it is no longer in the configuration:
cumulus@switch:~$ netq del notification channel slk-netq-events
cumulus@switch:~$ netq show notification channel
Matching config_notify records:
Name Type Severity Channel Info
--------------- ---------------- ---------------- ------------------------
pd-netq-events pagerduty info integration-key: 1234567
890
Configure a Proxy Server
To send notification messages through a proxy server instead of directly to a notification channel, you configure NetQ with the hostname and optionally a port of a proxy server. If you do not specify a port, NetQ defaults to port 80. NetQ supports one proxy server. To simplify deployment, configure your proxy server before configuring channels, rules, or filters.
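A hedged sketch of the proxy configuration command; the add form mirrors the del form mentioned below, and the exact keyword order is an assumption:
cumulus@switch:~$ netq add notification proxy <text-proxy-hostname> [port <text-proxy-port>]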
You can remove the proxy server with netq del notification proxy. This changes the NetQ behavior to send events directly to the notification channels.
Rule Keys and Values Reference
Each rule comprises a single key-value pair that indicates which messages to include or drop from the event information sent to a notification channel. You can create more than one rule for a single filter; multiple rules produce a more narrowly defined filter. For example, you can specify rules around hostnames or interface names, enabling you to filter messages specific to those hosts or interfaces. You can only create rules after you have set up your notification channels.
NetQ includes a predefined fixed set of valid rule keys. You enter values as regular expressions, which vary according to your deployment.
Service
Rule Key
Description
Example Rule Values
BGP
message_type
Network protocol or service identifier
bgp
hostname
User-defined, text-based name for a switch or host
server02, leaf11, exit01, spine-4
peer
User-defined, text-based name for a peer switch or host
server4, leaf-3, exit02, spine06
desc
Text description
vrf
Name of VRF interface
mgmt, default
old_state
Previous state of the BGP service
Established, Failed
new_state
Current state of the BGP service
Established, Failed
old_last_reset_time
Previous time that BGP service was reset
Apr3, 2019, 4:17 PM
new_last_reset_time
Most recent time that BGP service was reset
Apr8, 2019, 11:38 AM
ConfigDiff
message_type
Network protocol or service identifier
configdiff
hostname
User-defined, text-based name for a switch or host
server02, leaf11, exit01, spine-4
vni
Virtual Network Instance identifier
12, 23
old_state
Previous state of the configuration file
created, modified
new_state
Current state of the configuration file
created, modified
EVPN
message_type
Network protocol or service identifier
evpn
hostname
User-defined, text-based name for a switch or host
server02, leaf-9, exit01, spine04
vni
Virtual Network Instance identifier
12, 23
old_in_kernel_state
Previous VNI state, in kernel or not
true, false
new_in_kernel_state
Current VNI state, in kernel or not
true, false
old_adv_all_vni_state
Previous VNI advertising state, advertising all or not
true, false
new_adv_all_vni_state
Current VNI advertising state, advertising all or not
true, false
LCM
message_type
Network protocol or service identifier
clag
hostname
User-defined, text-based name for a switch or host
server02, leaf-9, exit01, spine04
old_conflicted_bonds
Previous pair of interfaces in a conflicted bond
swp7 swp8, swp3 swp4
new_conflicted_bonds
Current pair of interfaces in a conflicted bond
swp11 swp12, swp23 swp24
old_state_protodownbond
Previous state of the bond
protodown, up
new_state_protodownbond
Current state of the bond
protodown, up
Link
message_type
Network protocol or service identifier
link
hostname
User-defined, text-based name for a switch or host
server02, leaf-6, exit01, spine7
ifname
Software interface name
eth0, swp53
LLDP
message_type
Network protocol or service identifier
lldp
hostname
User-defined, text-based name for a switch or host
server02, leaf41, exit01, spine-5, tor-36
ifname
Software interface name
eth1, swp12
old_peer_ifname
Previous software interface name
eth1, swp12, swp27
new_peer_ifname
Current software interface name
eth1, swp12, swp27
old_peer_hostname
Previous user-defined, text-based name for a peer switch or host
server02, leaf41, exit01, spine-5, tor-36
new_peer_hostname
Current user-defined, text-based name for a peer switch or host
server02, leaf41, exit01, spine-5, tor-36
MLAG (CLAG)
message_type
Network protocol or service identifier
clag
hostname
User-defined, text-based name for a switch or host
server02, leaf-9, exit01, spine04
old_conflicted_bonds
Previous pair of interfaces in a conflicted bond
swp7 swp8, swp3 swp4
new_conflicted_bonds
Current pair of interfaces in a conflicted bond
swp11 swp12, swp23 swp24
old_state_protodownbond
Previous state of the bond
protodown, up
new_state_protodownbond
Current state of the bond
protodown, up
Node
message_type
Network protocol or service identifier
node
hostname
User-defined, text-based name for a switch or host
server02, leaf41, exit01, spine-5, tor-36
ntp_state
Current state of NTP service
in sync, not sync
db_state
Current state of DB
Add, Update, Del, Dead
NTP
message_type
Network protocol or service identifier
ntp
hostname
User-defined, text-based name for a switch or host
server02, leaf-9, exit01, spine04
old_state
Previous state of service
in sync, not sync
new_state
Current state of service
in sync, not sync
Port
message_type
Network protocol or service identifier
port
hostname
User-defined, text-based name for a switch or host
server02, leaf13, exit01, spine-8, tor-36
ifname
Interface name
eth0, swp14
old_speed
Previous speed rating of port
10 G, 25 G, 40 G, unknown
old_transreceiver
Previous transceiver
40G Base-CR4, 25G Base-CR
old_vendor_name
Previous vendor name of installed port module
Amphenol, OEM, NVIDIA, Fiberstore, Finisar
old_serial_number
Previous serial number of installed port module
MT1507VS05177, AVE1823402U, PTN1VH2
old_supported_fec
Previous forward error correction (FEC) support status
Sensor
hostname
User-defined, text-based name for a switch or host
server02, leaf-26, exit01, spine2-4
old_state
Previous state of a fan, power supply unit, or thermal sensor
Fan: ok, absent, bad
PSU: ok, absent, bad
Temp: ok, busted, bad, critical
new_state
Current state of a fan, power supply unit, or thermal sensor
Fan: ok, absent, bad
PSU: ok, absent, bad
Temp: ok, busted, bad, critical
old_s_state
Previous state of a fan or power supply unit.
Fan: up, down
PSU: up, down
new_s_state
Current state of a fan or power supply unit.
Fan: up, down
PSU: up, down
new_s_max
Current maximum temperature threshold value
Temp: 110
new_s_crit
Current critical high temperature threshold value
Temp: 85
new_s_lcrit
Current critical low temperature threshold value
Temp: -25
new_s_min
Current minimum temperature threshold value
Temp: -50
Services
message_type
Network protocol or service identifier
services
hostname
User-defined, text-based name for a switch or host
server02, leaf03, exit01, spine-8
name
Name of service
clagd, lldpd, ssh, ntp, netqd, netq-agent
old_pid
Previous process or service identifier
12323, 52941
new_pid
Current process or service identifier
12323, 52941
old_status
Previous status of service
up, down
new_status
Current status of service
up, down
Examples of Advanced Notification Configurations
The following section lists examples of advanced notification configurations.
Create a Notification for BGP Events from a Selected Switch
This example creates a notification integration with a PagerDuty channel called pd-netq-events. It then creates a rule bgpHostname and a filter called bgpSpine for any notifications from spine-01. The result is that any info severity event messages from spine-01 are filtered to the pd-netq-events channel.
Display example
cumulus@switch:~$ netq add notification channel pagerduty pd-netq-events integration-key 1234567890
Successfully added/updated channel pd-netq-events
cumulus@switch:~$ netq add notification rule bgpHostname key hostname value spine-01
Successfully added/updated rule bgpHostname
cumulus@switch:~$ netq add notification filter bgpSpine rule bgpHostname channel pd-netq-events
Successfully added/updated filter bgpSpine
cumulus@switch:~$ netq show notification channel
Matching config_notify records:
Name Type Severity Channel Info
--------------- ---------------- ---------------- ------------------------
pd-netq-events pagerduty info integration-key: 1234567
890
cumulus@switch:~$ netq show notification rule
Matching config_notify records:
Name Rule Key Rule Value
--------------- ---------------- --------------------
bgpHostname hostname spine-01
cumulus@switch:~$ netq show notification filter
Matching config_notify records:
Name Order Severity Channels Rules
--------------- ---------- ---------------- ---------------- ----------
bgpSpine 1 info pd-netq-events bgpHostnam
e
Create a Notification for Errors on a Given EVPN VNI
This example creates a notification integration with a PagerDuty channel called pd-netq-events. It then creates a rule evpnVni and a filter called vni42 for any error messages from VNI 42 on the EVPN overlay network. The result is that any event messages from VNI 42 with a severity level of 'error' are filtered to the pd-netq-events channel.
Display example
cumulus@switch:~$ netq add notification channel pagerduty pd-netq-events integration-key 1234567890
Successfully added/updated channel pd-netq-events
cumulus@switch:~$ netq add notification rule evpnVni key vni value 42
Successfully added/updated rule evpnVni
cumulus@switch:~$ netq add notification filter vni42 rule evpnVni channel pd-netq-events
Successfully added/updated filter vni42
cumulus@switch:~$ netq show notification channel
Matching config_notify records:
Name Type Severity Channel Info
--------------- ---------------- ---------------- ------------------------
pd-netq-events pagerduty info integration-key: 1234567
890
cumulus@switch:~$ netq show notification rule
Matching config_notify records:
Name Rule Key Rule Value
--------------- ---------------- --------------------
bgpHostname hostname spine-01
evpnVni vni 42
cumulus@switch:~$ netq show notification filter
Matching config_notify records:
Name Order Severity Channels Rules
--------------- ---------- ---------------- ---------------- ----------
bgpSpine 1 info pd-netq-events bgpHostnam
e
vni42 2 error pd-netq-events evpnVni
Create a Notification for Configuration File Changes
This example creates a notification integration with a Slack channel called slk-netq-events. It then creates a rule sysconf and a filter called configChange for any configuration file update messages. The result is that any configuration update messages are filtered to the slk-netq-events channel.
Display example
cumulus@switch:~$ netq add notification channel slack slk-netq-events webhook https://hooks.slack.com/services/text/moretext/evenmoretext
Successfully added/updated channel slk-netq-events
cumulus@switch:~$ netq add notification rule sysconf key message_type value configdiff
Successfully added/updated rule sysconf
cumulus@switch:~$ netq add notification filter configChange severity info rule sysconf channel slk-netq-events
Successfully added/updated filter configChange
cumulus@switch:~$ netq show notification channel
Matching config_notify records:
Name Type Severity Channel Info
--------------- ---------------- -------- ----------------------
slk-netq-events slack info webhook:https://hooks.s
lack.com/services/text/
moretext/evenmoretext
cumulus@switch:~$ netq show notification rule
Matching config_notify records:
Name Rule Key Rule Value
--------------- ---------------- --------------------
bgpHostname hostname spine-01
evpnVni vni 42
sysconf message_type configdiff
cumulus@switch:~$ netq show notification filter
Matching config_notify records:
Name Order Severity Channels Rules
--------------- ---------- ---------------- ---------------- ----------
bgpSpine 1 info pd-netq-events bgpHostnam
e
vni42 2 error pd-netq-events evpnVni
configChange 3 info slk-netq-events sysconf
Create a Notification for When a Service Goes Down
This example creates a notification integration with a Slack channel called slk-netq-events. It then creates a rule svcStatus and a filter called svcDown for any service state messages indicating that a service is no longer operational. The result is that any service down messages are filtered to the slk-netq-events channel.
Display example
cumulus@switch:~$ netq add notification channel slack slk-netq-events webhook https://hooks.slack.com/services/text/moretext/evenmoretext
Successfully added/updated channel slk-netq-events
cumulus@switch:~$ netq add notification rule svcStatus key new_status value down
Successfully added/updated rule svcStatus
cumulus@switch:~$ netq add notification filter svcDown severity error rule svcStatus channel slk-netq-events
Successfully added/updated filter svcDown
cumulus@switch:~$ netq show notification channel
Matching config_notify records:
Name Type Severity Channel Info
--------------- ---------------- -------- ----------------------
slk-netq-events slack info webhook:https://hooks.s
lack.com/services/text/
moretext/evenmoretext
cumulus@switch:~$ netq show notification rule
Matching config_notify records:
Name Rule Key Rule Value
--------------- ---------------- --------------------
bgpHostname hostname spine-01
evpnVni vni 42
svcStatus new_status down
sysconf message_type configdiff
cumulus@switch:~$ netq show notification filter
Matching config_notify records:
Name Order Severity Channels Rules
--------------- ---------- ---------------- ---------------- ----------
bgpSpine 1 info pd-netq-events bgpHostnam
e
vni42 2 error pd-netq-events evpnVni
configChange 3 info slk-netq-events sysconf
svcDown 4 error slk-netq-events svcStatus
Create a Filter to Drop Notifications from a Given Interface
This example creates a notification integration with a Slack channel called slk-netq-events. It then creates a rule swp52 and a filter called swp52Drop that drops all notifications for events from interface swp52.
Display example
cumulus@switch:~$ netq add notification channel slack slk-netq-events webhook https://hooks.slack.com/services/text/moretext/evenmoretext
Successfully added/updated channel slk-netq-events
cumulus@switch:~$ netq add notification rule swp52 key port value swp52
Successfully added/updated rule swp52
cumulus@switch:~$ netq add notification filter swp52Drop severity error rule swp52 before bgpSpine
Successfully added/updated filter swp52Drop
cumulus@switch:~$ netq show notification channel
Matching config_notify records:
Name Type Severity Channel Info
--------------- ---------------- -------- ----------------------
slk-netq-events slack info webhook:https://hooks.s
lack.com/services/text/
moretext/evenmoretext
cumulus@switch:~$ netq show notification rule
Matching config_notify records:
Name Rule Key Rule Value
--------------- ---------------- --------------------
bgpHostname hostname spine-01
evpnVni vni 42
svcStatus new_status down
swp52 port swp52
sysconf message_type configdiff
cumulus@switch:~$ netq show notification filter
Matching config_notify records:
Name Order Severity Channels Rules
--------------- ---------- ---------------- ---------------- ----------
swp52Drop 1 error NetqDefaultChann swp52
el
bgpSpine 2 info pd-netq-events bgpHostnam
e
vni42 3 error pd-netq-events evpnVni
configChange 4 info slk-netq-events sysconf
svcDown 5 error slk-netq-events svcStatus
Create a Notification for a Given Device that Has a Tendency to Overheat (Using Multiple Rules)
This example creates a notification when switch leaf04 exceeds the high temperature threshold. Two rules are necessary to create this notification: one to identify the specific device and one to identify the temperature trigger. NetQ then sends the message to the pd-netq-events channel.
Display example
cumulus@switch:~$ netq add notification channel pagerduty pd-netq-events integration-key 1234567890
Successfully added/updated channel pd-netq-events
cumulus@switch:~$ netq add notification rule switchLeaf04 key hostname value leaf04
Successfully added/updated rule switchLeaf04
cumulus@switch:~$ netq add notification rule overTemp key new_s_crit value 24
Successfully added/updated rule overTemp
cumulus@switch:~$ netq add notification filter critTemp rule switchLeaf04 channel pd-netq-events
Successfully added/updated filter critTemp
cumulus@switch:~$ netq add notification filter critTemp severity error rule overTemp channel pd-netq-events
Successfully added/updated filter critTemp
cumulus@switch:~$ netq show notification channel
Matching config_notify records:
Name Type Severity Channel Info
--------------- ---------------- ---------------- ------------------------
pd-netq-events pagerduty info integration-key: 1234567
890
cumulus@switch:~$ netq show notification rule
Matching config_notify records:
Name Rule Key Rule Value
--------------- ---------------- --------------------
bgpHostname hostname spine-01
evpnVni vni 42
overTemp new_s_crit 24
svcStatus new_status down
switchLeaf04 hostname leaf04
swp52 port swp52
sysconf message_type configdiff
cumulus@switch:~$ netq show notification filter
Matching config_notify records:
Name Order Severity Channels Rules
--------------- ---------- ---------------- ---------------- ----------
swp52Drop 1 error NetqDefaultChann swp52
el
bgpSpine 2 info pd-netq-events bgpHostnam
e
vni42 3 error pd-netq-events evpnVni
configChange 4 info slk-netq-events sysconf
svcDown 5 error slk-netq-events svcStatus
critTemp 6 error pd-netq-events switchLeaf
04
overTemp
Configure and Monitor Threshold-Crossing Events
Threshold-crossing events are user-defined events that detect and prevent network failures for ACL resources, BGP, digital optics, ECMP, forwarding resources, interface errors and statistics, link flaps, resource utilization, RoCE, sensors, and What Just Happened events. You can find a complete list of TCAs—including event IDs required for the command line—in the Threshold-Crossing Events Reference.
Create a Threshold-crossing Rule
Click Menu and navigate to Threshold crossing rules.
Select the tab that reflects the event type for the rule.
Click Create a rule. Enter a name for the rule and assign a severity, then click Next.
Select the attribute you want to monitor. The listed attributes change depending on the type of event you chose in the previous step.
Click Next.
On the Set threshold step, enter a threshold value.
For digital optics, you can choose to use the thresholds defined by the optics vendor (default) or specify your own.
Define the scope of the rule.
If you want to restrict the rule based on a particular parameter, enter values for one or more of the available attributes. For What Just Happened rules, select a reason from the available list.
If you want the rule to apply across the entire network, select the Apply rule to entire network toggle.
Click Next.
(Optional) Select a notification channel where you want the events to be sent.
Only previously created channels are available for selection. If no channel is available or selected, the notifications can only be retrieved from the database. You can add a channel at a later time and then add it to the rule.
Click Finish. The rules may take several minutes to appear in the UI.
The simplest configuration you can create is one that sends a TCA event generated by all devices and all interfaces to a single notification application. Use the netq add tca command to configure the event. Its syntax is:
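A hedged sketch of the syntax, limited to the options referenced elsewhere in this section (your release might support additional options):
netq add tca event_id <text-event-id> scope <text-scope> [severity info | severity error] [is_active true | is_active false] [suppress_until <text-seconds>] [threshold_type user_set | threshold_type vendor_set] [threshold <text-threshold-value>] [channel <text-channel-name>]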
Note that the event ID is case-sensitive and must be in all uppercase.
For example, this rule tells NetQ to deliver an event notification to the tca_slack_ifstats pre-configured Slack channel when the CPU utilization exceeds 95% of its capacity on any monitored switch:
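A sketch of this rule; the wildcard scope ('*') that applies the rule to every monitored switch is an assumption:
cumulus@switch:~$ netq add tca event_id TCA_CPU_UTILIZATION_UPPER scope '*' channel tca_slack_ifstats threshold 95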
This rule tells NetQ to deliver an event notification to the tca_pd_ifstats PagerDuty channel when the number of transmit bytes per second (Bps) on the leaf12 switch exceeds 20,000 Bps on any interface:
This rule tells NetQ to deliver an event notification to the syslog-netq syslog channel when the temperature on sensor temp1 on the leaf12 switch exceeds 32 degrees Celsius:
This rule tells NetQ to deliver an event notification to the tca-slack channel when the total number of ACL drops on the leaf04 switch exceeds 20,000 for any reason, ingress port, or drop type.
For a Slack channel, the event messages should be similar to this:
Set the Severity of a Threshold-crossing Event
In addition to defining a scope for a TCA rule, you can also set a severity of either info or error. To add a severity to a rule, use the severity option.
For example, if you want to add an error severity to the CPU utilization rule you created earlier:
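A sketch of the modified rule, repeating the scope, channel, and threshold because the command overwrites existing values (see the scope example later in this section); the wildcard scope is again an assumption:
cumulus@switch:~$ netq add tca event_id TCA_CPU_UTILIZATION_UPPER scope '*' severity error channel tca_slack_ifstats threshold 95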
Digital optics have the additional option of applying user- or vendor-defined thresholds, using the threshold_type and threshold options.
This example shows how to send an error to channel ch1 when the upper threshold for module voltage exceeds the vendor-defined thresholds for interface swp31 on the mlx-2700-04 switch.
This example shows how to send an error to channel ch1 when the upper threshold for module voltage exceeds the user-defined threshold of 3V for interface swp31 on the mlx-2700-04 switch.
When you create multiple rules based on the same event, NetQ automatically generates a TCA name for each rule so you can tell them apart. As you create each rule, NetQ appends _# to the event name: the first rule created for the TCA_SENSOR_TEMPERATURE_UPPER event is named TCA_SENSOR_TEMPERATURE_UPPER_1, the second rule created for this event is TCA_SENSOR_TEMPERATURE_UPPER_2, and so forth.
View Threshold-crossing Rules
Click Menu and navigate to Threshold crossing rules.
Select the relevant tab. The UI displays each rule and its parameters as a card. Each attribute is displayed on the rule card as a regular expression:
Equals is displayed as an equals sign (=)
Starts with is displayed as a caret (^)
Blank (all) is displayed as an asterisk (*)
This example indicates that the rule applies across all interfaces on the exit-1 switch.
After creating a rule, you can use the filters that appear above the rule cards to filter by status, severity, channel, and/or events.
netq show tca [tca_id <text-tca-id-anchor>] [json]
This example displays all TCA rules:
cumulus@switch:~$ netq show tca
Matching config_tca records:
TCA Name Event Name Scope Severity Channel/s Active Threshold Unit Threshold Type Suppress Until
---------------------------- -------------------- -------------------------- -------- ------------------ ------ ------------------ -------- -------------- ----------------------------
TCA_CPU_UTILIZATION_UPPER_1 TCA_CPU_UTILIZATION_ {"hostname":"leaf01"} info pd-netq-events,slk True 87 % user_set Fri Oct 9 15:39:35 2020
UPPER -netq-events
TCA_CPU_UTILIZATION_UPPER_2 TCA_CPU_UTILIZATION_ {"hostname":"*"} error slk-netq-events True 93 % user_set Fri Oct 9 15:39:56 2020
UPPER
TCA_DOM_BIAS_CURRENT_ALARM_U TCA_DOM_BIAS_CURRENT {"hostname":"leaf*","ifnam error slk-netq-events True 0 mA vendor_set Fri Oct 9 16:02:37 2020
PPER_1 _ALARM_UPPER e":"*"}
TCA_DOM_RX_POWER_ALARM_UPPER TCA_DOM_RX_POWER_ALA {"hostname":"*","ifname":" info slk-netq-events True 0 mW vendor_set Fri Oct 9 15:25:26 2020
_1 RM_UPPER *"}
TCA_SENSOR_TEMPERATURE_UPPER TCA_SENSOR_TEMPERATU {"hostname":"leaf","s_name error slk-netq-events True 32 degreeC user_set Fri Oct 9 15:40:18 2020
_1 RE_UPPER ":"temp1"}
TCA_TCAM_IPV4_ROUTE_UPPER_1 TCA_TCAM_IPV4_ROUTE_ {"hostname":"*"} error pd-netq-events True 20000 % user_set Fri Oct 9 16:13:39 2020
UPPER
This example displays a specific TCA rule:
cumulus@switch:~$ netq show tca tca_id TCA_TXMULTICAST_UPPER_1
Matching config_tca records:
TCA Name Event Name Scope Severity Channel/s Active Threshold Suppress Until
---------------------------- -------------------- -------------------------- ---------------- ------------------ ------ ------------------ ----------------------------
TCA_TXMULTICAST_UPPER_1 TCA_TXMULTICAST_UPPE {"ifname":"swp3","hostname info tca-tx-bytes-slack True 0 Sun Dec 8 16:40:14 2269
R ":"leaf01"}
Manage Threshold-crossing Events and Notifications
Change the Threshold on a Rule
After receiving notifications based on a rule, you might want to increase or decrease the threshold value to limit or increase the number of events you receive.
To modify the threshold:
Locate the rule you want to modify and hover over the top of the card.
Click Edit.
Enter a new threshold value, then select Update rule.
After receiving notifications based on a rule, you might find that you want to narrow or widen the scope value to limit or increase the number of events you receive.
To modify the scope:
Locate the rule you want to modify and hover over the top of the card.
Click Edit.
Select the toggle to either apply the rule to the entire network or individual hosts.
This example changes the scope for the rule TCA_CPU_UTILIZATION_UPPER to apply only to switches whose hostname begins with leaf. You must also provide a threshold value; this example uses 95 percent. Note that this overwrites the existing scope and threshold values.
cumulus@switch:~$ netq add tca event_id TCA_CPU_UTILIZATION_UPPER scope hostname^leaf threshold 95
Successfully added/updated tca
cumulus@switch:~$ netq show tca
Matching config_tca records:
TCA Name Event Name Scope Severity Channel/s Active Threshold Suppress Until
---------------------------- -------------------- -------------------------- ---------------- ------------------ ------ ------------------ ----------------------------
TCA_CPU_UTILIZATION_UPPER_1 TCA_CPU_UTILIZATION_ {"hostname":"*"} error onprem-email True 93 Mon Aug 31 20:59:57 2020
UPPER
TCA_CPU_UTILIZATION_UPPER_2 TCA_CPU_UTILIZATION_ {"hostname":"hostname^leaf info True 95 Tue Sep 1 18:47:24 2020
UPPER "}
Change, Add, or Remove Channels
Locate the rule you want to modify and hover over the top of the card.
Change the Name of a Rule
You cannot change the name of a threshold-crossing rule using the NetQ CLI because the rules do not have names; they receive identifiers (the tca_id) automatically. In the NetQ UI, to change a rule name, you must delete the rule and re-create it with the new name.
Change the Severity of a Rule
Threshold-crossing rules are categorized as either info or error.
In the NetQ UI, you must delete the rule and re-create it, specifying the new severity.
In the NetQ CLI, to change the severity, run:
netq add tca tca_id <text-tca-id-anchor> (severity info | severity error)
This example changes the severity of the maximum CPU utilization 1 rule from error to info:
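For example, using the syntax above with the tca_id shown in the netq show tca output earlier in this section:
cumulus@switch:~$ netq add tca tca_id TCA_CPU_UTILIZATION_UPPER_1 severity info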
Suppress a Rule
During troubleshooting or switch maintenance, you might want to suppress a rule to prevent erroneous or excessive notifications. This effectively pauses notifications for a specified time period.
Locate the rule you want to disable and click Disable.
Select the Date/Time field to set when you want the rule to be reenabled.
Click Disable.
Note the changes in the card:
The state changes to Snoozed
The Suppressed field displays the date and time at which the rule will be reenabled.
The Disable button changes to Disable forever.
Using the suppress_until option allows you to prevent the rule from being applied for a designated amount of time (in seconds). When this time has passed, the rule is automatically reenabled.
To reenable the rule, set the is_active option to true.
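For example, hedged sketches of both operations, using the tca_id format shown earlier in this section and a 30-minute (1800-second) suppression window:
cumulus@switch:~$ netq add tca tca_id TCA_CPU_UTILIZATION_UPPER_1 suppress_until 1800
cumulus@switch:~$ netq add tca tca_id TCA_CPU_UTILIZATION_UPPER_1 is_active true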
Delete a Rule
To delete a rule:
Locate the rule you want to remove and hover over the card.
In the card’s top-right corner, select Delete.
To remove a rule altogether, run:
netq del tca tca_id <text-tca-id-anchor>
This example deletes the maximum receive bytes rule:
cumulus@switch:~$ netq del tca tca_id TCA_RXBYTES_UPPER_1
Successfully deleted TCA TCA_RXBYTES_UPPER_1
Resolve Scope Conflicts
There might be occasions where the scopes defined by multiple threshold-crossing rules overlap. In such cases, NetQ uses the rule with the most specific scope that is still true to generate the event.
To clarify this, consider this example. Three events occurred:
First event on switch leaf01, interface swp1
Second event on switch leaf01, interface swp3
Third event on switch spine01, interface swp1
NetQ attempts to match the threshold-crossing event against hostname and interface name with three threshold-crossing rules with different scopes:
Scope 1 sends events for the swp1 interface on switch leaf01 (very specific)
Scope 2 sends events for all interfaces on switches that start with leaf (moderately specific)
Scope 3 sends events for all switches and interfaces (very broad)
The result is:
For the first event, NetQ applies the scope from rule 1 because it matches scope 1 exactly
For the second event, NetQ applies the scope from rule 2 because it does not match scope 1, but does match scope 2
For the third event, NetQ applies the scope from rule 3 because it does not match either scope 1 or scope 2
In summary:
Input Event
Scope Parameters
TCA Scope 1
TCA Scope 2
TCA Scope 3
Scope Applied
leaf01,swp1
Hostname, Interface
leaf01,swp1
leaf*,'*'
'*','*'
Scope 1
leaf01,swp3
Hostname, Interface
leaf01,swp1
leaf*,'*'
'*','*'
Scope 2
spine01,swp1
Hostname, Interface
leaf01,swp1
leaf*,'*'
'*','*'
Scope 3
You can modify threshold-crossing rules to remove conflicts.
BGP
Use the UI or CLI to monitor Border Gateway Protocol (BGP) on a networkwide or per-session basis.
BGP Commands
Monitor BGP with the following commands. See the command line reference for additional options, definitions, and examples.
netq show bgp
netq show events message_type bgp
netq show events-config message_type bgp
The netq check bgp command checks for consistency across BGP sessions in your network fabric.
netq check bgp
View BGP in the UI
To add the BGP card to your workbench, navigate to the header and select Add card > Other card > Network services > All BGP Sessions card > Open cards. In this example, there are 44 nodes running the BGP protocol, 252 open events (from the last 24 hours), and 9 nodes with unestablished sessions.
Expand to the large card for additional BGP info. By default, the card displays the Sessions summary tab. From here you can see which devices are handling the most BGP sessions, or select the dropdown to view nodes with the most unestablished BGP sessions. You can view BGP-related events by selecting the Events tab.
Expand the BGP card to full-screen to view, filter, or export:
Virtual routing and forwarding (VRF) information
Autonomous system number (ASN) assignments
Peer ASNs
The received address prefix for IPv4/IPv6/EVPN when the session is established
From this table, you can select a row, then click Open card above the table.
NetQ adds a new BGP single-session card to your workbench. From this card, you can view session state changes and compare them with events, and monitor the running BGP configuration and changes to the configuration file.
Before adding a BGP single-session card, verify that both the peer hostname and peer ASN are valid. This ensures the information presented is reliable.
Monitor a Single BGP Session
The BGP single-session card displays the node, its peer, its status (established or unestablished), and its router ID. This information can help you determine the stability of the BGP session between two devices. The heat map indicates the status of the session over the designated time period. In this example, the session has been established throughout the entire time period:
Understanding the Heat Map
On the medium and large single-session cards, vertically stacked heat maps represent the status of the sessions: one for established sessions and one for unestablished sessions. The number of time blocks used to display the status varies with the time period of data shown on the card. A vertical stack of time blocks, one from each map, covers the results of all checks during that time, and the color saturation of each block indicates the results. If only established sessions occurred during a time block, the established (top) block is 100% saturated (white) and the unestablished block is 0% saturated (gray). As the share of unestablished sessions increases, the unestablished block becomes more saturated and the established block proportionally less saturated. The following table lists the most common time periods, their corresponding number of blocks, and the amount of time represented by one block:
Time Period
Number of Runs
Number of Time Blocks
Amount of Time in Each Block
6 hours
18
6
1 hour
12 hours
36
12
1 hour
24 hours
72
24
1 hour
1 week
504
7
1 day
1 month
2,086
30
1 day
1 quarter
7,000
13
1 week
View Changes to the BGP Service Configuration File
Each time a change is made to the configuration file for the BGP service, NetQ logs the change and lets you compare it with the previous version. This can be useful when you are troubleshooting potential causes for events or sessions losing their connections.
To view the configuration file changes:
From the large single-session card, select the Configuration File Evolution tab.
Select the time.
Choose between the File view and the Diff view.
The File view displays the content of the file:
The Diff view highlights the changes (if any) between this version (on left) and the most recent version (on right) side by side:
The Validation Summary card in the NetQ UI lets you view the overall health of your network at a glance, giving you a high-level understanding of how well your network is operating. Successful validation results determine overall network health shown in this card.
View Key Metrics of Network Health
Overall network health in the NetQ UI is a calculated average of several key health metrics: system, network services, and interface health.
System health represents the NetQ Agent and sensor health validations. In all cases, validation checks are performed on the agents. If you are monitoring platform sensors, the validation checks include these as well.
Network service health represents the individual network protocol and service validation checks. In all cases, checks are performed on NTP. If you are running BGP, EVPN, MLAG, or VXLAN protocols, the validation checks include these as well.
Interface health represents the interfaces, VLAN, and link MTU validation checks.
View Detailed Network Health
To view details about your network’s health, open or locate the large Validation Summary card on your workbench. To view devices with the most issues or recent issues, select the Most failures tab or Recent failures tab, respectively. You can unselect one or more services on the left side of the card to display devices affected by the selected services on the right side of the card.
By default, the System health tab is displayed.
The health of agents and sensors is represented on the left side of the card. Hover over the chart for each type of validation to see detailed results. The right side of the card displays devices with failures related to agents and sensors.
Click the Network service health tab.
The health of each network protocol or service is represented on the left side of the card. Hover over the chart for each type of validation to see detailed results. The right side of the card displays devices with failures related to these protocols and services.
Click the Interface health tab.
The health of interfaces, VLANs, and link MTUs is represented on the left side of the card. Hover over the chart for each type of validation to see detailed results. The right side of the card displays devices with failures related to interfaces, VLANs, and link MTUs.
View Details of a Particular Service
From the relevant tab (System Health, Network Service Health, or Interface Health) on the large Validation Summary card, you can select a chart to open a full-screen view of the validation data for that service.
The following example shows the MLAG validation at its most granular level. The Checks section displays the individual tests that comprise the MLAG validation, including which tests passed and which failed. The Results section displays the devices which failed the MLAG validation, the sessions that failed, and the individual nodes that failed.
View Default Network Protocol and Service Validation Results
Expand the Validation Summary card to full-screen to view all default validation checks during a designated time period.
Configure and Monitor What Just Happened
What Just Happened (WJH) streams detailed and contextual telemetry data for analysis. This provides real-time visibility into problems in the network, such as hardware packet drops due to buffer congestion, incorrect routing, and ACL or layer 1 problems.
Using WJH in combination with NetQ helps you identify losses anywhere in the fabric. From a single management console you can:
View any current or historic drop information, including the reason for the drop
Identify problematic flows or endpoints, and pinpoint where communication is failing in the network
WJH is only supported on NVIDIA Spectrum switches running Cumulus Linux 4.4.0 or later. WJH latency and congestion monitoring is supported on NVIDIA Spectrum-2 switches or later. SONiC only supports collection of WJH data with gNMI.
By default, Cumulus Linux 4.4.0 and later includes the NetQ Agent and CLI. Depending on the version of Cumulus Linux running on your NVIDIA switch, you might need to upgrade the NetQ Agent and CLI to the latest release:
WJH is enabled by default on NVIDIA Spectrum switches running Cumulus Linux 4.4.0 or later. Before WJH can collect data, you must enable the NetQ Agent on your switches and servers.
To enable WJH on any switch or server:
Configure the NetQ Agent on the switch:
cumulus@switch:~$ sudo netq config add agent wjh
Restart the NetQ Agent to begin collecting WJH data:
cumulus@switch:~$ sudo netq config restart agent
When you finish viewing WJH metrics, you can stop the NetQ Agent from collecting WJH data to reduce network traffic. Use netq config del agent wjh followed by netq config restart agent to disable WJH on a given switch.
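For example, on the switch where you want to stop collecting WJH data:
cumulus@switch:~$ netq config del agent wjh
cumulus@switch:~$ netq config restart agent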
Using wjh_dump.py on an NVIDIA platform that is running Cumulus Linux and the NetQ Agent causes the NetQ WJH client to stop receiving packet drop callbacks. To prevent this issue, run wjh_dump.py on a system other than the one where the NetQ Agent has WJH enabled, or disable wjh_dump.py and restart the NetQ Agent with netq config restart agent.
View What Just Happened Metrics
You can view the WJH metrics from the NetQ UI or the NetQ CLI. WJH metrics are visible on the WJH card and the Events card. To view the metrics on the Events card, open the large card and select the WJH tab at the top of the card. For a more detailed view, open the WJH card.
To add the WJH card to your workbench, navigate to the header and select Add card > Events > What Just Happened > Open cards.
You can expand the card to see a detailed summary of WJH data, including devices with the most drops, the number of drops, their distribution, and a timeline:
Expand the card to its largest size to open the WJH dashboard. You can also access this dashboard by clicking Menu, then What Just Happened.
The table beneath the charts displays WJH events and recommendations for resolving them. Hover over the color-coded chart to view WJH event categories:
Click on a category in the chart for a detailed view:
Select Advanced view in the top-right corner for a tabular display of drops that can be sorted by drop type. This display includes additional information, such as source and destination IP addresses, ports, and MACs.
For L1 events, you can group entries by switch and ingress port to reduce the number of events displayed. To do this, select the Aggregate by port toggle in the top-right corner.
To view WJH drops, run one of the following commands. Refer to the command line reference for a comprehensive list of options and definitions.
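The two forms used in the examples below are sketched here; you can also append a drop type such as acl, buffer, l1, l2, router, or tunnel, as the later examples show, and the bracketed options are limited to those demonstrated in this section:
netq [<hostname>] show wjh-drop [between <text-time> and <text-endtime>] [around <text-time>] [json]
netq [<hostname>] show wjh-drop details [between <text-time> and <text-endtime>] [around <text-time>] [json]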
An additional command is available that aggregates WJH L1 errors that occur on the same ingress port.
netq [<hostname>] show wjh-drop l1
[ingress-port <text-ingress-port>]
[severity <text-severity>]
[reason <text-reason>]
[port-aggregate <text-port-aggregate>]
[between <text-time> and <text-endtime>]
[around <text-time>] [json]
This example uses the first form of the command to show drops on switch leaf03 for the past week.
cumulus@switch:~$ netq leaf03 show wjh-drop between now and 7d
Matching wjh records:
Drop type Aggregate Count
------------------ ------------------------------
L1 560
Buffer 224
Router 144
L2 0
ACL 0
Tunnel 0
This example uses the second form of the command to show drops on switch leaf03 for the past week including the drop reasons.
cumulus@switch:~$ netq leaf03 show wjh-drop details between now and 7d
Matching wjh records:
Drop type Aggregate Count Reason
------------------ ------------------------------ ---------------------------------------------
L1 556 None
Buffer 196 WRED
Router 144 Blackhole route
Buffer 14 Packet Latency Threshold Crossed
Buffer 14 Port TC Congestion Threshold
L1 4 Oper down
This example shows the drops seen at layer 2 across the network.
cumulus@mlx-2700-03:mgmt:~$ netq show wjh-drop l2
Matching wjh records:
Hostname Ingress Port Reason Agg Count Src Ip Dst Ip Proto Src Port Dst Port Src Mac Dst Mac First Timestamp Last Timestamp
----------------- ------------------------ --------------------------------------------- ------------------ ---------------- ---------------- ------ ---------------- ---------------- ------------------ ------------------ ------------------------------ ----------------------------
mlx-2700-03 swp1s2 Port loopback filter 10 27.0.0.19 27.0.0.22 0 0 0 00:02:00:00:00:73 0c:ff:ff:ff:ff:ff Mon Dec 16 11:54:15 2019 Mon Dec 16 11:54:15 2019
mlx-2700-03 swp1s2 Source MAC equals destination MAC 10 27.0.0.19 27.0.0.22 0 0 0 00:02:00:00:00:73 00:02:00:00:00:73 Mon Dec 16 11:53:17 2019 Mon Dec 16 11:53:17 2019
mlx-2700-03 swp1s2 Source MAC equals destination MAC 10 0.0.0.0 0.0.0.0 0 0 0 00:02:00:00:00:73 00:02:00:00:00:73 Mon Dec 16 11:40:44 2019 Mon Dec 16 11:40:44 2019
The following two examples include the severity of a drop event (error, warning, or notice) for ACLs and routers.
cumulus@switch:~$ netq show wjh-drop acl
Matching wjh records:
Hostname Ingress Port Reason Severity Agg Count Src Ip Dst Ip Proto Src Port Dst Port Src Mac Dst Mac Acl Rule Id Acl Bind Point Acl Name Acl Rule First Timestamp Last Timestamp
----------------- ------------------------ --------------------------------------------- ---------------- ------------------ ---------------- ---------------- ------ ---------------- ---------------- ------------------ ------------------ ---------------------- ---------------------------- ---------------- ---------------- ------------------------------ ----------------------------
leaf01 swp2 Ingress router ACL Error 49 55.0.0.1 55.0.0.2 17 8492 21423 00:32:10:45:76:89 00:ab:05:d4:1b:13 0x0 0 Tue Oct 6 15:29:13 2020 Tue Oct 6 15:29:39 2020
cumulus@switch:~$ netq show wjh-drop router
Matching wjh records:
Hostname Ingress Port Reason Severity Agg Count Src Ip Dst Ip Proto Src Port Dst Port Src Mac Dst Mac First Timestamp Last Timestamp
----------------- ------------------------ --------------------------------------------- ---------------- ------------------ ---------------- ---------------- ------ ---------------- ---------------- ------------------ ------------------ ------------------------------ ----------------------------
leaf01 swp1 Blackhole route Notice 36 46.0.1.2 47.0.2.3 6 1235 43523 00:01:02:03:04:05 00:06:07:08:09:0a Tue Oct 6 15:29:13 2020 Tue Oct 6 15:29:47 2020
Configure Latency and Congestion Thresholds
WJH latency and congestion metrics depend on threshold settings to trigger the events. WJH measures packet latency as the time spent inside a single system (switch). When specified, WJH triggers events when measured values cross high thresholds and events are suppressed when values are below low thresholds.
You can specify multiple traffic classes and multiple ports by separating the classes or ports by a comma (no spaces).
For example, the following command creates latency thresholds for Class 3 traffic on port swp1 where the upper threshold is 10 usecs and the lower threshold is 1 usec:
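A hedged sketch of this command; the wjh-threshold agent option and its argument order (type, traffic class, port, high threshold, low threshold) are assumptions and might differ in your release:
cumulus@switch:~$ netq config add agent wjh-threshold latency 3 swp1 10 1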
This example creates congestion thresholds for Class 4 traffic on port swp1 where the upper threshold is 200 cells and the lower threshold is 10 cells, where a cell is a unit of 144 bytes:
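A sketch under the same assumptions as the latency example above:
cumulus@switch:~$ netq config add agent wjh-threshold congestion 4 swp1 200 10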
Refer to the command line reference for a comprehensive list of options and definitions for this command.
Suppress Events with Filters
You can create filters with the UI or CLI to prevent WJH from generating events. Filters can be applied to a drop category (such as layer 1 drops or buffer drops), a drop reason (for example, “decapsulation error” or “multicast MAC mismatch”), or according to severity level (notice, warning, or error). With the CLI, you can create filters to suppress events according to their source or destination IP addresses.
For a complete list of drop types, reasons, and severity levels, refer to the WJH Events Reference.
Before configuring the NetQ Agent to filter WJH drops, you must generate AuthKeys. Copy the access key and secret key to an accessible location. You will enter them in one of the final steps.
Expand the Menu and select Manage switches.
Select the NetQ agent configurations tab.
On the NetQ Agent Configurations card, select Add config.
Enter a name for the profile. In the WJH row, select Enable, then Customize. By default, WJH includes all drop reasons and severities. Uncheck any drop reasons or severity you do not want to generate WJH events, then click Done.
Select Add to save the configuration profile, or click Close to discard it.
To configure the NetQ Agent to filter WJH drops, run netq config add agent wjh-drop-filter. Use tab completion to view the available drop type, drop reason, and severity values.
With the NetQ UI, you can monitor hardware resources of individual data processing units (DPUs), including CPU utilization, disk usage, and memory utilization. For DPU inventory information, refer to DPU Inventory.
For an overview of the current or past health of DPU hardware resources, open the DPU device card.
To open a DPU device card:
Search for the device’s hostname in the global search field or from the header select Add card > Device card.
Select a DPU from the dropdown.
Click Add. This example shows that the r-netq-bf2-01 DPU has low utilization across CPU, memory, and disks:
View DPU Attributes
For a quick look at the key attributes of a particular DPU, expand the DPU card.
Attributes are displayed as the default tab on the large DPU card. You can view the static information about the DPU, including its hostname, ASIC vendor and model, CPU information, OS version, and agent version.
To view a larger display of hardware resource utilization, select Utilization.
Expand the card to its largest size to view a list of installed packages and RoCE counters for a given DPU. You can filter RoCE information by physical port, priority port, RoCE extended, RoCE, and peripheral component interconnect (PCI).
You can use gRPC Network Management Interface (gNMI) to collect system resource, interface, and counter information from Cumulus Linux and export it to your own gNMI client.
Configure the gNMI Agent
The gNMI agent is disabled by default. To enable it, run:
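The option name below follows the netq config add agent syntax used elsewhere in this section; verify it against the command line reference:
cumulus@switch:~$ netq config add agent gnmi-enable true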
The gNMI agent listens over port 9339. You can change the default port if another application already uses it. The /etc/netq/netq.yml file stores the configuration.
Use the following commands to adjust the settings:
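For example, to move the gNMI agent to another port (option name assumed, following the same syntax):
cumulus@switch:~$ netq config add agent gnmi-port 9340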
Restart the NetQ agent to incorporate the configuration changes:
cumulus@switch:~$ netq config restart agent
Use the gNMI Agent Only
NVIDIA recommends collecting data with both the gNMI and NetQ agents. However, if you do not want to collect data with both agents, you can disable the NetQ agent. Data is then sent exclusively to the gNMI agent.
To disable the NetQ agent, use the following command:
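As described later in this section, this is done by setting opta-enable to false:
cumulus@switch:~$ netq config add agent opta-enable false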
You cannot disable both the NetQ and gNMI agents. If both agents are enabled on Cumulus Linux and a NetQ server is unreachable, the data from the following models are not sent to gNMI:
openconfig-interfaces
openconfig-if-ethernet
openconfig-if-ethernet-ext
openconfig-system
nvidia-if-ethernet-ext
WJH, openconfig-platform, and openconfig-lldp data continue streaming to gNMI in this state. If you are only using gNMI and a NetQ telemetry server does not exist, you should disable the NetQ agent by setting opta-enable to false.
Supported Models
Cumulus Linux supports the OpenConfig models referenced in this section (openconfig-interfaces, openconfig-if-ethernet, openconfig-if-ethernet-ext, openconfig-system, openconfig-platform, and openconfig-lldp), together with the NVIDIA extension models nvidia-if-ethernet-ext and nvidia-if-wjh-drop-aggregate.
Your gNMI client should use the following YANG models as a reference:
nvidia-if-ethernet-ext
module nvidia-if-ethernet-counters-ext {
// xPath --> /interfaces/interface[name=*]/ethernet/counters/state/
namespace "http://nvidia.com/yang/nvidia-ethernet-counters";
prefix "nvidia-if-ethernet-counters-ext";
// import some basic types
import openconfig-interfaces { prefix oc-if; }
import openconfig-if-ethernet { prefix oc-eth; }
import openconfig-yang-types { prefix oc-yang; }
revision "2021-10-12" {
description
"Initial revision";
reference "1.0.0.";
}
grouping ethernet-counters-ext {
leaf alignment-error {
type oc-yang:counter64;
}
leaf in-acl-drops {
type oc-yang:counter64;
}
leaf in-buffer-drops {
type oc-yang:counter64;
}
leaf in-dot3-frame-errors {
type oc-yang:counter64;
}
leaf in-dot3-length-errors {
type oc-yang:counter64;
}
leaf in-l3-drops {
type oc-yang:counter64;
}
leaf in-pfc0-packets {
type oc-yang:counter64;
}
leaf in-pfc1-packets {
type oc-yang:counter64;
}
leaf in-pfc2-packets {
type oc-yang:counter64;
}
leaf in-pfc3-packets {
type oc-yang:counter64;
}
leaf in-pfc4-packets {
type oc-yang:counter64;
}
leaf in-pfc5-packets {
type oc-yang:counter64;
}
leaf in-pfc6-packets {
type oc-yang:counter64;
}
leaf in-pfc7-packets {
type oc-yang:counter64;
}
leaf out-non-q-drops {
type oc-yang:counter64;
}
leaf out-pfc0-packets {
type oc-yang:counter64;
}
leaf out-pfc1-packets {
type oc-yang:counter64;
}
leaf out-pfc2-packets {
type oc-yang:counter64;
}
leaf out-pfc3-packets {
type oc-yang:counter64;
}
leaf out-pfc4-packets {
type oc-yang:counter64;
}
leaf out-pfc5-packets {
type oc-yang:counter64;
}
leaf out-pfc6-packets {
type oc-yang:counter64;
}
leaf out-pfc7-packets {
type oc-yang:counter64;
}
leaf out-q0-wred-drops {
type oc-yang:counter64;
}
leaf out-q1-wred-drops {
type oc-yang:counter64;
}
leaf out-q2-wred-drops {
type oc-yang:counter64;
}
leaf out-q3-wred-drops {
type oc-yang:counter64;
}
leaf out-q4-wred-drops {
type oc-yang:counter64;
}
leaf out-q5-wred-drops {
type oc-yang:counter64;
}
leaf out-q6-wred-drops {
type oc-yang:counter64;
}
leaf out-q7-wred-drops {
type oc-yang:counter64;
}
leaf out-q8-wred-drops {
type oc-yang:counter64;
}
leaf out-q9-wred-drops {
type oc-yang:counter64;
}
leaf out-q-drops {
type oc-yang:counter64;
}
leaf out-q-length {
type oc-yang:counter64;
}
leaf out-wred-drops {
type oc-yang:counter64;
}
leaf symbol-errors {
type oc-yang:counter64;
}
leaf out-tx-fifo-full {
type oc-yang:counter64;
}
}
augment "/oc-if:interfaces/oc-if:interface/oc-eth:ethernet/" +
"oc-eth:state/oc-eth:counters" {
uses ethernet-counters-ext;
}
}
nvidia-if-wjh-drop-aggregate
module nvidia-wjh {
// Entrypoint /oc-if:interfaces/oc-if:interface
//
// xPath L1 --> interfaces/interface[name=*]/wjh/aggregate/l1
// xPath L2 --> /interfaces/interface[name=*]/wjh/aggregate/l2/reasons/reason[id=*][severity=*]
// xPath Router --> /interfaces/interface[name=*]/wjh/aggregate/router/reasons/reason[id=*][severity=*]
// xPath Tunnel --> /interfaces/interface[name=*]/wjh/aggregate/tunnel/reasons/reason[id=*][severity=*]
// xPath Buffer --> /interfaces/interface[name=*]/wjh/aggregate/buffer/reasons/reason[id=*][severity=*]
// xPath ACL --> /interfaces/interface[name=*]/wjh/aggregate/acl/reasons/reason[id=*][severity=*]
import openconfig-interfaces { prefix oc-if; }
namespace "http://nvidia.com/yang/what-just-happened-config";
prefix "nvidia-wjh";
revision "2021-10-12" {
description
"Initial revision";
reference "1.0.0.";
}
augment "/oc-if:interfaces/oc-if:interface" {
uses interfaces-wjh;
}
grouping interfaces-wjh {
description "Top-level grouping for What-just happened data.";
container wjh {
container aggregate {
container l1 {
container state {
leaf drop {
type string;
description "Drop list based on wjh-drop-types module encoded in JSON";
}
}
}
container l2 {
uses reason-drops;
}
container router {
uses reason-drops;
}
container tunnel {
uses reason-drops;
}
container acl {
uses reason-drops;
}
container buffer {
uses reason-drops;
}
}
}
}
grouping reason-drops {
container reasons {
list reason {
key "id severity";
leaf id {
type leafref {
path "../state/id";
}
description "reason ID";
}
leaf severity {
type leafref {
path "../state/severity";
}
description "Reason severity";
}
container state {
leaf id {
type uint32;
description "Reason ID";
}
leaf name {
type string;
description "Reason name";
}
leaf severity {
type string;
mandatory "true";
description "Reason severity";
}
leaf drop {
type string;
description "Drop list based on wjh-drop-types module encoded in JSON";
}
}
}
}
}
}
module wjh-drop-types {
namespace "http://nvidia.com/yang/what-just-happened-config-types";
prefix "wjh-drop-types";
container l1-aggregated {
uses l1-drops;
}
container l2-aggregated {
uses l2-drops;
}
container router-aggregated {
uses router-drops;
}
container tunnel-aggregated {
uses tunnel-drops;
}
container acl-aggregated {
uses acl-drops;
}
container buffer-aggregated {
uses buffer-drops;
}
grouping reason-key {
leaf id {
type uint32;
mandatory "true";
description "reason ID";
}
leaf severity {
type string;
mandatory "true";
description "Severity";
}
}
grouping reason_info {
leaf reason {
type string;
mandatory "true";
description "Reason name";
}
leaf drop_type {
type string;
mandatory "true";
description "reason drop type";
}
leaf ingress_port {
type string;
mandatory "true";
description "Ingress port name";
}
leaf ingress_lag {
type string;
description "Ingress LAG name";
}
leaf egress_port {
type string;
description "Egress port name";
}
leaf agg_count {
type uint64;
description "Aggregation count";
}
leaf severity {
type string;
description "Severity";
}
leaf first_timestamp {
type uint64;
description "First timestamp";
}
leaf end_timestamp {
type uint64;
description "End timestamp";
}
}
grouping packet_info {
leaf smac {
type string;
description "Source MAC";
}
leaf dmac {
type string;
description "Destination MAC";
}
leaf sip {
type string;
description "Source IP";
}
leaf dip {
type string;
description "Destination IP";
}
leaf proto {
type uint32;
description "Protocol";
}
leaf sport {
type uint32;
description "Source port";
}
leaf dport {
type uint32;
description "Destination port";
}
}
grouping l1-drops {
description "What-just happened drops.";
leaf ingress_port {
type string;
description "Ingress port";
}
leaf is_port_up {
type boolean;
description "Is port up";
}
leaf port_down_reason {
type string;
description "Port down reason";
}
leaf description {
type string;
description "Description";
}
leaf state_change_count {
type uint64;
description "State change count";
}
leaf symbol_error_count {
type uint64;
description "Symbol error count";
}
leaf crc_error_count {
type uint64;
description "CRC error count";
}
leaf first_timestamp {
type uint64;
description "First timestamp";
}
leaf end_timestamp {
type uint64;
description "End timestamp";
}
leaf timestamp {
type uint64;
description "Timestamp";
}
}
grouping l2-drops {
description "What-just happened drops.";
uses reason_info;
uses packet_info;
}
grouping router-drops {
description "What-just happened drops.";
uses reason_info;
uses packet_info;
}
grouping tunnel-drops {
description "What-just happened drops.";
uses reason_info;
uses packet_info;
}
grouping acl-drops {
description "What-just happened drops.";
uses reason_info;
uses packet_info;
leaf acl_rule_id {
type uint64;
description "ACL rule ID";
}
leaf acl_bind_point {
type uint32;
description "ACL bind point";
}
leaf acl_name {
type string;
description "ACL name";
}
leaf acl_rule {
type string;
description "ACL rule";
}
}
grouping buffer-drops {
description "What-just happened drops.";
uses reason_info;
uses packet_info;
leaf traffic_class {
type uint32;
description "Traffic Class";
}
leaf original_occupancy {
type uint32;
description "Original occupancy";
}
leaf original_latency {
type uint64;
description "Original latency";
}
}
}
Collect WJH Data Using gNMI
You can export What Just Happened data from the NetQ agent to your own gNMI client. Refer to the previous section for the nvidia-if-wjh-drop-aggregate reference YANG model.
Supported Features
The gNMI agent supports capability and stream subscribe requests for WJH events.
If you are using SONiC, WJH data can only be collected using gNMI.
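For example, a stream subscription to the aggregated Layer 2 drop path from the nvidia-if-wjh-drop-aggregate model might look like the following. This sketch uses the open-source gnmic client; any gNMI client works, and authentication or TLS options depend on your environment:
cumulus@host:~$ gnmic -a <switch>:9339 --insecure subscribe --path "/interfaces/interface[name=*]/wjh/aggregate/l2/reasons/reason[id=*][severity=*]"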
WJH Drop Reasons
The data NetQ sends to the gNMI agent is in the form of WJH drop reasons. The reasons are generated by the SDK and are stored in the /usr/etc/wjh_lib_conf.xml file on the switch. Use this file as a guide to filter for specific reason types (L1, ACL, and so forth), reason IDs, or event severities.
L1 Drop Reasons
Reason ID
Reason
Description
10021
Port admin down
Validate port configuration
10022
Auto-negotiation failure
Set port speed manually, disable auto-negotiation
10023
Logical mismatch with peer link
Check cable/transceiver
10024
Link training failure
Check cable/transceiver
10025
Peer is sending remote faults
Replace cable/transceiver
10026
Bad signal integrity
Replace cable/transceiver
10027
Cable/transceiver is not supported
Use supported cable/transceiver
10028
Cable/transceiver is unplugged
Plug cable/transceiver
10029
Calibration failure
Check cable/transceiver
10030
Cable/transceiver bad status
Check cable/transceiver
10031
Other reason
Other L1 drop reason
L2 Drop Reasons
Reason ID
Reason
Severity
Description
201
MLAG port isolation
Notice
Expected behavior
202
Destination MAC is reserved (DMAC=01-80-C2-00-00-0x)
Error
Bad packet was received from the peer
203
VLAN tagging mismatch
Error
Validate the VLAN tag configuration on both ends of the link
204
Ingress VLAN filtering
Error
Validate the VLAN membership configuration on both ends of the link
205
Ingress spanning tree filter
Notice
Expected behavior
206
Unicast MAC table action discard
Error
Validate MAC table for this destination MAC
207
Multicast egress port list is empty
Warning
Validate why IGMP join or multicast router port does not exist
208
Port loopback filter
Error
Validate MAC table for this destination MAC
209
Source MAC is multicast
Error
Bad packet was received from peer
210
Source MAC equals destination MAC
Error
Bad packet was received from peer
Router Drop Reasons
Reason ID
Reason
Severity
Description
301
Non-routable packet
Notice
Expected behavior
302
Blackhole route
Warning
Validate routing table for this destination IP
303
Unresolved neighbor/next hop
Warning
Validate ARP table for the neighbor/next hop
304
Blackhole ARP/neighbor
Warning
Validate ARP table for the next hop
305
IPv6 destination in multicast scope FFx0:/16
Notice
Expected behavior - packet is not routable
306
IPv6 destination in multicast scope FFx1:/16
Notice
Expected behavior - packet is not routable
307
Non-IP packet
Notice
Destination MAC is the router, packet is not routable
308
Unicast destination IP but multicast destination MAC
Error
Bad packet was received from the peer
309
Destination IP is loopback address
Error
Bad packet was received from the peer
310
Source IP is multicast
Error
Bad packet was received from the peer
311
Source IP is in class E
Error
Bad packet was received from the peer
312
Source IP is loopback address
Error
Bad packet was received from the peer
313
Source IP is unspecified
Error
Bad packet was received from the peer
314
Checksum or IPver or IPv4 IHL too short
Error
Bad cable or bad packet was received from the peer
315
Multicast MAC mismatch
Error
Bad packet was received from the peer
316
Source IP equals destination IP
Error
Bad packet was received from the peer
317
IPv4 source IP is limited broadcast
Error
Bad packet was received from the peer
318
IPv4 destination IP is local network (destination=0.0.0.0/8)
Error
Bad packet was received from the peer
320
Ingress router interface is disabled
Warning
Validate your configuration
321
Egress router interface is disabled
Warning
Validate your configuration
323
IPv4 routing table (LPM) unicast miss
Warning
Validate routing table for this destination IP
324
IPv6 routing table (LPM) unicast miss
Warning
Validate routing table for this destination IP
325
Router interface loopback
Warning
Validate the interface configuration
326
Packet size is larger than router interface MTU
Warning
Validate the router interface MTU configuration
327
TTL value is too small
Warning
Actual path is longer than the TTL
Tunnel Drop Reasons
Reason ID
Reason
Severity
Description
402
Overlay switch - Source MAC is multicast
Error
The peer sent a bad packet
403
Overlay switch - Source MAC equals destination MAC
Error
The peer sent a bad packet
404
Decapsulation error
Error
The peer sent a bad packet
ACL Drop Reasons
Reason ID
Reason
Severity
Description
601
Ingress port ACL
Notice
Validate ACL configuration
602
Ingress router ACL
Notice
Validate ACL configuration
603
Egress router ACL
Notice
Validate ACL configuration
604
Egress port ACL
Notice
Validate ACL configuration
Buffer Drop Reasons
Reason ID
Reason
Severity
Description
503
Tail drop
Warning
Monitor network congestion
504
WRED
Warning
Monitor network congestion
505
Port TC congestion threshold crossed
Notice
Monitor network congestion
506
Packet latency threshold crossed
Notice
Monitor network congestion
gNMI Client Requests
You can use your gNMI client on a host server to request capabilities and data that the agent is subscribed to.
The following example shows a gNMI client request for interface speed:
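One way to issue such a request is with the open-source gnmic client, shown here only as an illustration (any gNMI client works; authentication and TLS options depend on your environment). The path targets the port-speed leaf from openconfig-if-ethernet:
cumulus@host:~$ gnmic -a <switch>:9339 --insecure capabilities
cumulus@host:~$ gnmic -a <switch>:9339 --insecure get --path "/interfaces/interface[name=swp1]/ethernet/state/port-speed"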
Agent Events
Type
Trigger
Severity
Message Format
Example
agent
NetQ Agent state changed to Rotten (not heard from in over 15 seconds)
Error
Agent state changed to rotten
Agent state changed to rotten
agent
NetQ Agent rebooted
Error
Netq-agent rebooted at (@last_boot)
Netq-agent rebooted at 1573166417
agent
Node running NetQ Agent rebooted
Error
Switch rebooted at (@sys_uptime)
Switch rebooted at 1573166131
agent
NetQ Agent state changed to Fresh
Info
Agent state changed to fresh
Agent state changed to fresh
agent
NetQ Agent state was reset
Info
Agent state was paused and resumed at (@last_reinit)
Agent state was paused and resumed at 1573166125
agent
Version of NetQ Agent has changed
Info
Agent version has been changed old_version:@old_version and new_version:@new_version. Agent reset at @sys_uptime
Agent version has been changed old_version:2.1.2 and new_version:2.3.1. Agent reset at 1573079725
BGP Events
Type
Trigger
Severity
Message Format
Example
bgp
BGP Session state changed
Error
BGP session with peer @peer @neighbor vrf @vrf state changed from @old_state to @new_state
BGP session with peer leaf03 leaf04 vrf mgmt state changed from Established to Failed
bgp
BGP Session state changed from Failed to Established
Info
BGP session with peer @peer @peerhost @neighbor vrf @vrf session state changed from Failed to Established
BGP session with peer swp5 spine02 spine03 vrf default session state changed from Failed to Established
bgp
BGP Session state changed from Established to Failed
Info
BGP session with peer @peer @neighbor vrf @vrf state changed from established to failed
BGP session with peer leaf03 leaf04 vrf mgmt state changed from down to up
bgp
The reset time for a BGP session changed
Info
BGP session with peer @peer @neighbor vrf @vrf reset time changed from @old_last_reset_time to @new_last_reset_time
BGP session with peer spine03 swp9 vrf vrf2 reset time changed from 1559427694 to 1559837484
BTRFS Events
Type
Trigger
Severity
Message Format
Example
btrfsinfo
Disk space available after BTRFS allocation is less than 80% of partition size or only 2 GB remain.
Error
@info : @details
high btrfs allocation space : greater than 80% of partition size, 61708420
btrfsinfo
Indicates if a rebalance operation can free up space on the disk
Error
@info : @details
data storage efficiency : space left after allocation greater than chunk size 6170849.2
Cable Events
Type
Trigger
Severity
Message Format
Example
cable
Link speed is not the same on both ends of the link
Error
@ifname speed @speed, mismatched with peer @peer @peer_if speed @peer_speed
swp2 speed 10, mismatched with peer server02 swp8 speed 40
cable
The speed setting for a given port changed
Info
@ifname speed changed from @old_speed to @new_speed
swp9 speed changed from 10 to 40
cable
The transceiver status for a given port changed
Info
@ifname transceiver changed from @old_transceiver to @new_transceiver
swp4 transceiver changed from disabled to enabled
cable
The vendor of a given transceiver changed
Info
@ifname vendor name changed from @old_vendor_name to @new_vendor_name
swp23 vendor name changed from Broadcom to NVIDIA
cable
The part number of a given transceiver changed
Info
@ifname part number changed from @old_part_number to @new_part_number
swp7 part number changed from FP1ZZ5654002A to MSN2700-CS2F0
cable
The serial number of a given transceiver changed
Info
@ifname serial number changed from @old_serial_number to @new_serial_number
swp4 serial number changed from 571254X1507020 to MT1552X12041
cable
The status of forward error correction (FEC) support for a given port changed
Info
@ifname supported fec changed from @old_supported_fec to @new_supported_fec
swp12 supported fec changed from supported to unsupported
swp12 supported fec changed from unsupported to supported
cable
The advertised support for FEC for a given port changed
Info
@ifname supported fec changed from @old_advertised_fec to @new_advertised_fec
swp24 supported FEC changed from advertised to not advertised
cable
The FEC status for a given port changed
Info
@ifname fec changed from @old_fec to @new_fec
swp15 fec changed from disabled to enabled
CLAG/MLAG Events
Type
Trigger
Severity
Message Format
Example
clag
CLAG remote peer state changed from up to down
Error
Peer state changed to down
Peer state changed to down
clag
Local CLAG host MTU does not match its remote peer MTU
Error
SVI @svi1 on vlan @vlan mtu @mtu1 mismatched with peer mtu @mtu2
SVI svi7 on vlan 4 mtu 1592 mismatched with peer mtu 1680
clag
CLAG SVI on VLAN is missing from remote peer state
Error
SVI on vlan @vlan is missing from peer
SVI on vlan vlan4 is missing from peer
clag
CLAG peerlink is not operating at full capacity. At least one link is down.
Error
Clag peerlink not at full redundancy, member link @slave is down
Clag peerlink not at full redundancy, member link swp40 is down
clag
CLAG remote peer state changed from down to up
Info
Peer state changed to up
Peer state changed to up
clag
Local CLAG host state changed from down to up
Info
Clag state changed from down to up
Clag state changed from down to up
clag
CLAG bond in Conflicted state updated with new bonds
Info
Clag conflicted bond changed from @old_conflicted_bonds to @new_conflicted_bonds
Clag conflicted bond changed from swp7 swp8 to swp9 swp10
clag
CLAG bond changed state from protodown to up state
Info
Clag conflicted bond changed from @old_state_protodownbond to @new_state_protodownbond
Clag conflicted bond changed from protodown to up
CL Support Events
Type
Trigger
Severity
Message Format
Example
clsupport
A new CL Support file has been created for the given node
Error
HostName @hostname has new CL SUPPORT file
HostName leaf01 has new CL SUPPORT file
Config Diff Events
Type
Trigger
Severity
Message Format
Example
configdiff
Configuration file deleted on a device
Error
@hostname config file @type was deleted
spine03 config file /etc/frr/frr.conf was deleted
configdiff
Configuration file has been created
Info
@hostname config file @type was created
leaf12 config file /etc/lldp.d/README.conf was created
configdiff
Configuration file has been modified
Info
@hostname config file @type was modified
spine03 config file /etc/frr/frr.conf was modified
EVPN Events
Type
Trigger
Severity
Message Format
Example
evpn
A VNI was configured and moved from the up state to the down state
Error
VNI @vni state changed from up to down
VNI 36 state changed from up to down
evpn
A VNI was configured and moved from the down state to the up state
Info
VNI @vni state changed from down to up
VNI 36 state changed from down to up
evpn
The kernel state changed on a VNI
Info
VNI @vni kernel state changed from @old_in_kernel_state to @new_in_kernel_state
VNI 3 kernel state changed from down to up
evpn
A VNI state changed from not advertising all VNIs to advertising all VNIs
Info
VNI @vni vni state changed from @old_adv_all_vni_state to @new_adv_all_vni_state
VNI 11 vni state changed from false to true
Lifecycle Management Events
Type
Trigger
Severity
Message Format
Example
lcm
Cumulus Linux backup started for a switch or host
Info
CL configuration backup started for hostname @hostname
CL configuration backup started for hostname spine01
lcm
Cumulus Linux backup completed for a switch or host
Info
CL configuration backup completed for hostname @hostname
CL configuration backup completed for hostname spine01
lcm
Cumulus Linux backup failed for a switch or host
Error
CL configuration backup failed for hostname @hostname
CL configuration backup failed for hostname spine01
lcm
Cumulus Linux upgrade from one version to a newer version has started for a switch or host
Error
CL Image upgrade from version @old_cl_version to version @new_cl_version started for hostname @hostname
CL Image upgrade from version 4.1.0 to version 4.2.1 started for hostname server01
lcm
Cumulus Linux upgrade from one version to a newer version has completed successfully for a switch or host
Info
CL Image upgrade from version @old_cl_version to version @new_cl_version completed for hostname @hostname
CL Image upgrade from version 4.1.0 to version 4.2.1 completed for hostname server01
lcm
Cumulus Linux upgrade from one version to a newer version has failed for a switch or host
Error
CL Image upgrade from version @old_cl_version to version @new_cl_version failed for hostname @hostname
CL Image upgrade from version 4.1.0 to version 4.2.1 failed for hostname server01
lcm
Restoration of a Cumulus Linux configuration started for a switch or host
Info
CL configuration restore started for hostname @hostname
CL configuration restore started for hostname leaf01
lcm
Restoration of a Cumulus Linux configuration completed successfully for a switch or host
Info
CL configuration restore completed for hostname @hostname
CL configuration restore completed for hostname leaf01
lcm
Restoration of a Cumulus Linux configuration failed for a switch or host
Error
CL configuration restore failed for hostname @hostname
CL configuration restore failed for hostname leaf01
lcm
Rollback of a Cumulus Linux image has started for a switch or host
Error
CL Image rollback from version @old_cl_version to version @new_cl_version started for hostname @hostname
CL Image rollback from version 4.2.1 to version 4.1.0 started for hostname leaf01
lcm
Rollback of a Cumulus Linux image has completed successfully for a switch or host
Info
CL Image rollback from version @old_cl_version to version @new_cl_version completed for hostname @hostname
CL Image rollback from version 4.2.1 to version 4.1.0 completed for hostname leaf01
lcm
Rollback of a Cumulus Linux image has failed for a switch or host
Error
CL Image rollback from version @old_cl_version to version @new_cl_version failed for hostname @hostname
CL Image rollback from version 4.2.1 to version 4.1.0 failed for hostname leaf01
lcm
Installation of a NetQ image has started for a switch or host
Info
NetQ Image version @netq_version installation started for hostname @hostname
NetQ Image version 3.2.0 installation started for hostname spine02
lcm
Installation of a NetQ image has completed successfully for a switch or host
Info
NetQ Image version @netq_version installation completed for hostname @hostname
NetQ Image version 3.2.0 installation completed for hostname spine02
lcm
Installation of a NetQ image has failed for a switch or host
Error
NetQ Image version @netq_version installation failed for hostname @hostname
NetQ Image version 3.2.0 installation failed for hostname spine02
lcm
Upgrade of a NetQ image has started for a switch or host
Info
NetQ Image upgrade from version @old_netq_version to version @netq_version started for hostname @hostname
NetQ Image upgrade from version 3.1.0 to version 3.2.0 started for hostname spine02
lcm
Upgrade of a NetQ image has completed successfully for a switch or host
Info
NetQ Image upgrade from version @old_netq_version to version @netq_version completed for hostname @hostname
NetQ Image upgrade from version 3.1.0 to version 3.2.0 completed for hostname spine02
lcm
Upgrade of a NetQ image has failed for a switch or host
Error
NetQ Image upgrade from version @old_netq_version to version @netq_version failed for hostname @hostname
NetQ Image upgrade from version 3.1.0 to version 3.2.0 failed for hostname spine02
Link Events
Type
Trigger
Severity
Message Format
Example
link
Link operational state changed from up to down
Error
HostName @hostname changed state from @old_state to @new_state Interface:@ifname
HostName leaf01 changed state from up to down Interface:swp34
link
Link operational state changed from down to up
Info
HostName @hostname changed state from @old_state to @new_state Interface:@ifname
HostName leaf04 changed state from down to up Interface:swp11
LLDP Events
Type
Trigger
Severity
Message Format
Example
lldp
Local LLDP host has new neighbor information
Info
LLDP Session with host @hostname and @ifname modified fields @changed_fields
LLDP Session with host leaf02 swp6 modified fields leaf06 swp21
lldp
Local LLDP host has new peer interface name
Info
LLDP Session with host @hostname and @ifname @old_peer_ifname changed to @new_peer_ifname
LLDP Session with host spine01 and swp5 swp12 changed to port12
lldp
Local LLDP host has new peer hostname
Info
LLDP Session with host @hostname and @ifname @old_peer_hostname changed to @new_peer_hostname
LLDP Session with host leaf03 and swp2 leaf07 changed to exit01
MTU Events
Type
Trigger
Severity
Message Format
Example
mtu
VLAN interface link MTU is smaller than that of its parent interface
Error
vlan interface @link mtu @mtu is smaller than parent @parent mtu @parent_mtu
vlan interface swp3 mtu 1500 is smaller than parent peerlink-1 mtu 1690
mtu
Bridge interface MTU is smaller than the member interface with the smallest MTU
Error
bridge @link mtu @mtu is smaller than least of member interface mtu @min
bridge swp0 mtu 1280 is smaller than least of member interface mtu 1500
NTP Events
Type
Trigger
Severity
Message Format
Example
ntp
NTP sync state changed from in sync to not in sync
Error
Sync state changed from @old_state to @new_state for @hostname
Sync state changed from in sync to not sync for leaf06
ntp
NTP sync state changed from not in sync to in sync
Info
Sync state changed from @old_state to @new_state for @hostname
Sync state changed from not sync to in sync for leaf06
Package Information Events
Type
Trigger
Severity
Message Format
Example
packageinfo
Package version on device does not match the version identified in the existing manifest
Error
@package_name manifest version mismatch
netq-apps manifest version mismatch
Resource Events
Type
Trigger
Severity
Message Format
Example
resource
A physical resource has been deleted from a device
Error
Resource Utils deleted for @hostname
Resource Utils deleted for spine02
resource
Root file system access on a device has changed from Read/Write to Read Only
Error
@hostname root file system access mode set to Read Only
server03 root file system access mode set to Read Only
resource
Root file system access on a device has changed from Read Only to Read/Write
Info
@hostname root file system access mode set to Read/Write
leaf11 root file system access mode set to Read/Write
resource
A physical resource has been added to a device
Info
Resource Utils added for @hostname
Resource Utils added for spine04
Running Config Diff Events
Type
Trigger
Severity
Message Format
Example
runningconfigdiff
Running configuration file has been modified
Info
@commandname config result was modified
@commandname config result was modified
Sensor Events
Type
Trigger
Severity
Message Format
Example
sensor
A fan or power supply unit sensor has changed state
Error
Sensor @sensor state changed from @old_s_state to @new_s_state
Sensor fan state changed from up to down
sensor
A temperature sensor has crossed the maximum threshold for that sensor
Error
Sensor @sensor max value @new_s_max exceeds threshold @new_s_crit
Sensor temp max value 110 exceeds the threshold 95
sensor
A temperature sensor has crossed the minimum threshold for that sensor
Error
Sensor @sensor min value @new_s_lcrit fall behind threshold @new_s_min
Sensor psu min value 10 fell below threshold 25
sensor
A temperature, fan, or power supply sensor state changed
Info
Sensor @sensor state changed from @old_state to @new_state
Sensor temperature state changed from Error to ok
Sensor fan state changed from absent to ok
Sensor psu state changed from bad to ok
sensor
A fan or power supply sensor state changed
Info
Sensor @sensor state changed from @old_s_state to @new_s_state
Sensor fan state changed from down to up
Sensor psu state changed from down to up
Services Events
Type
Trigger
Severity
Message Format
Example
services
A service status changed from down to up
Error
Service @name status changed from @old_status to @new_status
Service bgp status changed from down to up
services
A service status changed from up to down
Error
Service @name status changed from @old_status to @new_status
Service lldp status changed from up to down
services
A service changed state from inactive to active
Info
Service @name changed state from inactive to active
Service bgp changed state from inactive to active
Service lldp changed state from inactive to active
SSD Utilization Events
Type
Trigger
Severity
Message Format
Example
ssdutil
3ME3 disk health has dropped below 10%
Error
@info: @details
low health : 5.0%
ssdutil
A dip in 3ME3 disk health of more than 2% has occurred within the last 24 hours
Error
@info: @details
significant health drop : 3.0%
Version Events
Type
Trigger
Severity
Message Format
Example
version
An unknown version of the operating system was detected
Error
unexpected os version @my_ver
unexpected os version cl3.2
version
Desired version of the operating system is not available
Error
os version @ver
os version cl3.7.9
version
An unknown version of a software package was detected
Error
expected release version @ver
expected release version cl3.6.2
version
Desired version of a software package is not available
Error
different from version @ver
different from version cl4.0
VXLAN Events
Type
Trigger
Severity
Message Format
Example
vxlan
Replication list contains an inconsistent set of nodes
Error
VNI @vni replication list inconsistent with @conflicts diff:@diff
VNI 14 replication list inconsistent with ["leaf03","leaf04"] diff:+:["leaf03","leaf04"] -:["leaf07","leaf08"]
Threshold-Crossing Events Reference
This reference lists the threshold-based events that NetQ supports. You can view these messages through third-party notification applications. For details about configuring notifications for these events, refer to Configure and Monitor Threshold-Crossing Events.
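For example, a threshold rule that references one of the event IDs below generally takes a form similar to the following sketch. The scope and threshold parameter names are approximations; confirm the exact syntax in Configure and Monitor Threshold-Crossing Events or the command line reference:
cumulus@switch:~$ netq add tca event_id TCA_CPU_UTILIZATION_UPPER scope leaf01 threshold 90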
ACL Resources
NetQ UI Name
NetQ CLI Event ID
Description
Ingress ACL IPv4 %
TCA_TCAM_IN_ACL_V4_FILTER_UPPER
Number of ingress ACL filters for IPv4 addresses on a given switch or host exceeded user-defined threshold
Egress ACL IPv4 %
TCA_TCAM_EG_ACL_V4_FILTER_UPPER
Number of egress ACL filters for IPv4 addresses on a given switch or host exceeded user-defined maximum threshold
Ingress ACL IPv4 mangle %
TCA_TCAM_IN_ACL_V4_MANGLE_UPPER
Number of ingress ACL mangles for IPv4 addresses on a given switch or host exceeded user-defined maximum threshold
Egress ACL IPv4 mangle %
TCA_TCAM_EG_ACL_V4_MANGLE_UPPER
Number of egress ACL mangles for IPv4 addresses on a given switch or host exceeded user-defined maximum threshold
Ingress ACL IPv6 %
TCA_TCAM_IN_ACL_V6_FILTER_UPPER
Number of ingress ACL filters for IPv6 addresses on a given switch or host exceeded user-defined maximum threshold
Egress ACL IPv6 %
TCA_TCAM_EG_ACL_V6_FILTER_UPPER
Number of egress ACL filters for IPv6 addresses on a given switch or host exceeded user-defined maximum threshold
Ingress ACL IPv6 mangle %
TCA_TCAM_IN_ACL_V6_MANGLE_UPPER
Number of ingress ACL mangles for IPv6 addresses on a given switch or host exceeded user-defined maximum threshold
Egress ACL IPv6 mangle %
TCA_TCAM_EG_ACL_V6_MANGLE_UPPER
Number of egress ACL mangles for IPv6 addresses on a given switch or host exceeded user-defined maximum threshold
Ingress ACL 8021x %
TCA_TCAM_IN_ACL_8021x_FILTER_UPPER
Number of ingress ACL 802.1 filters on a given switch or host exceeded user-defined maximum threshold
ACL L4 port %
TCA_TCAM_ACL_L4_PORT_CHECKERS_UPPER
Number of ACL port range checkers on a given switch or host exceeded user-defined maximum threshold
ACL regions %
TCA_TCAM_ACL_REGIONS_UPPER
Number of ACL regions on a given switch or host exceeded user-defined maximum threshold
Ingress ACL mirror %
TCA_TCAM_IN_ACL_MIRROR_UPPER
Number of ingress ACL mirrors on a given switch or host exceeded user-defined maximum threshold
ACL 18B rules %
TCA_TCAM_ACL_18B_RULES_UPPER
Number of ACL 18B rules on a given switch or host exceeded user-defined maximum threshold
ACL 32B %
TCA_TCAM_ACL_32B_RULES_UPPER
Number of ACL 32B rules on a given switch or host exceeded user-defined maximum threshold
ACL 54B %
TCA_TCAM_ACL_54B_RULES_UPPER
Number of ACL 54B rules on a given switch or host exceeded user-defined maximum threshold
Ingress PBR IPv4 %
TCA_TCAM_IN_PBR_V4_FILTER_UPPER
Number of ingress policy-based routing (PBR) filters for IPv4 addresses on a given switch or host exceeded user-defined maximum threshold
Ingress PBR IPv6 %
TCA_TCAM_IN_PBR_V6_FILTER_UPPER
Number of ingress policy-based routing (PBR) filters for IPv6 addresses on a given switch or host exceeded user-defined maximum threshold
BGP
NetQ UI Name
NetQ CLI Event ID
Description
BGP connection drop
TCA_BGP_CONN_DROP
Increase in drop count for a BGP session exceeding user-defined threshold
BGP packet queue length
TCA_BGP_PACKET_QUEUE_LENGTH
Packet queue length persistently non-zero for more than the threshold duration (in seconds)
Digital Optics
NetQ UI Name
NetQ CLI Event ID
Description
Laser Rx power alarm upper
TCA_DOM_RX_POWER_ALARM_UPPER
Transceiver Input power (mW) for the digital optical module on a given switch or host interface exceeded user-defined maximum alarm threshold
Laser Rx power alarm lower
TCA_DOM_RX_POWER_ALARM_LOWER
Transceiver Input power (mW) for the digital optical module on a given switch or host exceeded user-defined minimum alarm threshold
Laser Rx power warning upper
TCA_DOM_RX_POWER_WARNING_UPPER
Transceiver Input power (mW) for the digital optical module on a given switch or host exceeded user-defined maximum warning threshold
Laser Rx power warning lower
TCA_DOM_RX_POWER_WARNING_LOWER
Transceiver Input power (mW) for the digital optical module on a given switch or host exceeded user-defined minimum warning threshold
Laser bias current alarm upper
TCA_DOM_BIAS_CURRENT_ALARM_UPPER
Laser bias current (mA) for the digital optical module on a given switch or host exceeded user-defined maximum alarm threshold
Laser bias current alarm lower
TCA_DOM_BIAS_CURRENT_ALARM_LOWER
Laser bias current (mA) for the digital optical module on a given switch or host exceeded user-defined minimum alarm threshold
Laser bias current warning upper
TCA_DOM_BIAS_CURRENT_WARNING_UPPER
Laser bias current (mA) for the digital optical module on a given switch or host exceeded user-defined maximum warning threshold
Laser bias current warning lower
TCA_DOM_BIAS_CURRENT_WARNING_LOWER
Laser bias current (mA) for the digital optical module on a given switch or host exceeded user-defined minimum warning threshold
Laser output power alarm upper
TCA_DOM_OUTPUT_POWER_ALARM_UPPER
Laser output power (mW) for the digital optical module on a given switch or host exceeded user-defined maximum alarm threshold
Laser output power alarm lower
TCA_DOM_OUTPUT_POWER_ALARM_LOWER
Laser output power (mW) for the digital optical module on a given switch or host exceeded user-defined minimum alarm threshold
Laser output power warning upper
TCA_DOM_OUTPUT_POWER_WARNING_UPPER
Laser output power (mW) for the digital optical module on a given switch or host exceeded user-defined maximum warning threshold
Laser output power warning lower
TCA_DOM_OUTPUT_POWER_WARNING_LOWER
Laser output power (mW) for the digital optical module on a given switch or host exceeded user-defined minimum warning threshold
Laser module temperature alarm upper
TCA_DOM_MODULE_TEMPERATURE_ALARM_UPPER
Digital optical module temperature (°C) on a given switch or host exceeded user-defined maximum alarm threshold
Laser module temperature alarm lower
TCA_DOM_MODULE_TEMPERATURE_ALARM_LOWER
Digital optical module temperature (°C) on a given switch or host exceeded user-defined minimum alarm threshold
Laser module temperature warning upper
TCA_DOM_MODULE_TEMPERATURE_WARNING_UPPER
Digital optical module temperature (°C) on a given switch or host exceeded user-defined maximum warning threshold
Laser module temperature warning lower
TCA_DOM_MODULE_TEMPERATURE_WARNING_LOWER
Digital optical module temperature (°C) on a given switch or host exceeded user-defined minimum warning threshold
Laser module voltage alarm upper
TCA_DOM_MODULE_VOLTAGE_ALARM_UPPER
Transceiver voltage (V) on a given switch or host exceeded user-defined maximum alarm threshold
Laser module voltage alarm lower
TCA_DOM_MODULE_VOLTAGE_ALARM_LOWER
Transceiver voltage (V) on a given switch or host exceeded user-defined minimum alarm threshold
Laser module voltage warning upper
TCA_DOM_MODULE_VOLTAGE_WARNING_UPPER
Transceiver voltage (V) on a given switch or host exceeded user-defined maximum warning threshold
Laser module voltage warning lower
TCA_DOM_MODULE_VOLTAGE_WARNING_LOWER
Transceiver voltage (V) on a given switch or host exceeded user-defined minimum warning threshold
DPU RoCE
NetQ UI Name
NetQ CLI Event ID
Description
Implied nak seq error
TCA_HOSTD_IMPLIED_NAK_SEQ_ERR
Count of implied sequence errors exceeded user-defined maximum threshold
Out of buffer
TCA_HOSTD_OUT_OF_BUFFER
Count of out-of-buffer errors exceeded user-defined maximum threshold
Outbound PCI stalled read
TCA_HOSTD_OUTBOUND_PCI_STALLED_RD
Percentage of outbound stalled read requests exceeded user-defined maximum threshold
Outbound PCI stalled write
TCA_HOSTD_OUTBOUND_PCI_STALLED_WR
Percentage of outbound stalled write requests exceeded user-defined maximum threshold
Packet seq err
TCA_HOSTD_PACKET_SEQ_ERR
Count of packet sequence errors exceeded user-defined maximum threshold
Req CQE error
TCA_HOSTD_REQ_CQE_ERROR
Count of req completion queue events (CQE) errors exceeded user-defined maximum threshold
Req remote access errors
TCA_HOSTD_REQ_REMOTE_ACCESS_ERRORS
Count of remote access errors exceeded user-defined maximum threshold
Resp CQE error
TCA_HOSTD_RESP_CQE_ERROR
Count of response completion queue events (CQE) errors exceeded user-defined maximum threshold
Resp remote access errors
TCA_HOSTD_RESP_REMOTE_ACCESS_ERRORS
Count of response remote access errors exceeded user-defined maximum threshold
RNR nak retry error
TCA_HOSTD_RNR_NAK_RETRY_ERR
Count of RNR retry errors exceeded user-defined maximum threshold
Rx CRC errors phy
TCA_HOSTD_RX_CRC_ERRORS_PHY
Count of Rx CRC errors exceeded user-defined maximum threshold
Rx discards phy
TCA_HOSTD_RX_DISCARDS_PHY
Rate of Rx discards exceeded user-defined maximum threshold
Rx PCI signal integrity
TCA_HOSTD_RX_PCI_SIGNAL_INTEGRITY
Count of Rx PCIe signal integrity errors exceeded user-defined maximum threshold
Rx pcs symbol err phy
TCA_HOSTD_RX_PCS_SYMBOL_ERR_PHY
Count of Rx symbol errors exceeded user-defined maximum threshold
Rx prio0 buf discard
TCA_HOSTD_RX_PRIO0_BUF_DISCARD
Rate of p0 buffer discards exceeded user-defined maximum threshold
Rx prio0 cong discard
TCA_HOSTD_RX_PRIO0_CONG_DISCARD
Rate of p0 congestion discards exceeded user-defined maximum threshold
Rx prio1 buf discard
TCA_HOSTD_RX_PRIO1_BUF_DISCARD
Rate of p1 buffer discards exceeded user-defined maximum threshold
Rx prio1 cong discard
TCA_HOSTD_RX_PRIO1_CONG_DISCARD
Rate of p1 congestion discards exceeded user-defined maximum threshold
Rx prio2 buf discard
TCA_HOSTD_RX_PRIO2_BUF_DISCARD
Rate of p2 buffer discards exceeded user-defined maximum threshold
Rx prio2 cong discard
TCA_HOSTD_RX_PRIO2_CONG_DISCARD
Rate of p2 congestion discards exceeded user-defined maximum threshold
Rx prio3 buf discard
TCA_HOSTD_RX_PRIO3_BUF_DISCARD
Rate of p3 buffer discards exceeded user-defined maximum threshold
Rx prio3 cong discard
TCA_HOSTD_RX_PRIO3_CONG_DISCARD
Rate of p3 congestion discards exceeded user-defined maximum threshold
Rx prio4 buf discard
TCA_HOSTD_RX_PRIO4_BUF_DISCARD
Rate of p4 buffer discards exceeded user-defined maximum threshold
Rx prio4 cong discard
TCA_HOSTD_RX_PRIO4_CONG_DISCARD
Rate of p4 congestion discards exceeded user-defined maximum threshold
Rx prio5 buf discard
TCA_HOSTD_RX_PRIO5_BUF_DISCARD
Rate of p5 buffer discards exceeded user-defined maximum threshold
Rx prio5 cong discard
TCA_HOSTD_RX_PRIO5_CONG_DISCARD
Rate of p5 congestion discards exceeded user-defined maximum threshold
Rx prio6 buf discard
TCA_HOSTD_RX_PRIO6_BUF_DISCARD
Rate of p6 buffer discards exceeded user-defined maximum threshold
Rx prio6 cong discard
TCA_HOSTD_RX_PRIO6_CONG_DISCARD
Rate of p6 congestion discards exceeded user-defined maximum threshold
Rx prio7 buf discard
TCA_HOSTD_RX_PRIO7_BUF_DISCARD
Rate of p7 buffer discards exceeded user-defined maximum threshold
Rx prio7 cong discard
TCA_HOSTD_RX_PRIO7_CONG_DISCARD
Rate of p7 congestion discards exceeded user-defined maximum threshold
Rx symbol err phy
TCA_HOSTD_RX_SYMBOL_ERR_PHY
Count of Rx symbol errors (physical coding errors) exceeded user-defined maximum threshold
Tx discards phy
TCA_HOSTD_TX_DISCARDS_PHY
Rate of Tx discards exceeded user-defined maximum threshold
Tx errors phy
TCA_HOSTD_TX_ERRORS_PHY
Count of Tx errors exceeded user-defined maximum threshold
Tx pause storm error events
TCA_HOSTD_TX_PAUSE_STORM_ERROR_EVENTS
Count of pause error events exceeded user-defined maximum threshold
Tx pause storm warning events
TCA_HOSTD_TX_PAUSE_STORM_WARNING_EVENTS
Count of pause warning events exceeded user-defined maximum threshold
Tx PCI signal integrity
TCA_HOSTD_TX_PCI_SIGNAL_INTEGRITY
Count of Tx PCIe signal integrity errors exceeded user-defined maximum threshold
ECMP
NetQ UI Name
NetQ CLI Event ID
Description
ECMP imbalance
TCA_ECMP_IMBALANCE
ECMP path utilization imbalance greater than the threshold
Forwarding Resources
NetQ UI Name
NetQ CLI Event ID
Description
Total route entries %
TCA_TCAM_TOTAL_ROUTE_ENTRIES_UPPER
Number of routes on a given switch or host exceeded user-defined maximum threshold
Mcast routes %
TCA_TCAM_TOTAL_MCAST_ROUTES_UPPER
Number of multicast routes on a given switch or host exceeded user-defined maximum threshold
MAC entries %
TCA_TCAM_MAC_ENTRIES_UPPER
Number of MAC addresses on a given switch or host exceeded user-defined maximum threshold
IPv4 routes %
TCA_TCAM_IPV4_ROUTE_UPPER
Number of IPv4 routes on a given switch or host exceeded user-defined maximum threshold
IPv4 hosts %
TCA_TCAM_IPV4_HOST_UPPER
Number of IPv4 hosts on a given switch or host exceeded user-defined maximum threshold
IPv6 routes %
TCA_TCAM_IPV6_ROUTE_UPPER
Number of IPv6 routes on a given switch or host exceeded user-defined maximum threshold
IPv6 hosts %
TCA_TCAM_IPV6_HOST_UPPER
Number of IPv6 hosts on a given switch or host exceeded user-defined maximum threshold
ECMP next hop %
TCA_TCAM_ECMP_NEXTHOPS_UPPER
Number of equal cost multi-path (ECMP) next hop entries on a given switch or host exceeded user-defined maximum threshold
Interface Errors
NetQ UI Name
NetQ CLI Event ID
Description
Oversize errors
TCA_HW_IF_OVERSIZE_ERRORS
Number of times a frame longer than maximum size (1518 Bytes) exceeded user-defined threshold
Undersize errors
TCA_HW_IF_UNDERSIZE_ERRORS
Number of times a frame shorter than minimum size (64 Bytes) exceeded user-defined threshold
Alignment errors
TCA_HW_IF_ALIGNMENT_ERRORS
Number of times a frame with an uneven byte count and a CRC error exceeded user-defined threshold
Jabber errors
TCA_HW_IF_JABBER_ERRORS
Number of times a frame longer than maximum size (1518 bytes) and with a CRC error exceeded user-defined threshold
Symbol errors
TCA_HW_IF_SYMBOL_ERRORS
Number of times that detected undefined or invalid symbols exceeded user-defined threshold
Interface Statistics
NetQ UI Name
NetQ CLI Event ID
Description
Broadcast received bytes
TCA_RXBROADCAST_UPPER
Number of broadcast receive bytes per second exceeded user-defined maximum threshold on a switch interface
Received bytes
TCA_RXBYTES_UPPER
Number of receive bytes exceeded user-defined maximum threshold on a switch interface
Multicast received bytes
TCA_RXMULTICAST_UPPER
Rate of multicast receive traffic (rx_multicast) per second on a given switch or host exceeded user-defined maximum threshold
Broadcast transmitted bytes
TCA_TXBROADCAST_UPPER
Number of broadcast transmit bytes per second exceeded user-defined maximum threshold on a switch interface
Transmitted bytes
TCA_TXBYTES_UPPER
Number of transmit bytes exceeded user-defined maximum threshold on a switch interface
Multicast transmitted bytes
TCA_TXMULTICAST_UPPER
Number of multicast transmit bytes per second exceeded user-defined maximum threshold on a switch interface
Link Flaps
NetQ UI Name
NetQ CLI Event ID
Description
Link flap errors
TCA_LINK_FLAP_UPPER
Number of link flaps exceeded user-defined maximum threshold
Resource Utilization
NetQ UI Name
NetQ CLI Event ID
Description
Service memory utilization
TCA_SERVICE_MEMORY_UTILIZATION_UPPER
Percentage of service memory utilization exceeded user-defined maximum threshold on a switch
Disk utilization
TCA_DISK_UTILIZATION_UPPER
Percentage of disk utilization exceeded user-defined maximum threshold on a switch or host
CPU utilization
TCA_CPU_UTILIZATION_UPPER
Percentage of CPU utilization exceeded user-defined maximum threshold on a switch or host
Service CPU utilization
TCA_SERVICE_CPU_UTILIZATION_UPPER
Percentage of service CPU utilization exceeded user-defined maximum threshold on a switch
Memory utilization
TCA_MEMORY_UTILIZATION_UPPER
Percentage of memory utilization exceeded user-defined maximum threshold on a switch or host
RoCE
NetQ UI Name
NetQ CLI Event ID
Description
Rx CNP buffer usage
TCA_RX_CNP_BUFFER_USAGE_CELLS
Percentage of Rx General+CNP buffer usage exceeded user-defined maximum threshold on a switch interface
Rx CNP no buffer discard
TCA_RX_CNP_NO_BUFFER_DISCARD
Rate of Rx General+CNP no buffer discard exceeded user-defined maximum threshold on a switch interface
Rx CNP PG usage
TCA_RX_CNP_PG_USAGE_CELLS
Percentage of Rx General+CNP PG usage exceeded user-defined maximum threshold on a switch interface
Rx RoCE buffer usage
TCA_RX_ROCE_BUFFER_USAGE_CELLS
Percentage of Rx RoCE buffer usage exceeded user-defined maximum threshold on a switch interface
Rx RoCE no buffer discard
TCA_RX_ROCE_NO_BUFFER_DISCARD
Rate of Rx RoCE no buffer discard exceeded user-defined maximum threshold on a switch interface
Rx RoCE PG usage
TCA_RX_ROCE_PG_USAGE_CELLS
Percentage of Rx RoCE PG usage exceeded user-defined maximum threshold on a switch interface
Rx RoCE PFC pause duration
TCA_RX_ROCE_PFC_PAUSE_DURATION
Number of Rx RoCE PFC pause duration exceeded user-defined maximum threshold on a switch interface
Rx RoCE PFC pause packets
TCA_RX_ROCE_PFC_PAUSE_PACKETS
Rate of Rx RoCE PFC pause packets exceeded user-defined maximum threshold on a switch interface
Tx CNP buffer usage
TCA_TX_CNP_BUFFER_USAGE_CELLS
Percentage of Tx General+CNP buffer usage exceeded user-defined maximum threshold on a switch interface
Tx CNP TC usage
TCA_TX_CNP_TC_USAGE_CELLS
Percentage of Tx CNP TC usage exceeded user-defined maximum threshold on a switch interface
Tx CNP unicast no buffer discard
TCA_TX_CNP_UNICAST_NO_BUFFER_DISCARD
Rate of Tx CNP unicast no buffer discard exceeded user-defined maximum threshold on a switch interface
Tx ECN marked packets
TCA_TX_ECN_MARKED_PACKETS
Rate of Tx Port ECN marked packets exceeded user-defined maximum threshold on a switch interface
Tx RoCE buffer usage
TCA_TX_ROCE_BUFFER_USAGE_CELLS
Percentage of Tx RoCE buffer usage exceeded user-defined maximum threshold on a switch interface
Tx RoCE PFC pause duration
TCA_TX_ROCE_PFC_PAUSE_DURATION
Number of Tx RoCE PFC pause duration exceeded user-defined maximum threshold on a switch interface
Tx RoCE PFC pause packets
TCA_TX_ROCE_PFC_PAUSE_PACKETS
Rate of Tx RoCE PFC pause packets exceeded user-defined maximum threshold on a switch interface
Tx RoCE TC usage
TCA_TX_ROCE_TC_USAGE_CELLS
Percentage of Tx RoCE TC usage exceeded user-defined maximum threshold on a switch interface
Tx RoCE unicast no buffer discard
TCA_TX_ROCE_UNICAST_NO_BUFFER_DISCARD
Rate of Tx RoCE unicast no buffer discard exceeded user-defined maximum threshold on a switch interface
Sensors
NetQ UI Name
NetQ CLI Event ID
Description
Fan speed
TCA_SENSOR_FAN_UPPER
Fan speed exceeded user-defined maximum threshold on a switch
Power supply watts
TCA_SENSOR_POWER_UPPER
Power supply output exceeded user-defined maximum threshold on a switch
Power supply volts
TCA_SENSOR_VOLTAGE_UPPER
Power supply voltage exceeded user-defined maximum threshold on a switch
Switch temperature
TCA_SENSOR_TEMPERATURE_UPPER
Temperature (° C) exceeded user-defined maximum threshold on a switch
Sensor state
TCA_SENSOR_STATE
Sensor state changed from good to either bad or absent
What Just Happened
NetQ UI Name
NetQ CLI Event ID
Drop Type
Reason/Port Down Reason
Description
ACL drop aggregate upper
TCA_WJH_ACL_DROP_AGG_UPPER
ACL
Egress port ACL
ACL action set to deny on the physical egress port or bond
ACL drop aggregate upper
TCA_WJH_ACL_DROP_AGG_UPPER
ACL
Egress router ACL
ACL action set to deny on the egress switch virtual interfaces (SVIs)
ACL drop aggregate upper
TCA_WJH_ACL_DROP_AGG_UPPER
ACL
Ingress port ACL
ACL action set to deny on the physical ingress port or bond
ACL drop aggregate upper
TCA_WJH_ACL_DROP_AGG_UPPER
ACL
Ingress router ACL
ACL action set to deny on the ingress switch virtual interfaces (SVIs)
Buffer drop aggregate upper
TCA_WJH_BUFFER_DROP_AGG_UPPER
Buffer
Packet Latency Threshold Crossed
Time a packet spent within the switch exceeded or dropped below the specified high or low threshold
Buffer drop aggregate upper
TCA_WJH_BUFFER_DROP_AGG_UPPER
Buffer
Port TC Congestion Threshold Crossed
Percentage of the occupancy buffer exceeded or dropped below the specified high or low threshold
Buffer drop aggregate upper
TCA_WJH_BUFFER_DROP_AGG_UPPER
Buffer
Tail drop
Tail drop is enabled, and buffer queue is filled to maximum capacity
Buffer drop aggregate upper
TCA_WJH_BUFFER_DROP_AGG_UPPER
Buffer
WRED
Weighted Random Early Detection is enabled, and the buffer queue is filled to maximum capacity or the RED engine dropped the packet as part of random congestion prevention
CRC error upper
TCA_WJH_CRC_ERROR_UPPER
L1
Auto-negotiation failure
Negotiation of port speed with peer has failed
CRC error upper
TCA_WJH_CRC_ERROR_UPPER
L1
Bad signal integrity
Integrity of the signal on port is not sufficient for good communication
CRC error upper
TCA_WJH_CRC_ERROR_UPPER
L1
Cable/transceiver is not supported
The attached cable or transceiver is not supported by this port
CRC error upper
TCA_WJH_CRC_ERROR_UPPER
L1
Cable/transceiver is unplugged
A cable or transceiver is missing or not fully inserted into the port
CRC error upper
TCA_WJH_CRC_ERROR_UPPER
L1
Calibration failure
Calibration failure
CRC error upper
TCA_WJH_CRC_ERROR_UPPER
L1
Link training failure
Link cannot reach the operational up state due to link training failure
CRC error upper
TCA_WJH_CRC_ERROR_UPPER
L1
Peer is sending remote faults
Peer node is not operating correctly
CRC error upper
TCA_WJH_CRC_ERROR_UPPER
L1
Port admin down
Port has been purposely set down by user
Drop aggregate upper
TCA_WJH_DROP_AGG_UPPER
L2
Destination MAC is reserved (DMAC=01-80-C2-00-00-0x)
The address cannot be used by this link
Drop aggregate upper
TCA_WJH_DROP_AGG_UPPER
L2
Ingress spanning tree filter
Port is in Spanning Tree blocking state
Drop aggregate upper
TCA_WJH_DROP_AGG_UPPER
L2
Ingress VLAN filtering
Frames whose port is not a member of the VLAN are discarded
Drop aggregate upper
TCA_WJH_DROP_AGG_UPPER
L2
MLAG port isolation
Not supported for port isolation implemented with system ACL
Drop aggregate upper
TCA_WJH_DROP_AGG_UPPER
L2
Multicast egress port list is empty
No ports are defined for multicast egress
Drop aggregate upper
TCA_WJH_DROP_AGG_UPPER
L2
Port loopback filter
Port is operating in loopback mode; packets are being sent to itself (source MAC address is the same as the destination MAC address)
Drop aggregate upper
TCA_WJH_DROP_AGG_UPPER
L2
Unicast MAC table action discard
Currently not supported
Drop aggregate upper
TCA_WJH_DROP_AGG_UPPER
L2
VLAN tagging mismatch
VLAN tags on the source and destination do not match
Drop aggregate upper
TCA_WJH_DROP_AGG_UPPER
Router
Blackhole ARP/neighbor
Packet received with blackhole adjacency
Drop aggregate upper
TCA_WJH_DROP_AGG_UPPER
Router
Blackhole route
Packet received with action equal to discard
Drop aggregate upper
TCA_WJH_DROP_AGG_UPPER
Router
Checksum or IPver or IPv4 IHL too short
Cannot read packet due to header checksum error, IP version mismatch, or IPv4 header length is too short
Drop aggregate upper
TCA_WJH_DROP_AGG_UPPER
Router
Destination IP is loopback address
Cannot read packet as destination IP address is a loopback address (dip=>127.0.0.0/8)
Drop aggregate upper
TCA_WJH_DROP_AGG_UPPER
Router
Egress router interface is disabled
Packet destined to a different subnet cannot be routed because egress router interface is disabled
Drop aggregate upper
TCA_WJH_DROP_AGG_UPPER
Router
Ingress router interface is disabled
Packet destined to a different subnet cannot be routed because ingress router interface is disabled
Drop aggregate upper
TCA_WJH_DROP_AGG_UPPER
Router
IPv4 destination IP is link local
Packet has IPv4 destination address that is a local link (destination in 169.254.0.0/16)
Drop aggregate upper
TCA_WJH_DROP_AGG_UPPER
Router
IPv4 destination IP is local network (destination=0.0.0.0/8)
Packet has IPv4 destination address that is a local network (destination=0.0.0.0/8)
Drop aggregate upper
TCA_WJH_DROP_AGG_UPPER
Router
IPv4 routing table (LPM) unicast miss
No route available in routing table for packet
Drop aggregate upper
TCA_WJH_DROP_AGG_UPPER
Router
IPv4 source IP is limited broadcast
Packet has broadcast source IP address
Drop aggregate upper
TCA_WJH_DROP_AGG_UPPER
Router
IPv6 destination in multicast scope FFx0:/16
Packet received with multicast destination address in FFx0:/16 address range
Drop aggregate upper
TCA_WJH_DROP_AGG_UPPER
Router
IPv6 destination in multicast scope FFx1:/16
Packet received with multicast destination address in FFx1:/16 address range
Drop aggregate upper
TCA_WJH_DROP_AGG_UPPER
Router
IPv6 routing table (LPM) unicast miss
No route available in routing table for packet
Drop aggregate upper
TCA_WJH_DROP_AGG_UPPER
Router
Multicast MAC mismatch
For IPv4, destination MAC address is not equal to {0x01-00-5E-0 (25 bits), DIP[22:0]} and DIP is multicast. For IPv6, destination MAC address is not equal to {0x3333, DIP[31:0]} and DIP is multicast
Drop aggregate upper
TCA_WJH_DROP_AGG_UPPER
Router
Non IP packet
Cannot read packet header because it is not an IP packet
Drop aggregate upper
TCA_WJH_DROP_AGG_UPPER
Router
Non-routable packet
Packet has no route in routing table
Drop aggregate upper
TCA_WJH_DROP_AGG_UPPER
Router
Packet size is larger than router interface MTU
Packet has larger MTU configured than the VLAN
Drop aggregate upper
TCA_WJH_DROP_AGG_UPPER
Router
Router interface loopback
Packet has destination IP address that is local. For example, SIP = 1.1.1.1, DIP = 1.1.1.128.
Drop aggregate upper
TCA_WJH_DROP_AGG_UPPER
Router
Source IP equals destination IP
Packet has a source IP address equal to the destination IP address
Drop aggregate upper
TCA_WJH_DROP_AGG_UPPER
Router
Source IP is in class E
Cannot read packet as source IP address is a Class E address
Drop aggregate upper
TCA_WJH_DROP_AGG_UPPER
Router
Source IP is loopback address
Cannot read packet as source IP address is a loopback address (ipv4 => 127.0.0.0/8 for ipv6 => ::1/128)
Drop aggregate upper
TCA_WJH_DROP_AGG_UPPER
Router
Source IP is multicast
Cannot read packet as source IP address is a multicast address (ipv4 SIP => 224.0.0.0/4)
Drop aggregate upper
TCA_WJH_DROP_AGG_UPPER
Router
Source IP is unspecified
Cannot read packet as source IP address is unspecified (ipv4 = 0.0.0.0/32; for ipv6 = ::0)
Drop aggregate upper
TCA_WJH_DROP_AGG_UPPER
Router
TTL value is too small
Packet has TTL value of 1
Drop aggregate upper
TCA_WJH_DROP_AGG_UPPER
Router
Unicast destination IP but multicast destination MAC
Cannot read packet with IP unicast address when destination MAC address is not unicast (FF:FF:FF:FF:FF:FF)
Drop aggregate upper
TCA_WJH_DROP_AGG_UPPER
Router
Unresolved neighbor/next-hop
The next hop in the route is unknown
Drop aggregate upper
TCA_WJH_DROP_AGG_UPPER
Tunnel
Decapsulation error
Decapsulation produced incorrect format of packet. For example, encapsulation of packet with many VLANs or IP options on the underlay can cause decapsulation to result in a short packet.
Drop aggregate upper
TCA_WJH_DROP_AGG_UPPER
Tunnel
Overlay switch - Source MAC equals destination MAC
Overlay packet’s source MAC address is the same as the destination MAC address
Drop aggregate upper
TCA_WJH_DROP_AGG_UPPER
Tunnel
Overlay switch - Source MAC is multicast
Overlay packet’s source MAC address is multicast
Symbol error upper
TCA_WJH_SYMBOL_ERROR_UPPER
L1
Auto-negotiation failure
Negotiation of port speed with peer has failed
Symbol error upper
TCA_WJH_SYMBOL_ERROR_UPPER
L1
Bad signal integrity
Integrity of the signal on port is not sufficient for good communication
Symbol error upper
TCA_WJH_SYMBOL_ERROR_UPPER
L1
Cable/transceiver is not supported
The attached cable or transceiver is not supported by this port
Symbol error upper
TCA_WJH_SYMBOL_ERROR_UPPER
L1
Cable/transceiver is unplugged
A cable or transceiver is missing or not fully inserted into the port
Symbol error upper
TCA_WJH_SYMBOL_ERROR_UPPER
L1
Calibration failure
Calibration failure
Symbol error upper
TCA_WJH_SYMBOL_ERROR_UPPER
L1
Link training failure
Link is not able to go operational up due to link training failure
Symbol error upper
TCA_WJH_SYMBOL_ERROR_UPPER
L1
Peer is sending remote faults
Peer node is not operating correctly
Symbol error upper
TCA_WJH_SYMBOL_ERROR_UPPER
L1
Port admin down
Port has been purposely set down by user
WJH Events Reference
This reference lists all the NetQ-supported What Just Happened (WJH) metrics and provides a brief description of each. The full outputs vary slightly based on the type of drop and whether you are viewing the results in the NetQ UI or through one of the NetQ CLI commands.
Layer 1 Events
Reason | Description
Link training failure | Link is not able to go operational up due to link training failure
Peer is sending remote faults | Peer node is not operating correctly
Bad signal integrity | Integrity of the signal on port is not sufficient for good communication
Cable/transceiver is not supported | The attached cable or transceiver is not supported by this port
Cable/transceiver is unplugged | A cable or transceiver is missing or not fully inserted into the port
Calibration failure | Calibration failure
Port state changes counter | Cumulative number of state changes
Symbol error counter | Cumulative number of symbol errors
CRC error counter | Cumulative number of CRC errors
In addition to the reason, the information provided for these drops includes:
Parameter | Description
Corrective Action | Provides recommended actions to take to resolve the port down state
First Timestamp | Date and time this port was marked as down for the first time
Ingress Port | Port accepting incoming traffic
CRC Error Count | Number of CRC errors generated by this port
Symbol Error Count | Number of symbol errors generated by this port
State Change Count | Number of state changes that have occurred on this port
OPID | Operation identifier; used for internal purposes
Is Port Up | Indicates whether the port is in an Up (true) or Down (false) state
Layer 2 Drops
Describes why the switch dropped a layer 2 packet.
Reason | Severity | Description
MLAG port isolation | Notice | Not supported for port isolation implemented with system ACL
Destination MAC is reserved (DMAC=01-80-C2-00-00-0x) | Error | The address cannot be used by this link
VLAN tagging mismatch | Error | VLAN tags on the source and destination do not match
Ingress VLAN filtering | Error | Frames whose port is not a member of the VLAN are discarded
Ingress spanning tree filter | Notice | Port is in Spanning Tree blocking state
Unicast MAC table action discard | Notice | Packet dropped due to a MAC table configuration rule
Multicast egress port list is empty | Warning | No ports are defined for multicast egress
Port loopback filter | Error | Port is operating in loopback mode; packets are being sent to itself (source MAC address is the same as the destination MAC address)
Source MAC is multicast | Error | Packets have multicast source MAC address
Source MAC equals destination MAC | Error | Source MAC address is the same as the destination MAC address
In addition to the reason, the information provided for these drops includes:
Parameter | Description
Source Port | Port ID where the link originates
Source IP | Port IP address where the link originates
Source MAC | Port MAC address where the link originates
Destination Port | Port ID where the link terminates
Destination IP | Port IP address where the link terminates
Destination MAC | Port MAC address where the link terminates
First Timestamp | Date and time this link was marked as down for the first time
Aggregate Count | Total number of dropped packets
Protocol | ID of the communication protocol running on this link
Ingress Port | Port accepting incoming traffic
OPID | Operation identifier; used for internal purposes
Router Drops
Describes why the switch is unable to route a packet.
Reason | Severity | Description
Non-routable packet | Notice | Packet has no route in routing table
Blackhole route | Warning | Packet received with action equal to discard
Unresolved next hop | Warning | The next hop in the route is unknown
Blackhole ARP/neighbor | Warning | Packet received with blackhole adjacency
IPv6 destination in multicast scope FFx0:/16 | Notice | Packet received with multicast destination address in FFx0:/16 address range
IPv6 destination in multicast scope FFx1:/16 | Notice | Packet received with multicast destination address in FFx1:/16 address range
Non-IP packet | Notice | Cannot read packet header because it is not an IP packet
Unicast destination IP but non-unicast destination MAC | Error | Cannot read packet with IP unicast address when destination MAC address is not unicast (FF:FF:FF:FF:FF:FF)
Destination IP is loopback address | Error | Cannot read packet as destination IP address is a loopback address (dip=>127.0.0.0/8)
Source IP is multicast | Error | Cannot read packet as source IP address is a multicast address (ipv4 SIP => 224.0.0.0/4)
Source IP is in class E | Error | Cannot read packet as source IP address is a Class E address
Source IP is loopback address | Error | Cannot read packet as source IP address is a loopback address (ipv4 => 127.0.0.0/8 for ipv6 => ::1/128)
Source IP is unspecified | Error | Cannot read packet as source IP address is unspecified (ipv4 = 0.0.0.0/32; for ipv6 = ::0)
Checksum or IP ver or IPv4 IHL too short | Error | Cannot read packet due to header checksum error, IP version mismatch, or IPv4 header length is too short
Multicast MAC mismatch | Error | For IPv4, destination MAC address is not equal to {0x01-00-5E-0 (25 bits), DIP[22:0]} and DIP is multicast. For IPv6, destination MAC address is not equal to {0x3333, DIP[31:0]} and DIP is multicast
Source IP equals destination IP | Error | Packet has a source IP address equal to the destination IP address
IPv4 source IP is limited broadcast | Error | Packet has broadcast source IP address
IPv4 destination IP is local network (destination = 0.0.0.0/8) | Error | Packet has IPv4 destination address that is a local network (destination=0.0.0.0/8)
IPv4 destination IP is link-local (destination in 169.254.0.0/16) | Error | Packet has IPv4 destination address that is a local link
Ingress router interface is disabled | Warning | Packet destined to a different subnet cannot be routed because ingress router interface is disabled
Egress router interface is disabled | Warning | Packet destined to a different subnet cannot be routed because egress router interface is disabled
IPv4 routing table (LPM) unicast miss | Warning | No route available in routing table for packet
IPv6 routing table (LPM) unicast miss | Warning | No route available in routing table for packet
Router interface loopback | Warning | Packet has destination IP address that is local. For example, SIP = 1.1.1.1, DIP = 1.1.1.128.
Packet size is larger than MTU | Warning | Packet has larger MTU configured than the VLAN
TTL value is too small | Warning | Packet has TTL value of 1
Tunnel Drops
Describes why the switch dropped a tunnel packet.
Reason | Severity | Description
Overlay switch - source MAC is multicast | Error | Overlay packet’s source MAC address is multicast
Overlay switch - source MAC equals destination MAC | Error | Overlay packet’s source MAC address is the same as the destination MAC address
Decapsulation error | Error | Decapsulation produced incorrect format of packet. For example, encapsulation of packet with many VLANs or IP options on the underlay can cause decapsulation to result in a short packet.
Tunnel interface is disabled | Error | Packet cannot decapsulate because the tunnel interface is disabled
Buffer Drops
Describes why the switch buffer has dropped packets.
Reason | Severity | Description
Tail drop | Warning | Tail drop is enabled, and buffer queue is filled to maximum capacity
WRED | Warning | Weighted Random Early Detection is enabled, and buffer queue is filled to maximum capacity or the RED engine dropped the packet as part of random congestion prevention
Port TC Congestion Threshold Crossed | Warning | Percentage of the occupancy buffer exceeded or dropped below the specified high or low threshold
Packet Latency Threshold Crossed | Warning | Time a packet spent within the switch exceeded or dropped below the specified high or low threshold
ACL Drops
Describes why an ACL has dropped packets.
Reason | Severity | Description
Ingress port ACL | Notice | ACL action set to deny on the physical ingress port or bond
Ingress router ACL | Notice | ACL action set to deny on the ingress switch virtual interfaces (SVIs)
Egress port ACL | Notice | ACL action set to deny on the physical egress port or bond
Egress router ACL | Notice | ACL action set to deny on the egress SVIs
ECMP
Equal-cost multi-path (ECMP) is a routing strategy whereby packets are forwarded along multiple paths of equal cost. Load sharing occurs automatically for IPv4 and IPv6 routes with multiple installed next hops. The hardware or the routing protocol configuration determines the maximum number of routes for which load sharing occurs.
ECMP monitoring is supported on NVIDIA Spectrum switches running Cumulus Linux.
ECMP Commands
Monitor ECMP routing data with the following commands. See the command line reference for additional options, definitions, and examples.
netq show ecmp
netq show ecmp-hash-config
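As with other netq show commands in this guide, you can scope the output to a single device by prefixing the command with a hostname. A minimal sketch, assuming a placeholder switch named spine01 (output omitted here):
cumulus@switch:~$ netq spine01 show ecmp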
View ECMP Resource Utilization in the UI
You can view resource utilization for ECMP next hops in the full-screen switch card. Search for the device’s hostname in the global search field or from the header select Add card > Device card. Select a switch from the list. When the card opens on the dashboard, expand it to the largest size.
Select Forwarding resources from the side menu. The ECMP next hops column displays the maximum number of hops seen in the forwarding table, the number used, and the percentage of this usage compared to the maximum number.
Adaptive Routing
Adaptive routing is a load balancing feature that improves network utilization for eligible IP packets by selecting forwarding paths dynamically based on the state of the switch, such as queue occupancy and port utilization. You can use the adaptive routing dashboard to view switches with adaptive routing capabilities, events related to adaptive routing, RoCE settings, and egress queue lengths in the form of histograms.
Requirements
Adaptive routing monitoring is supported on Spectrum-4 switches. It requires a switch fabric running Cumulus Linux 5.5.0 or later.
To display adaptive routing data, you must configure adaptive routing on the switch; it can be either enabled or disabled. Switches without an adaptive routing configuration will not appear in the UI or CLI.
RoCE lossless mode must be enabled to display adaptive routing data. Switches with RoCE lossy mode enabled will appear in the UI and CLI, but will not display adaptive routing data.
To view a switch’s histogram data and adaptive routing imbalance events, you must enable ASIC monitoring on the switch. If you stop the asic-monitor service, NetQ will report values of 0 for all histogram metrics (P95, standard deviation, mean, and maximum queue lengths).
Adaptive Routing Commands
Monitor adaptive routing with the netq show adaptive-routing config commands. The output of these commands displays adaptive routing information either globally on the switch or at the interface level.
netq show adaptive-routing config global
netq show adaptive-routing config interface
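A minimal sketch of running both commands from the CLI (output omitted here); as with other netq show commands, you can prefix either command with a hostname to scope the output to a single switch:
# Global adaptive routing configuration
cumulus@switch:~$ netq show adaptive-routing config global
# Per-interface adaptive routing configuration
cumulus@switch:~$ netq show adaptive-routing config interface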
Access the Adaptive Routing Dashboard
From the header or Menu, select Spectrum-X, then Adaptive routing.
The adaptive routing dashboard displays:
Devices with adaptive routing configured (enabled or disabled) and their RoCE modes (lossy or lossless).
A list of interfaces on the switch and their configurations.
In the Interfaces column, select View details to view interfaces with adaptive routing configured:
The Events tab displays a summary of adaptive routing events, including ECMP traffic imbalances. The table displays up to 10 switches, which can be sorted by highest P95 value, highest standard deviation, or ports with the widest deviation from the P95 value (aggregated over the past 3 minutes). From this panel, you can select View more in the View histogram column to display queue lengths in the form of histograms for any listed switch.
EVPN
Use the UI or CLI to monitor Ethernet VPN (EVPN) on a networkwide or per-session basis.
EVPN Commands
Monitor EVPN with the following commands. See the command line reference for additional options, definitions, and examples.
netq show evpn
netq show events message_type evpn
netq show events-config message_type evpn
The netq check evpn command verifies the communication status for all nodes (leafs, spines, and hosts) running instances of EVPN in your network fabric:
netq check evpn
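As with other netq check commands, you can list the individual tests that make up the validation and then restrict the run with the include or exclude options described under Validate Network Protocol and Service Operations. A minimal sketch; the test number is illustrative, so confirm it against the unit-test listing on your system (output omitted here):
# List the tests that make up the EVPN validation
cumulus@switch:~$ netq show unit-tests evpn
# Run only the first test from that listing
cumulus@switch:~$ netq check evpn include 0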
View EVPN in the UI
To add the EVPN card to your workbench, navigate to the header and select Add card > Other card > Network services > All EVPN Sessions card > Open cards. In this example, there are 33 nodes running the EVPN service, 0 events (from the last 24 hours), and 23,070 VNIs.
View the Distribution of Layer-2 and -3 VNIs and Sessions
To view the number of sessions between devices and Virtual Network Identifiers (VNIs) that occur over layer 3, open the large EVPN Sessions card. In this example, there are 18 layer-3 VNIs.
Select the dropdown to display the switches with the most EVPN sessions, as well as the switches with the most layer-2 and layer-3 EVPN sessions.
You can view EVPN-related events by selecting the Events tab.
Expand the EVPN card to full-screen to view, filter, or export:
A list of switches and their associated VNIs
The address of the VNI endpoint
Whether the session is part of a layer 2 or layer 3 configuration
The associated VRF or VLAN (when defined)
The export and import route targets used for filtering
From this table, you can select a row, then click Add card above the table.
NetQ adds a new EVPN single-session card to your workbench. From this card, you can view the number of VTEPs (VXLAN Tunnel Endpoints) for a given EVPN session as well as the attributes of all EVPN sessions for a given VNI.
Monitor a Single EVPN Session
The EVPN single-session card displays the number of VTEPs for a given EVPN session (in this case, 48).
Expand the card to display the associated VRF (layer 3) or VLAN (layer 2) on each device participating in this session. The full-screen card displays all stored attributes of all EVPN sessions running networkwide.
The NetQ Agent monitors the following on Linux hosts:
netlink
Layer 2: LLDP and VLAN-aware bridge
Layer 3: IPv4, IPv6
systemctl for services
Using NetQ on a Linux host is the same as using it on a Cumulus Linux switch. For example, if you want to check LLDP neighbor information for a given host, run netq show lldp and specify the hostname:
cumulus@host:~$ netq server01 show lldp
Matching lldp records:
Hostname Interface Peer Hostname Peer Interface Last Changed
----------------- ------------------------- ----------------- ------------------------- -------------------------
server01 eth0 oob-mgmt-switch swp2 Mon Dec 2 21:22:55 2024
server01 eth1 leaf01 swp1 Mon Dec 2 21:22:55 2024
server01 eth2 leaf02 swp1 Mon Dec 2 21:22:55 2024
Then, to see LLDP from the switch perspective run the same command, specifying the hostname of the switch:
cumulus@switch:~$ netq leaf01 show lldp
Matching lldp records:
Hostname Interface Peer Hostname Peer Interface Last Changed
----------------- ------------------------- ----------------- ------------------------- -------------------------
leaf01 eth0 oob-mgmt-switch swp10 Mon Dec 2 20:54:07 2024
leaf01 swp1 server01 mac:48:b0:2d:22:00:2d Mon Dec 2 20:54:07 2024
leaf01 swp2 server02 mac:48:b0:2d:46:72:25 Mon Dec 2 20:54:07 2024
leaf01 swp3 server03 mac:48:b0:2d:48:f3:9e Mon Dec 2 20:54:07 2024
leaf01 swp49 leaf02 swp49 Mon Dec 2 20:54:07 2024
leaf01 swp50 leaf02 swp50 Mon Dec 2 20:54:07 2024
leaf01 swp52 spine02 swp1 Mon Dec 2 20:54:07 2024
leaf01 swp53 spine03 swp1 Mon Dec 2 20:54:07 2024
leaf01 swp54 spine04 swp1 Mon Dec 2 20:54:07 2024
To view a table of interfaces along with types, states, and basic details, select the Menu, then enter Interfaces in the search field.
Compare Link Interfaces
Link health view is a beta feature. It is not available in large-scale environments.
To troubleshoot link issues, expand the Menu, then select Link health view. From this dashboard you can compare links according to different parameters, such as link utilization, link flaps, transmit and receive counters, drops and errors, and other counter data. NetQ displays link data in the table ordered from highest to lowest egress queue length and the data is updated in real time.
To compare links, select up to 20 links from the dashboard, then click Compare selected above the table. The comparison charts update to reflect the data from the links you selected. Toggle the Show top 5 switch on or off to view the top five and bottom five devices and their respective links according to the parameters you selected. In each of the charts, the x-axis represents time in hours, according to a 24-hour clock and the y-axis represents a count of the parameter you selected. The yellow line displays the average values for the selected links.
You can click each of the comparison charts anywhere along the x-axis to open a tabular view of the data that is displayed in the chart during the hour you selected.
You can bookmark the link health view in your browser. When you navigate back to the bookmarked page or share the link, your settings will be preserved.
Interface Commands
NetQ uses LLDP (Link Layer Discovery Protocol) to collect port information. NetQ can also identify peer ports connected to DACs (Direct Attached Cables) and AOCs (Active Optical Cables) without using LLDP, even if the link is not UP. To monitor OSI Layer 1 physical components on network devices, use the netq show interfaces physical command.
The netq check interfaces command verifies interface communication status for all nodes (leafs, spines, and hosts) or an interface between specific nodes in your network fabric. This command only checks the physical interfaces; it does not check bridges, bonds, or other software constructs.
netq check interfaces
You can view link and interface statistics with the following commands:
View statistics about a given node and interface, including frame errors, ACL drops, and buffer drops with netq show ethtool-stats
View how many compute resources—CPU, disk, and memory—the switches on your network consume with netq show resource-util
Check for MTU Inconsistencies
The maximum transmission unit (MTU) determines the largest size packet or frame that can be transmitted across a given communication link. When the MTU is not configured to the same value on both ends of the link, communication problems can occur. Use the netq check mtu command to verify that the MTU is correctly specified for each link.
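A minimal invocation of the check (output omitted here); it verifies MTU consistency across the links that NetQ is monitoring:
cumulus@switch:~$ netq check mtu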
IP Addresses
Use the UI or CLI to monitor Internet Protocol (IP) addresses, neighbors, and routes.
This information can help you:
Determine the IP neighbors for each switch.
Calculate the total number of IPv4 and IPv6 addresses and their corresponding interfaces.
Identify which routes are owned by which switches.
Pinpoint when changes occurred to an IP configuration.
IP Address Commands
Monitor IP addresses, neighbors, and routes with the following commands. See the command line reference for additional options, definitions, and examples.
netq show ip addresses
netq show ipv6 addresses
netq show ip neighbors
netq show ipv6 neighbors
netq show ip routes
netq show ipv6 routes
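A minimal sketch showing the hostname-scoping pattern used elsewhere in this guide; leaf01 is a placeholder hostname and output is omitted here:
cumulus@switch:~$ netq leaf01 show ip addresses
cumulus@switch:~$ netq leaf01 show ipv6 routes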
The netq show address-history command displays when an IP address configuration changed for an interface.
The netq show neighbor-history command displays when the neighbor configuration changed for an IP address.
The netq check addresses command searches for duplicate IPv4 and IPv6 addresses assigned to interfaces across devices in the inventory, and checks for duplicate /32 host routes in each VRF.
View IP Addresses in the UI
IPv4 and IPv6 address, neighbor, and route information is available in the NetQ UI.
To access this information, select the Menu, then enter IP addresses, IP neighbors, or IP routes in the search field. The following image displays a list of IP addresses:
Validate Network Protocol and Service Operations
In addition to the hourly validation checks that run by default, NetQ lets you validate the operation of the protocols and services running in your network either on-demand or according to a schedule. Both types can be customized to include or exclude particular tests or devices.
On-demand validations allow you to validate the operation of one or more network protocols and services right now.
Scheduled validations allow you to run validations according to a schedule. You can create and schedule up to 15 custom validation checks. The hourly, default validation checks do not count towards this limit.
Create a Validation
Using the NetQ UI, you can create on-demand or scheduled validations for multiple protocols or services at the same time. This is useful when the protocols are closely related with respect to a possible issue, or when you want to create only one validation request.
To create a validation in the UI:
In the workbench header, select Validation, then Create a validation. Choose whether the validation should run on all devices or on a group of devices.
Select the protocols or services you want to include as part of the validation. All tests that comprise the validation are included by default, but you can select an individual test to exclude it from the validation check. Hover over an individual test and select Customize to configure filters which can exclude individual devices or failure reasons from the validation. Then click Next.
Select the time and frequency parameters and specify the workbench where the validation results should appear. Then select Run or Schedule.
If you chose to run the validation now, the results are displayed on the workbench you specified in the previous step. If you selected more than one protocol or service, a card opens for each selection. To view additional information about the errors reported, hover over a check and click View details. To view all data for all on-demand validation results for a given protocol, click Show all results.
If you scheduled the validation to run later, NetQ will display a dashboard containing all existing validation checks, including the one you just created. If you want to run a validation you scheduled for later right now, in the header select Validation, then Existing validations. Select one or more validations, then click View results. The associated Validation Result cards open on your workbench.
To view the list of tests for a given protocol or service, use the netq show unit-tests command.
To run on-demand validations, use the netq check commands.
You can include or exclude one or more of the various tests performed during the validation by specifying the test numbers. The check command’s <protocol-number-range-list> value is used with the include and exclude options to indicate which tests to include or exclude. It is a comma-separated list of numbers, a range using a dash, or a combination of both. Do not use spaces after commas. For example:
include 1,3,5
include 1-5
include 1,3-5
exclude 6,7
exclude 6-7
exclude 3,4-7,9
You can create filters that suppress validation failures based on hostnames, failure reason, and other parameters with the netq add check filter command.
The following example displays a list of all the checks included in a BGP validation, along with their respective test numbers and filters, if any:
cumulus@switch:~$ netq show unit-tests bgp
0 : Session Establishment - check if BGP session is in established state
1 : Address Families - check if tx and rx address family advertisement is consistent between peers of a BGP session
2 : Router ID - check for BGP router id conflict in the network
3 : Hold Time - check for mismatch of hold time between peers of a BGP session
4 : Keep Alive Interval - check for mismatch of keep alive interval between peers of a BGP session
5 : Ipv4 Stale Path Time - check for mismatch of ipv4 stale path timer between peers of a BGP session
6 : Ipv6 Stale Path Time - check for mismatch of ipv6 stale path timer between peers of a BGP session
7 : Interface MTU - check for consistency of Interface MTU for BGP peers
Configured global result filters:
Configured per test result filters:
The following BGP validation includes only the session establishment (test number 0) and router ID (test number 2) tests. Note that you can obtain the same results using either of the include or exclude options and that the tests that are not run are marked skipped.
cumulus@switch:~$ netq check bgp include 0,2
bgp check result summary:
Total nodes : 13
Checked nodes : 0
Failed nodes : 0
Rotten nodes : 13
Warning nodes : 0
Skipped nodes : 0
Additional summary:
Failed Sessions : 0
Total Sessions : 0
Session Establishment Test : passed
Address Families Test : skipped
Router ID Test : passed
Hold Time Test : skipped
Keep Alive Interval Test : skipped
Ipv4 Stale Path Time Test : skipped
Ipv6 Stale Path Time Test : skipped
Interface MTU Test : skipped
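Because the BGP validation above comprises tests 0 through 7, you can obtain the same result with the exclude option. The following command is the exclude-based equivalent of the previous example (output omitted here; it should match the output above):
cumulus@switch:~$ netq check bgp exclude 1,3-7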
To create a request containing checks on a single protocol or service, run the netq add validation type command. Validations created with this command also produce results that you can access in the UI.
The following example creates a BGP validation that runs every 15 minutes:
cumulus@switch:~$ netq add validation name Bgp15m type bgp interval 15m
Successfully added Bgp15m running every 15m
Re-run this command for each additional scheduled validation.
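For example, the following command schedules an EVPN validation to run every 30 minutes; the name Evpn30m is arbitrary and other protocol types can be substituted. The confirmation message should resemble the one shown for the BGP example above.
cumulus@switch:~$ netq add validation name Evpn30m type evpn interval 30m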
Manage Validations
To view a dashboard of all validations that run according to a schedule, in the header select Validation, then click Scheduled validations.
Edit or Delete a Scheduled Validation
You can edit or delete any scheduled validation that you created. Editing a scheduled validation creates a new validation request, and NetQ applies the (old) label to the name of the original validation. The old validation can no longer be edited. Default validations cannot be edited or deleted, but they can be disabled.
To edit or delete a scheduled validation:
Click Validation, then click Scheduled validations.
Hover over the validation then click Edit or Delete.
If editing, select which checks to add or remove from the validation request, then click Update.
Change the schedule for the validation, then click Update.
You can run the modified validation immediately or wait for it to run according to the schedule you specified.
Determine the name of the scheduled validation you want to remove with the following command:
netq show validation summary
[name <text-validation-name>]
type (addr | agents | bgp | evpn | interfaces | mlag | mtu | ntp | roce | sensors | vlan | vxlan)
[around <text-time-hr>]
[json]
This example shows all scheduled validations for BGP:
cumulus@switch:~$ netq show validation summary type bgp
Name Type Job ID Checked Nodes Failed Nodes Total Nodes Timestamp
--------------- ---------------- ------------ -------------------------- ------------------------ ---------------------- -------------------------
Bgp30m scheduled 4c78cdf3-24a 0 0 0 Thu Nov 12 20:38:20 2020
6-4ecb-a39d-
0c2ec265505f
Bgp15m scheduled 2e891464-637 10 0 10 Thu Nov 12 20:28:58 2020
a-4e89-a692-
3bf5f7c8fd2a
Bgp30m scheduled 4c78cdf3-24a 0 0 0 Thu Nov 12 20:24:14 2020
6-4ecb-a39d-
0c2ec265505f
Bgp30m scheduled 4c78cdf3-24a 0 0 0 Thu Nov 12 20:15:20 2020
6-4ecb-a39d-
0c2ec265505f
Bgp15m scheduled 2e891464-637 10 0 10 Thu Nov 12 20:13:57 2020
a-4e89-a692-
3bf5f7c8fd2a
...
To remove the validation, run:
netq del validation <text-validation-name>
This example removes the scheduled validation named Bgp15m.
cumulus@switch:~$ netq del validation Bgp15m
Successfully deleted validation Bgp15m
Repeat these steps to remove additional scheduled validations.
Topology Validations
The topology validation compares your actual network topology derived from LLDP telemetry data against a topology blueprint (in Graphviz DOT format) that you upload to the UI.
Configure LLDP
You must configure the LLDP service on switches and hosts that are defined in the topology blueprint to send the port ID subtype that matches the connection defined in the topology DOT file. The lldpd service allows you to configure the port ID by specifying either the interface name (ifname) or MAC address (macaddress) using the configure lldp portidsubtype [ifname | macaddress] command.
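A minimal sketch of setting the port ID subtype with lldpcli (the client shipped with the lldpd package) so that the interface name is advertised; run it on each host or switch referenced in the blueprint:
cumulus@host:~$ sudo lldpcli configure lldp portidsubtype ifname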
For example, if your host is configured to send the interface name in the LLDP port ID field, define the interface name in the topology DOT file:
"switch1":"swp1" -- "host5":"eth1"
DOT file example
Each line in the DOT file should depict the network’s physical cabling:
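A minimal sketch of a blueprint, assuming placeholder device and port names (spine01, leaf01, server01, and so on); each edge mirrors one physical cable. Confirm the exact wrapper and naming against a blueprint file downloaded from the UI:
graph "network-topology" {
    "spine01":"swp1" -- "leaf01":"swp51"
    "spine01":"swp2" -- "leaf02":"swp51"
    "leaf01":"swp1" -- "server01":"eth1"
}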
You can use the lldpctl command to validate the current port ID received from a connected device.
If you do not configure all of your network devices in the topology blueprint, the total number of devices accounted for in the validation results card might include additional devices that NetQ has received LLDP data from.
If you download a topology blueprint file from the UI and edit its parameters, give the file a different name before re-uploading it.
Create a Topology Validation
In the workbench header, select Validation, then Create a validation.
Select Topology and upload the topology blueprint file. The name of the blueprint file NetQ will use to validate the topology is displayed on the screen. To use a different file, upload it to the UI, then select Manage blueprint file. Select Activate to indicate the blueprint file you’d like NetQ to use.
Upon completion, the dashboard displays the devices that failed the topology validation, along with a table listing cabling issues. NetQ only displays the network links that were defined in the topology blueprint.
Network devices use the Link Layer Discovery Protocol (LLDP) to advertise their identity, capabilities, and neighbors on a LAN. You can view this information for one or more devices. You can also view the information at an earlier point in time or view changes that have occurred to the information during a specified time period. For an overview and how to configure LLDP in your network, refer to Link Layer Discovery Protocol.
LLDP Commands
Monitor LLDP with the following commands. See the command line reference for additional options, definitions, and examples.
netq show lldp
netq show events message_type lldp
View LLDP in the UI
To add the LLDP card to your workbench, navigate to the header and select Add card > Other card > Network services > All LLDP Sessions card > Open cards. In this example, there are 25 nodes running the LLDP protocol, 184 established sessions, and no LLDP-related events from the past 24 hours:
Expand to the large card for additional LLDP information. This view displays the number of missing neighbors and how that number has changed over time. This is a good indicator of link communication issues. This info is displayed in the bottom chart, under Total sessions with no NBR. The right half of the card displays the switches handling the most LLDP traffic. Select the dropdown to view switches with unestablished LLDP sessions.
Expand the LLDP card to full-screen to view, filter, or export:
A list of all switches running LLDP
Peer information and attributes
From this table, you can select a row, then click Add card above the table.
NetQ adds a new LLDP single-session card to your workbench.
Monitor a Single LLDP Session
From the LLDP single-session card, you can view the number of nodes running the LLDP service, view neighbor state changes, and monitor the running LLDP configuration and any changes to the configuration file. This view is helpful for determining the stability of the LLDP session between two devices.
Understanding the Heat Map
On the medium and large single-session cards, vertically stacked heat maps represent the status of the neighboring peers: one for peers that are reachable (neighbor detected) and one for peers that are unreachable (neighbor not detected). Depending on the time period of data on the card, the number of smaller time blocks used to indicate the status varies. A vertical stack of time blocks, one from each map, includes the results from all checks during that time. The color saturation of each block indicates the results: if LLDP detected all peers during that time period for the entire time block, the neighbor-detected block is 100% saturated (white) and the neighbor-not-detected block is 0% saturated (gray). As more peers become reachable, the neighbor-detected block increases in saturation and the neighbor-not-detected block decreases in saturation proportionally. The following table lists the most common time periods, their corresponding number of blocks, and the amount of time represented by one block:
Time Period | Number of Runs | Number of Time Blocks | Amount of Time in Each Block
6 hours | 18 | 6 | 1 hour
12 hours | 36 | 12 | 1 hour
24 hours | 72 | 24 | 1 hour
1 week | 504 | 7 | 1 day
1 month | 2,086 | 30 | 1 day
1 quarter | 7,000 | 13 | 1 week
View Changes to the LLDP Service Configuration File
Each time a change is made to the configuration file for the LLDP service, NetQ logs the change and lets you compare it with the last version using the NetQ UI. This can be useful when you are troubleshooting potential causes for alarms or sessions losing their connections.
From the large single-session card, select the Configuration file evolution tab.
Select the time.
Choose between the File view and the Diff view.
The File view displays the content of the file:
The Diff view highlights the changes (if any) between this version (on left) and the most recent version (on right) side by side:
MAC Addresses
A MAC (media access control) address is a layer 2 construct that uses 48 bits to uniquely identify a network interface controller (NIC) for communication within a network.
With NetQ, you can:
View MAC addresses across the network and for a given device, VLAN, egress port on a VLAN, and VRR
View a count of MAC addresses on a given device
View where MAC addresses have lived in the network (MAC history)
View commentary on changes to MAC addresses (MAC commentary)
View events related to MAC addresses
MAC addresses are associated with switch interfaces. They are classified as:
Origin: MAC address is owned by a particular switch, on one or more interfaces. A MAC address typically has only one origin node. The exceptions are when MLAG is configured, in which case the MAC on the VRR interfaces for the MLAG pair is the same, and when EVPN is configured, in which case the MAC is distributed across the layer 3 gateways.
Remote: MAC address is learned or distributed by the control plane on a tunnel interface pointing to a particular remote location. For a given MAC address and VLAN there is only one first-hop switch (or switch pair), but multiple nodes can have the same remote MAC address.
Local (not origin and not remote): MAC address is learned on a bridge and points to an interface on another switch. If the LLDP neighbor of the interface is a host, then this switch is the first-hop switch where the MAC address is learned. For a given MAC address and VLAN there is only one first-hop switch, except if the switches are part of an MLAG pair, and the interfaces on both switches form a dually or singly connected bond.
MAC Commands
Monitor MAC addresses with the following commands. Refer to the command line reference for additional options, definitions, and examples.
To view all MAC addresses across the network, use the netq show macs command.
To view MAC addresses associated with a VLAN, use the netq show macs vlan command.
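A minimal sketch of both commands (output omitted here); the VLAN ID 100 is a placeholder:
cumulus@switch:~$ netq show macs
cumulus@switch:~$ netq show macs vlan 100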
The NetQ UI provides a listing of current MAC addresses that you can filter by hostname, timestamp, MAC address, VLAN, or origin. You can also sort the list by these parameters, as well as by the remote, static, and next-hop attributes. To view MAC addresses, open the Menu and search for MACs:
From this screen, select Filters to filter the results by hostname, timestamp, MAC address, VLAN, or origin.
View MAC Address Commentary
You can get more descriptive information about changes to a given MAC address on a specific VLAN. Commentary is available for the following MAC address-related events based on their classification (refer to the definition of these at the beginning of this topic):
Event Triggers | Example Commentary
A MAC address is created, or the MAC address on the interface is changed via the hwaddress option in /etc/network/interfaces | leaf01 00:00:5e:00:00:03 configured on interface vlan1000-v0
An interface becomes a slave in, or is removed from, a bond | leaf01 00:00:5e:00:00:03 configured on interface vlan1000-v0
An interface is a bridge and it inherits a different MAC address due to a membership change | leaf01 00:00:5e:00:00:03 configured on interface vlan1000-v0
A remote MAC address is learned or installed by control plane on a tunnel interface | 44:38:39:00:00:5d learned/installed on vni vni10 pointing to remote dest 10.0.1.34
A remote MAC address is flushed or expires | leaf01 44:38:39:00:00:5d is flushed or expired
A remote MAC address moves from behind one remote switch to another remote switch or becomes a local MAC address | leaf02: 00:08:00:00:aa:13 moved from remote dest 27.0.0.22 to remote dest 27.0.0.34; 00:08:00:00:aa:13 moved from remote dest 27.0.0.22 to local interface hostbond2
A MAC address is learned at the first-hop switch (or MLAG switch pair) | leaf04 (and MLAG peer leaf05): 44:38:39:00:00:5d learned on first hop switch, pointing to local interface bond4
A local MAC address is flushed or expires | leaf04 (and MLAG peer leaf05) 44:38:39:00:00:5d is flushed or expires from bond4
A local MAC address moves from one interface to another interface or to another switch | leaf04: 00:08:00:00:aa:13 moved from hostbond2 to hostbond3; 00:08:00:00:aa:13 moved from hostbond2 to remote dest 27.0.0.13
To view MAC address commentary:
Select the Menu.
Search for MACs.
Select the checkbox next to one of the entries, then select Open card above the table.
Choose a time range, then click Continue.
You can scroll through the list to see comments related to the MAC address moves and changes:
(Optional) From here, you can filter the list by a given device by selecting Filters.
A red dot on the filter icon indicates that filtering is active. To remove the filter, click again, then click Clear Filter.
To see MAC address commentary, use the netq show mac-commentary command. The following examples show the commentary seen in common situations.
MAC Address Configured Locally
In this example, the 46:38:39:00:00:44 MAC address was configured on the VlanA-1 interface of multiple switches, so we see the MAC configured commentary on each of them.
cumulus@server-01:~$ netq show mac-commentary 46:38:39:00:00:44 between now and 1hr
Matching mac_commentary records:
Last Updated Hostname VLAN Commentary
------------------------- ---------------- ------ --------------------------------------------------------------------------------
Mon Aug 24 2020 14:14:33 leaf11 100 leaf11: 46:38:39:00:00:44 configured on interface VlanA-1
Mon Aug 24 2020 14:15:03 leaf12 100 leaf12: 46:38:39:00:00:44 configured on interface VlanA-1
Mon Aug 24 2020 14:15:19 leaf21 100 leaf21: 46:38:39:00:00:44 configured on interface VlanA-1
Mon Aug 24 2020 14:15:40 leaf22 100 leaf22: 46:38:39:00:00:44 configured on interface VlanA-1
Mon Aug 24 2020 14:15:19 leaf21 1003 leaf21: 46:38:39:00:00:44 configured on interface VlanA-1
Mon Aug 24 2020 14:15:40 leaf22 1003 leaf22: 46:38:39:00:00:44 configured on interface VlanA-1
Mon Aug 24 2020 14:16:32 leaf02 1003 leaf02: 00:00:5e:00:01:01 configured on interface VlanA-1
MAC Address Configured on Server and Learned from a Peer
In this example, the 00:08:00:00:aa:13 MAC address was configured on server01. As a result, both leaf11 and leaf12 learned this address on the next hop interface serv01bond2 (learned locally), whereas the leaf01 switch learned this address remotely on vx-34 (learned remotely).
cumulus@server11:~$ netq show mac-commentary 00:08:00:00:aa:13 vlan 1000 between now and 5hr
Matching mac_commentary records:
Last Updated Hostname VLAN Commentary
------------------------- ---------------- ------ --------------------------------------------------------------------------------
Tue Aug 25 2020 10:29:23 leaf12 1000 leaf12: 00:08:00:00:aa:13 learned on first hop switch interface serv01bond2
Tue Aug 25 2020 10:29:23 leaf11 1000 leaf11: 00:08:00:00:aa:13 learned on first hop switch interface serv01bond2
Tue Aug 25 2020 10:29:23 leaf01 1000 leaf01: 00:08:00:00:aa:13 learned/installed on vni vx-34 pointing to remote dest 36.0.0.24
MAC Address Removed
In this example, the bridge FDB entry for the 00:02:00:00:00:a0 MAC address, interface VlanA-1, and VLAN 100 was deleted, impacting leaf11 and leaf12.
cumulus@server11:~$ netq show mac-commentary 00:02:00:00:00:a0 vlan 100 between now and 5hr
Matching mac_commentary records:
Last Updated Hostname VLAN Commentary
------------------------- ---------------- ------ --------------------------------------------------------------------------------
Mon Aug 24 2020 14:14:33 leaf11 100 leaf11: 00:02:00:00:00:a0 configured on interface VlanA-1
Mon Aug 24 2020 14:15:03 leaf12 100 leaf12: 00:02:00:00:00:a0 learned on first hop switch interface peerlink-1
Tue Aug 25 2020 13:06:52 leaf11 100 leaf11: 00:02:00:00:00:a0 unconfigured on interface VlanA-1
MAC Address Moved on Server and Learned from a Peer
In this example, the 00:08:00:00:aa:13 MAC address moved from server11 to a device behind leaf01: the address that leaf01 previously learned remotely is now learned locally on its interface swp6, while leaf11 and leaf12, which previously learned it locally, now learn it from remote dest 27.0.0.22.
cumulus@server11:~$ netq show mac-commentary 00:08:00:00:aa:13 vlan 1000 between now and 5hr
Matching mac_commentary records:
Last Updated Hostname VLAN Commentary
------------------------- ---------------- ------ --------------------------------------------------------------------------------
Tue Aug 25 2020 10:29:23 leaf12 1000 leaf12: 00:08:00:00:aa:13 learned on first hop switch interface serv01bond2
Tue Aug 25 2020 10:29:23 leaf11 1000 leaf11: 00:08:00:00:aa:13 learned on first hop switch interface serv01bond2
Tue Aug 25 2020 10:29:23 leaf01 1000 leaf01: 00:08:00:00:aa:13 learned/installed on vni vx-34 pointing to remote dest 36.0.0.24
Tue Aug 25 2020 10:33:06 leaf01 1000 leaf01: 00:08:00:00:aa:13 moved from remote dest 36.0.0.24 to local interface swp6
Tue Aug 25 2020 10:33:06 leaf12 1000 leaf12: 00:08:00:00:aa:13 moved from local interface serv01bond2 to remote dest 27.0.0.22
Tue Aug 25 2020 10:33:06 leaf11 1000 leaf11: 00:08:00:00:aa:13 moved from local interface serv01bond2 to remote dest 27.0.0.22
MAC Address Learned from MLAG Pair
In this example, after the local first-hop learning of the 00:02:00:00:00:1c MAC address on leaf11 and leaf12, the MLAG peers exchanged this learning on the dually connected interface serv01bond3.
cumulus@server11:~$ netq show mac-commentary 00:02:00:00:00:1c vlan 105 between now and 2d
Matching mac_commentary records:
Last Updated Hostname VLAN Commentary
------------------------- ---------------- ------ --------------------------------------------------------------------------------
Sun Aug 23 2020 14:13:39 leaf11 105 leaf11: 00:02:00:00:00:1c learned on first hop switch interface serv01bond3
Sun Aug 23 2020 14:14:02 leaf12 105 leaf12: 00:02:00:00:00:1c learned on first hop switch interface serv01bond3
Sun Aug 23 2020 14:14:16 leaf11 105 leaf11: 00:02:00:00:00:1c moved from interface serv01bond3 to interface serv01bond3
Sun Aug 23 2020 14:14:23 leaf12 105 leaf12: 00:02:00:00:00:1c learned on MLAG peer dually connected interface serv01bond3
Sun Aug 23 2020 14:14:37 leaf11 105 leaf11: 00:02:00:00:00:1c learned on MLAG peer dually connected interface serv01bond3
Sun Aug 23 2020 14:14:39 leaf12 105 leaf12: 00:02:00:00:00:1c moved from interface serv01bond3 to interface serv01bond3
Sun Aug 23 2020 14:53:31 leaf11 105 leaf11: 00:02:00:00:00:1c learned on MLAG peer dually connected interface serv01bond3
Mon Aug 24 2020 14:15:03 leaf12 105 leaf12: 00:02:00:00:00:1c learned on MLAG peer dually connected interface serv01bond3
MAC Address Flushed
In this example, the interface VlanA-1 associated with the 00:02:00:00:00:2d MAC address and VLAN 1008 is deleted, impacting leaf11 and leaf12.
cumulus@server11:~$ netq show mac-commentary 00:02:00:00:00:2d vlan 1008 between now and 5hr
Matching mac_commentary records:
Last Updated Hostname VLAN Commentary
------------------------- ---------------- ------ --------------------------------------------------------------------------------
Mon Aug 24 2020 14:14:33 leaf11 1008 leaf11: 00:02:00:00:00:2d learned/installed on vni vx-42 pointing to remote dest 27.0.0.22
Mon Aug 24 2020 14:15:03 leaf12 1008 leaf12: 00:02:00:00:00:2d learned/installed on vni vx-42 pointing to remote dest 27.0.0.22
Mon Aug 24 2020 14:16:03 leaf01 1008 leaf01: 00:02:00:00:00:2d learned on MLAG peer dually connected interface swp8
Tue Aug 25 2020 11:36:06 leaf11 1008 leaf11: 00:02:00:00:00:2d is flushed or expired
Tue Aug 25 2020 11:36:06 leaf11 1008 leaf11: 00:02:00:00:00:2d on vni 1008 remote dest changed to 27.0.0.22
MLAG
You use Multi-Chassis Link Aggregation (MLAG) to enable a server or switch with a two-port bond (such as a link aggregation group/LAG, EtherChannel, port group, or trunk) to connect those ports to different switches and operate as if they have a connection to a single, logical switch. This provides greater redundancy and greater system throughput. Dual-connected devices can create LACP bonds that contain links to each physical switch. Therefore, NetQ supports active-active links from the dual-connected devices even though each link connects to a different physical switch. For an overview and how to configure MLAG in your network, refer to Multi-Chassis Link Aggregation - MLAG.
MLAG or CLAG?
Other vendors refer to the technology behind the Cumulus Linux implementation of MLAG as MC-LAG or VPC. The NetQ UI uses the MLAG terminology predominantly. However, the management daemon, named clagd, and other options in the code, such as clag-id, remain for historical purposes.
MLAG Commands
Monitor MLAG with the following commands. See the command line reference for additional options, definitions, and examples.
netq show mlag
netq show events message_type mlag
The netq check mlag command verifies MLAG session consistency by identifying all MLAG peers with errors or misconfigurations in the NetQ domain.
netq check mlag
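A minimal sketch showing the hostname-scoping pattern used elsewhere in this guide; leaf01 is a placeholder hostname and output is omitted here:
cumulus@switch:~$ netq leaf01 show mlag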
View MLAG in the UI
To add the MLAG card to your workbench, navigate to the header and select Add card > Other card > Network services > All MLAG Sessions card > Open cards. This example shows the following for the last 24 hours:
Four nodes have been running the MLAG protocol with no changes in that number
Four sessions were established and remained so
No MLAG-related events have occurred
Expand to the large card for additional MLAG info. By default, the card displays the Sessions summary tab. From here you can see which devices are handling the most MLAG sessions, or select the dropdown to view nodes with the most unestablished MLAG sessions. You can view MLAG-related events by selecting the Events tab.
Expand the MLAG card to full-screen to view, filter, or export:
the number of bonds with only a single link (single bond)
the number of bonds with two links (dual bonds)
whether MLAG sessions have been assigned a backup IP address
sessions with conflicted bonds (bonds that conflict with existing bond relationships)
the MLAG configuration for a given device
all MLAG-related events
the attributes of all switches running MLAG in your network
the attributes of all MLAG sessions in your network
From this table, you can select a row, then click Add card above the table.
NetQ adds a new MLAG single-session card to your workbench. From this card, you can monitor the number of nodes running the MLAG service, view switches with the most peers alive and not alive, and view events triggered by the MLAG service.
Monitor a Single MLAG Session
The MLAG single-session card displays a summary of the MLAG session. In this example, the leaf01 switch plays the primary role in this session with leaf02 and the session is in good health. The heat map tells us that the peer switch has been alive for the entire 24-hour period.
From this card, you can also view the node role, peer role and state, and MLAG system MAC address which identify the session in further detail.
Granularity of Data Shown Based on Time Period
On the medium and large single MLAG session cards, vertically stacked heat maps represent the status of the peers: one for peers that are reachable (alive) and one for peers that are unreachable (not alive). Depending on the time period of data on the card, the number of smaller time blocks used to indicate the status varies. A vertical stack of time blocks, one from each map, includes the results from all checks during that time. The color saturation of each block indicates how many peers were alive: if all peers were alive for the entire time block, the alive block is 100% saturated (white) and the not-alive block is 0% saturated (gray). As more peers become unreachable, the not-alive block increases in saturation and the alive block decreases in saturation proportionally. The following table lists the most common time periods, their corresponding number of blocks, and the amount of time represented by one block:
Time Period | Number of Runs | Number of Time Blocks | Amount of Time in Each Block
6 hours | 18 | 6 | 1 hour
12 hours | 36 | 12 | 1 hour
24 hours | 72 | 24 | 1 hour
1 week | 504 | 7 | 1 day
1 month | 2,086 | 30 | 1 day
1 quarter | 7,000 | 13 | 1 week
View Changes to the MLAG Service Configuration File
Each time a change is made to the configuration file for the MLAG service, NetQ logs the change and enables you to compare it with the last version using the NetQ UI. This can be useful when you are troubleshooting potential causes for alarms or sessions losing their connections.
From the large single-session card, select the MLAG Configuration File Evolution tab.
Select the time.
Choose between the File view and the Diff view.
The File view displays the content of the file:
The Diff view highlights the changes (if any) between this version (on left) and the most recent version (on right) side by side:
Network Topology
The network topology dashboard displays a visual representation of your network, showing connections and device information for all monitored nodes. The view allows you to understand your network’s architecture at a high level and lets you isolate individual devices or network tiers.
Access the Topology View
To open the topology view, click Topology in the workbench header. The UI displays the highest-level view of your network’s topology, showing devices as part of tiers corresponding to your network’s architecture: a two-tier architecture is made up of leaf and spine devices; a three-tier architecture is made up of leaf, spine, and super-spine devices. The bottom-most tier is reserved for devices which do not have a role assigned to them.
If your devices appear as a single tier, navigate to the device tab and select the Assign roles button. Select the switches to assign to the same role, then select Assign role above the table and follow the steps in the UI.
For large networks with many devices, you can assign roles in batches by selecting Bulk assign role and creating rules based on device hostnames.
NVIDIA recommends using the dark theme for the topology dashboard.
After assigning roles to the switches, return to the topology view and select Auto arrange to clean up the view.
Interact with the Topology
The topology screen features a main panel displaying tiers or, when zoomed in, the individual devices that comprise the tiers. You can zoom in or out of the topology via the zoom controls at the bottom-right corner of the screen, a mouse with a scroll wheel, or with a trackpad on your computer. You can also adjust the focus by clicking anywhere on the topology and dragging it with your mouse to view a different portion of the network diagram. Above the zoom controls, a smaller screen reflects a macro view of your network and helps with orienting, similar to mapping applications.
View Device and Link Data
Select a device to view the connections between that device and others in the network. A side panel displays additional device data, including:
Node Data | Description
ASIC | Name of the ASIC used in the switch. A value of Cumulus Networks VX indicates a virtual machine.
NetQ Agent status | Operational status of the NetQ Agent on the switch (fresh or rotten)
NetQ Agent version | Version ID of the NetQ Agent on the switch
OS name | Operating system running on the switch
Platform | Vendor and name of the switch hardware
Protocols | Protocols running on the switch
VNIs | Count of virtual network identifiers (VNIs) on the switch
Interface statistics | Transmit and receive data
Resource utilization | CPU, memory, and disk utilization
Events | Warning and info events
Select a link connection to open a side panel with additional configuration data, which can be sorted by link pairs.
From the side panel, you can view the following data about links:
Link Data | Description
Source hostname | Switch where the connection originates
Source interface | Port on the source switch used by the connection
Peer hostname | Switch where the connection ends
Peer interface | Port on the destination switch used by the connection
Rearrange and Edit the Topology
You can rearrange the topology’s tiers by selecting Edit at the top of the screen and dragging the tiers into different positions. Click Save to preserve the view or Reset to undo the changes.
Create Queries to View a Subset of Devices
You can create queries to segment a topology into smaller, more manageable parts. This can be especially helpful when you need to view a particular section of a very large topology or when you want to find and view connections between two or more devices. To create a query, select Queries on the left side of the screen, then Add query. The name of the query is pre-populated with a unique identifier that you can edit by expanding the query.
Select node_name and enter the parameters to display a subsection of nodes based on their hostnames. To combine multiple queries with logical operators, select Add filter group.
Select the three-dot menu on a given query to delete it.
Lifecycle Management
Lifecycle management is enabled for on-premises deployments and disabled for cloud deployments by default. Contact your NVIDIA sales representative or submit a support ticket to activate LCM on cloud deployments.
Access Lifecycle Management in the UI
To access the lifecycle management dashboard, expand the Menu, then select Manage switches:
LCM Summary
This table summarizes LCM functionalities in the UI and CLI:
Function
Description
NetQ UI Cards
NetQ CLI Commands
Switch management
Discover switches, view switch inventory, assign roles, set user access credentials, perform software installation and upgrade networkwide
Switches
Access profiles
netq lcm show switches
netq lcm add role
netq lcm upgrade
netq lcm add/del/show credentials
netq lcm discover
Image management
View, add, and remove images for software installation and upgrade
Cumulus Linux images
NetQ images
netq lcm add/del/show netq-image
netq lcm add/del/show cl-images
netq lcm add/show default-version
NetQ agent configurations
Customize configuration profiles for NetQ Agents running on switches
NetQ agent configurations
netq lcm add/del/show netq-config
Job history
View the results of installation, upgrade, and configuration assignment jobs
CL upgrade history
NetQ install and upgrade history
netq lcm show status
netq lcm show upgrade-jobs
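For example, you can list the switch inventory that LCM manages directly from the CLI (a minimal example; output omitted):
cumulus@switch:~$ netq lcm show switches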
LCM Support for In-band Management
If you manage a switch using an in-band network interface, the inband-interface option must be specified in the initial agent configuration for LCM operations to function as expected. You can configure the agent by specifying the in-band interface in the /etc/netq/netq.yml file. Alternately, you can use the CLI and include the inband-interface option.
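The following is a minimal sketch of the CLI approach. The interface name eth1 is a placeholder and the exact option syntax is an assumption, so verify it against the command line reference before using it:
cumulus@switch:~$ sudo netq config add agent inband-interface eth1   # assumed syntax; eth1 is a placeholder
cumulus@switch:~$ sudo netq config restart agent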
NICs
With the NetQ UI, you can view the attributes of individual network interface controllers (NICs), including their connection adapters and firmware versions. For NIC inventory information, refer to NIC Inventory.
To view attributes per NIC, open a NIC device card:
Search for the device’s hostname in the global search field or from the header select Add card > Device card.
Select a NIC from the dropdown.
Click Add to open an individual NIC card on your workbench, displaying ports, packets, and bytes information:
For a quick look at the key attributes of a particular NIC, expand the NIC card. Attributes are displayed as the default tab on the large NIC card. Select the Interface stats tab at the top of the card to view detailed interface statistics, including frame and carrier errors.
Expand the card to its largest size to view this information as tabular data, which you can filter and export.
NTP
Use the CLI to monitor Network Time Protocol (NTP). The command output displays the time synchronization status for all devices. You can filter for devices that are either in synchronization or out of synchronization, currently or at a time in the past.
Monitor NTP with the following commands. See the command line reference for additional options, definitions, and examples.
netq show ntp
netq show events message_type ntp
netq show events-config message_type ntp
The netq check ntp command verifies network time synchronization for all nodes (leafs, spines, and hosts) in your network fabric.
netq check ntp
PTP
Use the UI or CLI to monitor Precision Time Protocol (PTP), including clock hierarchies and priorities, synchronization thresholds, and accuracy rates.
PTP monitoring is only supported on Spectrum switches running Cumulus Linux version 5.0.0 and later.
PTP Commands
Monitor PTP with the following commands. See the command line reference for additional options, definitions, and examples.
netq show ptp clock-details
netq show ptp counters (tx | rx)
netq show ptp global-config
netq show ptp port-status
netq show events message_type ptp
Access the PTP Dashboard
From the header or menu, select Spectrum-X, then PTP.
The PTP summary dashboard displays:
clock count, type, and distribution
an overview of PTP-related events
a summary of PTP violations (mean path delay and offset from master)
Navigate to the Events tab to view, filter, and sort PTP-related events:
View PTP on a Switch
Search for the device’s hostname in the global search field or from the header select Add card > Device card.
Select a switch and specify the large card.
Hover over the top of the card and select the PTP tab:
For more granular data, expand the card to full-size and navigate to PTP:
Hover over the chart at any point to display timestamped mean-path-delay and offset-from-master data. You can drag the bottom bar to expand and compress the period of time displayed in the graph.
Select the tabs above the chart to display information about domains, clocks, ports, and configurations:
RoCE
Use the UI or CLI to monitor RDMA over Converged Ethernet (RoCE) for Spectrum switches and BlueField DPUs.
RoCE Commands
The following commands display your network’s RoCE configuration, RoCE counters and counter pools, and RoCE-related events. See the command line reference for additional options, definitions, and examples.
netq show roce-config
netq show roce-counters (dpu | nic)
netq show roce-counters pool
netq show events message_type tca_roce
netq show events message_type roceconfig
The netq check roce command checks for consistent RoCE and QoS configurations across all nodes in your network fabric.
netq check roce
View RoCE Counters Networkwide in the UI
From the header or Menu, select Spectrum-X, then RoCE.
Select either RoCE switches or RoCE DPUs.
The RoCE switches tab displays transmit (TX) and receive (RX) counters as well as counter pools for all switches running RoCE in your network.
The RoCE DPUs tab displays physical port, priority port, RoCE extended, RoCE, and peripheral component interconnect (PCI) information for all DPUs running RoCE in your network.
View RoCE Counters for a Given Switch
You can view the following RoCE counters for a given switch:
Receive and transmit counters
General, CNP, and RoCE-specific counters
Counter pools
Port-specific counters
To view RoCE counters on a switch, search for the device’s hostname in the global search field or from the header select Add card > Device card. Select a switch that is running RoCE and open the large card on your workbench. Click the RoCE tab to view RoCE counters and their associated ports:
Expand the card to the largest size, then select RoCE counters from the side menu. Use the controls above the table to view, filter, or export counter statistics by Rx, Tx, or Pool.
Disable RoCE Monitoring
To disable RoCE monitoring:
Edit /etc/netq/commands/cl4-netq-commands.yml and comment out the following lines:
STP
Use the CLI to view the Spanning Tree Protocol (STP) topology on a bridge or switch.
Monitor STP with the following command. If you do not have a bridge in your configuration, the output indicates such. See the command line reference for additional options, definitions, and examples.
netq show stp topology
Switches
With the NetQ UI and NetQ CLI, you can monitor the health of individual switches, including interface performance and resource utilization.
NetQ reports switch performance metrics for the following categories:
System configuration: events, interfaces, IP and MAC addresses, VLANs, IP routes, and IP neighbors
Utilization statistics: CPU, memory, disk, ACL and forwarding resources, SSD, BTRFS, and processes
Physical sensing: digital optics and switch sensors
RoCE and Precision Time Protocol
View Switch Metrics and Attributes
The quickest way to access monitoring information for an individual switch is by searching for its hostname in the global search field. Search for the hostname and select the switch to open a full-screen overview of attributes and performance information.
Alternately, you can add a device card to your workbench:
From the header select Add card > Device card.
Select a switch from the list:
Click Add.
Adjust the card’s size to view information at different levels of granularity.
Attributes are displayed as the default tab on the large Switch card. You can view the static information about the switch, including its hostname, addresses, server and ASIC vendors and models, OS and NetQ software information. You can also view the state of the interfaces and NetQ Agent on the switch.
Hover over the top of the card and select the appropriate icon to view utilization info, interface statistics, digital optics info, RoCE metrics, and PTP clock graphs. This example displays utilization information, including CPU, memory, and disk utilization from the past 24 hours:
Expand the card to full-screen to view, filter, or export information about events, interfaces, MAC addresses, VLANs, IP routes, IP neighbors, IP addresses, BTRFS utilization, SSD utilization, forwarding resources, ACL resources, What Just Happened events, sensors, RoCE counters, digital optics, BGP and EVPN sessions, PTP, and process monitoring for a given switch:
The information available in the UI can also be displayed via the CLI with a corresponding netq show command. Each command that begins with netq show includes the option <hostname>. When the <hostname> option is included in the command, the output displays results limited to the switch or host you specified.
For example, you can view all events across your network with the netq show events command. To view all events on a particular switch, specify its name in the <hostname> field in netq <hostname> show events. The following example displays all events on the leaf01 switch:
cumulus@switch:~$ netq leaf01 show events
Matching events records:
Hostname Message Type Severity State Message Timestamp
----------------- ------------------------ ---------------- ---------- ----------------------------------- -------------------------
leaf01 tca_procdevstats info open RX bytes exceeded threshold, Tue Dec 3 16:18:40 2024
for ifname: eth0 value: 228
leaf01 tca_procdevstats info open RX bytes exceeded threshold, Tue Dec 3 16:18:11 2024
for ifname: swp50 value: 898
leaf01 tca_procdevstats info open RX bytes exceeded threshold, Tue Dec 3 16:13:36 2024
for ifname: eth0 value: 253
leaf01 tca_procdevstats info open RX bytes exceeded threshold, Tue Dec 3 16:13:05 2024
for ifname: swp50 value: 885
leaf01 tca_procdevstats info open RX bytes exceeded threshold, Tue Dec 3 16:08:21 2024
for ifname: eth0 value: 240
leaf01 tca_procdevstats info open RX bytes exceeded threshold, Tue Dec 3 16:07:50 2024
for ifname: swp50 value: 919
leaf01 tca_procdevstats info open RX bytes exceeded threshold, Tue Dec 3 16:03:08 2024
for ifname: eth0 value: 250
Refer to the command line reference for a comprehensive list of netq show commands.
View CPU and Memory Utilization for Processes and Services
Use the UI or CLI to visualize which services and processes are consuming the most CPU and memory on a switch. NetQ displays only active processes with the exception of netqd, which is displayed regardless of its active or inactive status. You can add or remove specific services that NetQ monitors using the CLI.
Process monitoring is only supported on Spectrum switches.
To visualize CPU and memory utilization at the process level, open a large device card and navigate to the Utilization tab. Then select Show process monitoring data. The UI depicts two charts—one each for CPU and memory utilization—along with a list of services and processes.
Select a process from the Process name column to display its usage data in the CPU and memory utilization charts. The data is aggregated over a 5-minute period; NetQ lists processes from highest to lowest CPU consumption, based on the CPU 5min column. An icon next to the process name indicates which process’s data the charts are currently reflecting.
The following graphs depict CPU and memory usage over a 6-hour time period from the system monitor daemon, smond.
cumulus@switch:~$ netq show services resource-util
Matching services records:
Hostname Service PID VRF Enabled Active Uptime CPU one Minute CPU five Minute Memory one Minute Memory five Minute Last Updated
----------------- -------------------- ----- -------------------- ------- ------ -------------------- -------------------- -------------------- -------------------- -------------------- ------------------------
r-3700-02 sx_sdk 19012 default yes yes 81 day 17h ago 7.7 24.65 9.44 9.44 Tue Jul 18 18:49:19 2023
r-3700-03 sx_sdk 13627 default yes yes 81 day 18h ago 0 17.82 9.44 9.44 Tue Jul 18 18:49:19 2023
r-3700-02 switchd 21100 default yes yes 81 day 17h ago 56.77 15.07 1.13 1.13 Tue Jul 18 18:49:19 2023
r-3700-03 switchd 15768 default yes yes 81 day 18h ago 0 8.28 1.11 1.11 Tue Jul 18 18:49:19 2023
neo-switch02 sx_sdk 1841 default yes yes 2h 29min ago 30.1 6.55 9.67 9.67 Tue Jul 18 18:49:19 2023
ufm-switch19 sx_sdk 2343 default yes yes 21h 3min ago 5.22 5.73 2.84 2.84 Tue Jul 18 18:49:19 2023
ufm-switch29 sx_sdk 2135 default yes yes 8 day 4h ago 2.88 5.73 9.54 9.54 Tue Jul 18 18:49:19 2023
r-3420-01 sx_sdk 1885 default yes yes 9 day 3h ago 5.28 5.01 9.3 9.3 Tue Jul 18 18:49:19 2023
ufm-switch29 clagd 7095 default no yes 8 day 4h ago 23.57 4.71 0.63 0.63 Tue Jul 18 18:49:19 2023
r-3700-01 smond 7301 default yes yes 9 day 3h ago 0 4.7 0.2 0.2 Tue Jul 18 18:49:19 2023
...
To configure the NetQ Agent to start monitoring additional services, run netq config add agent services, specifying the services you want the agent to monitor in the command. Restart the agent, then run netq config show agent services to display a list of services that the NetQ Agent is monitoring for CPU and memory usage.
To stop the agent from monitoring a service run netq config del agent services. Some services and processes cannot be excluded from monitoring.
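As a sketch of that workflow, using the smond service from the example output above (the exact argument format for service names is described in the command line reference):
cumulus@switch:~$ sudo netq config add agent services smond   # service name passed as an argument (assumed format)
cumulus@switch:~$ sudo netq config restart agent
cumulus@switch:~$ netq config show agent services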
To actively monitor process-level CPU and memory utilization, you can create threshold-crossing rules. These rules generate events when a process or service exceeds the utilization limit you defined when creating the rule. Refer to the resource utilization table in the TCA Events Reference for service memory and service CPU utilization event IDs.
View Queue Lengths as Histograms
Monitoring queue lengths in your network’s fabric is useful for detecting microbursts which can lead to higher packet latency or buffer congestion. The Cumulus Linux documentation provides a detailed description of ASIC monitoring, including example bin configurations and information on interpreting histogram queue lengths.
Queue length monitoring is supported on Spectrum switches running Cumulus Linux 5.1 or later. To display queue histogram data, you must set the snapshot file count to at least 120 when you configure ASIC monitoring, as described in the Snapshots section of the ASIC monitoring configuration documentation.
If you restart the asic-monitor service or edit the /monitor.conf configuration file, you must restart the NetQ agent with netq config restart agent.
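For example, after changing the ASIC monitoring configuration, you might restart both services as follows (a sketch; the asic-monitor systemd unit name is assumed):
cumulus@switch:~$ sudo systemctl restart asic-monitor.service   # assumed unit name for the ASIC monitor service
cumulus@switch:~$ sudo netq config restart agent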
The information available in the UI can also be displayed via the CLI with the netq show histogram command.
To view queue histograms in the UI:
Expand the Menu. In the Spectrum-X section, select Queue histogram.
Devices are grouped according to their roles: superspine, leaf, spine, or exit. If you haven’t assigned roles to your devices, they appear as ‘unassigned.’
Each device is represented by a card that displays its hostname, the port with the longest queue length (displayed horizontally, divided into bins), standard deviation, P95 value across all ports (with an ASIC monitoring configuration), and average queue length. The data updates when you change the time parameters using the controls at the top of the screen. The values reflected in the bins are color-coded, with higher values displayed in darker colors and lower values in lighter colors. Hover over a bin to view its corresponding queue length count.
Select View more to open a dashboard that displays the full range of ports configured to send histogram data along with their associated devices, which are visible when you hover over a section with your cursor. From this view, you can compare devices against each other or the same devices over a different time period. For example, the following view displays switch r-qa-sw-eth-2231 with queue length data from the past minute in the top panel and the past 30 minutes in the bottom panel.
The y-axis represents bins 0 through 9. The hostname associated with the port is displayed on the x-axis.
VLAN
Use the UI or CLI to view Virtual Local Area Network (VLAN) information.
VLAN Commands
Monitor VLAN with the following commands. Use these commands to display configuration information, interfaces associated with VLANs, MAC addresses associated with a given VLAN, MAC addresses associated with your vRR (virtual route reflector) interface configuration, and VLAN events. See the command line reference for additional options, definitions, and examples.
netq show vlan
netq show interfaces type macvlan
netq show interfaces type vlan
netq show macs
netq show events message_type vlan
The netq check vlan command verifies consistency of the VLAN nodes and interfaces across all links in your network fabric:
netq check vlan
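For example, to limit the output of any of these commands to a single switch, prefix the command with its hostname (leaf01 here is illustrative), as described earlier in the Switches section:
cumulus@switch:~$ netq leaf01 show vlan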
View VLAN in the UI
To view VLAN information, select the Menu, then enter VLANs in the search field.
From here you can view a list of switches or hostnames and their associated VLANs, interfaces, SVIs (switch virtual interfaces), and ports. To view MAC addresses associated with a given VLAN, select the Menu, then enter MACs in the search field.
VXLAN
Use the CLI to monitor Virtual Extensible LAN (VXLAN) and validate overlay communication paths. See the command line reference for additional options, definitions, and examples.
netq show vxlan
netq show interfaces type vxlan
netq show events message_type vxlan
The netq check vxlan command verifies the consistency of the VXLAN nodes and interfaces across all links in your network fabric.
netq check vxlan
Device Inventory
This section describes how to monitor your inventory from networkwide and device-specific perspectives. Use the UI or CLI to view all hardware and software components installed and running on switches, hosts, DPUs, and NICs.
Validation Tests Reference
NetQ collects data that validates the health of your network fabric, devices, and interfaces. You can create and run validations with either the NetQ UI or the NetQ CLI. The number and type of checks are tailored to the particular protocol or element being validated.
Use the value in the Test Number column in the tables below when you want to include or exclude specific tests with the netq check command. You can get the test numbers by running the netq show unit-tests command.
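For example, the following sketch lists the test numbers for the BGP validation and then runs a BGP check that skips the Router ID test (test number 2 in the BGP table below). Confirm the exact include and exclude syntax in the command line reference:
cumulus@switch:~$ netq show unit-tests bgp
cumulus@switch:~$ netq check bgp exclude 2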
Addresses Validation Tests
The duplicate address detection tests look for duplicate IPv4 and IPv6 addresses assigned to interfaces across devices in the inventory. They also check for duplicate /32 host routes in each VRF.
Test Number
Test Name
Description
0
IPv4 duplicate addresses
Checks for duplicate IPv4 addresses
1
IPv6 duplicate addresses
Checks for duplicate IPv6 addresses
Agent Validation Tests
NetQ Agent validation looks for an agent status of rotten for each node in the network. A fresh status indicates the agent is running as expected. The agent sends a ‘heartbeat’ every 30 seconds; if NetQ does not receive three consecutive heartbeats, the agent’s status changes to rotten.
Test Number
Test Name
Description
0
Agent health
Checks for nodes that have failed or lost communication
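You can run this validation from the CLI with the agents check (a minimal example; see the command line reference for filtering and scheduling options):
cumulus@switch:~$ netq check agents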
BGP Validation Tests
The BGP validation tests look for status and configuration anomalies.
Test Number
Test Name
Description
0
Session establishment
Checks that BGP sessions are in an established state
1
Address families
Checks if transmit and receive address family advertisement is consistent between peers of a BGP session
2
Router ID
Checks for BGP router ID conflict in the network
3
Hold time
Checks for mismatch of hold time between peers of a BGP session
4
Keep alive interval
Checks for mismatch of keep alive interval between peers of a BGP session
5
IPv4 stale path time
Checks for mismatch of IPv4 stale path timer between peers of a BGP session
6
IPv6 stale path time
Checks for mismatch of IPv6 stale path timer between peers of a BGP session
7
Interface MTU
Checks for consistency of interface MTU for BGP peers
Cumulus Linux Version Tests
The Cumulus Linux version test looks for version consistency.
Test Number
Test Name
Description
0
Cumulus Linux Image Version
Checks the following:
No version specified: checks that all switches in the network have a consistent version
match-version specified: checks that a switch’s OS version equals the specified version
min-version specified: checks that a switch’s OS version is equal to or greater than the specified version
EVPN Validation Tests
The EVPN validation tests look for status and configuration anomalies.
Test Number
Test Name
Description
0
EVPN BGP session
Checks if:
BGP EVPN sessions are established
The EVPN address family advertisement is consistent
1
EVPN VNI type consistency
Because a VNI can be of type L2 or L3, checks that for a given VNI, its type is consistent across the network
2
EVPN type 2
Checks for consistency of IP-MAC binding and the location of a given IP-MAC across all VTEPs
3
EVPN type 3
Checks for consistency of replication group across all VTEPs
4
EVPN session
For each EVPN session, checks if:
adv_all_vni is enabled
FDB learning is disabled on tunnel interface
5
VLAN consistency
Checks for consistency of VLAN to VNI mapping across the network
6
VRF consistency
Checks for consistency of VRF to L3 VNI mapping across the network
Interface Validation Tests
The interface validation tests look for consistent configuration between two nodes.
Test Number
Test Name
Description
0
Administrative state
Checks for consistency of administrative state on two sides of a physical interface
1
Operational state
Checks for consistency of operational state on two sides of a physical interface
2
Speed
Checks for consistency of the speed setting on two sides of a physical interface
3
Auto-negotiation
Checks for consistency of the auto-negotiation setting on two sides of a physical interface
Link MTU Validation Tests
The link MTU validation tests look for consistency across an interface and appropriate size MTU for VLAN and bridge interfaces.
Test Number
Test Name
Description
0
Link MTU consistency
Checks for consistency of MTU setting on two sides of a physical interface
1
VLAN interface
Checks if the MTU of an SVI is no smaller than the parent interface, subtracting the VLAN tag size
2
Bridge interface
Checks if the MTU on a bridge is not arbitrarily smaller than the smallest MTU among its members
MLAG Validation Tests
The MLAG validation tests look for misconfigurations, peering status, and bond error states.
Test Number
Test Name
Description
0
Peering
Checks if:
MLAG peerlink is up
MLAG peerlink bond slaves are down (not in full capacity and redundancy)
Peering is established between two nodes in an MLAG pair
1
Backup IP
Checks if:
MLAG backup IP configuration is missing on an MLAG node
MLAG backup IP is correctly pointing to the MLAG peer and its connectivity is available
2
MLAG Sysmac
Checks if:
MLAG Sysmac is consistently configured on both nodes in an MLAG pair
Any duplication of an MLAG sysmac exists within a bridge domain
3
VXLAN Anycast IP
Checks if the VXLAN anycast IP address is consistently configured on both nodes in an MLAG pair
4
Bridge membership
Checks if the MLAG peerlink is part of bridge
5
Spanning tree*
Checks if:
STP is enabled and running on the MLAG nodes
MLAG peerlink role is correct from STP perspective
The bridge ID is consistent between two nodes of an MLAG pair
The VNI in the bridge has BPDU guard and BPDU filter enabled
*Not supported in per-VLAN rapid spanning tree (PVRST) mode
6
Dual home
Checks for:
MLAG bonds that are not in dually connected state
Dually connected bonds have consistent VLAN and MTU configuration on both sides
STP has consistent view of bonds' dual connectedness
7
Single home
Checks for:
Singly connected bonds
STP has consistent view of bond’s single connectedness
8
Conflicted bonds
Checks for bonds in MLAG conflicted state and shows the reason
9
ProtoDown bonds
Checks for bonds in protodown state and shows the reason
10
SVI
Checks if:
Both sides of an MLAG pair have an SVI configured
SVI on both sides have consistent MTU setting
11
Package mismatch
Checks for package mismatch on an MLAG pair
NTP Validation Tests
The NTP validation test looks for poor operational status of the NTP service.
Test Number
Test Name
Description
0
NTP sync
Checks if the NTP service is running and in sync state
RoCE Validation Tests
The RoCE validation tests look for consistent RoCE and QoS configurations across nodes.
Test Number
Test Name
Description
0
RoCE mode
Checks whether RoCE is configured for lossy or lossless mode
1
RoCE classification
Checks for consistency of DSCP, service pool, port group, and traffic class settings
2
RoCE congestion control
Checks for consistency of ECN and RED threshold settings
3
RoCE flow control
Checks for consistency of PFC configuration for RoCE lossless mode
4
RoCE ETS mode
Checks for consistency of Enhanced Transmission Selection settings
5
RoCE miscellaneous
Checks for consistency across related services
Sensor Validation Tests
The sensor validation tests look for chassis power supply, fan, and temperature sensors that are not operating as expected.
Test Number
Test Name
Description
0
PSU sensors
Checks for power supply unit sensors that are not in ok state
1
Fan sensors
Checks for fan sensors that are not in ok state
2
Temperature sensors
Checks for temperature sensors that are not in ok state
Topology Validation Tests
The topology validation tests look for inconsistencies between a network’s topology (as derived from LLDP telemetry data) and the user-provided topology blueprint.
Test Number
Test Name
Description
0
LLDP service
Checks that the LLDP service is running
1
Topology blueprint
Checks for differences between a network’s actual topology and the network’s blueprint file
VLAN Validation Tests
The VLAN validation tests look for configuration consistency between two nodes.
Test Number
Test Name
Description
0
Link neighbor VLAN consistency
Checks for consistency of VLAN configuration on two sides of a port or a bond
1
MLAG bond VLAN consistency
Checks for consistent VLAN membership of an MLAG bond on each side of the MLAG pair
VXLAN Validation Tests
The VXLAN validation tests look for configuration consistency across all VTEPs.
Test Number
Test Name
Description
0
VLAN consistency
Checks for consistent VLAN to VXLAN mapping across all VTEPs
1
BUM replication
Checks for consistent replication group membership across all VTEPs
Flow Analysis
Create a flow analysis to sample data from TCP and UDP flows in your environment and to review latency and buffer utilization statistics across network paths.
Flow analysis is supported on NVIDIA Spectrum-2 switches and later. It requires a switch fabric running Cumulus Linux version 5.0 or later.
You must enable Lifecycle Management (LCM) to run a flow analysis. If LCM is disabled, you will not see the flow analysis icon in the UI. LCM is enabled for on-premises deployments by default and disabled for cloud deployments by default. Contact your local NVIDIA sales representative or submit a support ticket to activate LCM on cloud deployments.
Create a New Flow Analysis
To start a new flow analysis, in the header select Flow analysis then Create new flow analysis.
Enter the flow analysis parameters, including the source IP address, destination IP address, source port, and destination port of the flow you wish to analyze. Select the protocol and VRF for the flow from the dropdown menus.
After you enter the application parameters, enter the monitor settings, including the sampling rate and time parameters.
Running a flow analysis will affect switch CPU performance. For high-volume flows, set a lower sampling rate to limit switch CPU impact.
If you attempt to run a flow analysis that includes switches assigned a default, unmodified access profile, the process will fail. Create a unique access profile (or update the default profile with unique credentials), then assign the profile to the switches you want to include in the flow analysis.
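For example, you can review the existing access profiles and credentials with the LCM commands listed earlier before starting the analysis (a sketch; output omitted):
cumulus@switch:~$ netq lcm show credentials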
After starting the flow analysis, a flow analysis card will appear on the NetQ Workbench.
View Flow Analysis Data
To view a previous flow analysis, in the header select Flow analysis then View previous flow analysis.
Select View details next to the name of the flow analysis to display the analysis dashboard. You can use this dashboard to view latency and buffer statistics for the monitored flow. If bi-directional monitoring was enabled, you can view the reverse direction of the flow by selecting the icon.
The dashboard header shows the monitored flow settings:
Flow Settings
Description
Lifetime
The lifetime of the flow analysis. This example completed in 11 minutes.
Source IP
The source IP address of the flow. In this example it is 10.1.100.125.
Destination IP
The destination IP address of the flow. In this example it is 10.1.10.105.
Source Port
The source port of the flow. In this example it displays N/A because it was not set.
Destination Port
The destination port of the flow. In this example it is 2222.
Protocol
The protocol of the monitored flow. In this example it is UDP.
Sampling Rate
The sampling rate of the flow. In this example it is low.
VRF
The VRF the flow is present in. In this example it is the default VRF.
Bi-directional Monitoring
This determines if the flow is monitored in both directions between the source IP address and the destination IP address. In this example it is enabled. Click to change the direction that is displayed.
Understanding the Flow Analysis Graph
The flow analysis graph is color coded relative to the values measured across devices. Lower values are displayed in green, and higher values are displayed in orange. The color gradient is displayed below the graph along with the low and high values from the collected flow data. Each hop in the path is represented in the graph with a vertical, gray-striped line labeled by hostname. The following example shows a single path:
The flow graph panel on the right side of the dashboard displays the devices along the selected path.
View Flow Latency
The latency measured by the flow analysis is the total transit time of the sampled packets through individual devices. A summary of measured latency for each device is displayed above the main flow analysis graph.
The average latency for packets in the flow is displayed under the hostname of each device, along with the minimum and maximum latencies observed during the analysis lifetime. The 95th percentile (P95) latency value for sampled packets is also displayed. The P95 calculation means that 95% of the sampled packets have a latency value less than or equal to the calculation.
Use your cursor to hover over sections of the main analysis graph to view average latency values for each device in a path.
The left panel of the flow analysis dashboard also displays a timeline of measured latency for each device on that path. Use your cursor to hover over the plotted data points on the timeline for each device to view the latency measured at each time interval.
View Buffer Occupancy
The main flow analysis dashboard also displays the buffer occupancy of each device along the path. To change the graph view to display buffer occupancy for the flow, click next to Avg. flow latency and select Avg. buffer occupancy. You can view an overview graph of buffer occupancy or select each device to see the buffer occupancy for the analyzed flow:
The percentages represent the amount of buffer space on the switch that the analyzed flow occupied while the analysis was running.
View Multiple Paths
When packets matching the flow settings traverse multiple paths in the topology, the flow graph displays latency and buffer occupancy for each path:
You can switch between paths by clicking on an alternate path in the Flow graph panel, or by clicking on an unselected path on the main analysis graph:
In the detail panel on the left side of the dashboard, you can select a path to view the percent of packets distributed over each path.
Partial Path Support
Some flows can still be analyzed if they traverse a network path that includes switches lacking flow analysis support. Partial-path flow analysis is supported in the following conditions:
The unsupported device cannot be the initial ingress or terminating egress device in the path of the analyzed flow.
If there is more than one consecutive transit device in the path that lacks flow analysis support, the path discovery will terminate at that point in the topology and some devices will not be displayed in the flow graph.
An unsupported device is represented in the flow analysis graph as a black bar lined with red x’s. Flow statistics are not displayed for that device.
Path discovery will terminate if multiple consecutive switches do not support flow analysis. When additional data is available from switches outside of discovered paths, you can view data from those devices from the menu at the top of the page:
The left panel displays the data, along with ingress and egress ports.
View Device Statistics
You can view latency, buffer occupancy, interface statistics, resource utilization, and WJH events for each device by clicking on a device in the Flow Graph panel, or by clicking on the line associated with a device in the main flow analysis graph. The left panel will then update to reflect statistics for the respective device.
After selecting a device, click to expand the statistics chart:
In this view, you can select additional categories to add to the chart:
The Flow Graph panel allows you to access the topology view, where you can also click the paths and devices to view statistics. Click View in topology to switch to the topology view.
View WJH Events
Flow analysis monitors the path for WJH events and records any drops for the flow. Switches with WJH events recorded are represented in the flow analysis graph as a red bar with white stripes. Hover over the device to see a WJH event summary:
You can also view devices with WJH events in the flow graph panel:
Click on a device with WJH events to see the statistics in the left panel. Hover over the data to reveal the type of drops over time:
WJH drops can also be viewed from the expanded device chart by selecting the WJH category:
Select Show all drops to display a list of all WJH drops for the device:
Verify Network Connectivity
You can verify the connectivity between two devices in both an ad-hoc fashion and by defining connectivity checks to occur on a scheduled basis.
Specifying Source and Destination Values
When specifying traces, the following options are available for the source and destination values:
Trace Type
Source
Destination
Layer 2
Hostname
MAC address plus VLAN
Layer 2
IPv4/IPv6 address plus VRF (if not default)
MAC address plus VLAN
Layer 2
MAC Address
MAC address plus VLAN
Layer 3
Hostname
IPv4/IPv6 address
Layer 3
IPv4/IPv6 address plus VRF (if not default)
IPv4/IPv6 address
If you use an IPv6 address, you must enter the complete, non-truncated address.
Known Addresses
The tracing function only knows about previously learned addresses. If you find that a path is invalid or incomplete, ping the identified device so that its address becomes known.
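For example, if a trace toward 10.1.10.104 (an address used in the examples below) returns an incomplete path, you could ping that address from a connected host so that the relevant neighbor and MAC entries are learned:
cumulus@host:~$ ping -c 3 10.1.10.104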
Create On-demand Traces
You can view the current connectivity between two devices in your network by creating an on-demand trace. You can perform these traces at layer 2 or layer 3 using the NetQ UI or the NetQ CLI.
Create a Layer 3 On-demand Trace Request
It is helpful to verify the connectivity between two devices when you suspect an issue is preventing proper communication between them. If you cannot find a layer 3 path, you might also try checking connectivity through a layer 2 path.
Determine the IP addresses of the two devices you want to trace.
Open the Menu. Under Bulk data, select IP addresses.
Select Filter and enter a hostname.
From the list of results, note the relevant address.
Filter the list again for the other hostname, and note its address.
Open the Trace Request card.
On a new workbench: Type trace in the Global search field and select the card.
On a current workbench: Click Add card, then select the Trace card.
In the Source field, enter the hostname or IP address of the device where you want to start the trace.
In the Destination field, enter the IP address of the device where you want to end the trace.
If you mistype an address, double-click it or backspace over the error and retype the address. You cannot select the address by dragging over it, as this action attempts to move the card to another location.
Click Run now. A corresponding Trace Results card is opened on your workbench.
Use the netq trace command to view the results in the terminal window. Use the netq add trace command to view the results in the NetQ UI.
To create a layer 3 on-demand trace and see the results in the terminal window, run:
netq trace <ip> from (<src-hostname> | <ip-src>) [pretty | detail | json]
Note the syntax requires the destination device address first and then the source device address or hostname.
This example shows a trace from 10.10.10.1 (source, leaf01) to 10.10.10.63 (destination, border01) on the underlay in pretty output. You could have used leaf01 as the source instead of its IP address. The example first identifies the addresses for the source and destination devices using netq show ip addresses then runs the trace.
cumulus@switch:~$ netq border01 show ip addresses
Matching address records:
Address Hostname Interface VRF Last Changed
------------------------- ----------------- ------------------------- --------------- -------------------------
192.168.200.63/24 border01 eth0 Tue Nov 3 15:45:31 2020
10.0.1.254/32 border01 lo default Mon Nov 2 22:28:54 2020
10.10.10.63/32 border01 lo default Mon Nov 2 22:28:54 2020
cumulus@switch:~$ netq trace 10.10.10.63 from 10.10.10.1 pretty
Number of Paths: 12
Number of Paths with Errors: 0
Number of Paths with Warnings: 0
Path MTU: 9216
leaf01 swp54 -- swp1 spine04 swp6 -- swp54 border02 peerlink.4094 -- peerlink.4094 border01 lo
peerlink.4094 -- peerlink.4094 border01 lo
leaf01 swp53 -- swp1 spine03 swp6 -- swp53 border02 peerlink.4094 -- peerlink.4094 border01 lo
peerlink.4094 -- peerlink.4094 border01 lo
leaf01 swp52 -- swp1 spine02 swp6 -- swp52 border02 peerlink.4094 -- peerlink.4094 border01 lo
peerlink.4094 -- peerlink.4094 border01 lo
leaf01 swp51 -- swp1 spine01 swp6 -- swp51 border02 peerlink.4094 -- peerlink.4094 border01 lo
peerlink.4094 -- peerlink.4094 border01 lo
leaf01 swp54 -- swp1 spine04 swp5 -- swp54 border01 lo
leaf01 swp53 -- swp1 spine03 swp5 -- swp53 border01 lo
leaf01 swp52 -- swp1 spine02 swp5 -- swp52 border01 lo
leaf01 swp51 -- swp1 spine01 swp5 -- swp51 border01 lo
Each row of the pretty output shows one of the 12 available paths, with each path described by hops using the following format:
source hostname and source egress port – ingress port of first hop and device hostname and egress port – n*(ingress port of next hop and device hostname and egress port) – ingress port of destination device hostname
In this example, 8 of 12 paths use four hops to get to the destination and four use three hops. The overall MTU for all paths is 9216. No errors or warnings are present on any of the paths.
To create a layer 3 on-demand trace and see the results in the On-demand Trace Results card, run:
netq add trace <ip> from (<src-hostname> | <ip-src>) [alert-on-failure]
This example shows a trace from 10.10.10.1 (source, leaf01) to 10.10.10.63 (destination, border01):
cumulus@switch:~$ netq add trace 10.10.10.63 from 10.10.10.1
Note the syntax requires the destination device address first and then the source device address or hostname.
Create a Layer 3 On-demand Trace through a Given VRF
This example shows a trace from 10.1.10.101 (source, server01) to 10.1.10.104 (destination, server04) through VRF RED in detail output. It first identifies the addresses for the source and destination devices, and a VRF between them, using netq show ip addresses, then runs the trace. Note that the VRF name is case sensitive. The trace job might take some time to compile all the available paths, especially if there are many of them.
To create a layer 3 on-demand trace and see the results in the On-demand Trace Results card, run:
netq add trace <ip> from (<src-hostname> | <ip-src>) vrf <vrf>
This example shows a trace from 10.1.10.101 (source, server01) to 10.1.10.104 (destination, server04) through VRF RED.
cumulus@switch:~$ netq add trace 10.1.10.104 from 10.1.10.101 vrf RED
Create a Layer 2 On-demand Trace
It is helpful to verify the connectivity between two devices when you suspect an issue is preventing proper communication between them. If you cannot find a path through a layer 2 path, you might also try checking connectivity through a layer 3 path.
Note the syntax requires the destination device address first and then the source device address or hostname.
This example shows a trace from 44:38:39:00:00:32 (source, server01) to 44:38:39:00:00:3e (destination, server04) through VLAN 10 in detail output. It first identifies the MAC addresses for the two devices using netq show ip neighbors, determines the VLAN using netq show macs, and then runs the trace.
Use the netq add trace command to view on-demand trace results in the NetQ UI.
To create a layer 2 on-demand trace and see the results in the On-demand Trace Results card, run:
netq add trace <mac> vlan <1-4096> from <mac-src>
This example shows a trace from 44:38:39:00:00:32 (source, server01) to 44:38:39:00:00:3e (destination, server04) through VLAN 10.
cumulus@switch:~$ netq add trace 44:38:39:00:00:3e vlan 10 from 44:38:39:00:00:32
View On-demand Trace Results
After you have started an on-demand trace or run the netq add trace command, the results appear in either the UI or CLI. In the CLI, run the netq show trace results command. In the UI, locate the On-demand Trace Result card:
After you click Run now, the corresponding results card opens on your workbench. While it is working on the trace, a notice appears on the card indicating it is running. When it is finished, the results are displayed:
To view additional information, expand the card to its largest size and click on a trace. From this screen, you can view configuration details, error and warning messages, and granular data for individual paths.
Create Scheduled Traces
There might be paths through your network that you consider critical or particularly important to your everyday operations. In these cases, it might be useful to create one or more traces to periodically confirm that at least one path is available between the relevant two devices. You can create scheduled traces at layer 2 or layer 3 in your network, from the NetQ UI and the NetQ CLI.
Select a timeframe under Schedule to specify how often you want to run the trace.
Accept the default starting time, or click in the Starting field to specify the day you want the trace to run for the first time.
Verify your entries are correct, then click Save as new.
Provide a name for the trace. Note: This name must be unique for a given user.
Click Save.
You can now run this trace on demand by selecting it from the dropdown list, or wait for it to run on its defined schedule.
To create a layer 3 scheduled trace and see the results in the Scheduled Trace Results card, run:
netq add trace name <text-new-trace-name> <ip> from (<src-hostname>|<ip-src>) interval <text-time-min>
This example shows the creation of a scheduled trace between leaf01 (source, 10.10.10.1) and border01 (destination, 10.10.10.63) with a name of Lf01toBor01Daily that runs on a daily basis. The interval option value is 1440 minutes, as denoted by the units indicator (m).
cumulus@switch:~$ netq add trace name Lf01toBor01Daily 10.10.10.63 from 10.10.10.1 interval 1440m
Successfully added/updated Lf01toBor01Daily running every 1440m
View the results in the NetQ UI.
Create a Layer 3 Scheduled Trace through a Given VRF
Enter a VRF interface if you are using anything other than the default VRF.
Select a timeframe under Schedule to specify how often you want to run the trace.
Accept the default starting time, or click in the Starting field to specify the day you want the trace to run for the first time.
Verify your entries are correct, then click Save as new.
Provide a name for the trace. Note: This name must be unique for a given user.
Click Save.
You can now run this trace on demand by selecting it from the dropdown list, or wait for it to run on its defined schedule.
To create a layer 3 scheduled trace that uses a VRF other than default and then see the results in the Scheduled Trace Results card, run:
netq add trace name <text-new-trace-name> <ip> from (<src-hostname>|<ip-src>) vrf <vrf> interval <text-time-min>
This example shows the creation of a scheduled trace between server01 (source, 10.1.10.101) and server04 (destination, 10.1.10.104) through VRF RED with a name of Svr01toSvr04Hrly that runs on an hourly basis. The interval option value is 60 minutes, as denoted by the units indicator (m).
cumulus@switch:~$ netq add trace name Svr01toSvr04Hrly 10.1.10.104 from 10.1.10.101 vrf RED interval 60m
Successfully added/updated Svr01toSvr04Hrly running every 60m
In the VLAN field, enter the VLAN ID associated with the destination device.
Select a timeframe under Schedule to specify how often you want to run the trace.
Accept the default starting time, or click in the Starting field to specify the day you want the trace to run for the first time.
Verify your entries are correct, then click Save as new.
Provide a name for the trace. Note: This name must be unique for a given user.
Click Save.
You can now run this trace on demand by selecting it from the dropdown list, or wait for it to run on its defined schedule.
To create a layer 2 scheduled trace and then see the results in the Scheduled Trace Result card, run:
netq add trace name <text-new-trace-name> <mac> vlan <1-4096> from (<src-hostname> | <ip-src>) [vrf <vrf>] interval <text-time-min>
This example shows the creation of a scheduled trace between server01 (source, 10.1.10.101) and server04 (destination, 44:38:39:00:00:3e) on VLAN 10 with a name of Svr01toSvr04x3Hrs that runs every three hours. The interval option value is 180 minutes, as denoted by the units indicator (m).
cumulus@switch:~$ netq add trace name Svr01toSvr04x3Hrs 44:38:39:00:00:3e vlan 10 from 10.1.10.101 interval 180m
Successfully added/updated Svr01toSvr04x3Hrs running every 180m
View the results in the NetQ UI.
View Scheduled Trace Results
The results of scheduled traces are displayed on the Scheduled Trace Results card. To view the results:
Locate the Scheduled Trace Request card on your workbench and expand it to its largest size:
Select the scheduled trace results you want to view. Above the table, select Open card. This opens the medium Scheduled Trace Results card(s) for the selected items.
View a Summary of All Scheduled Traces
You can view a summary of all scheduled traces using the netq show trace summary command. The summary displays the name of the trace, a job ID, status, and timestamps for when the trace ran and when it completed.
This example shows all scheduled traces run in the last 24 hours.
cumulus@switch:~$ netq show trace summary
Name Job ID Status Status Details Start Time End Time
--------------- ------------ ---------------- ---------------------------- -------------------- ----------------
leaf01toborder0 f8d6a2c5-54d Complete 0 Fri Nov 6 15:04:54 Fri Nov 6 15:05
1 b-44a8-9a5d- 2020 :21 2020
9d31f4e4701d
New Trace 0e65e196-ac0 Complete 1 Fri Nov 6 15:04:48 Fri Nov 6 15:05
5-49d7-8c81- 2020 :03 2020
6e6691e191ae
Svr01toSvr04Hrl 4c580c97-8af Complete 0 Fri Nov 6 15:01:16 Fri Nov 6 15:01
y 8-4ea2-8c09- 2020 :44 2020
038cde9e196c
Abc c7174fad-71c Complete 1 Fri Nov 6 14:57:18 Fri Nov 6 14:58
a-49d3-8c1d- 2020 :11 2020
67962039ebf9
Lf01toBor01Dail f501f9b0-cca Complete 0 Fri Nov 6 14:52:35 Fri Nov 6 14:57
y 3-4fa1-a60d- 2020 :55 2020
fb6f495b7a0e
L01toB01Daily 38a75e0e-7f9 Complete 0 Fri Nov 6 14:50:23 Fri Nov 6 14:57
9-4e0c-8449- 2020 :38 2020
f63def1ab726
leaf01toborder0 f8d6a2c5-54d Complete 0 Fri Nov 6 14:34:54 Fri Nov 6 14:57
1 b-44a8-9a5d- 2020 :20 2020
9d31f4e4701d
leaf01toborder0 f8d6a2c5-54d Complete 0 Fri Nov 6 14:04:54 Fri Nov 6 14:05
1 b-44a8-9a5d- 2020 :20 2020
9d31f4e4701d
New Trace 0e65e196-ac0 Complete 1 Fri Nov 6 14:04:48 Fri Nov 6 14:05
5-49d7-8c81- 2020 :02 2020
6e6691e191ae
Svr01toSvr04Hrl 4c580c97-8af Complete 0 Fri Nov 6 14:01:16 Fri Nov 6 14:01
y 8-4ea2-8c09- 2020 :43 2020
038cde9e196c
...
L01toB01Daily 38a75e0e-7f9 Complete 0 Thu Nov 5 15:50:23 Thu Nov 5 15:58
9-4e0c-8449- 2020 :22 2020
f63def1ab726
leaf01toborder0 f8d6a2c5-54d Complete 0 Thu Nov 5 15:34:54 Thu Nov 5 15:58
1 b-44a8-9a5d- 2020 :03 2020
9d31f4e4701d
View Scheduled Trace Settings for a Given Trace
You can view the configuration settings used by a given scheduled trace using the netq show trace settings command.
This example shows the settings for the scheduled trace named Lf01toBor01Daily.
cumulus@switch:~$ netq show trace settings name Lf01toBor01Daily
View Scheduled Trace Results for a Given Trace
You can view the results for a given scheduled trace using the netq show trace results command.
This example obtains the job ID for the trace named Lf01toBor01Daily, then shows the results.
cumulus@switch:~$ netq show trace summary name Lf01toBor01Daily json
cumulus@switch:~$ netq show trace results f501f9b0-cca3-4fa1-a60d-fb6f495b7a0e
Modify a Scheduled Trace
You can modify scheduled traces at any time as described below. An administrator can also manage scheduled traces through the NetQ management dashboard.
Be aware that changing the configuration of a trace can cause the results to be inconsistent with prior runs of the trace. If this is an unacceptable result, create a new scheduled trace. Optionally you can remove the original trace.
To modify a scheduled trace:
Open the Trace Request card.
Select the trace from the New trace request dropdown.
Edit the schedule, VLAN, or VRF and select Update.
From the confirmation dialog, click Yes to complete the changes or select the link to change the name of the previous version of this scheduled trace.
The trace can now be selected from the New Trace listing and run immediately by selecting Go or Run now. Alternately, you can wait for it to run the first time according to the schedule you specified.
Remove Scheduled Traces
If you have reached the maximum of 15 scheduled traces for your premises, you will need to remove traces to create additional ones.
Both a standard user and an administrative user can remove scheduled traces. No notification is generated on removal. Be sure to communicate with other users before removing a scheduled trace to avoid confusion and support issues.
Open the Trace Request card and expand the card to the largest size.
Select one or more traces.
Above the table, select Delete.
Find the name of the scheduled trace you want to remove:
netq show trace summary [name <text-trace-name>] [around <text-time-hr>] [json]
Remove the trace with the netq del trace <text-trace-name> command. The following example removes the scheduled trace named leaf01toborder01:
cumulus@switch:~$ netq del trace leaf01toborder01
Successfully deleted schedule trace leaf01toborder01
Repeat these steps to remove additional traces.
Troubleshoot NetQ
This page describes how to troubleshoot issues with NetQ itself. If you need additional assistance, contact the NVIDIA support team with the support file you created using the steps outlined on this page.
Browse Configuration and Log Files
The following configuration and log files contain information that can help with troubleshooting:
File
Description
/etc/netq/netq.yml
The NetQ configuration file. This file appears only if you installed either the netq-apps package or the NetQ Agent on the system.
/var/log/netqd.log
The NetQ daemon log file for the NetQ CLI. This log file appears only if you installed the netq-apps package on the system.
/var/log/netq-agent.log
The NetQ Agent log file. This log file appears only if you installed the NetQ Agent on the system.
Check NetQ System Installation Status
The netq show status verbose command shows the status of NetQ components after installation. Use this command to validate NetQ system readiness.
▼
netq show status verbose
cumulus@netq:~$ netq show status verbose
NetQ Live State: Active
Installation Status: FINISHED
Version: 4.12.0
Installer Version: 4.12.0
Installation Type: Standalone
Activation Key: EhVuZXRxLWasdW50LWdhdGV3YXkYsagDIixkWUNmVmhVV2dWelVUOVF3bXozSk8vb2lSNGFCaE1FR2FVU2dHK1k3RzJVPQ==
Master SSH Public Key: c3NoLXJzYSBBQUFBQjNOemFDMXljMkVBQUFBREFRQUJBQUFCfdIVVJHVmZvckNLMHRJL0FrQnd1N2FtUGxObW9ERHg2cHNHaU1EQkM0WHdud1lmSlNleUpmdTUvaDFKQ2NuRXpOVnVWRjUgcm9vdEBhbmlscmVzdG9yZQ==
Is Cloud: False
Kubernetes Cluster Nodes Status:
IP Address Hostname Role NodeStatus
------------- ------------- ------ ------------
10.188.46.243 10.188.46.243 Role Ready
Task Status
------------------------------------------------------------------ --------
Prepared for download and extraction FINISHED
Completed setting up python virtual environment FINISHED
Checked connectivity from master node FINISHED
Installed Kubernetes control plane services FINISHED
Installed Calico CNI FINISHED
Installed K8 Certificates FINISHED
Updated etc host file with master node IP address FINISHED
Stored master node hostname FINISHED
Generated and copied master node configuration FINISHED
Updated cluster information FINISHED
Plugged in release bundle FINISHED
Downloaded, installed, and started node service FINISHED
Downloaded, installed, and started port service FINISHED
Patched Kubernetes infrastructure FINISHED
Removed unsupported conditions from master node FINISHED
Installed NetQ Custom Resource Definitions FINISHED
Installed Master Operator FINISHED
Updated Master Custom Resources FINISHED
Updated NetQ cluster manager custom resource FINISHED
Installed Cassandra FINISHED
Created new database FINISHED
Updated Master Custom Resources FINISHED
Updated Kafka Custom Resources FINISHED
Read Config Key ConfigMap FINISHED
Backed up ConfigKey FINISHED
Read ConfigKey FINISHED
Created Keys FINISHED
Verified installer version FINISHED
...
Troubleshoot NetQ Installation and Upgrade Issues
Before you attempt a NetQ installation or upgrade, verify that your system meets the minimum VM requirements for your deployment type.
If an upgrade or installation process stalls or fails, run the netq bootstrap reset command to stop the process, followed by the netq install command to re-attempt the installation or upgrade.
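As a sketch only (the interface name, bundle path, and deployment arguments shown here are placeholders; use the install command documented for your deployment type):
cumulus@netq:~$ netq bootstrap reset
cumulus@netq:~$ netq install standalone full interface eth0 bundle /mnt/installables/NetQ-4.12.0.tgz   # placeholder arguments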
The following error messages and solutions address common installation and upgrade failures. Where an error applies only to a specific deployment type, the type is noted.
Error message: Cannot upgrade a non-bootstrapped NetQ server. Please reset the cluster and re-install.
Solution: Only a server that has been bootstrapped and has a valid /etc/app-release file can be upgraded. 1. Run the netq bootstrap reset command. 2. Run the netq install command according to your deployment type.
Error message: Unable to get response from admin app.
Solution: Re-run the netq upgrade bundle <text-bundle-url> command. If the retry fails with the same error, reset the server and run the install command: 1. Run the netq bootstrap reset command. 2. Run the netq install command according to your deployment type.
Error message: Unable to get response from kubernetes api server.
Solution: Re-run the netq upgrade bundle <text-bundle-url> command. If the retry fails with the same error, reset the server and run the install command: 1. Run the netq bootstrap reset command. 2. Run the netq install command according to your deployment type.
Error message: Cluster vip is an invalid parameter for standalone upgrade.
Solution: The cluster-vip parameter applies only to cluster deployments; omit it when upgrading a standalone server.
Error message: Please free up disk as {} is {}% utilised. Recommended to keep under 70%.
Solution: Delete previous software tarballs in the /mnt/installables/ directory to regain space. If you cannot decrease disk usage to under 70%, contact the NVIDIA support team.
Error message: Did not find the vmw_pvscsi driver enabled on this NetQ VM. Please re-install the NetQ VM on ESXi server.
Deployment type: VMware
Solution: The NetQ VM must have the vmw_pvscsi driver enabled.
Error message: Error: Bootstrapped IP does not belong to any local IPs
Solution: The IP address used for bootstrapping must be from the local network.
NVIDIA might provide hook scripts to patch issues encountered during a NetQ installation or upgrade. When you run the netq install or netq upgrade command, NetQ checks for specific hook script filenames in the /usr/bin directory. The expected filenames for NetQ 4.12.0 are:
After you copy a script to the expected path and make it executable, the script runs during the next installation or upgrade attempt.
Verify Connectivity between Agents and Appliances
The sudo opta-info.py command displays the status of and connectivity between agents and appliances. This command is typically used when debugging NetQ.
In the output below, the Opta Health Status column displays a healthy status, which indicates that the appliance is functioning properly. The Opta-Gateway Channel Status column displays the connectivity status between the appliance and cloud endpoint. The Agent ID column displays the switches connected to the appliance.
cumulus@netq-appliance:~$ sudo opta-info.py
[sudo] password for cumulus:
Service IP: 10.102.57.27
Opta Health Status Opta-Gateway Channel Status
-------------------- -----------------------------
Healthy READY
Agent ID Remote Address Status Messages Exchanged Time Since Last Communicated
---------- ---------------- -------- -------------------- ------------------------------
switch1 /20.1.1.10:46420 UP 906 2023-02-14 00:32:43.920000
netq-appliance /20.1.1.10:44717 UP 1234 2023-02-14 00:32:31.757000
cumulus@sm-telem-06:~$ sudo opta-info.py
Service IP: 10.97.49.106
Agent ID Remote Address Status Messages Exchanged Time Since Last Communicated
----------------------------------------- --------------------- -------- -------------------- ------------------------------
netq-lcm-executor-deploy-65c984fc7c-x97bl /10.244.207.135:52314 UP 1340 2023-02-13 19:31:37.311000
sm-telem-06 /10.188.47.228:2414 UP 1449 2023-02-14 06:42:12.215000
mlx-2010a1-14 /10.188.47.228:12888 UP 15 2023-02-14 06:42:27.003000
Generate a Support File on the NetQ System
The opta-support command generates information for troubleshooting issues with NetQ. It provides information about the NetQ Platform configuration and runtime statistics as well as output from the docker ps command.
cumulus@server:~$ sudo opta-support
Please send /var/support/opta_support_server_2021119_165552.txz to Nvidia support.
To export network validation check data in addition to OPTA health data to the support bundle, the NetQ CLI must be activated with AuthKeys. If the CLI access key is not activated, the command output displays a notification and data collection excludes netq show output:
cumulus@server:~$ sudo opta-support
Access key is not found. Please check the access key entered or generate a fresh access_key,secret_key pair and add it to the CLI configuration
Proceeding with opta-support generation without netq show outputs
Please send /var/support/opta_support_server_20211122_22259.txz to Nvidia support.
Generate a Support File on Switches and Hosts
The netq-support command generates information for troubleshooting NetQ issues on a host or switch. Similar to collecting a support bundle on the NetQ system, the NVIDIA support team might request this output to gather more information about switch and host status.
When you run the netq-support command on a switch running Cumulus Linux, a cl-support file is also created and bundled within the NetQ support archive.
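A minimal invocation sketch is shown below; running the command with sudo, and the archive location being reported in the command output, are assumptions based on the opta-support example above:
cumulus@switch:~$ sudo netq-support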
The following sections contain NetQ reference materials.
NetQ CLI Reference
This reference provides details about each of the NetQ CLI commands. For an overview of the CLI structure and usage, read the NetQ Command Line Overview.
The commands appear alphabetically by command name.
Because all commands begin with netq, the next required keyword determines the alphabetical order.
Punctuation and numbers appear before letters.
When options are available, use them in the order listed.
Integrate NetQ API with Your Applications
The NetQ API provides data about the performance and operation of your network and its devices. You can view the data with your internal tools or with third-party analytic tools. The API displays the health of individual switches, network protocols and services, trace and validation results, as well as networkwide inventory and events.
This guide provides an overview of the NetQ API framework, including the basics of using Swagger UI 2.0 or bash plus curl to view and test the APIs. Descriptions of each endpoint and model parameter are in individual API JSON files.
The API provides endpoints for categories that include:
Inventory and Devices: Address, Inventory, MAC Address tables, Node, Sensors
Events: Events
Each endpoint has its own API. You can make requests for all data and all devices or you can filter the request by a given hostname. Each API returns a predetermined set of data as defined in the API models.
The Swagger interface displays both public and internal APIs. Public APIs do not have internal in their name. Internal APIs are not supported for public use and are subject to change without notice.
Get Started
You can access the API gateway and execute requests from the Swagger UI or a terminal interface. If you are using a terminal window, proceed to the next section.
Open the Swagger interface by entering one of the following in your browser’s address bar:
Select auth from the Select a definition dropdown at the top right of the window. This opens the authorization API.
Log In
Although you can view the API endpoints without authorization, you can only execute them if you are authorized. You must first obtain an access key and then use that key to authorize your access to the API.
Click POST/login.
Click Try it out.
Enter the username and password you use to log in to the NetQ UI. Do not change the access-key value.
Click Execute.
Scroll down to view the Responses. In the Server response section, in the Response body of the 200 code response, copy the access token in the top line.
Click Authorize.
Paste the access key into the Value field, and click Authorize.
Click Close.
To log in and obtain authorization:
Open a terminal window.
Log in to obtain the access token. You will need the following information:
Hostname or IP address, and port (443 for cloud deployments, 32708 for on-premises deployments) of your API gateway
Your login credentials that were provided as part of the NetQ installation process. For this release, the default is username admin and password admin.
This example uses an IP address of 192.168.0.10, port of 443, and the default credentials:
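The following sketch assembles the request from the method, address, headers, and body described under API Requests below; adjust the address, port, and credentials for your deployment:
curl -X POST "https://192.168.0.10:443/netq/auth/v1/login" \
    -H "accept: application/json" \
    -H "Content-Type: application/json" \
    -d '{"username": "admin", "password": "admin", "access_key": "string"}'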
The output provides the access token as the first parameter.
{"access_token":"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9....","customer_id":0,"expires_at":1597200346504,"id":"admin","is_cloud":true,"premises":[{"name":"OPID0","namespace":"NAN","opid":0},{"name":"ea-demo-dc-1","namespace":"ea1","opid":30000},{"name":"ea-demo-dc-2","namespace":"ea1","opid":30001},{"name":"ea-demo-dc-3","namespace":"ea1","opid":30002},{"name":"ea-demo-dc-4","namespace":"ea1","opid":30003},{"name":"ea-demo-dc-5","namespace":"ea1","opid":30004},{"name":"ea-demo-dc-6","namespace":"ea1","opid":30005},{"name":"ea-demo-dc-7","namespace":"ea1","opid":80006},{"name":"Cumulus Data Center","namespace":"NAN","opid":1568962206}],"reset_password":false,"terms_of_use_accepted":true}
Copy the access token to a text file. You will need this token to make API data requests.
You are now able to create and execute API requests against the endpoints.
By default, authorization is valid for 24 hours, after which users must sign in again and reauthorize their account.
API Requests
You can use either the Swagger UI or a terminal window with bash and curl commands to create and execute API requests.
Select the endpoint from the definition dropdown at the top right of the application.
This example shows the BGP endpoint selected:
Select the endpoint object.
This example shows the results of selecting the GET bgp object:
A description is provided for each object and the various parameters that can be specified. In the Responses section, you can see the data that is returned when the request is successful.
Click Try it out.
Enter values for the required parameters.
Click Execute.
In a terminal window, use bash plus curl to execute requests. Each request contains an API method (GET, POST, etc.), the address and API endpoint object to query, a variety of headers, and sometimes a body. For example, in the log in step above:
API method = POST
Address and API object = “https://<netq.domain>:443/netq/auth/v1/login”
Headers = -H “accept: application/json” and -H “Content-Type: application/json”
Body = -d “{ "username": "admin", "password": "admin", "access_key": "string"}”
API Responses
A NetQ API response comprises a status code, any relevant error codes (if unsuccessful), and the collected data (if successful).
The following HTTP status codes might be presented in the API responses:
200 (Success): Request was successfully processed. Review the response.
400 (Bad Request): Invalid input was detected in the request. Check the syntax of your request and make sure it matches the schema.
401 (Unauthorized): Authentication has failed or credentials were not provided. Provide or verify your credentials, or request access from your administrator.
403 (Forbidden): Request was valid, but the user might not have the needed permissions. Verify your credentials or request an account from your administrator.
404 (Not Found): Requested resource could not be found. Try the request again after a period of time, or verify the status of the resource.
409 (Conflict): Request cannot be processed due to a conflict in the current state of the resource. Verify the status of the resource and remove the conflict.
500 (Internal Server Error): An unexpected condition has occurred. Perform general troubleshooting and try the request again.
503 (Service Unavailable): The requested service is currently unavailable. Verify the status of the NetQ Platform or appliance, and the associated service.
Example Requests and Responses
Some example requests and their responses are shown here, but you can also run your own requests. To run a request, you need your authorization token. The curl responses in these examples have been piped through a Python tool to make them more readable; you can do the same.
Validate Networkwide Status of the BGP Service
Make your request to the bgp endpoint to validate the operation of the BGP service on all nodes running the service.
Open the check endpoint.
Open the check object.
Click Try it out.
Enter values for time, duration, by, and proto parameters.
In this example, time=1597256560, duration=24, by=scheduled, and proto=bgp.
Click Execute, then scroll down to see the results under Server response.
Run the following curl command, entering values for the various parameters. In this example, time=1597256560, duration=24 (hours), by=scheduled, and proto=bgp.
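The following sketch shows one way to form this request. The telemetry endpoint path and the Authorization header format are assumptions, so verify them against the check endpoint definition in the Swagger UI; replace <auth-token> with the access token you obtained when logging in (port 32708 applies to on-premises deployments):
curl -X GET "https://<netq.domain>:32708/netq/telemetry/v1/object/check?time=1597256560&duration=24&by=scheduled&proto=bgp" \
    -H "Content-Type: application/json" \
    -H "Authorization: <auth-token>" | python -m json.tool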
Make your request to the interfaces endpoint to view the status of all interfaces. By specifying the eq-timestamp option and entering a date and time in epoch format, you request the data for that time (rather than for the last hour, the default), as follows:
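The following sketch illustrates the pattern; the endpoint path and the exact spelling of the eq-timestamp parameter are assumptions, so confirm them in the Swagger definition for the interfaces endpoint:
curl -X GET "https://<netq.domain>:32708/netq/telemetry/v1/object/interface?eq_timestamp=1597256560" \
    -H "Content-Type: application/json" \
    -H "Authorization: <auth-token>" | python -m json.tool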
Several NetQ features function exclusively on NVIDIA Spectrum switches. The following table summarizes supported features:
Feature                        Spectrum-1   Spectrum-2   Spectrum-3   Spectrum-4
-----------------------------  -----------  -----------  -----------  -----------
Adaptive routing monitoring    No           No           No           Yes
Bit error rate monitoring      No           No           Yes          Yes
ECMP monitoring                Yes          Yes          Yes          Yes
Flow analysis                  No           Yes          Yes          Yes
Process monitoring             Yes          Yes          Yes          Yes
PTP monitoring                 Yes          Yes          Yes          Yes
Queue length histograms        Yes          Yes          Yes          Yes
RoCE monitoring                Yes          Yes          Yes          Yes
What Just Happened             Partial*     Yes          Yes          Yes

*Spectrum-1 provides partial support for What Just Happened; latency and congestion monitoring are not supported.
Glossary
Common Cumulus Linux and NetQ Terminology
The following list covers some basic terms used throughout the NetQ user documentation.

Agent: NetQ software that resides on a host server and provides metrics about the host to the NetQ Telemetry Server for network health analysis.
Bridge: Device that connects two communication networks or network segments. Occurs at OSI Model Layer 2, Data Link Layer.
Clos: Multistage circuit switching network used by the telecommunications industry, first formalized by Charles Clos in 1952.
Device: UI term referring to a switch, host, or chassis, or a combination of these. Typically used when describing hardware and components versus a software or network topology. See also Node.
Event: Change or occurrence in a network or component that can trigger a notification. Events are categorized by severity: error or info.
Fabric: Network topology where a set of network nodes interconnects through one or more network switches.
Fresh: Node that has been communicative for the last 120 seconds.
High Availability: Software used to provide a high percentage of uptime (running and available) for network devices.
Host: A device connected to a TCP/IP network. It can run one or more virtual machines.
Hypervisor: Software that creates and runs virtual machines. Also called a virtual machine monitor.
IP Address: An Internet Protocol address comprises a series of numbers assigned to a network device to uniquely identify it on a given network. Version 4 addresses are 32 bits and written in dotted decimal notation with 8-bit binary numbers separated by decimal points. Example: 10.10.10.255. Version 6 addresses are 128 bits and written in 16-bit hexadecimal numbers separated by colons. Example: 2018:3468:1B5F::6482:D673.
Leaf: An access layer switch in a Spine-Leaf or Clos topology. An Exit-Leaf is a switch that connects to services outside of the data center, such as firewalls, load balancers, and internet routers. See also Spine, Clos, Top of Rack, and Access Switch.
Linux: Set of free and open-source software operating systems built around the Linux kernel. Cumulus Linux is one of the available distribution packages.
Node: UI term referring to a switch, host, or chassis in a topology.
Notification: Item that informs a user of an event. Notifications are received through third-party applications, such as email or Slack.
Peer link: Link, or bonded links, used to connect two switches in an MLAG pair.
Rotten: Node that has been silent for 120 seconds or more.
Router: Device that forwards data packets (directs traffic) from nodes on one communication network to nodes on another network. Occurs at OSI Model Layer 3, Network Layer.
Spine: Used to describe the role of a switch in a Spine-Leaf or Clos topology. See also Aggregation switch, End of Row switch, and Distribution switch.
Switch: High-speed device that receives data packets from one device or node and redirects them to other devices or nodes on a network.
Telemetry server: NetQ server that receives metrics and other data from NetQ agents on leaf and spine switches and hosts.
Top of Rack: Switch that connects to the network (versus internally); also known as a ToR switch.
Virtual Machine: Emulation of a computer system that provides all the functions of a particular architecture.
Web-scale: A network architecture designed to deliver the capabilities of large cloud service providers within an enterprise IT environment.
Whitebox: Generic, off-the-shelf switch or router hardware used in Software Defined Networks (SDN).
Common Cumulus Linux and NetQ Acronyms
The following table covers some common acronyms used throughout the NetQ user documentation.