NetEngine 8000 X V800R012C00SPC300 Configuration Guide - System Monitor 04
V800R012C00SPC300
Issue 04
Date 2020-04-30
Huawei and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.
All other trademarks and trade names mentioned in this document are the property of their respective
holders.
Notice
The purchased products, services and features are stipulated by the contract made between Huawei and
the customer. All or part of the products, services and features described in this document may not be
within the purchase scope or the usage scope. Unless otherwise specified in the contract, all statements,
information, and recommendations in this document are provided "AS IS" without warranties, guarantees
or representations of any kind, either express or implied.
The information in this document is subject to change without notice. Every effort has been made in the
preparation of this document to ensure accuracy of the contents, but all statements, information, and
recommendations in this document do not constitute a warranty of any kind, express or implied.
Website: https://www.huawei.com
Email: [email protected]
Contents
3 NetStream Configuration
3.1 Overview of NetStream
3.2 NetStream Licensing Requirements and Configuration Precautions
3.3 Collecting Statistics About IPv4 Original Flows
3.3.1 Specifying a NetStream Service Processing Mode
3.3.2 Outputting Original Flows
3.3.3 (Optional) Configuring NetStream Monitoring Services
3.3.4 (Optional) Adjusting the AS Field Mode and Interface Index Type
3.3.5 (Optional) Enabling Statistics Collection of TCP Flags
3.3.6 (Optional) Configuring NetStream Interface Option Packets and Setting Option Template Refreshing Parameters
3.3.7 (Optional) Enabling the Storage Function for Aged Original Flows
3.3.8 Sampling IPv4 Flows
3.3.9 (Optional) Disabling MPLS Packet Sampling on an Interface
3.3.10 Verifying the Configuration of Statistics Collection of IPv4 Original Flows
5 Ping/Tracert
5.1 Ping/Tracert Licensing Requirements and Configuration Precautions
5.2 Using Ping/Tracert on an IP Network
5.2.1 Using Ping to Check Link Connectivity on an IPv4 or IPv6 Network
5.2.2 Using Ping to Monitor the Reachability of Layer 3 Trunk Member Interfaces
5.2.3 Using Tracert to Monitor the Forwarding Path on an IPv4 or IPv6 Network
5.3 Using Ping/Tracert on an MPLS Network
12 eMDI Configuration
12.1 eMDI Overview
12.2 eMDI Licensing Requirements and Configuration Precautions
12.3 Configuring Basic eMDI Detection Functions
12.3.1 Configuring an eMDI Channel Group
12.3.2 Configuring an eMDI Board Group
12.3.3 Binding a Channel Group to a Board Group
12.3.4 (Optional) Configuring eMDI Jitter Detection
12.3.5 (Optional) Configuring eMDI Detection on Ps
12.4 Configuring eMDI Attributes
12.4.1 Configuring an eMDI Detection Period
12.4.2 Configuring eMDI Alarm Thresholds and the Number of Alarm Suppression Times
12.4.3 Configuring an eMDI Detection Rate
12.5 Maintaining eMDI
12.6 Configuration Examples for eMDI
12.6.1 Example for Configuring eMDI Detection for a Common Layer 3 Multicast Service
12.6.2 Example for Configuring eMDI Detection on an Intra-AS NG MVPN with an mLDP P2MP LSP
Purpose
This document provides the basic concepts, configuration procedures, and
configuration examples in different application scenarios of the system monitor
feature supported by the NetEngine 8000.
Related Version
The following table lists the product version related to this document.
Intended Audience
This document is intended for:
Security Declaration
● Encryption algorithm declaration
The encryption algorithms DES/3DES/RSA (RSA-2048 or lower)/MD5 (in
digital signature scenarios and password encryption)/SHA1 (in digital
signature scenarios) have low security and may bring security risks. If the
protocols allow, using more secure encryption algorithms, such as AES/RSA
(RSA-2048 or higher)/SHA2/HMAC-SHA2, is recommended.
● Password configuration declaration
– Do not set both the start and end characters of a password to "%^%#".
This causes the password to be displayed directly in the configuration file.
– To further improve device security, periodically change the password.
● Personal data declaration
Your purchased products, services, or features may use some of users' personal
data during service operation or fault locating. You must define user privacy
policies in compliance with local laws and take proper measures to fully
protect personal data.
● Feature declaration
– The NetStream feature may be used to analyze the communication
information of terminal customers for network traffic statistics and
management purposes. Before enabling the NetStream feature, ensure
that it is performed within the boundaries permitted by applicable laws
and regulations. Effective measures must be taken to ensure that
information is securely protected.
– The mirroring feature may be used to analyze the communication
information of terminal customers for a maintenance purpose. Before
enabling the mirroring function, ensure that it is performed within the
boundaries permitted by applicable laws and regulations. Effective
measures must be taken to ensure that information is securely protected.
– The packet header obtaining feature may be used to collect or store
some communication information about specific customers for
transmission fault and error detection purposes. Huawei cannot offer
services to collect or store this information unilaterally. Before enabling
the function, ensure that it is performed within the boundaries permitted
by applicable laws and regulations. Effective measures must be taken to
ensure that information is securely protected.
● Reliability design declaration
Network planning and site design must comply with reliability design
principles and provide device- and solution-level protection. Device-level
protection includes dual-network planning and inter-board dual-link planning
to avoid single points of failure and single links of failure. Solution-level protection
refers to a fast convergence mechanism, such as FRR and VRRP. If solution-
level protection is used, ensure that the primary and backup paths do not
share links or transmission devices. Otherwise, solution-level protection may
fail to take effect.
Special Declaration
● This document serves only as a guide. The content is written based on device
information gathered under lab conditions. The content provided by this
document is intended to be taken as general guidance, and does not cover all
scenarios. The content provided by this document may be different from the
information on user device interfaces due to factors such as version upgrades
and differences in device models, board restrictions, and configuration files.
The actual user device information takes precedence over the content
provided by this document. The preceding differences are beyond the scope of
this document.
● The maximum values provided in this document are obtained in specific lab
environments (for example, only a certain type of board or protocol is
configured). The actual values may vary depending on the live network environment.
Symbol Conventions
The symbols that may be found in this document are defined as follows.
Command Conventions
The command conventions that may be found in this document are defined as
follows.
Change History
Changes between document issues are cumulative. The latest document issue
contains all the changes made in earlier issues.
● Changes in Issue 04 (2020-04-30)
This issue is the fourth official release. The software version of this issue is
V800R012C00SPC300.
● Changes in Issue 03 (2020-03-30)
This issue is the third official release. The software version of this issue is
V800R012C00SPC300.
● Changes in Issue 02 (2020-03-15)
This issue is the second official release. The software version of this issue is
V800R012C00SPC100.
● Changes in Issue 01 (2019-10-30)
This issue is the first official release. The software version of this issue is
V800R012C00.
2 IP FPM Configuration
Background
As IP services are more widely used, fault diagnosis and end-to-end service quality
analysis are becoming an increasingly pressing concern for carriers. However,
the absence of effective measures prolongs fault diagnosis and increases the
workload. IP FPM is developed to help carriers collect statistics and monitor end-
to-end network performance.
Basic Concepts
The IP Flow Performance Measurement (FPM) model describes how service flows
are measured to obtain the packet loss rate and delay. Figure 2-1 shows the IP
FPM statistical model. The IP FPM model is composed of three objects: target
flows, a transit network, and the statistical system. The statistical system is further
classified into the Target Logical Port (TLP), Data Collecting Point (DCP), and
Measurement Control Point (MCP).
● Target flow
Target flows must be pre-defined.
One or more fields in IP headers can be specified to identify target flows. The
field can be the source IP address or prefix, destination IP address or prefix,
protocol type, source port number, destination port number, or type of service
(ToS). The more fields specified, the more accurately flows can be identified.
Specifying as many fields as possible is recommended to maximize
measurement accuracy.
● Transit network
The transit network only bears target flows. The target flows are not
generated or terminated on the transit network. The transit network can be a
Layer 2 (L2), Layer 3 (L3), or L2+L3 hybrid network. Each node on the transit
network must be reachable at the network layer.
● TLP
TLPs are interfaces on the edge nodes of the transit network. TLPs perform
the following actions:
– Compile statistics on the packet loss rate and delay.
– Generate statistics, such as the number of packets sent and received,
traffic bandwidth, and timestamp.
An In-Point-TLP collects statistics about service flows it receives. An Out-Point-
TLP collects statistics about service flows it sends.
● DCP
DCPs are edge nodes on the transit network. DCPs perform the following
actions:
– Manage and control TLPs.
– Collect statistics generated by TLPs.
– Report the statistics to an MCP.
● MCP
MCPs can be any nodes on the transit network. MCPs perform the following
actions:
– Collect statistics reported by DCPs.
– Summarize and calculate the statistics.
– Report measurement results to user terminals or the network
management system (NMS).
IP FPM also defines measurement flags. Measurement flags, also called
identification flags, identify whether a specific packet is used to measure packet
loss or delay. A specific bit in the IPv4 packet header can be specified as a
measurement flag for packet loss or delay measurement.
Currently, IP FPM measurement flags cannot use the same bits as QoS flags.
Implementation
IP Flow Performance Measurement (FPM) measures multipoint-to-multipoint
(MP2MP) service flows to obtain the packet loss rate and delay. In statistical terms,
the statistical objects are the service flows, and statistical calculations determine
the packet loss rate and delay of the service flows traveling across the transit
network. Service flow statistical analysis is performed on the ingress and egress of
the transit network. On the IP/MPLS network shown in Figure 2-2, the number of
packets entering the network in the ingress direction on R(n) is PI(n), and the
number of packets leaving the network in the egress direction on HUAWEI (n) is
PE(n).
The difference between the number of packets entering the network and the
number of packets leaving the network within a specified period is the packet loss.
● The number of packets entering the network is the sum of all packets moving
in the ingress direction: PI = PI(1) + PI(2) + PI(3)
● The number of packets leaving the network is the sum of all packets moving
in the egress direction: PE = PE(1) + PE(2) + PE(3)
The difference between the time a service flow enters the network and the time
the service flow leaves the network within a specified period is the delay.
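For example, assuming that during one statistical period PI(1) = 1000, PI(2) = 800, and
PI(3) = 700 packets enter the network and PE(1) = 990, PE(2) = 795, and PE(3) = 695
packets leave it (illustrative values only), the packet loss is PI - PE = 2500 - 2480 = 20
packets, and the packet loss rate is 20/2500 = 0.8%.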
Benefits
IP FPM brings the following benefits to carriers:
● Allows carriers to use the network management system (NMS) to monitor
the network running status to determine whether the network quality
complies with the service level agreement (SLA).
● Allows carriers to promptly adjust services based on measurement results to
ensure proper transmission of voice and data services, improving user
experience.
The following examples describe how to configure packet loss measurement and two-way
delay measurement in end-to-end proactive performance statistics and how to configure
packet loss measurement and one-way delay measurement in hop-by-hop on-demand
performance statistics.
IP FPM has the following limitations:
● IP FPM does not support statistics collection on multicast and broadcast streams.
● IP FPM supports statistics collection on only IPv4 packets and not IPv6 packets.
● For the packets fragmented within a measurement domain, IP FPM supports statistics
collection only on the first fragments. This may lead to inaccurate byte loss statistics or
an incorrect byte loss rate.
● If the interface where the TLP resides is an inter-board trunk interface, a measurement
flag is added to delay packets in polling mode in the upstream direction. That is, delay
measurement is enabled on all the boards where the trunk interface resides one by
one to ensure that a measurement flag is added to one packet each time. If there are
N boards where the trunk interface resides and the measurement period is Interval, a
delay measurement result is generated after a period of N x Interval at least.
● If switching from a 1588 clock to an NTP clock occurs during IP FPM, the collected
statistics are incorrect.
● If a configuration change (such as a TLP or flow configuration change) occurs during
IP FPM, the collected statistics are incorrect.
● If a master/slave switchover occurs on a device during IP FPM, the data generated
during the switchover does not take effect.
● During 1588 clock synchronization, IP FPM delay statistics are incorrect.
● A statistical instance supports at most two InPoint nodes.
● For multipoint delay statistics collection, the packets that do not support ingress are
copied. The statistics cannot be viewed, and no statistics result is available.
● During traffic switching in master/slave RSG scenarios, there is a low probability that
the delay statistics collected within the first period are incorrect.
● Multipoint delay statistics collection is supported only after all the devices deployed
with IP FPM (including MCP and DCP) are upgraded. Otherwise, only single-point
delay statistics collection is supported.
In P2MP (MP being two points) and MP2P (MP being two points) delay measurement
scenarios, all devices in the delay measurement area must support P2MP delay
measurement. Otherwise, delay measurement fails.
Configuration Precautions
Restrictions: For the same VPN or in a native IP scenario, the flow characteristics
(including the source IP address, destination IP address, source port, destination port,
protocol number, and DSCP) of different measurement instances under the same DCP
cannot conflict. For example, the characteristics of one measurement instance cannot
include or overlap the characteristics of another. If the characteristics conflict, the
measurement result is inaccurate.
Guidelines: Plan the flow granularity for measurement instances, and ensure that the
characteristics of one measurement instance do not include or overlap the
characteristics of another.
Impact: Packet loss statistics are inaccurate, and no delay measurement result is
available.
Usage Scenario
The NetEngine 8000 supports proactive and on-demand IP FPM end-to-end
performance statistics. These functions apply to different scenarios:
● Proactive performance statistics apply when you want to monitor network
performance in real-time. After you configure this function, the system
continuously implements performance statistics on packet loss or delay.
● On-demand performance statistics apply when you want to diagnose network
faults or monitor network performance over a specified period. After you
configure this function, the system periodically implements performance
statistics on packet loss or delay.
These measurements serve as a reliable reference for network operation and
maintenance and fault diagnosis, improving network reliability and user
experience.
Pre-configuration Tasks
Before configuring IP FPM end-to-end performance statistics collection, complete
the following tasks:
● Configure a dynamic routing protocol or static routes so that devices are
reachable at the network layer.
● Configure the network time protocol (NTP) or 1588v2 so that all device clocks
can be synchronized.
Context
On the network shown in Figure 2-3, IP Flow Performance Measurement (FPM)
end-to-end performance statistics collection is implemented. The target flow
enters the transport network through Device A, travels across Device B, and leaves
the transport network through Device C. To monitor transport network
performance or diagnose faults, configure IP FPM end-to-end performance
statistics collection on both Device A and Device C.
Procedure
Step 1 Run system-view
The system view is displayed.
----End
Follow-up Procedure
When DCP configurations are being changed, the MCP may receive incorrect
statistics from the DCP. To prevent this, run the measure disable command to
disable IP FPM performance statistics collection of a specified instance on the
MCP. After the DCP configuration change is complete, run the undo measure
disable or measure enable command to enable IP FPM performance statistics
collection for the specified instance on the MCP. This ensures accurate
measurement.
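For example, the following sketch uses instance 1 (the instance ID used in the examples
in this chapter); the device name MCP is illustrative only:
<MCP> system-view
[~MCP] nqa ipfpm mcp
[~MCP-nqa-ipfpm-mcp] instance 1
[~MCP-nqa-ipfpm-mcp-instance-1] measure disable
[*MCP-nqa-ipfpm-mcp-instance-1] commit
# After the DCP configuration change is complete, re-enable measurement.
[~MCP-nqa-ipfpm-mcp-instance-1] undo measure disable
[*MCP-nqa-ipfpm-mcp-instance-1] commit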
Context
On the network shown in Figure 2-4, IP Flow Performance Measurement (FPM)
end-to-end performance statistics collection is implemented. The target flow
enters the transport network through Device A, travels across Device B, and leaves
the transport network through Device C. To monitor transport network
performance or diagnose faults, configure IP FPM end-to-end performance
statistics collection on both Device A and Device C.
As shown in Figure 2-4, Device A and Device C function as DCPs to manage and
control TLP100 and TLP310, respectively. Device A and Device C collect statistics
generated by TLP100 and TLP310 and report the statistics to the MCP.
Perform the following steps on Device A and Device C:
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run nqa ipfpm dcp
DCP is enabled globally, and the IPFPM-DCP view is displayed.
Step 3 Run dcp id dcp-id
A DCP ID is configured.
Using the Router ID of a device that is configured as a DCP as its DCP ID is
recommended.
The DCP ID configured on a DCP must be the same as that specified in the dcp
dcp-id command run in the IP FPM instance view of the MCP associated with this
DCP. Otherwise, the MCP cannot process the statistics reported by the DCP.
Step 4 (Optional) Run authentication-mode hmac-sha256 key-id key-id [ cipher ]
[ password | password ]
The authentication mode and password are configured on the DCP.
The authentication mode and password configured on a DCP must be the same as
those configured in the authentication-mode hmac-sha256 key-id key-id
[ cipher ] [ password | password ] command run on the MCP associated with this
DCP. Otherwise, the MCP cannot process the statistics reported by the DCP.
Step 5 (Optional) Run color-flag loss-measure { tos-bit tos-bit | flags-bit0 } delay-
measure { tos-bit tos-bit | flags-bit0 }
IP FPM measurement flags are configured.
The loss and delay measurement flags cannot use the same bit, and the bits used
for loss and delay measurement must not have been used in other measurement
tasks.
Step 6 Run mcp mcp-id [ port port-number ] [ vpn-instance vpn-instance-name | net-
manager-vpn ]
An MCP ID is specified for the DCP, and the UDP port number is configured for the
DCP to communicate with the MCP.
The UDP port number configured on the DCP must be the same as that
configured in the protocol udp port port-number command run on the MCP
associated with this DCP. Otherwise, the DCP cannot report the statistics to the
MCP.
Ensure that the VPN instance has been created on the DCP before you configure vpn-instance
vpn-instance-name or net-manager-vpn to allow the DCP to report the statistics
to the MCP through the specified VPN or management VPN.
Step 7 (Optional) Run period source ntp
The DCP is configured to select NTP as the clock source when calculating an IP
FPM statistical period ID.
In P2MP (MP being two points) delay measurement scenarios, if the ingress of the
service traffic uses NTP as the clock source, but the egresses use a different clock
source, for example, NTP or 1588v2, you must configure the egresses to select
NTP as the clock source when calculating an IP FPM statistical period ID to ensure
consistent clock sources on the ingress and egresses.
Step 8 Run instance instance-id
An IP FPM instance is created, and the instance view is displayed.
instance-id must be unique on an MCP and all its associated DCPs. The MCP and
all its associated DCPs must have the same IP FPM instance configured.
Otherwise, statistics collection does not take effect.
Step 9 (Optional) Run description text
The description is configured for the IP FPM instance.
The description of an IP FPM instance can contain the functions of the instance,
facilitating applications.
Step 10 (Optional) Run interval interval
The statistical period is configured for the IP FPM instance.
Step 11 Perform either of the following operations to configure the target flow
characteristics in the IP FPM instance.
Configure the forward or backward target flow characteristics.
● When protocol is specified as TCP or UDP, run:
flow { forward | backward } { protocol { tcp | udp } { source-port src-port-
number1 [ to src-port-number2 ] | destination-port dest-port-number1 [ to
dest-port-number2 ] } * | dscp dscp-value | source src-ip-address [ src-mask-
length ] | destination dest-ip-address [ dest-mask-length ] } *
● When protocol is specified as any protocol other than TCP or UDP, run:
flow { forward | backward } { protocol protocol-number | dscp dscp-value |
source src-ip-address [ src-mask-length ] | destination dest-ip-address [ dest-
mask-length ] }
Configure the characteristics for the bidirectional target flow.
● When protocol is specified as TCP or UDP, run:
flow bidirectional { protocol { tcp | udp } { source-port src-port-number1
[ to src-port-number2 ] | destination-port dest-port-number1 [ to dest-port-
number2 ] } * | dscp dscp-value | source src-ip-address [ src-mask-length ] |
destination dest-ip-address [ dest-mask-length ] } *
● When protocol is specified as any protocol other than TCP or UDP, run:
flow bidirectional { protocol protocol-number | dscp dscp-value | source src-
ip-address [ src-mask-length ] | destination dest-ip-address [ dest-mask-length ] }
● If the target flow in an IP FPM instance is unidirectional, only forward can be specified.
● If the target flow in an IP FPM instance is bidirectional, two situations are available:
– If the bidirectional target flow is asymmetrical, you must configure forward and
backward in two command instances to configure the forward and backward flow
characteristics.
– If the bidirectional target flow is symmetrical, you can specify bidirectional to
configure the bidirectional target flow characteristics. By default, the characteristics
specified are used for the forward flow, and the reverse of those are used for the
backward flow. Specifically, the source and destination IP addresses and port numbers
specified for the forward flow are used respectively as the destination and source IP
addresses and port numbers for the backward flow. If the target flow is symmetrical
bidirectional, set src-ip-address to specify a source IP address and dest-ip-address to
specify a destination IP address for the target flow.
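For example (a sketch using the addresses from the configuration example later in this
chapter; the two options are alternatives, not commands to be run together):
# Option 1: symmetrical bidirectional target flow, specified with one command.
[*UPE-nqa-ipfpm-dcp-instance-1] flow bidirectional source 10.1.1.1 destination 10.2.1.1
# Option 2: the same flow specified separately as forward and backward flows.
[*UPE-nqa-ipfpm-dcp-instance-1] flow forward source 10.1.1.1 destination 10.2.1.1
[*UPE-nqa-ipfpm-dcp-instance-1] flow backward source 10.2.1.1 destination 10.1.1.1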
Step 12 Run tlp tlp-id { in-point | out-point } { ingress | egress } [ vpn-label vpn-label
[ lsp-label lsp-label ] ] [ backward-vpn-label backward-vpn-label [ backward-
lsp-label backward-lsp-label ] ]
A TLP is configured and its role is specified.
A TLP compiles statistics and outputs data in the IP FPM model. A TLP can be
specified as an in-point or an out-point. The system sets the measurement flags of
target flows on an in-point, and clears the measurement flags of target flows on
an out-point. TLP100 and TLP310 in Figure 2-4 are the in-point and out-point,
respectively.
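For example (a sketch based on Figure 2-4, where TLP100 is the in-point on the ingress
and TLP310 is the out-point on the egress; the label options are omitted and the egress
command is an assumption consistent with the roles described above):
[*DeviceA-nqa-ipfpm-dcp-instance-1] tlp 100 in-point ingress
[*DeviceC-nqa-ipfpm-dcp-instance-1] tlp 310 out-point egress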
Step 13 Run commit
The configuration is committed.
Step 14 Run quit
Return to the IPFPM-DCP view.
Step 15 Run quit
Return to the system view.
Step 16 Bind the TLP to an interface.
1. Run the interface interface-type interface-name command to enter the
interface view.
2. Run either of the following commands:
– If the interface is a Layer 3 interface, run the ipfpm tlp tlp-id command.
– If the interface is a Layer 2 interface, run the ipfpm tlp tlp-id { ce-
default-vlan | vlan-id vlan-id } command.
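For example (taken from the UPE configuration file in the configuration example later in
this chapter), a Layer 3 interface is bound to TLP100 as follows:
[~UPE] interface GigabitEthernet 1/0/0
[*UPE-GigabitEthernet1/0/0] ipfpm tlp 100
[*UPE-GigabitEthernet1/0/0] commit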
Step 17 Configure IP FPM end-to-end performance statistics collection.
1. Run the system-view command to enter the system view.
2. Run the nqa ipfpm dcp command to enter the IPFPM-DCP view.
3. Run the instance instance-id command to enter the IP FPM instance view.
4. Run either of the following commands to enable packet loss measurement:
– To enable on-demand packet loss measurement, run the loss-measure
enable [ time-range time-range ] command.
– To enable proactive packet loss measurement, run the loss-measure
enable continual command.
----End
Prerequisites
The IP FPM end-to-end performance statistics collection function has been
configured.
Procedure
● Run the display ipfpm mcp command to check MCP configurations.
● Run the display ipfpm dcp command to check DCP configurations.
● Run the display ipfpm statistic-type { loss | oneway-delay | twoway-
delay } instance instance-id command to check the performance statistics for
a specified IP FPM instance.
----End
Usage Scenario
IP FPM hop-by-hop performance statistics collection helps locate faults hop by
hop from the source node that initiates traffic.
● When a target flow is unidirectional, you can directly implement hop-by-hop
performance statistics collection for the flow.
● When a target flow is bidirectional, two situations are available:
– If the target flow is symmetrical, you can implement hop-by-hop
performance statistics collection for the forward or backward flow, and
the measurement is the same either way.
– If the target flow is asymmetrical, you must implement hop-by-hop
performance statistics collection for both the forward and backward flows
to obtain their respective measurements.
Pre-configuration Tasks
Before configuring IP FPM hop-by-hop performance statistics collection, complete
the following tasks:
Context
On the network shown in Figure 2-5, IP Flow Performance Measurement (FPM)
hop-by-hop performance statistics collection is implemented. The target flow
enters the transport network through Device A, travels across Device B, and leaves
the transport network through Device C. To locate faults when network
performance deteriorates, configure IP FPM hop-by-hop performance statistics
collection on Device A, Device B, and Device C to measure packet loss and delay
hop by hop.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run nqa ipfpm mcp
MCP is enabled globally, and the IPFPM-MCP view is displayed.
Step 3 Run mcp id mcp-id
An MCP ID is configured.
Using the Router ID of a device that is configured as an MCP as its MCP ID is
recommended.
The MCP ID must be an IP address reachable to DCPs. The MCP ID configured on
an MCP must be the same as that specified in the mcp mcp-id [ port port-
number ] command run in the IP FPM instance view of all DCPs associated with
this MCP. If an MCP ID is changed on an MCP, it must be changed for all DCPs
associated with this MCP in an IP FPM instance. Otherwise, the MCP cannot
process the statistics reported by the DCPs.
Step 4 (Optional) Run protocol udp port port-number
A UDP port number is specified for the MCP to communicate with DCPs.
The UDP port number configured on an MCP must be the same as that specified
in the mcp mcp-id [ port port-number ] command run in the IP FPM instance
view of all DCPs associated with this MCP. If a UDP port number is changed on an
MCP, it must be changed for all DCPs associated with this MCP in an IP FPM
instance. Otherwise, the MCP cannot process the statistics reported by the DCPs.
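For example (a sketch mirroring the MCP values used in the configuration example later
in this chapter), Steps 2 through 4 on an MCP whose ID is 1.1.1.1 would be:
[~UPE] nqa ipfpm mcp
[*UPE-nqa-ipfpm-mcp] mcp id 1.1.1.1
[*UPE-nqa-ipfpm-mcp] protocol udp port 2048
[*UPE-nqa-ipfpm-mcp] commit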
Step 5 (Optional) Run authentication-mode hmac-sha256 key-id key-id [ cipher ]
[ password | password ]
The authentication mode and password are configured on the MCP.
The authentication mode and password configured on an MCP must be the same
as those configured in the authentication-mode hmac-sha256 key-id key-id
[ cipher ] [ password | password ] command run on all DCPs associated with this
MCP. Otherwise, the MCP cannot process the statistics reported by the DCPs.
Step 6 Run instance instance-id
An IP FPM instance is created, and the instance view is displayed.
instance-id must be unique on an MCP and all its associated DCPs. The MCP and
all its associated DCPs must have the same IP FPM instance configured.
Otherwise, statistics collection does not take effect.
Step 7 (Optional) Run description text
The description is configured for the IP FPM instance.
The description of an IP FPM instance can contain the functions of the instance,
facilitating applications.
Step 8 Run dcp dcp-id
A DCP is specified in the IP FPM instance.
The DCP ID configured in an IP FPM instance must be the same as that specified
in the dcp id dcp-id command run on a DCP. Otherwise, the MCP associated with
this DCP cannot process the statistics reported by the DCP.
Step 9 Run the following commands to configure Atomic Closed Hops (ACHs).
1. Run the ach ach-id command to create an ACH and enter the ACH view.
2. Run the flow { forward | backward | bidirectional } command to specify the
direction in which hop-by-hop delay measurement is implemented for the
target flow.
3. Run the in-group dcp dcp-id tlp tlp-id command to configure the TLP in-
group.
4. Run the out-group dcp dcp-id tlp tlp-id command to configure the TLP out-
group.
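As an illustrative sketch only (the ACH ID, the DCP ID of Device B, and the view prompts
are assumptions; TLP100 and TLP200 are the TLPs managed by Device A and Device B in
Figure 2-6), an ACH covering the hop from Device A to Device B for the forward flow
could look like this:
[*MCP-nqa-ipfpm-mcp-instance-1] ach 1
[*MCP-nqa-ipfpm-mcp-instance-1-ach-1] flow forward
[*MCP-nqa-ipfpm-mcp-instance-1-ach-1] in-group dcp 1.1.1.1 tlp 100
[*MCP-nqa-ipfpm-mcp-instance-1-ach-1] out-group dcp 2.2.2.2 tlp 200
[*MCP-nqa-ipfpm-mcp-instance-1-ach-1] commit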
----End
Follow-up Procedure
When DCP configurations are being changed, the MCP may receive incorrect
statistics from the DCP. To prevent this, run the measure disable command to
disable IP FPM performance statistics collection of a specified instance on the
MCP. After the DCP configuration change is complete, run the undo measure
disable or measure enable command to enable IP FPM performance statistics
collection for the specified instance on the MCP. This ensures accurate
measurement.
Context
On the network shown in Figure 2-6, IP Flow Performance Measurement (FPM)
hop-by-hop performance statistics collection is implemented. The target flow
enters the transport network through Device A, travels across Device B, and leaves
the transport network through Device C. To locate faults when network
performance deteriorates, configure IP FPM hop-by-hop performance statistics
collection on Device A, Device B, and Device C to measure packet loss and delay
hop by hop.
As shown in Figure 2-6, Device A, Device B, and Device C function as DCPs. Device
A manages and controls TLP100, Device B manages and controls TLP200, and
Device C manages and controls TLP300 and TLP310. Device A, Device B, and Device
C collect statistics generated by these TLPs and report the statistics to the MCP.
Perform the following steps on Device A, Device B, and Device C:
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run nqa ipfpm dcp
DCP is enabled globally, and the IPFPM-DCP view is displayed.
Step 3 Run dcp id dcp-id
A DCP ID is configured.
Using the Router ID of a device that is configured as a DCP as its DCP ID is
recommended.
The DCP ID configured on a DCP must be the same as that specified in the dcp
dcp-id command run in the IP FPM instance view of the MCP associated with this
DCP. Otherwise, the MCP cannot process the statistics reported by the DCP.
Step 4 (Optional) Run authentication-mode hmac-sha256 key-id key-id [ cipher ]
[ password | password ]
The authentication mode and password are configured on the DCP.
The authentication mode and password configured on a DCP must be the same as
those configured in the authentication-mode hmac-sha256 key-id key-id
[ cipher ] [ password | password ] command run on the MCP associated with the
DCP. Otherwise, the MCP cannot process the statistics reported by the DCP.
Step 5 (Optional) Run color-flag loss-measure { tos-bit tos-bit | flags-bit0 } delay-
measure { tos-bit tos-bit | flags-bit0 }
IP FPM measurement flags are configured.
The loss and delay measurement flags cannot use the same bit, and the bits used
for loss and delay measurement must not have been used in other measurement
tasks.
Step 6 Run mcp mcp-id [ port port-number ] [ vpn-instance vpn-instance-name | net-
manager-vpn ]
An MCP ID is specified for the DCP, and the UDP port number is configured for the
DCP to communicate with the MCP.
The UDP port number configured on the DCP must be the same as that
configured in the protocol udp port port-number command run on the MCP
associated with this DCP. Otherwise, the DCP cannot report the statistics to the
MCP.
Ensure that the VPN instance has been created on the DCP before you configure vpn-instance
vpn-instance-name or net-manager-vpn to allow the DCP to report the statistics
to the MCP through the specified VPN or management VPN.
Step 7 (Optional) Run period source ntp
The DCP is configured to select NTP as the clock source when calculating an IP
FPM statistical period ID.
In P2MP (MP being two points) delay measurement scenarios, if the ingress of the
service traffic uses NTP as the clock source, but the egresses use a different clock
source, for example, NTP or 1588v2, you must configure the egresses to select
NTP as the clock source when calculating an IP FPM statistical period ID to ensure
consistent clock sources on the ingress and egresses.
Step 8 Run instance instance-id
An IP FPM instance is created, and the instance view is displayed.
instance-id must be unique on an MCP and all its associated DCPs. The MCP and
all its associated DCPs must have the same IP FPM instance configured.
Otherwise, statistics collection does not take effect.
Step 9 (Optional) Run description text
The description is configured for the IP FPM instance.
The description of an IP FPM instance can contain the functions of the instance,
facilitating applications.
Step 10 (Optional) Run interval interval
The statistical period is configured for the IP FPM instance.
Step 11 Perform either of the following operations to configure the target flow
characteristics in the IP FPM instance.
Configure the forward or backward target flow characteristics.
● When protocol is specified as TCP or UDP, run:
flow { forward | backward } { protocol { tcp | udp } { source-port src-port-
number1 [ to src-port-number2 ] | destination-port dest-port-number1 [ to
dest-port-number2 ] } * | dscp dscp-value | source src-ip-address [ src-mask-
length ] | destination dest-ip-address [ dest-mask-length ] }
● When protocol is specified as any protocol other than TCP or UDP, run:
flow { forward | backward } { protocol protocol-number | dscp dscp-value |
source src-ip-address [ src-mask-length ] | destination dest-ip-address [ dest-
mask-length ] }
Configure the characteristics for the bidirectional target flow.
● When protocol is specified as TCP or UDP, run:
flow bidirectional { protocol { tcp | udp } { source-port src-port-number1
[ to src-port-number2 ] | destination-port dest-port-number1 [ to dest-port-
number2 ] } * | dscp dscp-value | source src-ip-address [ src-mask-length ] |
destination dest-ip-address [ dest-mask-length ] } *
● When protocol is specified as any protocol other than TCP or UDP, run:
flow bidirectional { protocol protocol-number | dscp dscp-value | source src-
ip-address [ src-mask-length ] | destination dest-ip-address [ dest-mask-
length ] }
● If the target flow in an IP FPM instance is unidirectional, only forward can be specified.
● If the target flow in an IP FPM instance is bidirectional, two situations are available:
– If the bidirectional target flow is asymmetrical, you must configure forward and
backward in two command instances to configure the characteristics for the forward
and backward flows, respectively.
– If the bidirectional target flow is symmetrical, you can specify bidirectional to
configure the bidirectional target flow characteristics. By default, the characteristics
specified are used for the forward flow, and the reverse of those are used for the
backward flow. Specifically, the source and destination IP addresses and port numbers
specified for the forward flow are used respectively as the destination and source IP
addresses and port numbers for the backward flow. If the target flow is symmetrical
bidirectional, set src-ip-address to specify a source IP address and dest-ip-address to
specify a destination IP address for the target flow.
----End
Prerequisites
The IP FPM hop-by-hop performance statistics collection function has been
configured.
Procedure
● Run the display ipfpm mcp command to check MCP configurations.
● Run the display ipfpm dcp command to check DCP configurations.
● Run the display ipfpm statistic-type { loss | oneway-delay | twoway-
delay } instance instance-id ach ach-id command to check the hop-by-hop
performance statistics for a specified ACH.
----End
Context
If a high packet loss rate or long delay is detected on a network but left unattended,
the packet loss rate or delay may further increase and potentially affect user
experience. To facilitate network operation and maintenance, configure the alarm
threshold and its clear alarm threshold for packet loss or delay.
Procedure
Step 1 Run system-view
The system view is displayed.
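The thresholds are then set in the IP FPM instance view on the MCP. As a minimal
sketch mirroring the configuration example later in this chapter (a two-way delay alarm
threshold of 100 ms and a clear alarm threshold of 50 ms, expressed in microseconds):
[~UPE] nqa ipfpm mcp
[*UPE-nqa-ipfpm-mcp] instance 1
[*UPE-nqa-ipfpm-mcp-instance-1] delay-measure two-way delay-threshold upper-limit 100000 lower-limit 50000
[*UPE-nqa-ipfpm-mcp-instance-1] commit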
----End
Context
Run the display commands in any view to check the IP FPM performance statistics
and monitor the IP FPM running status in routine maintenance.
Procedure
● Run the display ipfpm statistic-type { loss | oneway-delay | twoway-
delay } instance instance-id command to check the performance statistics for
a specified IP FPM instance.
● Run the display ipfpm statistic-type { loss | oneway-delay | twoway-
delay } instance instance-id ach ach-id [ verbose ] command to check the
hop-by-hop performance statistics for a specified ACH.
----End
Networking Requirements
Various value-added services, such as IPTV, video conferencing, and Voice over
Internet Protocol (VoIP), are widely used on networks. As these services rely heavily
on high-speed, robust networks, link connectivity and network performance
are essential to service transmission.
● When voice services are deployed, users will not detect any change in the
voice quality if the packet loss rate on links is lower than 5%. If the packet
loss rate is higher than 10%, the voice quality will deteriorate significantly.
● Real-time services, such as VoIP, online games, and video conferencing,
require a delay lower than 100 ms, or even 50 ms. As the delay increases, user
experience worsens.
To meet users' service quality requirements, carriers need to promptly measure the
packet loss rate and delay so that they can quickly respond to resolve network
issues if the service quality deteriorates.
The IPRAN network shown in Figure 2-7 transmits voice services. Voice flows are
symmetrical and bidirectional, and therefore one voice flow can be divided into
two unidirectional service flows. The forward service flow enters the network
through the UPE, travels across SPE1, and leaves the network through the NPE.
The backward service flow enters the network through the NPE, also travels across
SPE1, and leaves the network through the UPE.
To meet users' service quality requirements and take measures when service
quality deteriorates, configure IP FPM end-to-end performance statistics collection
to monitor the packet loss and delay of the links between the UPE and NPE in real
time.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure an IP address and a routing protocol for each interface so that all
provider edge devices (PEs) can communicate at the network layer. This
example uses Open Shortest Path First (OSPF) as the routing protocol.
2. Configure Multiprotocol Label Switching (MPLS) functions and public network
tunnels. In this example, RSVP-TE tunnels are established between the UPE
and SPEs, and Label Distribution Protocol (LDP) LSPs are established between
the SPEs and between the NPE and SPEs.
3. Create a VPN instance on the UPE and NPE and import the local direct routes
on the UPE and NPE to their respective VPN instance routing tables.
4. Establish MP-IBGP peer relationships between the UPE and SPEs and between
the NPE and SPEs.
5. Configure the SPEs as route reflectors (RRs) and specify the UPE and NPE as
RR clients.
6. Configure VPN FRR on the UPE and NPE.
7. Configure the Network Time Protocol (NTP) to synchronize the clocks of the
UPE, SPE1, and the NPE.
8. Configure proactive packet loss and delay measurement on the UPE and NPE
to collect packet loss and delay statistics at intervals.
9. Configure the packet loss and two-way delay alarm thresholds and clear
alarm thresholds on the UPE.
Data Preparation
To complete the configuration, you need the following data:
● IP address of each interface listed in Table 2-1
● Interior Gateway Protocol (IGP) protocol type, process ID, and area ID
● Label switching router (LSR) IDs of the UPE and SPEs
● Tunnel interface names, tunnel IDs, and tunnel interface addresses (loopback
interface addresses) for the bidirectional tunnels between the UPE and SPEs
● Tunnel policy names for the bidirectional tunnels between the UPE and SPEs
and tunnel selector names on the SPEs
● Names, route distinguishers (RDs), and VPN targets of the VPN instances on
the UPE and NPE
● UPE's NTP stratum (1); clock synchronization interval (180s) for the UPE,
SPEs, and the NPE; offset (50s) between the clock server and client; maximum
polling time (64s)
● UPE's DCP ID and MCP ID (both 1.1.1.1); NPE's MCP ID (4.4.4.4)
● IP FPM instance ID (1) and statistical period (10s)
● Forward target flow's source IP address (10.1.1.1) and destination IP address
(10.2.1.1); backward target flow's source IP address (10.2.1.1) and destination
IP address (10.1.1.1)
● Measurement points (TLP100 and TLP310)
● Loss and delay measurement flags (respectively the third and fourth bits in
the ToS field of the IPv4 packet header)
Before you deploy IP FPM for packet loss and delay measurement, if two or more bits
in the IPv4 packet header have not been planned for other purposes, they can be used
for packet loss and delay measurement at the same time. If only one bit in the IPv4
packet header has not been planned, it can be used for either packet loss or delay
measurement in one IP FPM instance.
● Authentication mode (HMAC-SHA256), password (Huawei-123), key ID (1),
and UDP port number (2048) on the UPE and NPE
● Packet loss alarm threshold and its clear alarm threshold (respectively 10%
and 5%); two-way delay alarm threshold and its clear alarm threshold
(respectively 100 ms and 50 ms)
Procedure
Step 1 Configure interface IP addresses.
Assign an IP address to each interface according to Table 2-1 and create a
loopback interface on each node. For configuration details, see Configuration
Files in this section.
Step 2 Configure OSPF.
Configure OSPF on each node to allow the nodes to communicate at the network
layer. For detailed configurations, see Configuration Files in this section.
Step 3 Configure basic MPLS functions and public network tunnels.
● Configure basic MPLS functions and enable MPLS TE, RSVP-TE, and Constraint
Shortest Path First (CSPF).
# Configure the UPE.
<UPE> system-view
[~UPE] mpls lsr-id 1.1.1.1
[*UPE] mpls
[*UPE-mpls] mpls te
[*UPE-mpls] mpls rsvp-te
[*UPE-mpls] mpls te cspf
[*UPE-mpls] quit
[*UPE] interface gigabitethernet 1/0/1
[*UPE-GigabitEthernet1/0/1] mpls
[*UPE-GigabitEthernet1/0/1] mpls te
[*UPE-GigabitEthernet1/0/1] mpls rsvp-te
[*UPE-GigabitEthernet1/0/1] quit
[*UPE] interface gigabitethernet 1/0/2
[*UPE-GigabitEthernet1/0/2] mpls
[*UPE-GigabitEthernet1/0/2] mpls te
[*UPE-GigabitEthernet1/0/2] mpls rsvp-te
[*UPE-GigabitEthernet1/0/2] quit
[*UPE] ospf 1
[*UPE-ospf-1] opaque-capability enable
[*UPE-ospf-1] area 0
[*UPE-ospf-1-area-0.0.0.0] mpls-te enable
[*UPE-ospf-1-area-0.0.0.0] quit
[*UPE-ospf-1] quit
[*UPE] commit
# Configure SPE1.
<SPE1> system-view
[~SPE1] mpls lsr-id 2.2.2.2
[*SPE1] mpls
[*SPE1-mpls] mpls te
[*SPE1-mpls] mpls rsvp-te
# Configure SPE1.
[~SPE1] mpls
[*SPE1-mpls] label advertise non-null
[*SPE1-mpls] quit
[*SPE1] commit
# Configure SPE2.
[~SPE2] mpls
[*SPE2-mpls] label advertise non-null
[*SPE2-mpls] quit
[*SPE2] commit
# Configure SPE1.
[~SPE1] interface Tunnel 11
[*SPE1-Tunnel11] ip address unnumbered interface loopback 1
[*SPE1-Tunnel11] tunnel-protocol mpls te
[*SPE1-Tunnel11] destination 1.1.1.1
[*SPE1-Tunnel11] mpls te tunnel-id 100
[*SPE1-Tunnel11] mpls te signal-protocol rsvp-te
[*SPE1-Tunnel11] mpls te reserved-for-binding
[*SPE1-Tunnel11] quit
[*SPE1] commit
# Configure SPE2.
[~SPE2] interface Tunnel 12
[*SPE2-Tunnel12] ip address unnumbered interface loopback 1
[*SPE2-Tunnel12] tunnel-protocol mpls te
[*SPE2-Tunnel12] destination 1.1.1.1
[*SPE2-Tunnel12] mpls te tunnel-id 200
[*SPE2-Tunnel12] mpls te signal-protocol rsvp-te
[*SPE2-Tunnel12] mpls te reserved-for-binding
[*SPE2-Tunnel12] quit
[*SPE2] commit
# Configure SPE1.
[~SPE1] tunnel-policy policy1
[*SPE1-tunnel-policy-policy1] tunnel binding destination 1.1.1.1 te Tunnel 11
[*SPE1-tunnel-policy-policy1] quit
[*SPE1] commit
# Configure SPE2.
[~SPE2] tunnel-policy policy1
[*SPE2-tunnel-policy-policy1] tunnel binding destination 1.1.1.1 te Tunnel 12
[*SPE2-tunnel-policy-policy1] quit
[*SPE2] commit
Step 4 Create a VPN instance on the UPE and NPE and import the local direct routes on
the UPE and NPE to their respective VPN instance routing tables.
Step 5 Establish MP-IBGP peer relationships between the UPE and SPEs and between the
NPE and SPEs.
# Configure SPE1.
[~SPE1] bgp 100
[*SPE1-bgp] router-id 2.2.2.2
[*SPE1-bgp] peer 1.1.1.1 as-number 100
[*SPE1-bgp] peer 1.1.1.1 connect-interface loopback 1
[*SPE1-bgp] peer 3.3.3.3 as-number 100
[*SPE1-bgp] peer 3.3.3.3 connect-interface loopback 1
[*SPE1-bgp] peer 4.4.4.4 as-number 100
[*SPE1-bgp] peer 4.4.4.4 connect-interface loopback 1
[*SPE1-bgp] ipv4-family vpnv4
[*SPE1-bgp-af-vpnv4] undo policy vpn-target
[*SPE1-bgp-af-vpnv4] peer 1.1.1.1 enable
[*SPE1-bgp-af-vpnv4] peer 3.3.3.3 enable
[*SPE1-bgp-af-vpnv4] peer 4.4.4.4 enable
[*SPE1-bgp-af-vpnv4] quit
[*SPE1-bgp] quit
[*SPE1] commit
Step 6 Configure the SPEs as RRs and specify the UPE and NPE as RR clients.
[~SPE1] bgp 100
[*SPE1-bgp] ipv4-family vpnv4
[*SPE1-bgp-af-vpnv4] peer 1.1.1.1 reflect-client
[*SPE1-bgp-af-vpnv4] peer 1.1.1.1 next-hop-local
[*SPE1-bgp-af-vpnv4] peer 4.4.4.4 reflect-client
[*SPE1-bgp-af-vpnv4] peer 4.4.4.4 next-hop-local
[*SPE1-bgp-af-vpnv4] quit
[*SPE1-bgp] quit
[*SPE1] commit
The configuration of the NPE is similar to the configuration of the UPE. For
configuration details, see Configuration Files in this section. After completing the
configurations, run the display bgp vpnv4 vpn-instance vpna routing-table
command on the UPE and NPE to view detailed information about received
routes.
[~UPE] display bgp vpnv4 vpn-instance vpna routing-table
BGP Local router ID is 1.1.1.1
Status codes: * - valid, > - best, d - damped,
h - history, i - internal, s - suppressed, S - Stale
Origin : i - IGP, e - EGP, ? - incomplete
RPKI validation codes: V - valid, I - invalid, N - not-found
The command output shows that the UPE and NPE both preferentially select the
routes advertised by SPE1 and use UPE <-> SPE1 <-> NPE as the primary path.
Step 9 Configure NTP to synchronize the clocks of the UPE, SPE1, and the NPE.
# Configure SPE1.
[~SPE1] ntp-service sync-interval 180
[*SPE1] ntp-service unicast-server 172.16.1.1
[*SPE1] commit
After completing the configuration, the UPE, SPE1, and the NPE have synchronized
their clocks.
Run the display ntp-service status command on the UPE to check its NTP status.
The command output shows that the clock status is synchronized, which means
that synchronization is complete.
[~UPE] display ntp-service status
clock status: synchronized
clock stratum: 1
reference clock ID: LOCAL(0)
nominal frequency: 64.0000 Hz
actual frequency: 64.0000 Hz
clock precision: 2^7
clock offset: 0.0000 ms
root delay: 0.00 ms
root dispersion: 26.49 ms
peer dispersion: 10.00 ms
reference time: 08:55:35.000 UTC Apr 2 2013(D5051B87.0020C49B)
synchronization state: clock synchronized
Run the display ntp-service status command on SPE1 to check its NTP status.
The command output shows that the clock status is synchronized and the clock
stratum is 2, lower than that of the UPE.
[~SPE1] display ntp-service status
clock status: synchronized
clock stratum: 2
reference clock ID: 172.16.1.1
nominal frequency: 64.0000 Hz
actual frequency: 64.0000 Hz
clock precision: 2^7
clock offset: -0.0099 ms
root delay: 0.08 ms
root dispersion: 51.00 ms
peer dispersion: 34.30 ms
reference time: 08:56:45.000 UTC Apr 2 2013(D5051BCD.00346DC5)
synchronization state: clock synchronized
Run the display ntp-service status command on the NPE to check its NTP status.
The command output shows that the clock status is synchronized and the clock
stratum is 3, lower than that of SPE1.
[~NPE] display ntp-service status
clock status: synchronized
clock stratum: 3
reference clock ID: 172.16.4.1
nominal frequency: 64.0000 Hz
actual frequency: 64.0000 Hz
clock precision: 2^7
clock offset: -0.0192 ms
root delay: 0.18 ms
root dispersion: 201.41 ms
peer dispersion: 58.64 ms
Step 10 Configure proactive packet loss and delay measurement on the UPE and NPE;
configure the UPE as the MCP and also a DCP and configure TLP100 on the UPE;
configure the NPE as a DCP and configure TLP310 on the NPE.
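● Configure an MCP.
The following commands mirror the UPE configuration file at the end of this section
(the password is entered as Huawei-123, as listed in Data Preparation):
[~UPE] nqa ipfpm mcp
[*UPE-nqa-ipfpm-mcp] mcp id 1.1.1.1
[*UPE-nqa-ipfpm-mcp] protocol udp port 2048
[*UPE-nqa-ipfpm-mcp] authentication-mode hmac-sha256 key-id 1 cipher Huawei-123
[*UPE-nqa-ipfpm-mcp] instance 1
[*UPE-nqa-ipfpm-mcp-instance-1] dcp 1.1.1.1
[*UPE-nqa-ipfpm-mcp-instance-1] dcp 4.4.4.4
[*UPE-nqa-ipfpm-mcp-instance-1] quit
[*UPE-nqa-ipfpm-mcp] quit
[*UPE] commit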
After completing the configuration, run the display ipfpm mcp command on
the UPE. The command output shows MCP configurations on the UPE.
[~UPE] display ipfpm mcp
Specification Information:
Max Instance Number :64
Max DCP Number Per Instance :256
Max ACH Number Per Instance :16
Max TLP Number Per ACH :16
Configuration Information:
MCP ID :1.1.1.1
Status :Active
Protocol Port :2048
Current Instance Number :1
● Configure a DCP.
[~UPE] nqa ipfpm dcp
[*UPE-nqa-ipfpm-dcp] dcp id 1.1.1.1
[*UPE-nqa-ipfpm-dcp] authentication-mode hmac-sha256 key-id 1 cipher Huawei-123
[*UPE-nqa-ipfpm-dcp] color-flag loss-measure tos-bit 3 delay-measure tos-bit 4
[*UPE-nqa-ipfpm-dcp] mcp 1.1.1.1 port 2048
[*UPE-nqa-ipfpm-dcp] instance 1
[*UPE-nqa-ipfpm-dcp-instance-1] interval 10
[*UPE-nqa-ipfpm-dcp-instance-1] flow bidirectional source 10.1.1.1 destination 10.2.1.1
[*UPE-nqa-ipfpm-dcp-instance-1] tlp 100 in-point ingress
[*UPE-nqa-ipfpm-dcp-instance-1] quit
[*UPE-nqa-ipfpm-dcp] quit
[*UPE] commit
After completing the configuration, run the display ipfpm dcp command on
the UPE. The command output shows DCP configurations on the UPE.
[~UPE] display ipfpm dcp
Specification Information(Main Board):
Max Instance Number :64
Max 10s Instance Number :64
Max 1s Instance Number :--
Max TLP Number :512
Max TLP Number Per Instance :8
Configuration Information:
DCP ID : 1.1.1.1
Loss-measure Flag : tos-bit3
Delay-measure Flag : tos-bit4
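The NPE-side DCP configuration commands are not reproduced here. A sketch
consistent with the Data Preparation section and the display output that follows (the
out-point role of TLP310 is an assumption based on the flow direction; the
measurement enabling commands are omitted) is along the following lines:
[~NPE] nqa ipfpm dcp
[*NPE-nqa-ipfpm-dcp] dcp id 4.4.4.4
[*NPE-nqa-ipfpm-dcp] authentication-mode hmac-sha256 key-id 1 cipher Huawei-123
[*NPE-nqa-ipfpm-dcp] color-flag loss-measure tos-bit 3 delay-measure tos-bit 4
[*NPE-nqa-ipfpm-dcp] mcp 1.1.1.1 port 2048
[*NPE-nqa-ipfpm-dcp] instance 1
[*NPE-nqa-ipfpm-dcp-instance-1] interval 10
[*NPE-nqa-ipfpm-dcp-instance-1] flow bidirectional source 10.1.1.1 destination 10.2.1.1
[*NPE-nqa-ipfpm-dcp-instance-1] tlp 310 out-point egress
[*NPE-nqa-ipfpm-dcp-instance-1] quit
[*NPE-nqa-ipfpm-dcp] quit
[*NPE] commit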
After completing the configuration, run the display ipfpm dcp command on
the NPE. The command output shows DCP configurations on the NPE.
[~NPE] display ipfpm dcp
Specification Information(Main Board):
Max Instance Number :64
Max 10s Instance Number :64
Max 1s Instance Number :--
Max TLP Number :512
Max TLP Number Per Instance :8
Configuration Information:
DCP ID : 4.4.4.4
Loss-measure Flag : tos-bit3
Delay-measure Flag : tos-bit4
Step 11 Configure alarm thresholds and clear alarm thresholds for IP FPM performance
counters on the UPE.
# Configure the packet loss alarm threshold and its clear alarm threshold.
[~UPE] nqa ipfpm mcp
[*UPE-nqa-ipfpm-mcp] instance 1
# Configure the two-way delay alarm threshold and its clear alarm threshold.
[~UPE-nqa-ipfpm-mcp-instance-1] delay-measure two-way delay-threshold upper-limit 100000 lower-
limit 50000
[*UPE-nqa-ipfpm-mcp-instance-1] commit
Latest one-way delay statistics of bidirectional flow:
--------------------------------------------------------------------------------
Period Forward ForwardDelay Backward BackwardDelay
Delay(usec) Variation(usec) Delay(usec) Variation(usec)
--------------------------------------------------------------------------------
136118757 400 0 400 0
136118756 400 0 400 0
136118755 400 0 400 0
136118753 400 0 400 0
136118752 400 0 400 0
136118751 400 0 400 0
136118750 400 0 400 0
136118749 400 0 400 0
136118748 400 0 400 0
136118747 400 0 400 0
136118746 400 0 400 0
136118745 400 0 400 0
----End
Configuration Files
● UPE configuration file
#
sysname UPE
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 100:1
apply-label per-instance
tnl-policy policy1
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
mpls lsr-id 1.1.1.1
mpls
mpls te
label advertise non-null
mpls rsvp-te
mpls te cspf
#
ntp-service sync-interval 180
ntp-service refclock-master 1
#
interface GigabitEthernet1/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 192.168.1.1 255.255.255.0
ipfpm tlp 100
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 172.16.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 172.16.2.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
interface Tunnel11
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 2.2.2.2
mpls te tunnel-id 100
mpls te reserved-for-binding
#
interface Tunnel12
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.3
mpls te tunnel-id 200
mpls te reserved-for-binding
#
bgp 100
router-id 1.1.1.1
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack1
peer 3.3.3.3 as-number 100
peer 3.3.3.3 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 2.2.2.2 enable
peer 3.3.3.3 enable
#
ipv4-family vpnv4
policy vpn-target
peer 2.2.2.2 enable
peer 3.3.3.3 enable
#
ipv4-family vpn-instance vpna
import-route direct
auto-frr
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 1.1.1.1 0.0.0.0
network 172.16.1.0 0.0.0.255
network 172.16.2.0 0.0.0.255
mpls-te enable
#
tunnel-policy policy1
tunnel binding destination 2.2.2.2 te Tunnel11
tunnel binding destination 3.3.3.3 te Tunnel12
#
nqa ipfpm dcp
dcp id 1.1.1.1
mcp 1.1.1.1 port 2048
authentication-mode hmac-sha256 key-id 1 cipher #%#%c^)+6\&Xmec@('3&m,d%1C,d%1C<#%#%
color-flag loss-measure tos-bit 3 delay-measure tos-bit 4
instance 1
flow bidirectional source 10.1.1.1 destination 10.2.1.1
tlp 100 in-point ingress
loss-measure enable continual
delay-measure enable two-way tlp 100 continual
#
nqa ipfpm mcp
mcp id 1.1.1.1
protocol udp port 2048
authentication-mode hmac-sha256 key-id 1 cipher #%#%\8u;Ufa-'-+mtJG0r#:00dV[#%#%
instance 1
dcp 1.1.1.1
dcp 4.4.4.4
tunnel-selector bindTE
peer 1.1.1.1 enable
peer 1.1.1.1 reflect-client
peer 1.1.1.1 next-hop-local
peer 3.3.3.3 enable
peer 4.4.4.4 enable
peer 4.4.4.4 reflect-client
peer 4.4.4.4 next-hop-local
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 2.2.2.2 0.0.0.0
network 172.16.1.0 0.0.0.255
network 172.16.3.0 0.0.0.255
network 172.16.4.0 0.0.0.255
mpls-te enable
#
tunnel-policy policy1
tunnel binding destination 1.1.1.1 te Tunnel11
#
return
● SPE2 configuration file
#
sysname SPE2
#
tunnel-selector bindTE permit node 10
apply tunnel-policy policy1
#
mpls lsr-id 3.3.3.3
mpls
mpls te
label advertise non-null
mpls rsvp-te
mpls te cspf
#
mpls ldp
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 172.16.5.1 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 172.16.2.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet1/0/3
undo shutdown
ip address 172.16.3.2 255.255.255.0
mpls
mpls te
mpls ldp
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
#
interface Tunnel12
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 1.1.1.1
mpls te tunnel-id 200
mpls te reserved-for-binding
#
bgp 100
router-id 3.3.3.3
peer 1.1.1.1 as-number 100
peer 1.1.1.1 connect-interface LoopBack1
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack1
peer 4.4.4.4 as-number 100
peer 4.4.4.4 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 1.1.1.1 enable
peer 2.2.2.2 enable
peer 4.4.4.4 enable
#
ipv4-family vpnv4
undo policy vpn-target
tunnel-selector bindTE
peer 1.1.1.1 enable
peer 1.1.1.1 reflect-client
peer 1.1.1.1 next-hop-local
peer 2.2.2.2 enable
peer 4.4.4.4 enable
peer 4.4.4.4 reflect-client
peer 4.4.4.4 next-hop-local
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 3.3.3.3 0.0.0.0
network 172.16.2.0 0.0.0.255
network 172.16.3.0 0.0.0.255
network 172.16.5.0 0.0.0.255
mpls-te enable
#
tunnel-policy policy1
tunnel binding destination 1.1.1.1 te Tunnel12
#
return
● NPE configuration file
#
sysname NPE
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 100:1
apply-label per-instance
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
mpls lsr-id 4.4.4.4
mpls
#
mpls ldp
#
ntp-service sync-interval 180
ntp-service unicast-server 172.16.4.1
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 172.16.5.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 172.16.4.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet1/0/3
undo shutdown
ip binding vpn-instance vpna
ip address 192.168.2.1 255.255.255.0
ipfpm tlp 310
#
interface LoopBack1
ip address 4.4.4.4 255.255.255.255
#
bgp 100
router-id 4.4.4.4
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack1
peer 3.3.3.3 as-number 100
peer 3.3.3.3 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 2.2.2.2 enable
peer 3.3.3.3 enable
#
ipv4-family vpnv4
policy vpn-target
peer 2.2.2.2 enable
peer 3.3.3.3 enable
#
ipv4-family vpn-instance vpna
import-route direct
auto-frr
#
ospf 1
area 0.0.0.0
network 4.4.4.4 0.0.0.0
network 172.16.4.0 0.0.0.255
network 172.16.5.0 0.0.0.255
#
nqa ipfpm dcp
dcp id 4.4.4.4
mcp 1.1.1.1 port 2048
authentication-mode hmac-sha256 key-id 1 cipher #%#%;\VV*UAUfP'8+uS{,4v+1Gjv#%#%
color-flag loss-measure tos-bit 3 delay-measure tos-bit 4
instance 1
flow bidirectional source 10.1.1.1 destination 10.2.1.1
tlp 310 out-point egress
loss-measure enable continual
delay-measure enable two-way tlp 310 continual
#
return
Networking Requirements
Various value-added services, such as IPTV, video conferencing, and Voice over Internet Protocol (VoIP), are widely used on networks. As these services rely heavily on high-speed and robust networks, link connectivity and network performance are essential to service transmission. The performance measurement function can be used to verify the performance of links that transmit services.
● When voice services are deployed, users will not detect any change in the
voice quality if the packet loss rate on links is lower than 5%. If the packet
loss rate is higher than 10%, the voice quality will deteriorate significantly.
● Real-time services, such as VoIP, online games, and video conferencing,
require a delay lower than 100 ms, or even 50 ms. As the delay increases, user
experience worsens.
To locate faults when network performance deteriorates, configure IP FPM hop-
by-hop performance statistics collection.
The IPRAN network shown in Figure 2-8 transmits video services. A unidirectional
service flow enters the network through the UPE, travels across SPE1, and leaves
the network through the NPE.
To locate faults when network performance deteriorates, configure hop-by-hop
packet loss and delay measurement on the UPE and NPE to locate faults segment
by segment.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure an IP address and a routing protocol for each interface so that all
provider edge devices (PEs) can communicate at the network layer. This
example uses Open Shortest Path First (OSPF) as the routing protocol.
2. Configure Multiprotocol Label Switching (MPLS) functions and public network
tunnels. In this example, RSVP-TE tunnels are established between the UPE
and SPEs, and Label Distribution Protocol (LDP) LSPs are established between
the SPEs and between the NPE and SPEs.
3. Create a VPN instance on the UPE and NPE and import the local direct routes
on the UPE and NPE to their respective VPN instance routing tables.
4. Establish MP-IBGP peer relationships between the UPE and SPEs and between
the NPE and SPEs.
5. Configure the SPEs as route reflectors (RRs) and specify the UPE and NPE as
RR clients.
6. Configure VPN FRR on the UPE and NPE.
7. Configure the Precision Time Protocol (1588v2) to synchronize the clocks of the UPE, SPE1, and the NPE.
8. Configure hop-by-hop packet loss and delay measurement on the UPE and
NPE to locate faults segment by segment.
9. Configure the packet loss and two-way delay alarm thresholds and clear
alarm thresholds on the UPE.
Data Preparation
To complete the configuration, you need the following data:
● IP address of each interface listed in Table 2-2
● Interior Gateway Protocol (IGP) type, process ID, and area ID
● Label switching router (LSR) IDs of the UPE and SPEs
● Tunnel interface names, tunnel IDs, and tunnel interface addresses (loopback
interface addresses) for the bidirectional tunnels between the UPE and SPEs
● Tunnel policy names for the bidirectional tunnels between the UPE and SPEs
and tunnel selector names on the SPEs
● Names, route distinguishers (RDs), and VPN targets of the VPN instances on
the UPE and NPE
● UPE's DCP ID and MCP ID (both 1.1.1.1); SPE1's DCP ID (2.2.2.2); NPE's DCP ID (4.4.4.4)
● IP FPM instance ID (1) and statistical period (10s)
● Target flow's source IP address (10.1.1.1) and destination IP address (10.2.1.1)
● ACH1 {TLP100, TLP200} and ACH2 {TLP200, TLP310}
● Loss and delay measurement flags (respectively the third and fourth bits in
the ToS field of the IPv4 packet header)
Before you deploy IP FPM for packet loss and delay measurement, if two or more bits
in the IPv4 packet header have not been planned for other purposes, they can be used
for packet loss and delay measurement at the same time. If only one bit in the IPv4
packet header has not been planned, it can be used for either packet loss or delay
measurement in one IP FPM instance.
● Authentication mode (HMAC-SHA256), password (Huawei-123), key ID (1),
and UDP port number (2048) on the UPE, SPE1, and NPE
● Hop-by-hop packet loss and delay measurement intervals (30min)
● Packet loss alarm threshold and its clear alarm threshold (respectively 10%
and 5%); two-way delay alarm threshold and its clear alarm threshold
(respectively 100 ms and 50 ms)
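The delay thresholds are entered in microseconds in the corresponding commands, so 100 ms and 50 ms map to 100000 and 50000, while the packet loss thresholds are entered as percentages. The Step 11 commands therefore take the following form:
[*UPE-nqa-ipfpm-mcp-instance-1] loss-measure ratio-threshold upper-limit 10 lower-limit 5
[*UPE-nqa-ipfpm-mcp-instance-1] delay-measure two-way delay-threshold upper-limit 100000 lower-limit 50000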
Procedure
Step 1 Configure interface IP addresses.
Assign an IP address to each interface according to Table 2-2 and create a
loopback interface on each node. For configuration details, see Configuration
Files in this section.
Step 2 Configure OSPF.
Configure OSPF on each node to allow the nodes to communicate at the network
layer. For detailed configurations, see Configuration Files in this section.
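As a reference, the following is a minimal sketch of the UPE's OSPF configuration, consistent with the interface addresses used in this example (the other nodes advertise their own interface networks in the same way; see Configuration Files in this section for the complete settings):
<UPE> system-view
[~UPE] ospf 1
[*UPE-ospf-1] area 0
[*UPE-ospf-1-area-0.0.0.0] network 1.1.1.1 0.0.0.0
[*UPE-ospf-1-area-0.0.0.0] network 172.16.1.0 0.0.0.255
[*UPE-ospf-1-area-0.0.0.0] network 172.16.2.0 0.0.0.255
[*UPE-ospf-1-area-0.0.0.0] quit
[*UPE-ospf-1] quit
[*UPE] commit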
Step 3 Configure basic MPLS functions and public network tunnels.
● Configure basic MPLS functions and enable MPLS TE, RSVP-TE, and Constraint
Shortest Path First (CSPF).
# Configure the UPE.
<UPE> system-view
[~UPE] mpls lsr-id 1.1.1.1
[*UPE] mpls
[*UPE-mpls] mpls te
[*UPE-mpls] mpls rsvp-te
[*UPE-mpls] mpls te cspf
[*UPE-mpls] quit
[*UPE] interface gigabitethernet 1/0/1
[*UPE-GigabitEthernet1/0/1] mpls
[*UPE-GigabitEthernet1/0/1] mpls te
[*UPE-GigabitEthernet1/0/1] mpls rsvp-te
[*UPE-GigabitEthernet1/0/1] quit
[*UPE] interface gigabitethernet 1/0/2
[*UPE-GigabitEthernet1/0/2] mpls
[*UPE-GigabitEthernet1/0/2] mpls te
[*UPE-GigabitEthernet1/0/2] mpls rsvp-te
[*UPE-GigabitEthernet1/0/2] quit
[*UPE] ospf 1
[*UPE-ospf-1] opaque-capability enable
[*UPE-ospf-1] area 0
[*UPE-ospf-1-area-0.0.0.0] mpls-te enable
[*UPE-ospf-1-area-0.0.0.0] quit
[*UPE-ospf-1] quit
[*UPE] commit
# Configure SPE1.
<SPE1> system-view
[~SPE1] mpls lsr-id 2.2.2.2
[*SPE1] mpls
[*SPE1-mpls] mpls te
[*SPE1-mpls] mpls rsvp-te
[*SPE1-mpls] mpls te cspf
[*SPE1-mpls] quit
[*SPE1] mpls ldp
[*SPE1-mpls-ldp] quit
[*SPE1] interface gigabitethernet 1/0/1
[*SPE1-GigabitEthernet1/0/1] mpls
[*SPE1-GigabitEthernet1/0/1] mpls te
[*SPE1-GigabitEthernet1/0/1] mpls rsvp-te
[*SPE1-GigabitEthernet1/0/1] quit
[*SPE1] interface gigabitethernet 1/0/3
[*SPE1-GigabitEthernet1/0/3] mpls
[*SPE1-GigabitEthernet1/0/3] mpls ldp
[*SPE1-GigabitEthernet1/0/3] quit
[*SPE1] ospf 1
[*SPE1-ospf-1] opaque-capability enable
[*SPE1-ospf-1] area 0
[*SPE1-ospf-1-area-0.0.0.0] mpls-te enable
[*SPE1-ospf-1-area-0.0.0.0] quit
[*SPE1-ospf-1] quit
[*SPE1] commit
# Configure SPE2.
<SPE2> system-view
[~SPE2] mpls lsr-id 3.3.3.3
[*SPE2] mpls
[*SPE2-mpls] mpls te
[*SPE2-mpls] mpls rsvp-te
[*SPE2-mpls] mpls te cspf
[*SPE2-mpls] quit
[*SPE2] mpls ldp
[*SPE2-mpls-ldp] quit
[*SPE2] interface gigabitethernet 1/0/2
[*SPE2-GigabitEthernet1/0/2] mpls
[*SPE2-GigabitEthernet1/0/2] mpls te
[*UPE] commit
# Configure SPE1.
[~SPE1] interface Tunnel 11
[*SPE1-Tunnel11] ip address unnumbered interface loopback 1
[*SPE1-Tunnel11] tunnel-protocol mpls te
[*SPE1-Tunnel11] destination 1.1.1.1
[*SPE1-Tunnel11] mpls te tunnel-id 100
[*SPE1-Tunnel11] mpls te signal-protocol rsvp-te
[*SPE1-Tunnel11] mpls te reserved-for-binding
[*SPE1-Tunnel11] quit
[*SPE1] commit
# Configure SPE2.
[~SPE2] interface Tunnel 12
[*SPE2-Tunnel12] ip address unnumbered interface loopback 1
[*SPE2-Tunnel12] tunnel-protocol mpls te
[*SPE2-Tunnel12] destination 1.1.1.1
[*SPE2-Tunnel12] mpls te tunnel-id 200
[*SPE2-Tunnel12] mpls te signal-protocol rsvp-te
[*SPE2-Tunnel12] mpls te reserved-for-binding
[*SPE2-Tunnel12] quit
[*SPE2] commit
● Configure tunnel policies.
# Configure the UPE.
[~UPE] tunnel-policy policy1
[*UPE-tunnel-policy-policy1] tunnel binding destination 2.2.2.2 te Tunnel 11
[*UPE-tunnel-policy-policy1] tunnel binding destination 3.3.3.3 te Tunnel 12
[*UPE-tunnel-policy-policy1] quit
[*UPE] commit
# Configure SPE1.
[~SPE1] tunnel-policy policy1
[*SPE1-tunnel-policy-policy1] tunnel binding destination 1.1.1.1 te Tunnel 11
[*SPE1-tunnel-policy-policy1] quit
[*SPE1] commit
# Configure SPE2.
[~SPE2] tunnel-policy policy1
[*SPE2-tunnel-policy-policy1] tunnel binding destination 1.1.1.1 te Tunnel 12
[*SPE2-tunnel-policy-policy1] quit
[*SPE2] commit
Step 4 Create a VPN instance on the UPE and NPE and import the local direct routes on
the UPE and NPE to their respective VPN instance routing tables.
# Configure the UPE.
[~UPE] ip vpn-instance vpna
[*UPE-vpn-instance-vpna] ipv4-family
[*UPE-vpn-instance-vpna-af-ipv4] route-distinguisher 100:1
[*UPE-vpn-instance-vpna-af-ipv4] vpn-target 1:1
[*UPE-vpn-instance-vpna-af-ipv4] quit
[*UPE-vpn-instance-vpna] quit
[*UPE] interface gigabitethernet 1/0/0
[*UPE-GigabitEthernet1/0/0] ip binding vpn-instance vpna
[*UPE-GigabitEthernet1/0/0] ip address 192.168.1.1 24
[*UPE-GigabitEthernet1/0/0] quit
[*UPE] bgp 100
[*UPE-bgp] ipv4-family vpn-instance vpna
[*UPE-bgp-vpna] import-route direct
[*UPE-bgp-vpna] quit
[*UPE-bgp] quit
[*UPE] commit
# Configure the NPE.
[~NPE] ip vpn-instance vpna
[*NPE-vpn-instance-vpna] ipv4-family
[*NPE-vpn-instance-vpna-af-ipv4] route-distinguisher 100:1
[*NPE-vpn-instance-vpna-af-ipv4] vpn-target 1:1
[*NPE-vpn-instance-vpna-af-ipv4] quit
[*NPE-vpn-instance-vpna] quit
[*NPE] interface gigabitethernet 1/0/3
[*NPE-GigabitEthernet1/0/3] ip binding vpn-instance vpna
[*NPE-GigabitEthernet1/0/3] ip address 192.168.2.1 24
[*NPE-GigabitEthernet1/0/3] quit
[*NPE] bgp 100
[*NPE-bgp] ipv4-family vpn-instance vpna
[*NPE-bgp-vpna] import-route direct
[*NPE-bgp-vpna] quit
[*NPE-bgp] quit
[*NPE] commit
Step 5 Establish MP-IBGP peer relationships between the UPE and SPEs and between the
NPE and SPEs.
# Configure the UPE.
[~UPE] bgp 100
[*UPE-bgp] router-id 1.1.1.1
[*UPE-bgp] peer 2.2.2.2 as-number 100
[*UPE-bgp] peer 2.2.2.2 connect-interface loopback 1
[*UPE-bgp] peer 3.3.3.3 as-number 100
[*UPE-bgp] peer 3.3.3.3 connect-interface loopback 1
[*UPE-bgp] ipv4-family vpnv4
[*UPE-bgp-af-vpnv4] peer 2.2.2.2 enable
[*UPE-bgp-af-vpnv4] peer 3.3.3.3 enable
[*UPE-bgp-af-vpnv4] quit
[*UPE-bgp] quit
[*UPE] commit
# Configure SPE1.
[~SPE1] bgp 100
[*SPE1-bgp] router-id 2.2.2.2
[*SPE1-bgp] peer 1.1.1.1 as-number 100
[*SPE1-bgp] peer 1.1.1.1 connect-interface loopback 1
[*SPE1-bgp] peer 3.3.3.3 as-number 100
[*SPE1-bgp] peer 3.3.3.3 connect-interface loopback 1
[*SPE1-bgp] peer 4.4.4.4 as-number 100
[*SPE1-bgp] peer 4.4.4.4 connect-interface loopback 1
[*SPE1-bgp] ipv4-family vpnv4
[*SPE1-bgp-af-vpnv4] undo policy vpn-target
[*SPE1-bgp-af-vpnv4] peer 1.1.1.1 enable
[*SPE1-bgp-af-vpnv4] peer 3.3.3.3 enable
[*SPE1-bgp-af-vpnv4] peer 4.4.4.4 enable
[*SPE1-bgp-af-vpnv4] quit
[*SPE1-bgp] quit
[*SPE1] commit
[*NPE] commit
Step 6 Configure the SPEs as RRs and specify the UPE and NPE as RR clients.
[~SPE1] bgp 100
[*SPE1-bgp] ipv4-family vpnv4
[*SPE1-bgp-af-vpnv4] peer 1.1.1.1 reflect-client
[*SPE1-bgp-af-vpnv4] peer 1.1.1.1 next-hop-local
[*SPE1-bgp-af-vpnv4] peer 4.4.4.4 reflect-client
[*SPE1-bgp-af-vpnv4] peer 4.4.4.4 next-hop-local
[*SPE1-bgp-af-vpnv4] quit
[*SPE1-bgp] quit
[*SPE1] commit
Step 7 Apply the tunnel policy on the UPE and, because the SPEs do not have VPN instances, configure a tunnel selector on each SPE so that the UPE and SPEs use RSVP-TE tunnels to transmit traffic.
# Apply the tunnel policy on the UPE.
[~UPE] ip vpn-instance vpna
[*UPE-vpn-instance-vpna] ipv4-family
[*UPE-vpn-instance-vpna-af-ipv4] route-distinguisher 100:1
[*UPE-vpn-instance-vpna-af-ipv4] tnl-policy policy1
[*UPE-vpn-instance-vpna-af-ipv4] quit
[*UPE-vpn-instance-vpna] quit
[*UPE] commit
The configuration of the NPE is similar to the configuration of the UPE. For configuration details, see Configuration Files in this section. After completing the configurations, run the display bgp vpnv4 vpn-instance vpna routing-table command on the UPE and NPE to view detailed information about received routes.
[~UPE] display bgp vpnv4 vpn-instance vpna routing-table
BGP Local router ID is 1.1.1.1
Status codes: * - valid, > - best, d - damped,
h - history, i - internal, s - suppressed, S - Stale
Origin : i - IGP, e - EGP, ? - incomplete
RPKI validation codes: V - valid, I - invalid, N - not-found
The command output shows that the UPE and NPE both preferentially select the
routes advertised by SPE1 and use UPE <-> SPE1 <-> NPE as the primary path.
Step 9 Configure 1588v2 to synchronize the clocks of the UPE, SPE1, and the NPE.
1. # Import BITS0 signals to SPE1.
[~SPE1] clock bits-type bits0 2mhz
[*SPE1] clock source bits0 synchronization enable
[*SPE1] clock source bits0 priority 1
[*SPE1] commit
2. # Enable 1588v2 globally.
# Configure SPE1.
[~SPE1] ptp enable
[*SPE1] ptp domain 1
[*SPE1] ptp device-type bc
[*SPE1] ptp clock-source local clock-class 185
[*SPE1] clock source ptp synchronization enable
[*SPE1] clock source ptp priority 1
[*SPE1] commit
# Configure UPE.
[~UPE] ptp enable
[*UPE] ptp domain 1
[*UPE] ptp device-type bc
[*UPE] ptp clock-source local clock-class 185
[*UPE] clock source ptp synchronization enable
[*UPE] clock source ptp priority 1
[*UPE] commit
# Configure NPE.
[~NPE] ptp enable
[*NPE] ptp domain 1
[*NPE] ptp device-type bc
[*NPE] ptp clock-source local clock-class 185
[*NPE] clock source ptp synchronization enable
[*NPE] clock source ptp priority 1
[*NPE] commit
3. Enable 1588v2 on an interface.
# Configure SPE1.
[~SPE1] interface gigabitethernet 1/0/1
[~SPE1-GigabitEthernet1/0/1] ptp enable
[*SPE1-GigabitEthernet1/0/1] commit
[~SPE1-GigabitEthernet1/0/1] quit
[~SPE1] interface gigabitethernet 1/0/2
[~SPE1-GigabitEthernet1/0/2] ptp enable
[*SPE1-GigabitEthernet1/0/2] commit
[~SPE1-GigabitEthernet1/0/2] quit
[~SPE1] interface gigabitethernet 1/0/4
[~SPE1-GigabitEthernet1/0/4] ptp enable
[*SPE1-GigabitEthernet1/0/4] commit
[~SPE1-GigabitEthernet1/0/4] quit
# Configure UPE.
[~UPE] interface gigabitethernet 1/0/0
[~UPE-GigabitEthernet1/0/0] ptp enable
[*UPE-GigabitEthernet1/0/0] commit
[~UPE-GigabitEthernet1/0/0] quit
[~UPE] interface gigabitethernet 1/0/1
[~UPE-GigabitEthernet1/0/1] ptp enable
[*UPE-GigabitEthernet1/0/1] commit
[~UPE-GigabitEthernet1/0/1] quit
# Configure NPE.
[~NPE] interface gigabitethernet 1/0/2
[~NPE-GigabitEthernet1/0/2] ptp enable
[*NPE-GigabitEthernet1/0/2] commit
[~NPE-GigabitEthernet1/0/2] quit
[~NPE] interface gigabitethernet 1/0/3
[~NPE-GigabitEthernet1/0/3] ptp enable
[*NPE-GigabitEthernet1/0/3] commit
[~NPE-GigabitEthernet1/0/3] quit
Step 10 Configure hop-by-hop packet loss and delay measurement on the UPE, SPE1, and
the NPE; configure two ACHs on the link between the UPE and NPE: ACH1
{TLP100, TLP200} and ACH2 {TLP200, TLP310}.
# Configure UPE.
● Configure the MCP.
[~UPE] nqa ipfpm mcp
[*UPE-nqa-ipfpm-mcp] mcp id 1.1.1.1
[*UPE-nqa-ipfpm-mcp] protocol udp port 2048
[*UPE-nqa-ipfpm-mcp] authentication-mode hmac-sha256 key-id 1 cipher Huawei-123
[*UPE-nqa-ipfpm-mcp] instance 1
[*UPE-nqa-ipfpm-mcp-instance-1] description Instanceforpoint-by-pointtest
[*UPE-nqa-ipfpm-mcp-instance-1] dcp 1.1.1.1
[*UPE-nqa-ipfpm-mcp-instance-1] dcp 2.2.2.2
[*UPE-nqa-ipfpm-mcp-instance-1] dcp 4.4.4.4
[*UPE-nqa-ipfpm-mcp-instance-1] ach 1
[*UPE-nqa-ipfpm-mcp-instance-1-ach-1] flow forward
[*UPE-nqa-ipfpm-mcp-instance-1-ach-1] in-group dcp 1.1.1.1 tlp 100
[*UPE-nqa-ipfpm-mcp-instance-1-ach-1] out-group dcp 2.2.2.2 tlp 200
[*UPE-nqa-ipfpm-mcp-instance-1-ach-1] quit
[*UPE-nqa-ipfpm-mcp-instance-1] ach 2
[*UPE-nqa-ipfpm-mcp-instance-1-ach-2] flow forward
[*UPE-nqa-ipfpm-mcp-instance-1-ach-2] in-group dcp 2.2.2.2 tlp 200
[*UPE-nqa-ipfpm-mcp-instance-1-ach-2] out-group dcp 4.4.4.4 tlp 310
[*UPE-nqa-ipfpm-mcp-instance-1-ach-2] quit
[*UPE-nqa-ipfpm-mcp-instance-1] quit
[*UPE-nqa-ipfpm-mcp] quit
[*UPE] commit
After completing the configuration, run the display ipfpm mcp command on
the UPE. The command output shows MCP configurations on the UPE.
[~UPE] display ipfpm mcp
Specification Information:
Max Instance Number :64
Max DCP Number Per Instance :256
Max ACH Number Per Instance :16
Max TLP Number Per ACH :16
Configuration Information:
MCP ID :1.1.1.1
Status :Active
Protocol Port :2048
Current Instance Number :1
● Configure a DCP.
[~UPE] nqa ipfpm dcp
[*UPE-nqa-ipfpm-dcp] dcp id 1.1.1.1
[*UPE-nqa-ipfpm-dcp] mcp 1.1.1.1 port 2048
[*UPE-nqa-ipfpm-dcp] authentication-mode hmac-sha256 key-id 1 cipher Huawei-123
[*UPE-nqa-ipfpm-dcp] color-flag loss-measure tos-bit 3 delay-measure tos-bit 4
[*UPE-nqa-ipfpm-dcp] instance 1
[*UPE-nqa-ipfpm-dcp-instance-1] description Instanceforpointbypointtest
[*UPE-nqa-ipfpm-dcp-instance-1] interval 10
[*UPE-nqa-ipfpm-dcp-instance-1] flow forward source 10.1.1.1 destination 10.2.1.1
[*UPE-nqa-ipfpm-dcp-instance-1] tlp 100 in-point ingress
[*UPE-nqa-ipfpm-dcp-instance-1] quit
[*UPE-nqa-ipfpm-dcp] quit
[*UPE] commit
After completing the configuration, run the display ipfpm dcp command on
the UPE. The command output shows DCP configurations on the UPE.
[~UPE] display ipfpm dcp
Specification Information(Main Board):
Max Instance Number :64
Max 10s Instance Number :64
Max 1s Instance Number :--
Max TLP Number :512
Max TLP Number Per Instance :8
Configuration Information:
DCP ID : 1.1.1.1
Loss-measure Flag : tos-bit3
Delay-measure Flag : tos-bit4
Authentication Mode : hmac-sha256
Test Instances MCP ID : 1.1.1.1
Test Instances MCP Port : 2048
Current Instance Number :1
# Configure SPE1.
● Configure a DCP.
[~SPE1] nqa ipfpm dcp
[*SPE1-nqa-ipfpm-dcp] dcp id 2.2.2.2
[*SPE1-nqa-ipfpm-dcp] authentication-mode hmac-sha256 key-id 1 cipher Huawei-123
[*SPE1-nqa-ipfpm-dcp] color-flag loss-measure tos-bit 3 delay-measure tos-bit 4
[*SPE1-nqa-ipfpm-dcp] mcp 1.1.1.1 port 2048
[*SPE1-nqa-ipfpm-dcp] instance 1
[*SPE1-nqa-ipfpm-dcp-instance-1] description Instanceforpointbypointtest
[*SPE1-nqa-ipfpm-dcp-instance-1] interval 10
[*SPE1-nqa-ipfpm-dcp-instance-1] flow forward source 10.1.1.1 destination 10.2.1.1
[*SPE1-nqa-ipfpm-dcp-instance-1] tlp 200 mid-point flow forward ingress vpn-label 17 lsp-label 18
[*SPE1-nqa-ipfpm-dcp-instance-1] quit
[*SPE1-nqa-ipfpm-dcp] quit
[*SPE1] commit
After completing the configuration, run the display ipfpm dcp command on
SPE1. The command output shows DCP configurations on SPE1.
[~SPE1] display ipfpm dcp
Specification Information(Main Board):
Max Instance Number :64
Max 10s Instance Number :64
Max 1s Instance Number :--
Max TLP Number :512
Max TLP Number Per Instance :8
Configuration Information:
DCP ID : 2.2.2.2
Loss-measure Flag : tos-bit3
Delay-measure Flag : tos-bit4
Authentication Mode : hmac-sha256
Test Instances MCP ID : 1.1.1.1
Test Instances MCP Port : 2048
Current Instance Number :1
After completing the configuration, run the display ipfpm dcp command on
the NPE. The command output shows DCP configurations on the NPE.
[~NPE] display ipfpm dcp
Specification Information(Main Board):
Max Instance Number :64
Max 10s Instance Number :64
Max 1s Instance Number :--
Max TLP Number :512
Max TLP Number Per Instance :8
Configuration Information:
DCP ID : 4.4.4.4
Loss-measure Flag : tos-bit3
Delay-measure Flag : tos-bit4
Authentication Mode : hmac-sha256
Test Instances MCP ID : 1.1.1.1
Test Instances MCP Port : 2048
Current Instance Number :1
Step 11 Configure alarm thresholds and clear alarm thresholds for IP FPM performance
counters on the UPE.
# Configure the packet loss alarm threshold and its clear alarm threshold.
[~UPE] nqa ipfpm mcp
[*UPE-nqa-ipfpm-mcp] instance 1
[*UPE-nqa-ipfpm-mcp-instance-1] loss-measure ratio-threshold upper-limit 10 lower-limit 5
[*UPE-nqa-ipfpm-mcp-instance-1] commit
# Configure the two-way delay alarm threshold and its clear alarm threshold.
[~UPE-nqa-ipfpm-mcp-instance-1] delay-measure two-way delay-threshold upper-limit 100000 lower-limit 50000
[*UPE-nqa-ipfpm-mcp-instance-1] commit
----End
Configuration Files
● UPE configuration file
#
sysname UPE
#
ptp enable
ptp domain 1
ptp device-type bc
ptp clock-source local clock-class 185
ptp clock-source bits0 on
#
clock source ptp synchronization enable
clock source ptp priority 1
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 100:1
apply-label per-instance
tnl-policy policy1
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
mpls lsr-id 1.1.1.1
mpls
mpls te
label advertise non-null
mpls rsvp-te
mpls te cspf
#
ntp-service sync-interval 180
ntp-service refclock-master 1
#
interface GigabitEthernet1/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 192.168.1.1 255.255.255.0
ptp enable
ipfpm tlp 100
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 172.16.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
ptp enable
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 172.16.2.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
interface Tunnel11
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 2.2.2.2
ach 2
flow forward
in-group dcp 2.2.2.2 tlp 200
out-group dcp 4.4.4.4 tlp 310
#
return
● SPE1 configuration file
#
sysname SPE1
#
ptp enable
ptp domain 1
ptp device-type bc
ptp clock-source local clock-class 185
ptp clock-source bits0 on
#
clock source bits0 synchronization enable
clock source bits0 priority 1
clock source ptp synchronization enable
clock source ptp priority 1
clock bits-type bits0 2mhz
#
tunnel-selector bindTE permit node 10
apply tunnel-policy policy1
#
mpls lsr-id 2.2.2.2
mpls
mpls te
label advertise non-null
mpls rsvp-te
mpls te cspf
#
mpls ldp
#
ntp-service sync-interval 180
ntp-service unicast-server 172.16.1.1
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 172.16.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
ptp enable
ipfpm tlp 200
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 172.16.4.1 255.255.255.0
mpls
mpls ldp
ptp enable
#
interface GigabitEthernet1/0/3
undo shutdown
ip address 172.16.3.1 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet1/0/4
undo shutdown
ip address 172.16.6.1 255.255.255.0
ptp enable
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
#
interface Tunnel11
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 1.1.1.1
mpls te tunnel-id 100
mpls te reserved-for-binding
#
bgp 100
router-id 2.2.2.2
peer 1.1.1.1 as-number 100
peer 1.1.1.1 connect-interface LoopBack1
peer 3.3.3.3 as-number 100
peer 3.3.3.3 connect-interface LoopBack1
peer 4.4.4.4 as-number 100
peer 4.4.4.4 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 1.1.1.1 enable
peer 3.3.3.3 enable
peer 4.4.4.4 enable
#
ipv4-family vpnv4
undo policy vpn-target
tunnel-selector bindTE
peer 1.1.1.1 enable
peer 1.1.1.1 reflect-client
peer 1.1.1.1 next-hop-local
peer 3.3.3.3 enable
peer 4.4.4.4 enable
peer 4.4.4.4 reflect-client
peer 4.4.4.4 next-hop-local
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 2.2.2.2 0.0.0.0
network 172.16.1.0 0.0.0.255
network 172.16.3.0 0.0.0.255
network 172.16.4.0 0.0.0.255
mpls-te enable
#
tunnel-policy policy1
tunnel binding destination 1.1.1.1 te Tunnel11
#
nqa ipfpm dcp
dcp id 2.2.2.2
mcp 1.1.1.1 port 2048
authentication-mode hmac-sha256 key-id 1 cipher #%#%/#(8ARUz1+=(sUrXdsM1P.x#%#%
color-flag loss-measure tos-bit 3 delay-measure tos-bit 4
instance 1
description Instanceforpointbypointtest
flow forward source 10.1.1.1 destination 10.2.1.1
tlp 200 mid-point flow forward ingress vpn-label 17 lsp-label 18
#
return
● SPE2 configuration file
#
sysname SPE2
#
tunnel-selector bindTE permit node 10
apply tunnel-policy policy1
#
mpls lsr-id 3.3.3.3
mpls
mpls te
label advertise non-null
mpls rsvp-te
mpls te cspf
#
mpls ldp
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 172.16.5.1 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 172.16.2.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet1/0/3
undo shutdown
ip address 172.16.3.2 255.255.255.0
mpls
mpls te
mpls ldp
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
#
interface Tunnel12
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 1.1.1.1
mpls te tunnel-id 200
mpls te reserved-for-binding
#
bgp 100
router-id 3.3.3.3
peer 1.1.1.1 as-number 100
peer 1.1.1.1 connect-interface LoopBack1
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack1
peer 4.4.4.4 as-number 100
peer 4.4.4.4 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 1.1.1.1 enable
peer 2.2.2.2 enable
peer 4.4.4.4 enable
#
ipv4-family vpnv4
undo policy vpn-target
tunnel-selector bindTE
peer 1.1.1.1 enable
peer 1.1.1.1 reflect-client
peer 1.1.1.1 next-hop-local
peer 2.2.2.2 enable
peer 4.4.4.4 enable
peer 4.4.4.4 reflect-client
peer 4.4.4.4 next-hop-local
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 3.3.3.3 0.0.0.0
network 172.16.2.0 0.0.0.255
network 172.16.3.0 0.0.0.255
network 172.16.5.0 0.0.0.255
mpls-te enable
#
tunnel-policy policy1
tunnel binding destination 1.1.1.1 te Tunnel12
#
return
● NPE configuration file
#
sysname NPE
#
ptp enable
ptp domain 1
ptp device-type bc
ptp clock-source local clock-class 185
ptp clock-source bits0 on
#
clock source ptp synchronization enable
clock source ptp priority 1
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 100:1
apply-label per-instance
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
mpls lsr-id 4.4.4.4
mpls
#
mpls ldp
#
ntp-service sync-interval 180
ntp-service unicast-server 172.16.4.1
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 172.16.5.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 172.16.4.2 255.255.255.0
mpls
mpls ldp
ptp enable
#
interface GigabitEthernet1/0/3
undo shutdown
ip binding vpn-instance vpna
ip address 192.168.2.1 255.255.255.0
ptp enable
ipfpm tlp 310
#
interface LoopBack1
ip address 4.4.4.4 255.255.255.255
#
bgp 100
router-id 4.4.4.4
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack1
peer 3.3.3.3 as-number 100
peer 3.3.3.3 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 2.2.2.2 enable
peer 3.3.3.3 enable
#
ipv4-family vpnv4
policy vpn-target
peer 2.2.2.2 enable
peer 3.3.3.3 enable
#
ipv4-family vpn-instance vpna
import-route direct
auto-frr
#
ospf 1
area 0.0.0.0
network 4.4.4.4 0.0.0.0
network 172.16.4.0 0.0.0.255
network 172.16.5.0 0.0.0.255
#
nqa ipfpm dcp
dcp id 4.4.4.4
mcp 1.1.1.1 port 2048
authentication-mode hmac-sha256 key-id 1 cipher #%#%Se9P>q>D>~v\Es$K{z2H1VW##%#%
color-flag loss-measure tos-bit 3 delay-measure tos-bit 4
instance 1
description Instanceforpointbypointtest
flow forward source 10.1.1.1 destination 10.2.1.1
tlp 310 out-point egress
#
return
3 NetStream Configuration
Context
The NetStream feature may be used to analyze the communication information of terminal
customers for network traffic statistics and management purposes. Before enabling the
NetStream feature, ensure that it is performed within the boundaries permitted by
applicable laws and regulations. Effective measures must be taken to ensure that
information is securely protected.
3.5 Collecting Statistics About IPv6 Original Flows
Before collecting statistics about IPv6 original flows, familiarize yourself with the
usage scenario, complete the pre-configuration tasks, and obtain the data
required for the configuration.
3.6 Collecting Statistics About IPv6 Aggregated Flows
Before collecting statistics about IPv6 aggregated flows, familiarize yourself with
the usage scenario, complete the pre-configuration tasks, and obtain the data
required for the configuration.
3.7 Collecting Statistics About IPv4 Flexible Flows
Before collecting statistics about IPv4 flexible flows, familiarize yourself with the
applicable environment and complete the pre-configuration tasks. This can help
you complete the configuration task quickly and accurately.
3.8 Collecting Statistics About IPv6 Flexible Flows
Before collecting statistics about IPv6 flexible flows, familiarize yourself with the
applicable environment and complete the pre-configuration tasks. This can help
you complete the configuration task quickly and accurately.
3.9 Collecting Statistics About MPLS IPv4 Packets
Collecting packet statistics on MPLS networks helps you monitor MPLS network
status.
3.10 Collecting Statistics About MPLS IPv6 Packet
Collecting packet statistics on MPLS networks helps you to monitor MPLS network
conditions.
3.11 Collecting Statistics About BGP/MPLS VPN Flows
Collecting traffic statistics on BGP/MPLS VPN networks helps monitor the BGP/
MPLS VPN network condition.
3.12 Maintaining NetStream
This section describes how to maintain NetStream.
3.13 Configuration Examples for NetStream
This section provides NetStream configuration examples.
Configuration Precautions
Usage Scenario
On the network shown in Figure 3-2, a carrier enables NetStream on the router
functioning as a NetStream Data Exporter (NDE) to obtain detailed network
application information. The carrier can use the information to monitor abnormal
network traffic, analyze users' operation modes, and plan networks between ASs.
Statistics about original flows are collected based on the 7-tuple information. The
NDE samples IPv4 flows passing through it, collects statistics about sampled flows,
encapsulates the aging NetStream original flows into UDP packets, and sends the
packets to the NetStream Collector (NSC) for processing. Unlike collecting
statistics about aggregated flows, collecting statistics about original flows imposes
less impact on NDE performance. Original flows consume more storage space and
network bandwidth resources because the volume of original flows is greater than
that of aggregated flows.
Pre-configuration Tasks
Before collecting the statistics about IPv4 original flows, configure static routes or
enable an IGP to implement network connectivity.
Context
NetStream services can be processed in the following modes:
● Distributed mode
An interface board samples packets, aggregates flows, and outputs flows.
The ip netstream sampler to slot command has the same function as the ipv6
netstream sampler to slot command.
● The execution of either command takes effect on all packets, and there is no
need to configure both of them. If it is required to configure both of them,
ensure that NetStream service processing modes are the same. A mode
inconsistency causes an error.
Procedure
● Configure the distributed NetStream service processing mode.
a. Run system-view
The system view is displayed.
b. Run slot slot-id
The view of the slot in which the interface board for NetStream sampling resides is displayed.
c. Run ip netstream sampler to slot self
The distributed NetStream service processing mode is specified.
d. Run commit
The configuration is committed.
----End
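For example, assuming the interface board that performs NetStream sampling is in slot 1 (the slot number and the default device name HUAWEI are used here only for illustration), the sequence is as follows:
<HUAWEI> system-view
[~HUAWEI] slot 1
[~HUAWEI-slot-1] ip netstream sampler to slot self
[*HUAWEI-slot-1] commit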
Procedure
Step 1 Run system-view
The V9 format allows the output original flows to carry more variable statistics, to
expand newly defined flow elements more flexibly, and to generate new records
more easily.
Compared with the V9 format, the IPFIX format improves packet extensibility, compatibility, security, and reliability. In addition, the IPFIX format adds an enterprise identifier field; to set this field, NetStream IPv4 original flows must be output in IPFIX format.
The V5 format is fixed and incurs a low system cost. In most cases, NetStream original flows are output in V5 format. In any of the following situations, however, NetStream original flows must be output in V9 or IPFIX format:
● NetStream original flows need to carry BGP next-hop information.
● Interface indexes carried in the output NetStream original flows need to be
extended from 16 bits to 32 bits.
Step 3 (Optional) Configure NetStream packets to carry the flow sequence field.
1. Run slot slot-id
The view of the slot in which the interface board for NetStream sampling
resides is displayed.
2. Run ip netstream export sequence-mode flow
NetStream packets are configured to carry the flow sequence field.
3. Run quit
The system view is displayed.
The sequence numbers of template packets and option template packets in IPFIX
format are configured to remain unchanged, but data packets and option data
packets in IPFIX format are still consecutively numbered.
The interval at which the template for outputting original flows in the V9 or IPFIX
format is refreshed.
A source IP address and a source port are specified for original flows.
Step 7 Specify the destination IP address and UDP port number of the peer NetStream
Collector (NSC) for NetStream original flows in the system or slot view.
● In the system view:
Run ip netstream export host [ ipv6 ] ip-address port [ vpn-instance vpn-instance-name ] [ dscp dscp-value ]
The destination IP address and UDP port number of the peer NSC are
specified for NetStream original flows to be output.
● In the slot view:
a. Run slot slot-id
The view of the slot in which the interface board for NetStream sampling
resides is displayed.
b. Run ip netstream export host [ ipv6 ] ip-address port [ vpn-instance vpn-instance-name ] [ dscp dscp-value ]
The destination IP address and UDP port number of the peer NSC are
specified for NetStream original flows to be output.
c. Run quit
The system view is displayed.
----End
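For example, to output original flows to an NSC at 192.168.100.2 on UDP port 9001 from the system view (the address and port number are assumptions used only for illustration):
<HUAWEI> system-view
[~HUAWEI] ip netstream export host 192.168.100.2 9001
[*HUAWEI] commit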
Context
The increasing variety of services and applications on networks requires carriers to provide more refined management and accounting services.
Procedure
Step 1 Run system-view
A source IP address and a source port are configured for output NetStream flows.
Step 4 Run ip netstream export host [ ipv6 ] ip-address port [ vpn-instance vpn-
instance-name ] [ version { 5 | 9 | ipfix } ] [ dscp dscp-value ]
The destination IP address and destination port number for traffic statistics are
specified.
If NetStream monitoring services have been configured on the interface, statistics about original
flows are sent to the destination IP address specified in the NetStream monitoring service view,
not the system view. The source address and source port configured in the NetStream
monitoring service view are also used for output NetStream flows.
----End
Context
Before you enable the NSC to properly receive and parse NetStream packets
output by the NDE, specify the same AS field mode and interface index type on
the NDE and NSC.
● AS field mode: The length of the AS field in IP packets can be set to 16 bits
or 32 bits. Devices on a network must use the same AS field mode. An AS
field mode inconsistency causes NetStream to fail to sample inter-AS traffic.
NOTICE
If the 32-bit AS field mode is used, the NMS must identify the 32-bit AS field.
If the NMS cannot identify the 32-bit AS field, the NMS fails to identify inter-
AS traffic sent by devices.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run ip netstream as-mode { 16 | 32 }
The AS field mode is specified on the router.
Step 3 Run ip netstream export index-switch { 16 | 32 }
The type of the interface index carried in the NetStream packet output by the
router is configured. By default, the interface index carried in the NetStream
packet output by the router is 16 bits long. An interface index can be changed
from 16 bits to 32 bits only after the following conditions are met:
● Original flows are output in V9 or IPFIX format.
● The NetStream packet format for all aggregated flows is V9 or IPFIX format.
----End
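For example, assuming that original flows are already output in V9 or IPFIX format, the following commands switch the device to 32-bit AS numbers and 32-bit interface indexes:
<HUAWEI> system-view
[~HUAWEI] ip netstream as-mode 32
[*HUAWEI] ip netstream export index-switch 32
[*HUAWEI] commit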
Procedure
Step 1 Run system-view
An original flow for each flag value is created. If statistics collection for TCP flags
is enabled, the number of original flows will greatly increase.
----End
Context
No matter whether traffic statistics are exported as original flows or aggregated
flows, option packet data is exported to the NetStream Collector (NSC) as a
supplement. In this way, the NetStream Data Exporter (NDE) can obtain
information, such as the sampling ratio and whether the sampling function is
enabled, to reflect the actual network traffic.
Option packets, which are independent of statistics packets, are exported to the
NSC in V9 or IPFIX format. Therefore, the required option template is sent to the
NMS for parsing option packets. You can set option template refreshing
parameters as needed to regularly refresh the template to notify the NSC of the
latest option template format.
Procedure
● Configure interface option packets to be exported in V9 or IPFIX format.
a. Run system-view
The packet sending interval and timeout interval are set for option
template refreshing. An option template can be refreshed at a fixed
packet sending interval or timeout interval. The two intervals can both
take effect. In the command, refresh-rate packet-interval indicates that
the option template is refreshed at a fixed packet sending interval, and
timeout-rate timeout-interval indicates that the option template is
refreshed at a fixed timeout interval.
----End
Procedure
Step 1 Run system-view
----End
Context
Procedure
Step 1 Run system-view
Step 2 Configure the sampling mode and sampling ratio. Perform at least one of the following steps:
● Configure a sampling mode and sampling ratio globally.
a. Run ip netstream sampler { fix-packets packet-interval | random-
packets packet-interval | fix-time time-interval } { inbound | outbound }
undo ip netstream sampler { inbound | outbound }
The sampling mode and sampling ratio are configured globally.
b. Run interface interface-type interface-number
The interface view is displayed.
● Configure sampling mode and sampling ratio for the interface.
a. Run interface interface-type interface-number
The interface view is displayed.
b. Run ip netstream sampler { fix-packets packet-interval | random-
packets packet-interval | fix-time time-interval } { inbound | outbound }
undo ip netstream sampler { inbound | outbound }
The sampling mode and sampling ratio are configured for the interface.
The sampling mode and sampling ratio configured in the system view apply to all interfaces on the device. The sampling mode and sampling ratio configured in the interface view take precedence over those configured in the system view.
Statistics about packets' BGP next-hop information can also be collected. Original
flows output in V5 format, however, cannot carry the BGP next-hop information.
NetStream is not applied to traffic matching the ACL rule or traffic behavior that
contains deny.
The traffic behavior view must be displayed before you run this command.
----End
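For example, to sample one of every 1000 incoming packets on an interface (the interface, sampling ratio, and device name are assumptions for illustration; NetStream is assumed to be already enabled on the interface):
<HUAWEI> system-view
[~HUAWEI] interface gigabitethernet 1/0/1
[~HUAWEI-GigabitEthernet1/0/1] ip netstream sampler random-packets 1000 inbound
[*HUAWEI-GigabitEthernet1/0/1] commit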
Prerequisites
NetStream has been enabled on an interface.
Procedure
Step 1 Run system-view
----End
Procedure
● Run the display ip netstream cache origin [ source-ip source-ip ] [ source-port source-port ] [ destination-ip destination-ip ] [ destination-port destination-port ] [ protocol { udp | tcp | protocol-number } ] [ time-range from start-time to end-time ] slot slot-id command to check information about the NetStream buffer.
Usage Scenario
On the network shown in Figure 3-3, a carrier enables NetStream on the router
functioning as a NetStream Data Exporter (NDE) to obtain detailed network
application information. The carrier can use the information to monitor abnormal
network traffic, analyze users' operation modes, and plan networks between ASs.
Statistics about NetStream aggregated flows contain information about original flows with the same attributes, whereas statistics about NetStream original flows are collected separately for each flow based on the 7-tuple information.
Pre-configuration Tasks
Before collecting statistics about IPv4 aggregated flows, complete the following
tasks:
● Configure static routes or enable an IGP to implement network connectivity.
● Enable statistics collection for NetStream original flows.
Context
NetStream services can be processed in the following modes:
● Distributed mode
An interface board samples packets, aggregates flows, and outputs flows.
The ip netstream sampler to slot command has the same function as the ipv6
netstream sampler to slot command.
● The execution of either command takes effect on all packets, and there is no
need to configure both of them. If it is required to configure both of them,
ensure that NetStream service processing modes are the same. A mode
inconsistency causes an error.
Procedure
● Configure the distributed NetStream service processing mode.
a. Run system-view
The system view is displayed.
b. Run slot slot-id
The view of the slot in which the interface board for NetStream sampling
resides is displayed.
c. Run ip netstream sampler to slot self
The distributed NetStream service processing mode is specified.
d. Run commit
The configuration is committed.
----End
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run ip netstream aggregation { as | as-tos | bgp-nexthop-tos | destination-prefix | destination-prefix-tos | index-tos | mpls-label | prefix | prefix-tos | protocol-port | protocol-port-tos | source-prefix | source-prefix-tos | source-index-tos | vlan-id | bgp-community | vni-sip-dip }
The NetStream aggregation view is created.
If the NetStream flow aggregation function is enabled on a device, the device classifies and
aggregates original flows based on specified rules and sends the aggregated flows to the
NetStream Data Analyzer (NDA) for analysis. Aggregating original flows minimizes the
consumption of network bandwidths, CPU resources, and memory resources. Flow
attributes based on which flows are aggregated vary according to flow aggregation modes.
The length of the aggregate mask is set. The effective mask is the greater one
between the mask in the FIB table and the configured mask. If no aggregate mask
is set, the system uses the mask in the FIB table for flow aggregation.
The aggregate mask takes effect only on flows aggregated in the following modes:
destination-prefix, destination-prefix-tos, prefix, prefix-tos, source-prefix, and source-prefix-
tos.
----End
Procedure
Step 1 Run system-view
Step 2 Run ip netstream export host [ ipv6 ] ip-address port [ vpn-instance vpn-
instance-name ] [ dscp dscp-value ]
The destination IP address and UDP port number of the peer NSC are specified for
NetStream original flows to be output.
If the destination IP addresses are specified in both the system and the
aggregation views, the configuration in the aggregation view takes effect.
The output format is specified for the aggregated flows. Flows aggregated in as,
as-tos, destination-prefix, destination-prefix-tos, prefix, prefix-tos, protocol-
port, protocol-port-tos, source-prefix, or source-prefix-tos mode are output in
V8 format by default. You can specify the output format for aggregated flows as
needed.
For the vlan-id, bgp-nexthop-tos, vni-sip-dip, and index-tos aggregation modes, aggregated packets are encapsulated in V9 format by default; the only alternative is IPFIX, which you can set using the export version command.
Step 6 (Optional) Configure NetStream packets to carry the flow sequence field.
1. Run slot slot-id
The view of the slot in which the interface board for NetStream sampling
resides is displayed.
2. Run ip netstream export sequence-mode flow
NetStream packets are configured to carry the flow sequence field.
3. Run quit
Return to the system view.
The interval at which the template for outputting aggregated flows in the V9 or
IPFIX format is refreshed is set.
The source IP address and source port are specified for aggregated flows.
The source IP address and source port specified in the aggregation view take
precedence over that specified in the system view. If no source IP address or
source port is specified in the aggregation view, the source IP address and source
port specified in the system view take effect.
The destination IP address and UDP port number of the peer NSC are specified for
NetStream original flows to be output.
The destination IP address specified in the NetStream aggregation view takes precedence
over that specified in the system view.
Step 11 (Optional) Exit the IPv4 aggregation view. In the system view, run ip netstream export template sequence-number fixed.
The sequence numbers of template packets and option template packets in IPFIX
format are configured to remain unchanged, but data packets and option data
packets in IPFIX format are still consecutively numbered.
Step 12 Run commit
The configuration is committed.
----End
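For example, the following sketch aggregates flows by destination prefix and sends the aggregated flows to an NSC. The address, port number, view prompts, and the value given to the export version command are assumptions for illustration; export version is the command mentioned above for changing the output format.
<HUAWEI> system-view
[~HUAWEI] ip netstream aggregation destination-prefix
[*HUAWEI-aggregation-dstpre] export version 9
[*HUAWEI-aggregation-dstpre] ip netstream export host 192.168.100.2 9002
[*HUAWEI-aggregation-dstpre] commit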
Context
Before you enable the NSC to properly receive and parse NetStream packets
output by the NDE, specify the same AS field mode and interface index type on
the NDE and NSC.
● AS field mode: The length of the AS field in IP packets can be set to 16 bits
or 32 bits. Devices on a network must use the same AS field mode. An AS
field mode inconsistency causes NetStream to fail to sample inter-AS traffic.
NOTICE
If the 32-bit AS field mode is used, the NMS must identify the 32-bit AS field.
If the NMS cannot identify the 32-bit AS field, the NMS fails to identify inter-
AS traffic sent by devices.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run ip netstream as-mode { 16 | 32 }
The AS field mode is specified on the router.
Step 3 Run ip netstream export index-switch { 16 | 32 }
The type of the interface index carried in the NetStream packet output by the
router is configured. By default, the interface index carried in the NetStream
packet output by the router is 16 bits long. An interface index can be changed
from 16 bits to 32 bits only after the following conditions are met:
● Original flows are output in V9 or IPFIX format.
● The NetStream packet format for all aggregated flows is V9 or IPFIX format.
----End
Context
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Configure the sampling mode and sampling ratio. Perform at least one of the following steps:
● Configure a sampling mode and sampling ratio globally.
a. Run ip netstream sampler { fix-packets packet-interval | random-
packets packet-interval | fix-time time-interval } { inbound | outbound }
undo ip netstream sampler { inbound | outbound }
The sampling mode and sampling ratio are configured globally.
b. Run interface interface-type interface-number
The interface view is displayed.
● Configure sampling mode and sampling ratio for the interface.
a. Run interface interface-type interface-number
The interface view is displayed.
b. Run ip netstream sampler { fix-packets packet-interval | random-
packets packet-interval | fix-time time-interval } { inbound | outbound }
undo ip netstream sampler { inbound | outbound }
The sampling mode and sampling ratio are configured for the interface.
The sampling mode and sampling ratio configured in the system view apply to all interfaces on the device. The sampling mode and sampling ratio configured in the interface view take precedence over those configured in the system view.
Statistics about packets' BGP next-hop information can also be collected. Original
flows output in V5 format, however, cannot carry the BGP next-hop information.
NetStream is not applied to traffic matching the ACL rule or traffic behavior that
contains deny.
The traffic behavior view must be displayed before you run this command.
----End
Prerequisites
NetStream has been enabled on an interface.
Procedure
Step 1 Run system-view
----End
Procedure
● Run the display ip netstream cache { as | as-tos | bgp-nexthop-tos | bgp-
community | destination-prefix | destination-prefix-tos | index-tos | mpls-
label | prefix | prefix-tos | protocol-port | protocol-port-tos | source-prefix |
source-prefix-tos | source-index-tos | vni-sip-dip | vlan-id | flexflowtpl
record-name } slot slot-id command to check flows aggregated in different
modes in the buffer.
● Run the display ip netstream statistics slot slot-id command to check
statistics about NetStream flows.
● Run the display ip netstream statistics interface interface-type interface-
number command to check the statistics about the sampled packets on an
interface.
● Run the display netstream { all | global | interface interface-type interface-
number } command to check NetStream configurations in different views.
● Run the display ip netstream cache aggregation statistics slot slot-id
command to check aggregation flow table specifications and the number of
current flows of a specific board.
----End
Usage Scenario
On the network shown in Figure 3-4, a carrier enables NetStream on the router
functioning as a NetStream Data Exporter (NDE) to obtain detailed network
application information. The carrier can use the information to monitor abnormal
network traffic, analyze users' operation modes, and plan networks between ASs.
Statistics about original flows are collected based on the 7-tuple information. The
NetStream Data Exporter (NDE) samples IPv6 flows passing through it, collects
statistics about sampled flows, encapsulates the aging NetStream original flows
into UDP packets, and sends the packets to the NetStream Collector (NSC) for
processing. Unlike collecting statistics about aggregated flows, collecting statistics
about original flows imposes less impact on NDE performance. Original flows
consume more storage space and network bandwidth resources because the
volume of original flows is greater than that of aggregated flows.
Pre-configuration Tasks
Before collecting statistics about IPv6 original flows, complete the following tasks:
● Configure parameters of the link layer protocol and IP addresses for interfaces
so that the link layer protocol on the interfaces can go Up.
● Configure static routes or enable an IGP to implement network connectivity.
Context
NetStream services can be processed in the following modes:
● Distributed mode
An interface board samples packets, aggregates flows, and outputs flows.
The ip netstream sampler to slot command has the same function as the ipv6
netstream sampler to slot command.
● The execution of either command takes effect on all packets, and there is no
need to configure both of them. If it is required to configure both of them,
ensure that NetStream service processing modes are the same. A mode
inconsistency causes an error.
Procedure
● Specify the distributed NetStream service processing mode.
a. Run system-view
The system view is displayed.
b. Run slot slot-id
The view of the slot in which the interface board for NetStream sampling
resides is displayed.
c. Run ipv6 netstream sampler to slot self
The distributed NetStream service processing mode is specified.
d. Run commit
The configuration is committed.
----End
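A minimal sketch of this procedure, assuming the sampling board is in slot 2 (the slot number and device name are illustrative):
<HUAWEI> system-view
[~HUAWEI] slot 2
[~HUAWEI-slot-2] ipv6 netstream sampler to slot self
[*HUAWEI-slot-2] commit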
Context
IPv6 original flows can be output only in V9 or IPFIX format.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run ipv6 netstream export version { 9 [ origin-as | peer-as ] [ bgp-nexthop ] [ ttl ] | ipfix [ origin-as | peer-as ] [ bgp-nexthop ] [ ttl ] }
The output format of original flows is configured.
Step 3 (Optional) Configure NetStream packets to carry the flow sequence field.
1. Run slot slot-id
The view of the slot in which the interface board for NetStream sampling
resides is displayed.
2. Run ip netstream export sequence-mode flow
NetStream packets are configured to carry the flow sequence field.
3. Run quit
Return to the system view.
The sequence numbers of template packets and option template packets in IPFIX
format are configured to remain unchanged, but data packets and option data
packets in IPFIX format are still consecutively numbered.
The interval at which the template for outputting original flows in the V9 or IPFIX
format is refreshed.
The source IP address and source port are specified for original flows.
Step 7 Specify the destination IP address and UDP port number of the peer NSC for
NetStream original flows in the system or slot view.
● In the system view:
Run ipv6 netstream export host [ ipv6 ] ip-address port [ vpn-instance vpn-instance-name ] [ dscp dscp-value ]
The destination IP address and destination port number for traffic statistics
are specified.
● In the slot view:
a. Run slot slot-id
The view of the slot in which the interface board for NetStream sampling
resides is displayed.
b. Run ipv6 netstream export host [ ipv6 ] ip-address port [ vpn-instance vpn-instance-name ] [ dscp dscp-value ]
The destination IP address and destination port number for traffic
statistics are specified.
c. Run quit
Return to the system view.
----End
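For example, to output IPv6 original flows in V9 format to an NSC at 192.168.100.2 on UDP port 9001 (the address and port number are assumptions for illustration):
<HUAWEI> system-view
[~HUAWEI] ipv6 netstream export version 9
[*HUAWEI] ipv6 netstream export host 192.168.100.2 9001
[*HUAWEI] commit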
Context
The increasing variety of services and applications on networks requires carriers to provide more refined management and accounting services.
If NetStream is configured on multiple interfaces on an NDE, all interfaces send
traffic statistics to a single NetStream Collector (NSC). The NSC cannot distinguish
interfaces, and therefore, cannot manage or analyze traffic statistics based on
interfaces. In addition, the NSC will be overloaded due to a great amount of
information.
NetStream monitoring configured on an NDE allows the NDE to send traffic
statistics collected on specified interfaces to specified NSCs for analysis, which
achieves interface-specific service monitoring. Traffic statistics can be balanced
among these NSCs.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run ipv6 netstream monitor monitor-name
A NetStream monitoring service view is created and displayed. If a NetStream
monitoring service view already exists, the view is displayed.
Step 3 Run ipv6 netstream export host [ ipv6 ] ip-address port [ vpn-instance vpn-
instance-name ] [ version { 9 | ipfix } ] [ dscp dscp-value ]
The destination IP address and destination port number for traffic statistics are
specified.
Step 4 (Optional) Run ipv6 netstream export source { ip-address | ipv6 ipv6-address }
[ port ]
A source IP address and a source port are configured for output NetStream flows.
Step 5 Run quit
Return to the system view.
Step 6 Run interface interface-type interface-number
The interface view is displayed.
Step 7 Run ipv6 netstream monitor monitor-name { inbound | outbound }
NetStream monitoring services are configured in the inbound or outbound
direction of the interface.
If NetStream monitoring services have been configured on the interface, statistics about original
flows are sent to the destination IP address specified in the NetStream monitoring service view,
not the system view. The source address and source port configured in the NetStream
monitoring service view are also used for output NetStream flows.
----End
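A minimal sketch of an interface-specific monitoring service follows; the monitor name, collector address and port, interface, and view prompts are assumptions for illustration.
# Create the monitoring service and specify its collector.
[~HUAWEI] ipv6 netstream monitor nsc1
[*HUAWEI-netstream-monitor-nsc1] ipv6 netstream export host 192.168.10.101 9001
[*HUAWEI-netstream-monitor-nsc1] quit
# Bind the monitoring service to the inbound direction of the monitored interface.
[*HUAWEI] interface GigabitEthernet 1/0/0
[*HUAWEI-GigabitEthernet1/0/0] ipv6 netstream monitor nsc1 inbound
[*HUAWEI-GigabitEthernet1/0/0] commit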
Context
Before you enable the NSC to properly receive and parse NetStream packets
output by the NDE, specify the same AS field mode and interface index type on
the NDE and NSC.
● AS field mode: The length of the AS field in IP packets can be set to 16 bits
or 32 bits. Devices on a network must use the same AS field mode. An AS
field mode inconsistency causes NetStream to fail to sample inter-AS traffic.
NOTICE
If the 32-bit AS field mode is used, the NMS must identify the 32-bit AS field.
If the NMS cannot identify the 32-bit AS field, the NMS fails to identify inter-
AS traffic sent by devices.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run ipv6 netstream as-mode { 16 | 32 }
The AS field mode is specified on the router.
Step 3 Run ipv6 netstream export index-switch { 16 | 32 }
The type of the interface index carried in the NetStream packet output by the
router is specified.
An interface index can be changed from 16 bits to 32 bits only after the following
conditions are met:
● Original flows are output in V9 or IPFIX format.
----End
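As a sketch, the following commands select the 32-bit AS field mode and 32-bit interface indexes; the choice of 32 bits is an assumption that must match the NSC/NMS capability described above, and original flows are assumed to be output in V9 or IPFIX format.
[~HUAWEI] ipv6 netstream as-mode 32
[*HUAWEI] ipv6 netstream export index-switch 32
[*HUAWEI] commit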
Context
Perform the following steps on the router on which TCP flag statistics are to be
collected.
By enabling statistics collection of TCP flags, you can extract the TCP-flag
information from network packets and send it to the NMS. The NMS can
determine whether there are flood attacks to the network.
Procedure
Step 1 Run system-view
----End
Procedure
Step 1 Run system-view
----End
Context
Procedure
Step 1 Run system-view
Step 2 Configure the sampling mode and sampling ratio. Perform at least one of the
following steps:
● Configure a sampling mode and sampling ratio globally.
a. Run ipv6 netstream sampler { fix-packets fix-packet-number | random-
packets random-packet-number | fix-time fix-time-number } { inbound |
outbound }
The sampling mode and sampling ratio are configured globally.
b. Run interface interface-type interface-number
The interface view is displayed.
● Configure sampling mode and sampling ratio for the interface.
a. Run interface interface-type interface-number
The interface view is displayed.
b. Run ipv6 netstream sampler { fix-packets fix-packet-number | random-
packets random-packet-number | fix-time fix-time-number } { inbound |
outbound }
The sampling mode and sampling ratio are configured for the interface.
By default, NetStream is disabled from sampling packets. Instead, it
collects statistics about all packets.
The sampling mode and sampling ratio configured in the system view are
applicable to all interfaces on the device. The sampling mode and sampling ratio
configured in the interface view take precedence over those configured in the
system view.
The ip netstream sampler command has the same function as the ipv6
netstream sampler command.
----End
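As an illustration, the following sketch enables random packet sampling on one interface; the sampling ratio of 1000 and the interface are assumptions.
[~HUAWEI] interface GigabitEthernet 1/0/0
[~HUAWEI-GigabitEthernet1/0/0] ipv6 netstream sampler random-packets 1000 inbound
[*HUAWEI-GigabitEthernet1/0/0] commit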
Prerequisites
NetStream has been enabled on an interface.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run interface interface-type interface-number
The interface view is displayed.
Step 3 Run ip netstream mpls exclude
MPLS packet sampling is disabled on the interface.
Step 4 Run commit
The configuration is committed.
----End
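A minimal sketch of this procedure, assuming GE 1/0/0 is the interface on which MPLS packets should not be sampled:
[~HUAWEI] interface GigabitEthernet 1/0/0
[~HUAWEI-GigabitEthernet1/0/0] ip netstream mpls exclude
[*HUAWEI-GigabitEthernet1/0/0] commit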
Prerequisites
NetStream IPv6 flow statistics have been collected.
Procedure
● Run the display ipv6 netstream cache origin [ source-ipv6 source-ip ]
[ source-port source-port ] [ destination-ipv6 destination-ip ] [ destination-
port destination-port ] [ protocol { udp | tcp | protocol-number } ] [ time-
range from start-time to end-time ] slot slot-id command to check
information about the NetStream buffer.
To view historical sampling information about IPv6 original flows on the CF card of a
main control board, run the display ipv6 netstream cache origin log command.
● Run the display ipv6 netstream statistics slot slot-id command to check
statistics about NetStream flows.
● Run the display ipv6 netstream monitor { all | monitor-name } command to
check the monitoring information about IPv6 original flows.
----End
Usage Scenario
On the network shown in Figure 3-5, a carrier enables NetStream on the router
functioning as a NetStream Data Exporter (NDE) to obtain detailed network
application information. The carrier can use the information to monitor abnormal
network traffic, analyze users' operation modes, and plan networks between ASs.
Statistics about NetStream aggregated flows contain information about original
flows with the same attributes, whereas statistics about NetStream original flows
contain information about sampled packets. The volume of aggregated flow
statistics is therefore smaller than that of original flow statistics.
Pre-configuration Tasks
Before collecting statistics about IPv6 aggregated flows, complete the following
tasks:
● Configure parameters of the link layer protocol and IP addresses for interfaces
so that the link layer protocol on the interfaces can go Up.
● Configure static routes or enable an IGP to implement network connectivity.
● Enable statistics collection for NetStream original flows.
Context
NetStream services can be processed in the following modes:
● Distributed mode
An interface board samples packets, aggregates flows, and outputs flows.
The ip netstream sampler to slot command has the same function as the ipv6
netstream sampler to slot command.
● The execution of either command takes effect on all packets, and there is no
need to configure both of them. If it is required to configure both of them,
ensure that NetStream service processing modes are the same. A mode
inconsistency causes an error.
Procedure
● Specify the distributed NetStream service processing mode.
a. Run system-view
The system view is displayed.
b. Run slot slot-id
The view of the slot in which the interface board for NetStream sampling
resides is displayed.
c. Run ipv6 netstream sampler to slot self
The distributed NetStream service processing mode is specified.
d. Run commit
The configuration is committed.
----End
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run ipv6 netstream aggregation { as | as-tos | bgp-nexthop-tos | destination-
prefix | destination-prefix-tos | index-tos | mpls-label | prefix | prefix-tos |
protocol-port | protocol-port-tos | source-prefix | source-prefix-tos | vlan-id }
The NetStream aggregation view is created.
After collecting statistics about NetStream original flows, the router aggregates original
flows into aggregated flows based on specified rules, encapsulates aggregated flows into
UDP packets, and sends UDP packets after the aging timer expires. Aggregating original
flows minimizes the consumption of network bandwidths, CPU resources, and memory
resources. Attributes based on which flows are aggregated vary according to aggregation
modes. Table 3-2 describes the mapping between aggregation modes and flow attributes.
The length of the aggregate mask is set. The mask used by the system is the
greater one between the mask in the FIB table and the configured mask. If no
aggregate mask is set, the system uses the mask in the FIB table for flow
aggregation.
The aggregate mask takes effect only on flows aggregated in the following modes:
destination-prefix, destination-prefix-tos, prefix, prefix-tos, source-prefix, and source-prefix-
tos.
----End
Context
IPv6 aggregated flows can be output in V9 or IPFIX format.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run ipv6 netstream export host ip-address port [ vpn-instance vpn-instance-
name ] [ dscp dscp-value ]
The destination IP address and destination port number for traffic statistics are
specified.
The destination IP address specified in the system view takes precedence over that
specified in the NetStream aggregation view.
Step 3 Run ipv6 netstream aggregation { as | as-tos | bgp-nexthop-tos | destination-
prefix | destination-prefix-tos | index-tos | mpls-label | prefix | prefix-tos |
protocol-port | protocol-port-tos | source-prefix | source-prefix-tos | vlan-id }
The IPv6 NetStream aggregation view is displayed.
Step 4 Run enable
The NetStream aggregation mode is enabled.
Step 5 (Optional) Run template timeout-rate timeout-interval
The interval at which the template for outputting aggregated flows in the V9 or
IPFIX format is refreshed is set.
Step 6 Run ipv6 netstream export source { ip-address | ipv6 ipv6-address } [ port ]
The source IP address and source port are specified for aggregated flows.
The source IP address and source port specified in the aggregation view take
precedence over those specified in the system view. If no source IP address or
source port is specified in the aggregation view, the source IP address and source
port specified in the system view take effect.
Step 7 Run ipv6 netstream export host [ ipv6 ] ip-address port [ vpn-instance vpn-
instance-name ] [ dscp dscp-value ]
The destination IP address and destination port number for traffic statistics are
specified.
● You can specify up to eight destination IP addresses in the system view, IPv4
NetStream aggregation view, and IPv6 NetStream aggregation view.
● The destination IP address specified in the system view takes precedence over that
specified in the NetStream aggregation view.
Step 9 (Optional) Exit the IPv6 NetStream aggregation view. In the system view,
run ipv6 netstream export template sequence-number fixed
The sequence numbers of template packets and option template packets in IPFIX
format are configured to remain unchanged, but data packets and option data
packets in IPFIX format are still consecutively numbered.
----End
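The following sketch shows AS-based aggregation with the collector specified in the aggregation view; the aggregation mode, addresses, port, and view prompts are assumptions for illustration.
[~HUAWEI] ipv6 netstream aggregation as
[*HUAWEI-aggregation-as] enable
[*HUAWEI-aggregation-as] ipv6 netstream export source 192.168.10.1
[*HUAWEI-aggregation-as] ipv6 netstream export host 192.168.10.100 9002
[*HUAWEI-aggregation-as] quit
[*HUAWEI] commit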
Context
Before you enable the NSC to properly receive and parse NetStream packets
output by the NDE, specify the same AS field mode and interface index type on
the NDE and NSC.
● AS field mode: The length of the AS field in IP packets can be set to 16 bits
or 32 bits. Devices on a network must use the same AS field mode. An AS
field mode inconsistency causes NetStream to fail to sample inter-AS traffic.
NOTICE
If the 32-bit AS field mode is used, the NMS must identify the 32-bit AS field.
If the NMS cannot identify the 32-bit AS field, the NMS fails to identify inter-
AS traffic sent by devices.
Procedure
Step 1 Run system-view
The type of the interface index carried in the NetStream packet output by the
router is specified.
An interface index can be changed from 16 bits to 32 bits only after the following
conditions are met:
● Original flows are output in V9 or IPFIX format.
● Aggregated flows are output in V9 or IPFIX format.
----End
Context
Procedure
Step 1 Run system-view
Step 2 Configure the sampling mode and sampling ratio. Perform at least one of the
following steps:
● Configure a sampling mode and sampling ratio globally.
a. Run ipv6 netstream sampler { fix-packets fix-packet-number | random-
packets random-packet-number | fix-time fix-time-number } { inbound |
outbound }
The sampling mode and sampling ratio are configured globally.
b. Run interface interface-type interface-number
The interface view is displayed.
● Configure sampling mode and sampling ratio for the interface.
a. Run interface interface-type interface-number
The interface view is displayed.
b. Run ipv6 netstream sampler { fix-packets fix-packet-number | random-
packets random-packet-number | fix-time fix-time-number } { inbound |
outbound }
The sampling mode and sampling ratio are configured for the interface.
By default, NetStream is disabled from sampling packets. Instead, it
collects statistics about all packets.
The sampling mode and sampling ratio configured in the system view are
applicable to all interfaces on the device. The sampling mode and sampling ratio
configured in the interface view take precedence over those configured in the
system view.
The ip netstream sampler command has the same function as the ipv6
netstream sampler command.
----End
Prerequisites
NetStream has been enabled on an interface.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run interface interface-type interface-number
The interface view is displayed.
Step 3 Run ip netstream mpls exclude
MPLS packet sampling is disabled on the interface.
Step 4 Run commit
The configuration is committed.
----End
Context
Run the following commands to check the previous configuration.
Procedure
● Run the display netstream { all | global | interface interface-type interface-
number } command to check NetStream configurations in different views.
● Run the display ipv6 netstream cache { as | as-tos | bgp-nexthop-tos |
destination-prefix | destination-prefix-tos | index-tos | prefix | prefix-tos |
protocol-port | protocol-port-tos | source-prefix | source-prefix-tos | mpls-
label | vlan-id | flexflowtpl record-name } slot slot-id command to check
various aggregated flows in the buffer.
● Run the display ipv6 netstream statistics slot slot-id command to check
statistics about NetStream flows.
----End
Usage Scenario
On the network shown in Figure 3-6, a carrier enables NetStream on the router
functioning as an NDE to obtain detailed network application information. The
user can use the information to monitor abnormal network traffic, analyze users'
operation modes, and plan networks between ASs.
Flexible flow packets provide user-defined templates for users to customize
matching and collected fields as required. The user-defined template improves
traffic analysis accuracy and reduces network bandwidth occupation, CPU usage,
and storage space usage.
Pre-configuration Tasks
Before collecting the statistics about IPv4 flexible flows, configure static routes or
enable an IGP to implement network connectivity.
Context
NetStream services can be processed in the following modes:
● Distributed mode
An interface board samples packets, aggregates flows, and outputs flows.
Procedure
● Configure the distributed NetStream service processing mode.
a. Run system-view
The system view is displayed.
b. Run slot slot-id
The view of the slot in which the interface board for NetStream sampling
resides is displayed.
c. Run ip netstream sampler to slot self
d. Run commit
----End
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run ip netstream record record-name
An IPv4 flexible flow statistics template is created, and its recording view is
displayed.
Step 3 Run match { { source | destination } { vlan | as | port | address | mask } | mpls
top-label ip-address | mpls label position | { protocol | tos | direction | tcp-
flag } | { input | output } interface | next-hop [ bgp ] }
Step 4 Run collect { { first | last } switched | input { packets | bytes } length }
The flexible flow statistics sent to the NSC are configured to contain the number of
bytes, the number of packets, and the first and last forwarding time.
----End
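For illustration, the sketch below creates a flexible flow statistics template; the template name, the selected match and collect fields, and the recording-view prompt are assumptions, and the ip netstream record command shown is the one used in the configuration-file examples later in this chapter.
[~HUAWEI] ip netstream record demo-record
[*HUAWEI-record-demo-record] match source address
[*HUAWEI-record-demo-record] match destination address
[*HUAWEI-record-demo-record] collect first switched
[*HUAWEI-record-demo-record] commit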
Procedure
Step 1 Run system-view
The output version number and AS option of flexible flow packets are specified.
Step 3 (Optional) Configure NetStream packets to carry the flow sequence field.
1. Run slot slot-id
The view of the slot in which the interface board for NetStream sampling
resides is displayed.
2. Run ip netstream export sequence-mode flow
NetStream packets are configured to carry the flow sequence field.
3. Run quit
The system view is displayed.
----End
Context
Increasing types of services and applications on networks urge carriers to provide
more delicate management and accounting services.
If NetStream is configured on multiple interfaces on an NDE, all interfaces send
traffic statistics to a single NetStream Collector (NSC). The NSC cannot distinguish
interfaces, and therefore, cannot manage or analyze traffic statistics based on
interfaces. In addition, the NSC will be overloaded due to a great amount of
information.
NetStream monitoring configured on an NDE allows the NDE to send traffic
statistics collected on specified interfaces to specified NSCs for analysis, which
achieves interface-specific service monitoring. Traffic statistics can be balanced
among these NSCs.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run ip netstream monitor monitor-name
A NetStream monitoring service is created and its view is displayed. If a NetStream
monitoring service view already exists, the view is displayed.
Step 3 (Optional) Run ip netstream export source { ip-address | ipv6 ipv6-address }
[ port ]
A source IP address and a source port are configured for output NetStream flows.
Step 4 Run ip netstream export host [ ipv6 ] ip-address port [ vpn-instance vpn-
instance-name ] [ version { 5 | 9 | ipfix } ] [ dscp dscp-value ]
The destination IP address and destination port number for traffic statistics are
specified.
Step 5 Run apply record record-name
Flexible flows are applied to monitoring services.
Step 6 Run quit
If flexible flows are applied to both the NetStream monitoring service view and system view,
statistics about flexible flows are sent to the destination IP address specified in the NetStream
monitoring service view, not the system view. The source address and source port configured in
the NetStream monitoring service view are also used for output NetStream flows.
----End
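A sketch of binding a flexible flow template to a monitoring service follows; the monitor and template names, collector address, port, and view prompts are assumptions for illustration.
[~HUAWEI] ip netstream monitor nsc1
[*HUAWEI-netstream-monitor-nsc1] ip netstream export host 192.168.10.100 9003 version 9
[*HUAWEI-netstream-monitor-nsc1] apply record demo-record
[*HUAWEI-netstream-monitor-nsc1] quit
[*HUAWEI] commit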
Context
The NSC can properly receive and parse NetStream packets output by the NDE
only when the AS field modes and interface index types on the NDE and NSC are
the same.
● AS field mode: The length of the AS field in IP packets can be set to 16 bits
or 32 bits. Devices on a network must use the same AS field mode. An AS
field mode inconsistency causes NetStream to fail to sample inter-AS traffic.
NOTICE
If the 32-bit AS field mode is used, the NMS must identify the 32-bit AS field.
If the NMS cannot identify the 32-bit AS field, the NMS fails to identify inter-
AS traffic sent by devices.
Procedure
Step 1 Run system-view
The type of the interface index carried in the NetStream packet output by the
router is configured.
----End
Procedure
Step 1 Run system-view
Step 2 Configure the sampling mode and sampling ratio. Perform at least one of the
following steps:
● Configure a sampling mode and sampling ratio globally.
a. Run ip netstream sampler { fix-packets fix-packet-number | random-
packets random-packet-number | fix-time fix-time-number } { inbound |
outbound }
The sampling mode and sampling ratio are configured globally.
b. Run interface interface-type interface-number
The interface view is displayed.
● Configure sampling mode and sampling ratio for the interface.
a. Run interface interface-type interface-number
The interface view is displayed.
b. Run ip netstream sampler { fix-packets fix-packet-number | random-
packets random-packet-number | fix-time fix-time-number } { inbound |
outbound }
The sampling mode and sampling ratio are configured for the interface.
The sampling mode and sampling ratio configured in the system view are
applicable to all interfaces on the device. The sampling mode and sampling ratio
configured in the interface view take precedence over those configured in the
system view.
----End
Prerequisites
NetStream has been enabled on an interface.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run interface interface-type interface-number
The interface view is displayed.
Step 3 Run ip netstream mpls exclude
MPLS packet sampling is disabled on the interface.
Step 4 Run commit
The configuration is committed.
----End
Procedure
● Run the display ip netstream statistics slot slot-id command to check
NetStream packet statistics.
● Run the display ip netstream statistics interface interface-type interface-
number command to check the statistics about the sampled packets on an
interface.
● Run the display netstream { all | global | interface interface-type interface-
number } command to check NetStream configurations in different views.
● Run the display ip netstream monitor { all | monitor-name } command to
check monitoring information about IPv4 flexible flows.
----End
Usage Scenario
On the network shown in Figure 3-7, a carrier enables NetStream on the router
functioning as an NDE to obtain detailed network application information. The
user can use the information to monitor abnormal network traffic, analyze users'
operation modes, and plan networks between ASs.
Pre-configuration Tasks
Before collecting the statistics about IPv6 flexible flows, configure static routes or
enable an IGP to implement network connectivity.
Context
NetStream services can be processed in the following modes:
● Distributed mode
An interface board samples packets, aggregates flows, and outputs flows.
Procedure
● Configure the distributed NetStream service processing mode.
a. Run system-view
The system view is displayed.
b. Run slot slot-id
The view of the slot in which the interface board for NetStream sampling
resides is displayed.
c. Run ipv6 netstream sampler to slot self
----End
Procedure
Step 1 Run system-view
An IPv6 flexible flow statistics template is created, and its recording view is
displayed.
Step 3 Run match { { source | destination } { vlan | as | port | address | mask } | mpls
top-label ip-address | mpls label position | { protocol | tos | direction | tcp-
flag } | { input | output } interface | next-hop [ bgp ] }
Aggregation keywords of the flexible flow statistics template are configured.
Step 4 Run collect { { first | last } switched | input { packets | bytes } length }
The flexible flow statistics sent to the NSC are configured to contain the number of
bytes, the number of packets, and the first and last forwarding time.
Step 5 Run commit
The configuration is committed.
----End
Context
IPv6 flexible flow packets can be output only in the V9 format.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run ipv6 netstream export version 9 [ origin-as | peer-as ] [ bgp-nexthop ]
The output version number and AS option of flexible flow packets are specified.
Step 3 (Optional) Configure NetStream packets to carry the flow sequence field.
1. Run slot slot-id
The view of the slot in which the interface board for NetStream sampling
resides is displayed.
2. Run ip netstream export sequence-mode flow
NetStream packets are configured to carry the flow sequence field.
3. Run quit
The system view is displayed.
Context
Increasing types of services and applications on networks urge carriers to provide
more delicate management and accounting services.
If NetStream is configured on multiple interfaces on an NDE, all interfaces send
traffic statistics to a single NetStream Collector (NSC). The NSC cannot distinguish
Procedure
Step 1 Run system-view
Step 3 Run ipv6 netstream export host [ ipv6 ] ip-address port [ vpn-instance vpn-
instance-name ] [ version { 9 | ipfix } ] [ dscp dscp-value ]
The destination IPv6 address and destination port number for traffic statistics are
specified.
Step 4 (Optional) Run ipv6 netstream export source { ip-address | ipv6 ipv6-address }
[ port ]
A source IP address and a source port are configured for output NetStream flows.
If flexible flows are applied to both the NetStream monitoring service view and system view,
statistics about flexible flows are sent to the destination IP address specified in the NetStream
monitoring service view, not the system view. The source address and source port configured in
the NetStream monitoring service view are also used for output NetStream flows.
----End
Context
The NSC can properly receive and parse NetStream packets output by the NDE
only when the AS field modes and interface index types on the NDE and NSC are
the same.
● AS field mode: The length of the AS field in IP packets can be set to 16 bits
or 32 bits. Devices on a network must use the same AS field mode. An AS
field mode inconsistency causes NetStream to fail to sample inter-AS traffic.
NOTICE
If the 32-bit AS field mode is used, the NMS must identify the 32-bit AS field.
If the NMS cannot identify the 32-bit AS field, the NMS fails to identify inter-
AS traffic sent by devices.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run ip netstream as-mode { 16 | 32 }
The AS field mode is specified on the router.
Step 3 Run ipv6 netstream export index-switch { 16 | 32 }
The type of the interface index carried in the NetStream packet output by the
router is configured.
----End
Procedure
Step 1 Run system-view
Step 2 Configure the sampling mode and sampling ratio. Perform at least one of the
following steps:
● Configure a sampling mode and sampling ratio globally.
a. Run ipv6 netstream sampler { fix-packets fix-packet-number | random-
packets random-packet-number | fix-time fix-time-number } { inbound |
outbound }
The sampling mode and sampling ratio are configured globally.
b. Run interface interface-type interface-number
The interface view is displayed.
● Configure sampling mode and sampling ratio for the interface.
a. Run interface interface-type interface-number
The interface view is displayed.
b. Run ipv6 netstream sampler { fix-packets fix-packet-number | random-
packets random-packet-number | fix-time fix-time-number } { inbound |
outbound }
The sampling mode and sampling ratio are configured for the interface.
The sampling mode and sampling ratio configured in the system view are
applicable to all interfaces on the device. The sampling mode and sampling ratio
configured in the interface view take precedence over those configured in the
system view.
The ip netstream sampler command has the same function as the ipv6
netstream sampler command.
----End
Prerequisites
NetStream has been enabled on an interface.
Procedure
Step 1 Run system-view
----End
Prerequisites
NetStream IPv6 flow statistics have been collected.
Procedure
● Run the display ipv6 netstream statistics slot slot-id command to check
statistics about NetStream flows.
● Run the display ipv6 netstream monitor { all | monitor-name } command to
check monitoring information about IPv6 flexible flows.
----End
Usage Scenario
On the network shown in Figure 3-8, a carrier enables NetStream on the router
functioning as a NetStream Data Exporter (NDE) to obtain detailed network
application information. The carrier can use the information to monitor abnormal
network traffic, analyze users' operation modes, and plan networks between ASs.
If statistics about MPLS packets are collected on the P, the P sends statistics to
inform the NetStream Collector (NSC) of the MPLS label-specific traffic volume.
Context
Before collecting statistics about MPLS IPv4 packets, enable MPLS on the device
and interfaces and configure the MPLS network.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Output statistics about MPLS IPv4 packets in the form of original or aggregated
flows.
▪ ip-only: The device samples only inner IP packets, not MPLS labels.
----End
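For illustration only, the following sketch shows the MPLS-aware sampling command as it appears in the configuration-file examples later in this chapter, with the label-and-ip option selected; the option choice is an assumption, and ip-only and label-only are the alternative sampling options.
[~HUAWEI] ip netstream mpls-aware label-and-ip
[*HUAWEI] commit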
Usage Scenario
On the network shown in Figure 3-9, a carrier enables NetStream on the router
functioning as a NetStream Data Exporter (NDE) to obtain detailed network
application information. The carrier can use the information to monitor abnormal
network traffic, analyze users' operation modes, and plan networks between ASs.
If statistics about MPLS packets are collected on the P, the P sends statistics to
inform the NetStream Collector (NSC) of the MPLS label-specific traffic volume.
NetStream can function only on the user side of the MPLS network if an SR-MPLS TE
tunnel is used on the public network.
Context
Before collecting statistics about MPLS IPv6 packets, enable MPLS on the device
and interfaces and configure the MPLS network.
Procedure
Step 1 Run system-view
● label-only: The device samples only MPLS labels, not inner IP packets.
● ip-only: The device samples only inner IP packets, not MPLS labels.
● label-and-ip: The device samples both MPLS labels and inner IP packets.
Step 3 Output statistics about MPLS IPv6 packets in the form of original or aggregated
flows. For application details, see 3.5 Collecting Statistics About IPv6 Original
Flows and 3.6 Collecting Statistics About IPv6 Aggregated Flows.
MPLS original and aggregated flows can be output only in V9 or IPFIX format.
----End
Usage Scenario
In Figure 3-10, statistics about MPLS flows sent by the P to the NetStream
Collector (NSC) inform the NSC of the traffic volume and traffic type
corresponding to each label. Such statistics, however, cannot tell to which VPN
each traffic belongs. In this case, the PE sends the meaning of each label to the
NSC so that the NSC can determine to which VPN the received traffic belongs. The
NSC can analyze the traffic data of each VPN and display the result.
Figure 3-10 Networking diagram for collecting statistics about BGP/VPLS VPN
flows
Context
Before collecting statistics about BGP/VPLS VPN flows, deploy the BGP/MPLS VPN
network.
Procedure
● Enable the P to collect statistics about MPLS flows.
Procedure
● Run the display ip netstream cache origin slot slot-id command to check
information about the NetStream flow buffer.
● Run the display ip netstream statistics slot slot-id command to check
statistics about NetStream flows.
● Run the display ip netstream statistics interface interface-type interface-
number command to check statistics about the sampled packets on an
interface.
● Run the display netstream { all | global | interface interface-type interface-
number } command to check NetStream configurations in different views.
● Run the display ip netstream cache { as | as-tos | bgp-nexthop-tos |
destination-prefix | destination-prefix-tos | index-tos | mpls-label | prefix |
prefix-tos | protocol-port | protocol-port-tos | source-prefix | source-prefix-
tos | source-index-tos | vni-sip-dip | vlan-id } slot slot-id command to check
information about various aggregated flows in the buffer.
● Run the display ip netstream export option command to check information
about the output option template.
----End
Procedure
● Run the reset ip netstream statistics command to delete IPv4 NetStream
template statistics.
● Run the reset ipv6 netstream statistics command to delete IPv6 NetStream
template statistics.
----End
Networking Requirements
On the network shown in Figure 3-11, NetStream is configured to collect statistics
about the source IP address, destination IP address, port, and protocol information
of network packets on the user side. Such statistics help analyze users' behaviors
and detect the virus-infected terminals, source and destination of Denial of service
(DoS) and Distributed Denial of service (DDoS) attacks, source of spams, and
unauthorized websites. In addition, NetStream allows users to rapidly identify virus
types and locate the IP address of abnormal traffic. Based on other NetStream
flow attributes, users can filter out virus-infected traffic and prevent it from
spreading over the network.
Figure 3-11 Networking diagram for collecting statistics about abnormal IPv4
flows on the user side
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure PEs and CEs to communicate.
2. Configure NetStream to collect statistics about incoming and outgoing flows
on the user-side interface of the PE.
Data Preparation
To complete the configuration, you need the following data:
● Name of the user-side interface of the PE
● Output format of NetStream flows
● Destination IP address, destination port number, and source IP address of
NetStream flows to be output
● Number of the slot in which the NetStream service processing board resides
(In this example, the NetStream service processing board is in slot 1.)
Procedure
Step 1 Configure PEs and CEs to communicate.
# Assign the IP address and mask to each interface according to Figure 3-11. For
configuration details, see "Configuration Files" in this section.
Step 2 Enable the NetStream statistics collection function on GE 1/0/0 of the PE.
# Configure the interface board on the PE to process NetStream services in
distributed mode.
[*PE] slot 1
[*PE-slot-1] ip netstream sampler to slot self
[*PE-slot-1] quit
# Specify the destination address, destination port number, and source address for
NetStream flows output in V5 format.
[*PE] ip netstream export host 192.168.2.2 9001
[*PE] ip netstream export source 192.168.2.1
# Enable NetStream sampling and configure the fixed packet sampling mode.
[*PE] ip netstream sampler fix-packets 10000 inbound
[*PE] ip netstream sampler fix-packets 10000 outbound
[*PE] commit
# Run the display ip netstream cache origin slot 1 command in the user view to
view information about the NetStream flow buffer and statistics about output flows.
Unknown
GigabitEthernet1/0/0
0 0 253 0
0 0 0 60
3 384
0.0.0.0 in
192.168.1.3 0
192.168.1.4 0
0.0.0.0 UNKNOWN
0 0 0
0 0 0
0 0 0
0.0.0.0 0 0
2018-05-09 11:38:07 2018-05-09 11:40:30 vpn1
Unknown
GigabitEthernet1/0/1
0 0 253 0
0 0 0 60
1 128
0.0.0.0 in
192.168.1.5 0
192.168.1.6 0
0.0.0.0 UNKNOWN
0 0 0
0 0 0
0 0 0
0.0.0.0 0 0
2018-05-09 11:38:07 2018-05-09 11:40:30 vpn1
----End
Configuration Files
● CE configuration file
#
sysname CE
#
interface GigabitEthernet 1/0/0
ip address 192.168.1.2 255.255.255.0
#
return
● PE configuration file
#
slot 1
ip netstream sampler to slot self
#
sysname PE
#
ip netstream tcp-flag enable
ip netstream sampler fix-packets 10000 inbound
ip netstream sampler fix-packets 10000 outbound
ip netstream export source 192.168.2.1
ip netstream export host 192.168.2.2 9001
#
interface gigabitethernet 2/0/0
ip address 192.168.2.1 255.255.255.0
#
interface GigabitEthernet 1/0/0
ip address 192.168.1.1 255.255.255.0
ip netstream inbound
ip netstream outbound
#
return
Networking Requirements
On the network shown in Figure 3-12, Device D connects network A and network
B to the wide area network (WAN). Device D samples and aggregates flows
before sending them to the NetStream Collector (NSC).
Figure 3-12 Networking diagram for collecting statistics about IPv4 flows
aggregated based on the AS number
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure reachable routes between the egress router of the LAN and the
WAN.
2. Configure reachable routes between the ingress router of the LAN and the
NSC.
3. Configure the ingress router of the LAN to send traffic statistics to the
specified NSC.
4. Configure the ingress router of the LAN to send traffic statistics to the
inbound interface on the NSC.
5. Aggregate sampled flows to reduce the data sent to the NSC.
6. Enable NetStream on the inbound interface of the ingress router.
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Assign an IP address to each interface on each router. For configuration details,
see "Configuration Files" in this section.
Step 2 Configure reachable routes between the WAN, Device A, and Device B.
# Configure reachable routes between Device A and Device D.
[*DeviceA] ip route-static 1.1.1.1 24 GigabitEthernet 1/0/0
[*DeviceA] commit
----End
Configuration Files
● Device A configuration file
#
sysname DeviceA
#
interface GigabitEthernet1/0/0
ip address 172.16.0.1 255.255.255.0
#
ip route-static 1.1.1.1 255.255.255.0 GigabitEthernet1/0/0
#
return
return
Networking Requirements
In Figure 3-13, Device A, Device B, and Device C support MPLS and use OSPF as
the IGP on the MPLS backbone network.
Local LDP sessions are established between Device A and Device B and between
Device B and Device C. A remote LDP session is established between Device A and
Device C. NetStream is enabled on Device B to collect statistics about MPLS flows.
Figure 3-13 Networking diagram for collecting statistics about MPLS original
flows
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure the LDP session between every two routers.
2. Specify the remote peer and its IP address on the two routers that have
established a remote LDP session.
3. Specify the destination IP address, destination port number, and source IP
address of NetStream flows to be output.
Data Preparation
To complete the configuration, you need the following data:
● IP addresses of interfaces on each router as shown in Figure 3-13, OSPF
process 1, and area 0
● Device A's remote peer with name Device C and IP address 3.3.3.9
● Device C's remote peer with name Device A and IP address 1.1.1.9
● Number of the slot in which the NetStream service processing board resides
(In this example, the NetStream service processing board is in slot 1.)
Procedure
Step 1 Assign an IP address to each interface.
# Configure OSPF to advertise host routes to the specified Label Switching Router
(LSR) ID and of the network segments to which interfaces on the router are
connected. Enable basic MPLS functions on each router and its interfaces.
For configurations of the static MPLS TE tunnel, see the chapter "Basic MPLS
Configurations" in HUAWEI NetEngine 8000 X SeriesNetEngine 8000 Configuration
Guide - MPLS.
Step 3 Enable NetStream on GE 1/0/0 of Device B.
# Specify the destination address, destination port number, and source address for
NetStream flows output.
[*DeviceB] ip netstream export host 192.168.1.2 2100
[*DeviceB] ip netstream export source 10.1.2.1
# Enable NetStream sampling and configure the fixed packet sampling mode.
[*DeviceB] ip netstream sampler fix-packets 10000 inbound
[*DeviceB] ip netstream sampler fix-packets 10000 outbound
[*DeviceB] commit
# Run the display ip netstream cache origin slot 1 command in the user view.
You can view information about the NetStream flow buffer and statistics about
output flows.
Unknown
GigabitEthernet1/0/0
0 0 253 0
0 0 0 60
3 384
0.0.0.0 in
192.168.1.3 0
192.168.1.4 0
0.0.0.0 UNKNOWN
0 0 0
0 0 0
0 0 0
0.0.0.0 0 0
2018-05-09 11:38:07 2018-05-09 11:40:30 vpn1
Unknown
GigabitEthernet1/0/1
0 0 253 0
0 0 0 60
1 128
0.0.0.0 in
192.168.1.5 0
192.168.1.6 0
0.0.0.0 UNKNOWN
0 0 0
0 0 0
0 0 0
0.0.0.0 0 0
2018-05-09 11:38:07 2018-05-09 11:40:30 vpn1
----End
Configuration Files
● Device A configuration file
#
sysname DeviceA
#
mpls lsr-id 1.1.1.9
#
mpls
lsp-trigger all
#
mpls ldp
#
mpls ldp remote-peer Devicec
remote-ip 3.3.3.9
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 10.1.1.0 0.0.0.255
network 1.1.1.9 0.0.0.0
#
return
● Device B configuration file
#
slot 1
ip netstream sampler to slot self
#
sysname DeviceB
#
ip netstream sampler fix-packets 10000 inbound
ip netstream sampler fix-packets 10000 outbound
ip netstream export host 192.168.1.2 2100
ip netstream export source 10.1.2.1
#
mpls lsr-id 2.2.2.9
#
mpls
lsp-trigger all
#
mpls ldp
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.2 255.255.255.0
ip netstream inbound
ip netstream outbound
mpls
mpls ldp
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.1.2.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 10.1.1.0 0.0.0.255
network 10.1.2.0 0.0.0.255
network 2.2.2.9 0.0.0.0
#
return
● Device C configuration file
#
sysname DeviceC
#
ip netstream mpls-aware label-and-ip
#
mpls lsr-id 3.3.3.9
#
mpls
lsp-trigger all
#
mpls ldp
#
Networking Requirements
With the development of L3VPN services, users and carriers increasingly demand
higher quality of service (QoS). Carriers and users reach service level agreements
(SLAs) on Voice over Internet Protocol and video over IP services. Deploying
NetStream on the BGP/MPLS IP VPN network allows users to analyze the LSP
traffic between PEs and adjust the network to better meet service requirements.
On the IPv4 BGP/MPLS IP VPN network shown in Figure 3-14:
● Packets with specified application labels are sampled on PE2 and sent to the
NetStream Collector (NSC) and NetStream Data Analyzer (NDA).
● Statistics collection of incoming and outgoing packets with specified
application labels is enabled on the P. Packets with specified application labels
sent by the CE are sampled and sent to the NSC and NDA.
● Traffic statistics are analyzed on the NSC and NDA to obtain users' traffic
volume between PEs.
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Assign an IP address to each interface.
# Configure PE2 to send information about L3VPN application labels to the NMS.
[*PE2] ip netstream export template option application-label
# Specify the destination address, destination port number, and source address for
NetStream flows output in V9 format.
[*PE2] ip netstream export version 9
[*PE2] ip netstream export host 192.168.2.2 9000
[*PE2] ip netstream export source 192.168.2.1
Step 4 Enable NetStream to collect statistics about incoming and outgoing packets with
specified application labels on the P.
# Configure the interface board on the P to process NetStream services in
distributed mode.
<P> system-view
[*P] slot 1
[*P-slot-1] ip netstream sampler to slot self
[*P-slot-1] quit
This example uses the configuration of distributed NetStream service processing on a board.
To configure an interface board to process NetStream services in centralized mode, run the
ip netstream sampler to slot slot-id command.
# Specify the destination address, destination port number, and source address for
NetStream flows output in V9 format.
[*P] ip netstream export version 9
[*P] ip netstream export host 192.168.2.2 9001
[*P] ip netstream export source 172.16.3.1
# Enable NetStream sampling and configure the fixed packet sampling mode.
[*P] ip netstream sampler fix-packets 10000 inbound
[*P] ip netstream sampler fix-packets 10000 outbound
[*P] quit
Unknown
GigabitEthernet1/0/0
0 0 253 0
0 0 0 60
3 384
0.0.0.0 in
192.168.1.3 0
192.168.1.4 0
0.0.0.0 UNKNOWN
0 0 0
0 0 0
0 0 0
0.0.0.0 0 0
2018-05-09 11:38:07 2018-05-09 11:40:30 vpn1
Unknown
GigabitEthernet1/0/1
0 0 253 0
0 0 0 60
1 128
0.0.0.0 in
192.168.1.5 0
192.168.1.6 0
0.0.0.0 UNKNOWN
0 0 0
0 0 0
0 0 0
0.0.0.0 0 0
2018-05-09 11:38:07 2018-05-09 11:40:30 vpn1
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
ip vpn-instance vpna
route-distinguisher 100:1
apply-label per-instance
vpn-target 100:1 export-extcommunity
vpn-target 100:1 import-extcommunity
#
mpls lsr-id 1.1.1.9
#
mpls
#
interface GigabitEthernet1/0/0
ip binding vpn-instance vpna
ip address 10.2.1.2 255.255.255.0
#
interface GigabitEthernet3/0/0
ip address 172.16.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
#
bgp 100
peer 3.3.3.9 as-number 100
peer 3.3.3.9 connect-interface LoopBack1
#
ipv4-family unicast
peer 3.3.3.9 enable
#
ipv4-family vpnv4
policy vpn-target
peer 3.3.3.9 enable
#
ipv4-family vpn-instance vpna
import-route direct
peer 10.1.1.1 as-number 65440
#
ospf 1
area 0.0.0.0
network 1.1.1.9 0.0.0.0
network 172.16.1.0 0.0.0.255
#
return
● P configuration file
#
slot 1
ip netstream sampler to slot self
#
sysname P
#
ip netstream mpls-aware label-and-ip
ip netstream export version 9
ip netstream sampler fix-packets 10000 inbound
ip netstream sampler fix-packets 10000 outbound
ip netstream export source 172.16.2.1
ip netstream export host 172.16.2.2 9001
#
mpls lsr-id 2.2.2.9
#
mpls
lsp-trigger all
#
mpls ldp
#
interface GigabitEthernet1/0/0
ip address 172.16.1.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet2/0/0
ip address 172.16.3.1 255.255.255.0
ip netstream inbound
ip netstream outbound
mpls
mpls ldp
#
interface GigabitEthernet3/0/0
ip address 172.16.2.1 255.255.255.0
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 172.16.1.0 0.0.0.255
network 172.17.1.0 0.0.0.255
network 2.2.2.9 0.0.0.0
#
return
● PE2 configuration file
#
slot 1
ip netstream sampler to slot self
#
sysname PE2
#
ip netstream export version 9
ip netstream export source 192.168.2.1
ip netstream export host 192.168.2.2 9000
ip netstream export template option application-label
#
ip vpn-instance vpna
route-distinguisher 200:1
apply-label per-instance
vpn-target 100:1 export-extcommunity
vpn-target 100:1 import-extcommunity
#
mpls lsr-id 3.3.3.9
#
mpls
lsp-trigger all
#
mpls ldp
#
interface GigabitEthernet1/0/0
ip binding vpn-instance vpna
ip address 10.3.1.2 255.255.255.0
#
interface GigabitEthernet3/0/0
ip address 172.16.3.2 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
#
bgp 100
peer 1.1.1.9 as-number 100
peer 1.1.1.9 connect-interface LoopBack1
#
ipv4-family unicast
peer 1.1.1.9 enable
#
ipv4-family vpnv4
policy vpn-target
peer 1.1.1.9 enable
#
ipv4-family vpn-instance vpna
import-route direct
peer 10.4.1.1 as-number 65440
#
ospf 1
area 0.0.0.0
network 172.17.1.0 0.0.0.255
network 3.3.3.9 0.0.0.0
#
return
● CE2 configuration file
#
sysname CE2
#
interface GigabitEthernet1/0/0
ip address 10.2.1.1 255.255.255.0
#
bgp 65420
peer 10.2.1.2 as-number 100
#
ipv4-family unicast
import-route direct
peer 10.2.1.2 enable
#
return
Networking Requirements
As the Internet rapidly develops, ISP networks support increasing bandwidth and
planned QoS parameters, and carriers need to provide more delicate management
and accounting services. NetStream monitoring configured on a NetStream Data
Exporter (NDE) allows the NDE to send traffic statistics collected on specified
interfaces to specified NetStream Collectors (NSCs) for analysis, which achieves
interface-specific service monitoring.
On the network shown in Figure 3-15, Device A and Device B reside on different
IPv6 networks. GE 1/0/0 and GE 2/0/0 connect Device C to Device A and Device B,
respectively. Traffic statistics are collected on Device C and sent to NSC1 and NSC2
after traffic is aggregated.
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign an IP address to each interface on each router.
2. Configure NetStream statistics on Device C.
3. Configure NetStream monitoring services on Device C.
Data Preparation
To complete the configuration, you need the following data:
● IP address of each interface on each router
● Version of the NetStream packet format
● Source and destination addresses, destination port number, and monitoring
view for NetStream packets
● Number of the slot in which the NetStream service processing board resides
(In this example, the NetStream service processing board is in slot 1.)
Procedure
Step 1 Assign an IP address to each interface on each router. For configuration details,
see "Configuration Files" in this section.
Step 2 Configure NetStream statistics on Device C.
# Specify the distributed NetStream service processing mode on an interface
board.
[*DeviceC] slot 1
[*DeviceC-slot-1] ipv6 netstream sampler to slot self
[*DeviceC-slot-1] quit
# Configure the NetStream sampling function and set the mode to fixed packet
sampling.
Address Port
192.168.0.2 6000
------------------------------------------------------------
Monitor monitor2
ID :2
AppCount : 1
Address Port
2001:db8:100::1 6000
------------------------------------------------------------
# Run the display ipv6 netstream cache origin slot 1 command to view all types
of NetStream original flows in the buffer.
[~DeviceC] display ipv6 netstream cache origin slot 1
Show information of IP and MPLS cache of slot 1 is starting.
get show cache user data success.
DstIf DstIP SrcIP Pro Tos Flags Packets
SrcIf DstP Msk SrcP Msk NextHop
DstAs SrcAs
BGP: BGP NextHop TopLabelType Direction
Label1 Exp1 Bottom1
Label2 Exp2 Bottom2
Label3 Exp3 Bottom3
TopLabelIpAddress
--------------------------------------------------------------------------
Null 1:1::1:1 2:2::2:2 1 1 0 2746
GI3/0/0 0 32 0 24 3:3::3:3
0 0
0:0::0:0 0 in
0 0 0
0 0 0
0 0 0
0:0::0:0
----End
Configuration Files
● Device A configuration file
#
sysname DeviceA
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:db8:200::2/96
#
return
Networking Requirements
On the network shown in Figure 3-16, Device D connects network A and network
B to the wide area network (WAN). Device D samples and aggregates flows
before sending them to the NetStream Collector (NSC).
Figure 3-16 Networking diagram for collecting statistics about IPv4 flexible flows
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure reachable routes between Device A and Device B of the LAN and
the WAN.
2. Configure reachable routes between Device D and the NSC.
3. Configure Device D to send traffic statistics to the specified NSC.
4. Configure the flexible flow output function for traffic.
5. Enable NetStream on the outbound interface of Device D.
Data Preparation
To complete the configuration, you need the following data:
● IP address of each interface
● Output format of NetStream flows
● NetStream sampling ratio
● Number of the slot in which the NetStream service processing board resides
(In this example, the NetStream service processing board is in slot 1.)
Procedure
Step 1 Assign an IP address to each interface on each router. The configuration details
are not provided here.
Step 2 Configure reachable routes between the WAN, Device A, and Device B.
# Configure reachable routes between Device A and Device D.
[~DeviceA] ip route-static 192.168.1.1 24 gigabitethernet 1/0/0
[*DeviceA] commit
----End
Configuration Files
● Device A configuration file
#
sysname DeviceA
#
interface GigabitEthernet1/0/0
ip address 172.16.0.1 255.255.255.0
#
ip route-static 192.168.1.1 255.255.255.0 GigabitEthernet1/0/0
#
return
● Device D configuration file
#
interface GigabitEthernet2/0/0
ip address 172.17.1.2 255.255.255.0
#
interface GigabitEthernet3/0/0
ip address 192.168.1.1 255.255.255.0
ip route-static 172.17.1.3 24 gigabitethernet 3/0/0
ip netstream inbound
ip netstream sampler fix-packets 1000 inbound
#
interface GigabitEthernet4/0/0
ip address 192.168.2.1 255.255.255.0
#
ip netstream export version 9
ip netstream export source 192.168.2.1
ip netstream export host 192.168.2.2 3000
#
ip netstream record aa
match source address
collect first switched
#
ip netstream apply record aa
#
return
4 NQA Configuration
Configuration Precautions
● Restriction: For the ping/tracert ipv6-sid function, if the End.OP SID outbound
interface does not have a global IPv6 address, the source IP address contained in
the detection packets sent by the local device may be an unreachable IPv6
address. This causes a detection failure.
Guideline: Initiate detection with the -a parameter specified.
Impact: Detection fails, but traffic forwarding is not affected.
● Restriction: When the RFC 2544 initiator is bound to a flexible access sub-
interface to initiate a measurement, configuring static ARP entry learning for the
source and destination IP addresses causes the RFC 2544 measurement path to
fail to be learned.
Guideline: Configure dynamic ARP entry learning for the source and destination
IP addresses when the RFC 2544 initiator is bound to a flexible access sub-
interface to initiate a measurement.
Impact: When the RFC 2544 initiator is bound to a flexible access sub-interface to
initiate a measurement, the measurement fails.
● Restriction: When the lossy mode and the outward-facing 802.1ag function are
both configured for an interface, the outward-facing 802.1ag function fails.
Guideline: Do not configure the lossy mode and the outward-facing 802.1ag
function for the same interface.
Impact: When the lossy mode and the outward-facing 802.1ag function are both
configured for an interface, the outward-facing 802.1ag function fails.
Usage Scenario
Pre-configuration Tasks
Before configuring NQA to monitor an IP network, configure static routes or an
Interior Gateway Protocol (IGP) to implement network connectivity.
Context
A DNS test is based on UDP packets. Only one probe packet is sent in one DNS
test to detect the speed at which a DNS name is resolved to an IP address. The
test result clearly reflects the performance of the DNS protocol on the network.
Procedure
Step 1 Run system-view
Step 3 Create an NQA test instance and set the test instance type to DNS.
1. Run nqa test-instance admin-name test-name
An NQA test instance is created and the view of the test instance is displayed.
2. Run test-type dns
The test instance type is set to DNS.
Step 6 (Optional) Set optional parameters for the test instance and simulate packets
transmitted on an actual network.
1. Run agetime ageTimeValue
----End
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Create an NQA test instance and set the test instance type to ICMP.
1. Run nqa test-instance admin-name test-name
An NQA test instance is created, and the view of the test instance is displayed.
2. Run test-type icmp
The test type is set to ICMP.
3. (Optional) Run description description
A description is configured for the NQA test instance.
Step 3 Run destination-address { ipv4 destAddress | ipv6 destAddress6 }
The destination address (that is, the NQA server address) of the client is specified.
Step 4 Set parameters for the test instance to simulate a specific type of packet.
1. Run the agetime ageTimeValue command to configure the aging time of an
NQA test instance.
2. Run the datafill fill-string command to configure padding characters in NQA
test packets.
3. Run the datasize datasizeValue command to set the size of the data field in
an NQA test packet.
4. Run probe-count number
The number of probes in a test is set for the NQA test instance.
5. Run interval seconds interval
The interval at which NQA test packets are sent is set for the NQA test
instance.
6. Run sendpacket passroute
The NQA test instance is configured to send packets without searching the
routing table.
7. Run source-address { ipv4 srcAddress | ipv6 srcAddr6 }
The source IP address of NQA test packets is set.
8. Run source-interface interface-type interface-number
The source interface for NQA test packets is set.
9. Run tos tos-value
The ToS value in NQA test packets is set.
10. Run ttl ttlValue
If the following condition is met, the Completion field in the test results will be
displayed as no result:
– configured frequency ≤ (probe-count - 1) x interval + timeout.
2. Run the start command to start an NQA test.
An NQA test instance can be started immediately, at a specified time, or after
a specified delay.
– Run the start now [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds
second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ]
command to start an NQA test instance immediately.
----End
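A minimal ICMP test sketch using the commands from the preceding procedure; the instance names, destination address, probe count, and view prompt are assumptions for illustration.
[~HUAWEI] nqa test-instance admin icmptest
[*HUAWEI-nqa-admin-icmptest] test-type icmp
[*HUAWEI-nqa-admin-icmptest] destination-address ipv4 10.1.1.2
[*HUAWEI-nqa-admin-icmptest] probe-count 5
[*HUAWEI-nqa-admin-icmptest] start now
[*HUAWEI-nqa-admin-icmptest] commit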
Procedure
● Configure the NQA server for the TCP test.
a. Run system-view
The system view is displayed.
b. Run nqa-server tcpconnect [ vpn-instance vpn-instance-name ] ip-
address port-number
The IP address and number of the port used to monitor TCP services are
specified on the NQA server.
c. Run commit
The configuration is committed.
● Configure the NQA client for the TCP test.
a. Run system-view
The destination address and destination port number specified in this step must
be the same as ip-address and port-number specified for the NQA server.
The device is enabled to send traps to the NMS after the number of
consecutive probe failures reaches the specified threshold.
ii. Run test-failtimes failTimes
The device is enabled to send traps to the NMS after the number of
consecutive failures of the test instance reaches the specified
threshold.
The VPN instance name is configured for the NQA test instance.
i. Schedule the test instance.
i. (Optional) Run frequency frequencyValue
The test period is set for the NQA test instance.
ii. Run start
An NQA test is started.
The start command has multiple formats. Choose one of the
following formats as needed.
○ To start an NQA test instance immediately, run the start now
[ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second |
hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ]
command.
○ To start an NQA test instance at a specified time, run the start
at [ yyyy/mm/dd ] hh:mm:ss [ end { at [ yyyy/mm/dd ]
hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime
{ seconds second | hh:mm:ss } } ] command.
○ To start an NQA test instance after a specified delay, run the
start delay { seconds second | hh:mm:ss } [ end { at
[ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss }
| lifetime { seconds second | hh:mm:ss } } ] command.
○ To start an NQA test instance at a specified time every day, run
the start daily hh:mm:ss to hh:mm:ss [ begin yyyy/mm/dd ]
[ end yyyy/mm/dd ] command.
j. Run commit
----End
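A sketch of a TCP test with both ends shown; the addresses, port, instance names, and view prompts are assumptions, and the test-type tcp and destination-port commands follow the pattern used by the other test types in this chapter.
# On the NQA server:
[~HUAWEI] nqa-server tcpconnect 10.1.1.2 6000
[*HUAWEI] commit
# On the NQA client:
[~HUAWEI] nqa test-instance admin tcptest
[*HUAWEI-nqa-admin-tcptest] test-type tcp
[*HUAWEI-nqa-admin-tcptest] destination-address ipv4 10.1.1.2
[*HUAWEI-nqa-admin-tcptest] destination-port 6000
[*HUAWEI-nqa-admin-tcptest] start now
[*HUAWEI-nqa-admin-tcptest] commit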
Procedure
● Configure an NQA server.
a. Run system-view
The system view is displayed.
b. Run nqa-server udpecho [ vpn-instance vpn-instance-name ] ip-address
port-number
The IP address and port number of the NQA server for monitoring UDP
services are specified.
c. Run commit
The configuration is committed.
● Configure an NQA client.
a. Run system-view
The system view is displayed.
b. Create an NQA test instance, set the test instance type to UDP, and add
description information for the test instance.
i. Run nqa test-instance admin-name test-name
The NQA test instance is created, and the view of the test instance is
displayed.
ii. Run test-type udp
The test instance type is set to UDP.
iii. (Optional) Run description description
A description is added for the test instance.
c. Specify the destination IP address and destination port number for the
test instance.
i. Run destination-address { ipv4 destAddress | ipv6 destAddress6 }
The destination IP address for the test instance (the IP address of the
NQA server) is specified.
ii. (Optional) Run destination-port port-number
The destination port number for the test instance is specified.
d. (Optional) Set parameters for the test instance and simulate packets.
i. Run agetime ageTimeValue
The aging time is set for the NQA test instance.
ii. Run datafill fill-string
The padding string in probe packets is set.
iii. Run datasize datasizeValue
The size of the data field in an NQA test packet is set.
iv. Run probe-count number
The number of probes to be sent each time is set.
v. Run interval seconds interval
The interval at which probe packets are sent is set.
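The UDP test procedure above can be illustrated with a minimal sketch. The device name, addresses, port number, and instance name below are hypothetical; adapt them to the actual network.
On the NQA server:
<HUAWEI> system-view
[~HUAWEI] nqa-server udpecho 10.1.1.1 6000
[*HUAWEI] commit
On the NQA client:
<HUAWEI> system-view
[~HUAWEI] nqa test-instance admin udp
[*HUAWEI-nqa-admin-udp] test-type udp
[*HUAWEI-nqa-admin-udp] destination-address ipv4 10.1.1.1
[*HUAWEI-nqa-admin-udp] destination-port 6000
[*HUAWEI-nqa-admin-udp] start now
[*HUAWEI-nqa-admin-udp] commit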
Procedure
Step 1 Run the system-view command to enter the system view.
Step 2 Create an NQA test instance and set the test instance type to SNMP.
Before configuring an NQA SNMP test instance, configure SNMP. The NQA SNMP test
instance supports SNMPv1, SNMPv2c, and SNMPv3.
1. Run the nqa test-instance admin-name test-name command to create an
NQA test instance and enter the test instance view.
2. Run test-type snmp
The test instance type is set to SNMP.
3. (Optional) Run the description description command to configure the test
instance description.
Step 3 Run destination-address ipv4 destAddress
The destination address (that is, the NQA server address) of the client is specified.
If a target SNMP agent runs SNMPv1 or SNMPv2c, the read community name
specified in the community read cipher command must be the same as the read
community name configured on the SNMP agent. Otherwise, the SNMP test will
fail.
Step 5 (Optional) Set parameters for the test instance and simulate packets.
1. Run the probe-count number command to configure the number of probes in
an NQA test instance.
2. Run the interval seconds interval command to configure the interval for
sending NQA test packets.
3. Run the sendpacket passroute command to configure the NQA test instance
to send packets without searching the routing table.
4. Run source-address ipv4 srcAddress
The source IP address is set for the NQA test instance.
6. Run the tos tos-value command to configure the ToS value in NQA test
packets.
7. Run the ttl ttlValue command to configure the TTL value of NQA test packets.
Step 7 (Optional) Configure the NQA statistics function. Run the records { history
number | result number } command to configure the maximum number of history
records and the maximum number of result records for the NQA test instance.
The device is enabled to send traps to the NMS after the number of
consecutive probe failures reaches the specified threshold.
2. Run the test-failtimes failTimes command to configure a trap message to
be sent to the NMS when the number of consecutive test failures reaches
the specified value in NQA tests.
3. Run the threshold rtd thresholdRtd command to configure an RTD threshold.
4. Run the send-trap { all | [ rtd | testfailure | probefailure | testcomplete ]* }
command to configure the conditions for sending trap messages.
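As a brief illustration of the trap-related commands above, the following might be run in the SNMP test instance view; the threshold, failure count, trap types, and instance name are hypothetical.
[*HUAWEI-nqa-admin-snmp] threshold rtd 100
[*HUAWEI-nqa-admin-snmp] test-failtimes 3
[*HUAWEI-nqa-admin-snmp] send-trap rtd testfailure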
----End
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Create an NQA test instance and set the test instance type to trace.
1. Run nqa test-instance admin-name test-name
An NQA test instance is created, and the view of the test instance is displayed.
2. Run test-type trace
The test instance type is set to trace.
3. (Optional) Run description description
A description is configured for the NQA test instance.
Step 3 Specify the destination address and destination port number for the test instance.
1. Run destination-address { ipv4 destAddress | ipv6 destAddress6 }
The destination address (that is, the NQA server address) of the client is
specified.
2. (Optional) Run destination-port port-number
The destination port number is specified for the NQA test instance.
Step 4 (Optional) Set parameters for the test instance to simulate packets.
1. Run agetime ageTimeValue
The aging time of an NQA test is configured.
2. Run datafill fill-string
Padding characters in NQA test packets are configured.
3. Run datasize datasizeValue
The size of the data field in an NQA test packet is set.
4. Run probe-count number
The number of probes in a test is set for the NQA test instance.
5. Run sendpacket passroute
The NQA test instance is configured to send packets without searching the
routing table.
6. Run source-address { ipv4 srcAddress | ipv6 srcAddr6 }
The source IP address of NQA test packets is set.
7. Run source-interface interface-type interface-number
The source interface for NQA test packets is set.
8. Run tos tos-value
The ToS value in NQA test packets is set.
9. Run nexthop ipv4 ip-address
The next-hop address is configured for the test instance.
10. Run tracert-livetime first-ttl first-ttl max-ttl max-ttl
The TTL of test packets is set.
Step 5 (Optional) Run the set-df command to prevent packet fragmentation.
Use a trace test instance to obtain the path MTU as follows:
Run the set-df command to disable packet fragmentation, run the datasize
command to set the size of the packet data area, and then start the test
instance. If the test succeeds, the data area of the sent packet is smaller than
the path MTU. Keep increasing the data area size using the datasize command
and repeating the test until it fails; a failure indicates that the data area has
exceeded the path MTU. The maximum packet size that can be sent without
being fragmented is used as the path MTU.
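For illustration, one iteration of this procedure might look as follows; the destination address, initial data size, and instance name are hypothetical.
<HUAWEI> system-view
[~HUAWEI] nqa test-instance admin trace
[*HUAWEI-nqa-admin-trace] test-type trace
[*HUAWEI-nqa-admin-trace] destination-address ipv4 10.2.1.1
[*HUAWEI-nqa-admin-trace] set-df
[*HUAWEI-nqa-admin-trace] datasize 1400
[*HUAWEI-nqa-admin-trace] start now
[*HUAWEI-nqa-admin-trace] commit
After the test completes, check the result using the display nqa results test-instance admin trace command, then stop the instance, increase the datasize value, and start it again until the test fails.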
Step 6 (Optional) Configure test failure conditions.
1. Run timeout time
The response timeout period is set.
If no response packets are received before the set period expires, the probe is
regarded as a failure.
2. Run tracert-hopfailtimes hopfailtimesValue
The maximum number of hop failures in a probe is set.
The NQA test instance is configured to send a trap message to the NMS when
the number of continuous test failures reaches the specified value.
2. Run threshold rtd thresholdRtd
An RTD threshold is configured for the NQA test instance.
The start command has multiple formats. Choose one of the following as
needed.
– To start an NQA test instance immediately, run the start now [ end { at
[ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } |
lifetime { seconds second | hh:mm:ss } } ] command.
– To start an NQA test instance at a specified time, run the start at
[ yyyy/mm/dd ] hh:mm:ss [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay
{ seconds second | hh:mm:ss } | lifetime { seconds second |
hh:mm:ss } } ] command.
– To start an NQA test instance after a specified delay, run the start delay
{ seconds second | hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss |
delay { seconds second | hh:mm:ss } | lifetime { seconds second |
hh:mm:ss } } ] command.
– To start an NQA test instance at a specified time every day, run the
start daily hh:mm:ss to hh:mm:ss [ begin yyyy/mm/dd ] [ end yyyy/mm/dd ]
command.
----End
Procedure
● Configure the NQA server for the UDP jitter test.
a. Run system-view
The system view is displayed.
b. Run nqa-server udpecho [ vpn-instance vpn-instance-name ] ip-address
port-number
The IP address and number of the port used to monitor UDP services are
specified on the NQA server.
c. Run commit
The configuration is committed.
● Configure the NQA client for the UDP jitter test.
a. Run system-view
The system view is displayed.
b. (Optional) Run nqa-jitter tag-version version-number
The packet version is configured for a UDP jitter test instance.
Packet statistics collected in version 2 are more accurate than those in
version 1. Packet version 2 is recommended.
c. Create an NQA test instance and set the test instance type to UDP jitter.
i. Run nqa test-instance admin-name test-name
An NQA test instance is created, and the view of the test instance is
displayed.
ii. Run test-type jitter
The test instance type is set to UDP jitter.
iii. (Optional) Run description description
A description is configured for the test instance.
d. Specify the destination address and destination port number for the test
instance.
i. (Optional) Run destination-address { ipv4 destAddress | ipv6
destAddress6 }
The destination address of the client (that is, the NQA server address) is
specified.
ii. Run destination-port port-number
The destination port number is specified for the UDP jitter test.
e. (Optional) Run hardware-based enable
The hardware forwarding engine on an interface board is enabled to send
packets and add timestamps to the packets.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Create an NQA test instance and set the test instance type to ICMP Jitter.
1. Run nqa test-instance admin-name test-name
An NQA test instance is created, and the view of the test instance is displayed.
2. Run test-type icmpjitter
The test instance type is set to ICMP Jitter.
3. (Optional) Run description description
A description is configured for the NQA test instance.
Step 3 Run destination-address { ipv4 destAddress | ipv6 destAddress6 }
The destination address (that is, the NQA server address) of the client is specified.
Step 4 (Optional) Run hardware-based enable
The hardware forwarding engine on an interface board is enabled to send packets.
After you enable the interface board to send packets on a client, run the nqa-server icmp-
server [ vpn-instance vpn-instance-name ] ip-address command on the NQA server to
specify the IP address of the ICMP services monitored by the NQA server.
Step 5 (Optional) Set timestamp units for the NQA test instance.
The timestamp units need to be configured only after the hardware-based enable
command is run.
1. Run timestamp-unit { millisecond | microsecond }
A timestamp unit is configured for the source in the NQA test instance.
2. Run receive-timestamp-unit { millisecond | microsecond }
A timestamp unit is configured for the destination in the NQA test instance.
In a scenario where a Huawei device is connected to a non-Huawei device, an
ICMP jitter test in which the Huawei device functions as the source (client) is
configured to detect the delay, jitter, and packet loss on the network. To set
the timestamp unit of the ICMP timestamp packet returned by the
destination, run the receive-timestamp-unit command.
The source's timestamp unit configured using the timestamp-unit
{ millisecond | microsecond } command must be the same as the
destination's timestamp unit configured using the receive-timestamp-unit
command.
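A minimal sketch of this timestamp configuration is shown below, assuming that hardware-based sending is used and that both ends agree on milliseconds; the instance name and destination address are hypothetical.
[~HUAWEI] nqa test-instance admin icmpjitter
[*HUAWEI-nqa-admin-icmpjitter] test-type icmpjitter
[*HUAWEI-nqa-admin-icmpjitter] destination-address ipv4 10.1.1.2
[*HUAWEI-nqa-admin-icmpjitter] hardware-based enable
[*HUAWEI-nqa-admin-icmpjitter] timestamp-unit millisecond
[*HUAWEI-nqa-admin-icmpjitter] receive-timestamp-unit millisecond
[*HUAWEI-nqa-admin-icmpjitter] commit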
----End
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run nqa test-instance admin-name test-name
An NQA test instance is created and the test instance view is displayed.
Step 3 Run test-type pathjitter
The type of the test instance is configured as path jitter.
Step 4 Run destination-address ipv4 destAddress
The destination IP address is configured.
Step 5 (Optional) Run the following commands to configure other parameters for the
path jitter test:
● Run icmp-jitter-mode { icmp-echo | icmp-timestamp }
The mode of the path jitter test is configured.
● Run vpn-instance vpn-instance-name
The VPN instance to be tested is configured.
● Run source-address ipv4 srcAddress
The source IP address is configured.
● Run probe-count number
The number of test probes to be sent each time is set.
● Run jitter-packetnum packetNum
The number of test packets to be sent during each test is set.
The probe-count command configures the number of probes in the jitter test, and
the jitter-packetnum command configures the number of test packets sent in each
probe. In actual configuration, the product of these two values must be less than
3000.
● Run interval seconds interval
The interval for sending jitter test packets is set.
A shorter interval allows the test to finish sooner. However, the processor
introduces delays when sending and receiving test packets, so setting a very
small interval can cause relatively large errors in the jitter test statistics.
● Run fail-percent percent
The percentage of the failed NQA tests is set.
Select the start mode as required because the start command has several forms.
----End
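A minimal path jitter sketch based on the steps above is shown below; the values are hypothetical and chosen so that the product of probe-count and jitter-packetnum (10 x 20 = 200) stays well below 3000.
<HUAWEI> system-view
[~HUAWEI] nqa test-instance admin pathjitter
[*HUAWEI-nqa-admin-pathjitter] test-type pathjitter
[*HUAWEI-nqa-admin-pathjitter] destination-address ipv4 10.1.1.2
[*HUAWEI-nqa-admin-pathjitter] icmp-jitter-mode icmp-echo
[*HUAWEI-nqa-admin-pathjitter] probe-count 10
[*HUAWEI-nqa-admin-pathjitter] jitter-packetnum 20
[*HUAWEI-nqa-admin-pathjitter] start now
[*HUAWEI-nqa-admin-pathjitter] commit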
Procedure
Step 1 Run system-view
The system view is displayed.
Step 5 (Optional) Run the following commands to configure other parameters for the
path MTU test.
● Run discovery-pmtu-max pmtu-max
The maximum value of the path MTU test range is set.
● Run step step
The value of the incremental step is set for the packet length in the path MTU
test.
● Run vpn-instance vpn-instance-name
The VPN instance to be tested is configured.
● Run source-address ipv4 srcAddress
The source IP address is configured.
● Run probe-count number
The maximum number of probe packets that are allowed to time out
consecutively is configured.
Select the start mode as required because the start command has several forms.
● To perform the NQA test after a certain delay period, run the start delay
{ seconds second | hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay
{ seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ]
command.
The test instance is started after a certain delay.
----End
Prerequisites
NQA test results are not displayed automatically on the terminal. To view test results, run
the display nqa results command.
Procedure
● Run the display nqa results [ collection ] [ test-instance adminName
testName ] command to check NQA test results.
● Run the display nqa results [ collection ] this command to check NQA test
results in a specified NQA test instance view.
● Run the display nqa history [ test-instance adminName testName ]
command to check historical NQA test records.
● Run the display nqa history [ this ] command to check historical statistics on
NQA tests in a specified NQA test instance view.
● Run the display nqa-server command to check the NQA server status.
----End
Usage Scenario
Pre-configuration Tasks
Before configuring NQA to monitor an MPLS network, configure basic MPLS
functions.
Procedure
Step 1 Run system-view
Step 2 Create an NQA test instance and set the test instance type to LSP ping.
1. Run nqa test-instance admin-name test-name
An NQA test instance is created, and the view of the test instance is displayed.
2. Run test-type lspping
The test instance type is set to LSP ping.
If the LSP type of the NQA test instance is srbe, you can run the remote-fec ldp
remoteIpAddr remoteMaskLen command to configure an IP address for a remote
FEC.
Step 5 Configure the destination address or tunnel interface based on the type of the
checked LSP.
● To configure the destination address for a checked LDP LSP, run the
destination-address ipv4 destAddress [ { lsp-masklen maskLen } | { lsp-
loopback loopbackAddress } ] * command.
● To configure the tunnel interface for a checked TE tunnel, run the lsp-
tetunnel tunnel ifNum command.
● To configure the destination address for a BGP tunnel, run the destination-
address ipv4 destAddress [ { lsp-masklen maskLen } | { lsp-loopback
loopbackAddress } ] * command.
● To configure the tunnel interface for an SR-MPLS TE tunnel, run the lsp-
tetunnel tunnel ifNum command.
● To configure the destination address for an SR-MPLS BE tunnel, run the
destination-address ipv4 destAddress lsp-masklen maskLen command.
● To configure the name, binding segment ID, endpoint IP address, and color ID
of an SR-MPLS TE policy, run the policy { policy-name policyname | binding-
sid bsid | endpoint-ip endpointip color colorid } command.
An IP address is configured for the next hop when load balancing is enabled.
The LSP EXP value is set for the NQA test instance.
2. Run lsp-replymode { level-control-channel | no-reply | udp | udp-via-vpls }
The LSP packet return mode is configured for the NQA test instance.
6. Run interval seconds interval
The interval at which NQA test packets are sent is set for the NQA test
instance.
7. Run source-address ipv4 ip-address
The source IP address of NQA test packets is set.
2. Run fail-percent percent
The percentage of failed NQA tests is set.
The start command has multiple formats. Choose one of the following as
needed.
– To start an NQA test instance immediately, run the start now [ end { at
[ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } |
lifetime { seconds second | hh:mm:ss } } ] command.
– To start an NQA test instance at a specified time, run the start at
[ yyyy/mm/dd ] hh:mm:ss [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay
{ seconds second | hh:mm:ss } | lifetime { seconds second |
hh:mm:ss } } ] command.
– To start an NQA test instance after a specified delay, run the start delay
{ seconds second | hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss |
delay { seconds second | hh:mm:ss } | lifetime { seconds second |
hh:mm:ss } } ] command.
– To start an NQA test instance at a specified time every day, run the
start daily hh:mm:ss to hh:mm:ss [ begin yyyy/mm/dd ] [ end yyyy/mm/dd ]
command.
----End
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Create an NQA test instance and set the test instance type to LSP trace.
1. Run nqa test-instance admin-name test-name
An NQA test instance is created, and the test instance view is displayed.
2. Run test-type lsptrace
The test instance type is set to LSP trace.
3. (Optional) Run description description
A description is configured for the NQA test instance.
Step 3 (Optional) Run fragment enable
MPLS packet fragmentation is enabled for the NQA test instance.
Step 4 Run lsp-type { ipv4 | te | bgp | srte | srbe | srte-policy }
The LSP type is specified for the NQA test instance.
If the LSP type of the NQA test instance is srbe, you can run the remote-fec ldp
remoteIpAddr remoteMaskLen command to configure an IP address for a remote
FEC.
Step 5 Configure the destination address or tunnel interface based on the type of the
checked LSP.
● To configure the destination address for a checked LDP LSP, run:
destination-address ipv4 destAddress [ { lsp-masklen maskLen } | { lsp-
loopback loopbackAddress } ] *
● To configure the tunnel interface for a checked TE tunnel, run:
lsp-tetunnel { tunnelName | ifType ifNum } [ hot-standby ]
● To configure the destination address for a BGP tunnel, run:
destination-address ipv4 destAddress [ { lsp-masklen maskLen } | { lsp-
loopback loopbackAddress } ] *
● To configure the tunnel interface for an SR-MPLS TE tunnel, run:
lsp-tetunnel { tunnelName | ifType ifNum } [ hot-standby ]
● To configure the destination address for an SR-MPLS BE tunnel, run:
destination-address ipv4 destAddress lsp-masklen maskLen
● To configure the name, binding segment ID, endpoint IP address, and color ID
of an SR-MPLS TE policy, run:
policy { policy-name policyname | binding-sid bsid | endpoint-ip endpointip
color colorid }
Step 6 (Optional) Set optional parameters for the NQA test instance and simulate
packets transmitted on an actual network.
1. Run lsp-exp exp
The LSP EXP value is set for the NQA test instance.
2. Run lsp-replymode { level-control-channel | no-reply | udp }
The LSP packet return mode is configured for the NQA test instance.
3. Run probe-count number
The number of probes in an NQA test instance is configured.
4. Run source-address ipv4 srcAddress
The source IP address of NQA test packets is set.
5. Run tracert-livetime first-ttl first-ttl max-ttl max-ttl
The TTL of test packets is set.
Step 7 (Optional) Configure test failure conditions.
1. Run timeout time
The timeout period of response packets is set.
2. Run tracert-hopfailtimes hopfailtimesValue
The maximum number of hop failures in a probe is set.
Step 8 (Optional) Configure the NQA statistics function.
Run records { history number | result number }
The maximum number of history records and the maximum number of result
records are set for the NQA test instance.
Step 9 Schedule the test instance.
1. (Optional) Run frequency frequencyValue
The test period is set for the NQA test instance.
2. Run start
The NQA test instance is started.
You can start an NQA test instance immediately, at a specified time, after a
delay, or periodically.
– To start an NQA test instance immediately, run the start now [ end { at
[ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } |
lifetime { seconds second | hh:mm:ss } } ] command.
– To start an NQA test instance at a specified time, run the start at
[ yyyy/mm/dd ] hh:mm:ss [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay
{ seconds second | hh:mm:ss } | lifetime { seconds second |
hh:mm:ss } } ] command.
– To start an NQA test instance after a specified delay, run the start delay
{ seconds second | hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss |
delay { seconds second | hh:mm:ss } | lifetime { seconds second |
hh:mm:ss } } ] command.
– To start an NQA test instance at a specified time every day, run the start
daily hh:mm:ss to hh:mm:ss [ begin yyyy/mm/dd ] [ end yyyy/mm/dd ]
command.
Step 10 Run commit
The configuration is committed.
----End
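A minimal sketch of an LDP LSP trace instance following the steps above is shown below; the destination address, mask length, TTL range, and instance name are hypothetical.
<HUAWEI> system-view
[~HUAWEI] nqa test-instance admin lsptrace
[*HUAWEI-nqa-admin-lsptrace] test-type lsptrace
[*HUAWEI-nqa-admin-lsptrace] lsp-type ipv4
[*HUAWEI-nqa-admin-lsptrace] destination-address ipv4 3.3.3.9 lsp-masklen 32
[*HUAWEI-nqa-admin-lsptrace] tracert-livetime first-ttl 1 max-ttl 30
[*HUAWEI-nqa-admin-lsptrace] start now
[*HUAWEI-nqa-admin-lsptrace] commit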
Procedure
Step 1 Run the system-view command to enter the system view.
Step 2 Create an NQA test instance and set the test instance type to LSP jitter.
1. Run the nqa test-instance admin-name test-name command to create an
NQA test instance and enter the test instance view.
2. Run test-type lspjitter
The test instance type is set to LSP jitter.
3. (Optional) Run the description description command to configure the test
instance description.
Step 3 (Optional) Run fragment enable
MPLS packet fragmentation is enabled for the NQA test instance.
Step 4 Run lsp-type { ipv4 | te }
The LSP type is specified for the NQA test instance.
Step 5 Configure the destination address or tunnel interface based on the type of the
checked LSP.
● To configure the destination address for a checked LDP LSP, run the
destination-address ipv4 destAddress [ { lsp-masklen maskLen } | { lsp-
loopback loopbackAddress } ] * command.
● To configure the tunnel interface for a checked TE tunnel, run the lsp-
tetunnel tunnel ifNum command.
● To configure the destination address for a BGP tunnel, run the destination-
address ipv4 destAddress [ { lsp-masklen maskLen } | { lsp-loopback
loopbackAddress } ] * command.
Step 6 (Optional) Set parameters for the test instance to simulate packets.
1. Run the lsp-exp exp command to set the LSP EXP value for the NQA test
instance.
2. Run the lsp-replymode { level-control-channel | no-reply | udp } command
to configure the LSP packet return mode for the NQA test instance.
3. Run the datafill fill-string command to configure padding characters in NQA
test packets.
4. Run the datasize datasizeValue command to set the size of the data field in
an NQA test packet.
5. Run the jitter-packetnum packetNum command to configure the number of
packets sent each time in a probe.
6. Run the probe-count number command to configure the number of probes in
an NQA test instance.
7. Run interval seconds interval
The interval at which NQA test packets are sent is set for the NQA test
instance.
8. Run source-address ipv4 srcAddress
The source IP address of NQA test packets is set.
Step 8 (Optional) Configure the NQA statistics function. Run the records { history
number | result number } command to configure the maximum number of history
records and the maximum number of result records for the NQA test instance.
----End
Prerequisites
NQA test results are not displayed automatically on the terminal. To view test results, run
the display nqa results command.
Procedure
● Run the display nqa results [ collection ] [ test-instance adminName
testName ] command to check NQA test results.
● Run the display nqa results [ collection ] this command to check NQA test
results in a specified NQA test instance view.
● Run the display nqa history [ test-instance adminName testName ]
command to check historical NQA test records.
● Run the display nqa history [ this ] command to check historical statistics on
NQA tests in a specified NQA test instance view.
● Run the display lspv statistics command to check LSPV statistics.
----End
Usage Scenario
Pre-configuration Tasks
Before you configure NQA to check VPNs, configure basic VPN functions.
Procedure
Step 1 Run system-view
Step 2 Create an NQA test instance and configure the test instance type as PWE3 ping.
1. Run nqa test-instance admin-name test-name
An NQA test instance is created, and the view of the test instance is displayed.
2. Run test-type pwe3ping
The test instance type is set to PWE3 ping.
The start command has multiple formats. Choose one of the following as
needed.
– To start an NQA test instance immediately, run the start now [ end { at
[ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } |
lifetime { seconds second | hh:mm:ss } } ] command.
– To start an NQA test instance at a specified time, run the start at
[ yyyy/mm/dd ] hh:mm:ss [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay
{ seconds second | hh:mm:ss } | lifetime { seconds second |
hh:mm:ss } } ] command.
– To start an NQA test instance after a specified delay, run the start delay
{ seconds second | hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss |
delay { seconds second | hh:mm:ss } | lifetime { seconds second |
hh:mm:ss } } ] command.
– To start an NQA test instance at a specified time every day, run the
start daily hh:mm:ss to hh:mm:ss [ begin yyyy/mm/dd ] [ end yyyy/mm/dd ]
command.
----End
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Create an NQA test instance and configure the test instance type as VPLS MAC
ping.
1. Run nqa test-instance admin-name test-name
An NQA test instance is created, and the view of the test instance is displayed.
2. Run test-type vplsping
The test instance type is set to VPLS MAC ping.
3. Run description description
A description is configured for the NQA test instance.
Step 3 (Optional) Run fragment enable
MPLS packet fragmentation is enabled for the NQA test instance.
Step 4 Set the VPLS parameters to be checked.
1. Run vsi vsi-name
The name of the VSI to be detected is specified.
2. Run destination-address mac macAddress
The destination MAC address associated with the VSI is specified.
3. (Optional) Run vlan vlan-id
A VLAN ID is specified for the NQA test instance.
Step 5 (Optional) Set optional parameters for the NQA test instance and simulate
packets transmitted on an actual network.
1. Run lsp-exp exp
The LSP EXP value is set for the NQA test instance.
2. Run lsp-replymode { no-reply | udp | udp-via-vpls }
The reply mode in the NQA LSP test instance is configured.
3. Run datafill fill-string
Padding characters in NQA test packets are configured.
4. Run datasize datasizeValue
The size of the data field in an NQA test packet is set.
5. Run probe-count number
The number of probes in a test is set for the NQA test instance.
The interval at which NQA test packets are sent is set for the NQA test
instance.
7. Run ttl ttlValue
Step 6 (Optional) Configure detection failure conditions and enable the function to send
traps to the NMS upon detection failures.
1. Run timeout time
The response timeout period is set.
4. Run test-failtimes failTimes
The system is enabled to send traps to the NMS after the number of
consecutive failures of the NQA test instance reaches the specified threshold.
5. Run threshold rtd thresholdRtd
An RTD threshold is configured for the NQA test instance.
The start command has multiple formats. Choose one of the following as
needed.
– To start an NQA test instance immediately, run the start now [ end { at
[ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } |
lifetime { seconds second | hh:mm:ss } } ] command.
----End
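A minimal sketch of the VPLS MAC ping instance described above is shown below; the VSI name, destination MAC address, and instance name are hypothetical.
<HUAWEI> system-view
[~HUAWEI] nqa test-instance admin vplsping
[*HUAWEI-nqa-admin-vplsping] test-type vplsping
[*HUAWEI-nqa-admin-vplsping] vsi company1
[*HUAWEI-nqa-admin-vplsping] destination-address mac 00e0-fc12-3456
[*HUAWEI-nqa-admin-vplsping] start now
[*HUAWEI-nqa-admin-vplsping] commit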
Procedure
Step 1 Run system-view
Step 2 Create an NQA test instance and set the test instance type to VPLS PW ping.
1. Run nqa test-instance admin-name test-name
2. Run test-type vplspwping
If the VSI configured using the vsi vsi-name command has a specified negotiation-vc-
id, the local-pw-id local-pw-id command must be run.
Step 5 (Optional) Set parameters for the test instance and simulate packets.
1. Run lsp-exp exp
2. Run lsp-replymode { level-control-channel | no-reply | udp | udp-via-vpls }
3. Run the datafill fill-string command to configure padding characters in NQA
test packets.
4. Run the datasize datasizeValue command to set the size of the data field in
an NQA test packet.
5. Run probe-count number
6. Run interval seconds interval
7. Run ttl number
----End
Procedure
Step 1 Run the system-view command to enter the system view.
Step 2 Create an NQA test instance and set the test instance type to PWE3 trace.
1. Run the nqa test-instance admin-name test-name command to create an
NQA test instance and enter the test instance view.
2. Run test-type pwe3trace
The test instance type is set to PWE3 trace.
3. (Optional) Run the description description command to configure the test
instance description.
Step 4 Set parameters for the Layer 2 virtual private network (L2VPN) to be monitored.
1. Run local-pw-type pwTypeValue
Step 6 (Optional) Configure remote provider edge (PE) information for a multi-segment
PW to be monitored.
----End
Procedure
Step 1 Run system-view
Step 2 Create an NQA test instance and set the test instance type to VPLS PW trace.
1. Run nqa test-instance admin-name test-name
2. Run test-type vplspwtrace
The test instance type is set to VPLS PW trace.
3. (Optional) Run description description
Step 3 (Optional) Run fragment enable
MPLS packet fragmentation is enabled for the NQA test instance.
Step 4 Set parameters for the VPLS network to be monitored.
1. Run vsi vsi-name
The name of a virtual switching instance (VSI) to be monitored is specified.
2. Run destination-address ipv4 destAddress
An IP address of the remote PE is specified.
3. (Optional) Run local-pw-id local-pw-id
A PW ID is set on the local PE.
Step 5 (Optional) Set parameters for the test instance and simulate packets.
1. Run lsp-exp exp
2. Run lsp-replymode { level-control-channel | no-reply | udp | udp-via-vpls }
3. Run probe-count number
4. Run ttl number
----End
Prerequisites
NQA test results are not displayed automatically on the terminal. To view test results, run
the display nqa results command.
Procedure
● Run the display nqa results [ collection ] [ test-instance adminName
testName ] command to check NQA test results.
----End
Usage Scenario
Table 4-4 Usage scenario for checking a Layer 2 network using NQA
Pre-configuration Tasks
Before configuring NQA to check a Layer 2 network, complete the following tasks:
● Complete basic configurations of the Layer 2 network.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Create an NQA test instance and specify the test instance type as MAC ping.
1. Run nqa test-instance admin-name test-name
An NQA test instance is created, and the view of the test instance is displayed.
Step 3 Configure the MEP ID, MD name, and MA name based on the MAC ping type.
1. Run mep mep-id mep-id
The MEP ID for sending NQA test packets is configured.
Step 4 Perform either of the following steps to configure the destination address for the
MAC ping test:
● Run destination-address mac macAddress
The destination MAC address is configured.
To query a destination MAC address, run the display cfm remote-mep command.
● Run destination-address remote-mep mep-id remoteMepID
The peer MEP ID is configured.
If the destination address type is remote-mep, you must configure the mapping between
the remote MEP and MAC address first.
Step 5 (Optional) Set optional parameters for the NQA test instance and simulate
packets transmitted on an actual network.
1. Run datasize datasizeValue
The size of the data field in an NQA test packet is set.
3. Run interval seconds interval
The interval at which NQA test packets are sent is set for the NQA test
instance.
Step 6 (Optional) Configure detection failure conditions and enable the function to send
traps to the NMS upon detection failures.
1. Run timeout time
If no response packets are received before the set period expires, the probe is
regarded as a failure.
2. Run fail-percent percent
----End
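The following is a minimal sketch of the MAC ping configuration described above. It assumes that CFM (MD, MA, and MEPs) has already been configured and that the test type keyword is macping, as listed among the supported test types; the MEP IDs and instance name are hypothetical, and the MD and MA to which the test packets belong must also be specified in the test instance.
<HUAWEI> system-view
[~HUAWEI] nqa test-instance admin macping
[*HUAWEI-nqa-admin-macping] test-type macping
[*HUAWEI-nqa-admin-macping] mep mep-id 1
[*HUAWEI-nqa-admin-macping] destination-address remote-mep mep-id 2
[*HUAWEI-nqa-admin-macping] start now
[*HUAWEI-nqa-admin-macping] commit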
Prerequisites
NQA test results are not displayed automatically on the terminal. To view test results, run
the display nqa results command.
Procedure
● Run the display nqa results [ collection ] [ test-instance adminName
testName ] command to check NQA test results.
● Run the display nqa results [ collection ] this command to view NQA test
results in a specified NQA test instance view.
● Run the display nqa history [ test-instance adminName testName ]
command to check historical NQA test records.
● Run the display nqa history [ this ] command to check historical statistics on
NQA tests in a specified NQA test instance view.
----End
Applicable Environment
An NQA generalflow test is a standard traffic testing method for evaluating
network performance and is in compliance with RFC 2544. This test can be used in
various networking scenarios that have different packet formats. NQA generalflow
tests are conducted using UDP packets with source UDP port 49184 and
destination UDP port 7.
Before a customer performs a service cutover, an NQA generalflow test helps the
customer evaluate whether the network performance counters meet the
requirements in the design. An NQA generalflow test has the following
advantages:
● Enables a device to send simulated service packets to itself before services are
deployed on the device.
Existing methods, unlike generalflow tests, can only be used when services
have been deployed on networks. If no services are deployed, testers must be
used to send and receive test packets.
● Uses standard methods and procedures that comply with RFC 2544 so that
NQA generalflow tests can be conducted on a network on which both Huawei
and non-Huawei devices are deployed.
A generalflow test measures the following counters:
● Throughput: maximum rate at which packets are sent without loss.
● Packet loss rate: ratio of discarded packets to all sent packets, expressed as a percentage.
● Latency: consists of the bidirectional delay time and jitter calculated based on
the transmission and receipt timestamps carried in test packets. The
transmission time in each direction includes the time the forwarding devices
process the test packet.
A generalflow test can be used in the following scenarios:
● Layer 2: native Ethernet, L2VPN (VLL and VPLS), EVPN
On the network shown in Figure 4-1, an initiator and a reflector perform a
generalflow test to monitor the forwarding performance for end-to-end
services exchanged between two user-to-network interfaces (UNIs).
In the L2VPN accessing L3VPN networking shown in Figure 4-2, the initiator
and reflector can reside in different locations to represent different scenarios.
– If the initiator and reflector reside in locations 1 and 5 (or 5 and 1),
respectively, or the initiator and reflector reside in locations 4 and 6 (or 6
and 4), respectively, it is a native Ethernet scenario.
– If the initiator and reflector reside in locations 2 and 3 (or 3 and 2),
respectively, it is a native IP scenario.
– If the initiator resides in location 3 and the reflector in location 1, or the
initiator resides in location 2 and the reflector in location 4, it is similar to
an IP gateway scenario, and the simulated IP address must be configured
on the L2VPN device.
– If the initiator and reflector reside in locations 1 and 2 (or 2 and 1),
respectively, or the initiator and reflector reside in locations 3 and 4 (or 4
and 3), respectively, it is an IP gateway scenario.
– If the initiator resides in location 1 and the reflector in location 4, the
initiator resides in location 1 and the reflector in location 3, or the
initiator resides in location 4 and the reflector in location 2, it is an
L2VPN accessing L3VPN scenario. In this scenario, the destination IP and
MAC addresses and the source IP address must be specified on the
initiator, and the destination IP address for receiving test flows must be
specified on the reflector. If the initiator resides on the L2VPN, the
simulated IP address must be specified as the source IP address.
● IP gateway scenario
Layer 2 interface access to a Layer 3 device: IP gateway scenario
Figure 4-3 shows the networking of the Layer 2 interface's access to a Layer 3
device.
Figure 4-3 General flow test in the scenario in which a Layer 2 interface
accesses a Layer 3 device
Pre-configuration Tasks
Before configuring an NQA generalflow test, complete the following tasks:
● Layer 2:
– In a native Ethernet scenario, configure reachable Layer 2 links between
the initiator and reflector.
– In an L2VPN scenario, configure reachable links between CEs on both
ends of an L2VPN connection.
– In an EVPN scenario, configure reachable links between CEs on both ends
of an EVPN connection.
● Layer 3:
– In a native IP scenario, configure reachable IP links between the initiator
and reflector.
– In an L3VPN scenario, configure reachable links between CEs on both
ends of an L3VPN connection.
● L2VPN accessing L3VPN scenario: configure reachable links between the
L2VPN and L3VPN.
● IP gateway scenario: configure reachable Layer 2 links between an IP gateway
and the reflector.
Context
On the network shown in Figure 4-1 of the "Configuring an RFC 2544
Generalflow Test Instance", the following two roles are involved in a generalflow
test:
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Configure the reflector. The reflector settings vary according to usage scenarios.
----End
Context
On the network shown in Figure 4-1 of the "Configuring an RFC 2544
Generalflow Test Instance", the following two roles are involved in a generalflow
test:
Procedure
Step 1 Create a generalflow test instance.
1. Run system-view
The system view is displayed.
2. Run nqa test-instance admin-name test-name
An NQA test instance is created, and the test instance view is displayed.
3. Run test-type generalflow
The test type is set to generalflow.
4. Run measure { throughput | loss | delay }
A test counter is specified.
Step 2 Set basic simulated service parameters.
The basic simulated service parameters on the initiator must be the same as those
configured on the reflector.
Throughput:
1. Run the rate rateL rateH command to set the upper and lower rate thresholds.
2. Run the interval seconds interval command to set the interval at which test packets are transmitted at a specific rate.
3. Run the precision precision-value command to set the throughput precision.
4. Run the fail-ratio fail-ratio-value command to set the packet loss rate during a throughput test. The value is expressed in 1/10000. If the actual packet loss rate is less than 1/10000, the test is successful and continues.
Latency:
1. Run the rate rateL command to set the rate at which test packets are sent.
2. Run the interval seconds interval command to set the interval at which test packets are sent.
Packet loss rate:
1. Run the rate rateL command to set the rate at which test packets are sent.
In Layer 2 and Layer 3 scenarios, the data size of a generalflow test packet
cannot be greater than the maximum transmission unit (MTU) of the
simulated inbound interface.
2. Run duration duration
The duration value must be greater than twice the interval value in throughput and
delay tests.
3. Run records result number
The 802.1p priority is set for generalflow test packets in an Ethernet scenario.
5. Run tos tos-value
NOTICE
----End
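The following is a minimal initiator-side sketch of a throughput measurement based on the commands above; the instance name, destination address, rates, precision, and duration are hypothetical, and additional scenario-specific settings (for example, the simulated Layer 2 or Layer 3 service parameters) may be required.
<HUAWEI> system-view
[~HUAWEI] nqa test-instance admin gflow
[*HUAWEI-nqa-admin-gflow] test-type generalflow
[*HUAWEI-nqa-admin-gflow] measure throughput
[*HUAWEI-nqa-admin-gflow] destination-address ipv4 10.1.1.2
[*HUAWEI-nqa-admin-gflow] rate 1000 10000
[*HUAWEI-nqa-admin-gflow] interval seconds 10
[*HUAWEI-nqa-admin-gflow] precision 100
[*HUAWEI-nqa-admin-gflow] fail-ratio 1
[*HUAWEI-nqa-admin-gflow] duration 60
[*HUAWEI-nqa-admin-gflow] start now
[*HUAWEI-nqa-admin-gflow] commit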
Prerequisites
All generalflow test configurations are complete.
NQA test results cannot be displayed automatically on the terminal. Run the display nqa
results command to view test results. By default, the command output shows the results of
the latest five tests.
Procedure
● Run the display nqa results [ test-instance adminName testName ]
command on the initiator to view generalflow test results.
● Run the display nqa reflector [ reflector-id ] command on the reflector to
view reflector information.
----End
Usage Scenario
An Ethernet service activation test is a method defined in Y.1564. This test helps
carriers rapidly and accurately verify whether network performance meets SLA
performance indexes before service rollouts.
Pre-configuration Tasks
Before configuring an Ethernet service activation test, complete the following
tasks:
● Layer 2 scenarios:
– In a native Ethernet scenario, configure reachable Layer 2 links between
the initiator and reflector.
– In an L2VPN scenario, configure reachable links between CEs on both
ends of an L2VPN connection.
Context
Devices performing an Ethernet service activation test play two roles: initiator and
reflector. An initiator sends simulated service traffic to a reflector, and the
reflector reflects the service traffic.
● Interface-based mode: A reflector loops all traffic that its interface receives.
● Flow-based mode: A reflector loops only traffic meeting specified conditions.
In flow-based mode, a test flow must have been configured.
Procedure
Step 1 Run system-view
MAC, IP, or both MAC and IP addresses are specified based on traffic types:
– For Ethernet Layer 2 switching and L2VPN services, a MAC address must be
specified, and an IP address is optional.
– For IP routing and L3VPN services, an IP address and a MAC address must be
specified. If no IP address or MAC address is specified, the reflector will reflect all
the traffic, which affects other service functions.
– For L2VPN accessing L3VPN, both MAC and IP addresses must be specified.
2. Configure the following parameters as needed:
– Run vlan vlan-id [ end-vlan-vid ]
A single VLAN ID is specified for Ethernet packets in the NQA test flow
view.
– Run pe-vid pe-vid ce-vid ce-vid [ ce-vid-end ]
Double VLAN IDs are specified for Ethernet packets in the NQA test flow
view.
– Run udp destination-port destination-port [ end-destination-port ]
A destination UDP port number or range is specified.
– Run udp source-port source-port [ end-source-port ]
A source UDP port number or range is specified.
▪ For the same test flow, a range can be specified only in one of the traffic-
type, vlan, pe-vid, udp destination-port, and udp source-port commands.
In addition, the difference between the start and end values cannot be more
than 127, and the end value must be greater than the start value.
▪ In the traffic-type command, the start MAC or IP address has only one
different octet from the end MAC or IP address. For example, the start IP
address is set to 1.1.1.1, and the end IP address can only be set to an IP
address in the network segment 1.1.1.0.
The test-flow flow-id & <1-16> command configures the reflector to reflect based
on a specified flow. If the flow ID is not specified, the reflector performs interface-
based reflection. The agetime age-time parameter is optional, and the default
value is 14400s.
----End
Context
Devices performing an Ethernet service activation test play two roles: initiator and
reflector. An initiator sends simulated service traffic to a reflector, and the
reflector reflects the service traffic.
Procedure
Step 1 Run system-view
MAC, IP, or both MAC and IP addresses are specified based on traffic types:
– For Ethernet Layer 2 switching and L2VPN services, a MAC address must be
specified, and an IP address is optional.
– For IP routing and L3VPN services, an IP address and a MAC address must be
specified. If no IP address or MAC address is specified, the reflector will reflect all
the traffic, which affects other service functions.
– For L2VPN accessing L3VPN, both MAC and IP addresses must be specified.
2. Configure the following parameters as needed:
– Run vlan vlan-id [ end-vlan-vid ]
A single VLAN ID is specified for Ethernet packets in the NQA test flow
view.
– Run pe-vid pe-vid ce-vid ce-vid [ ce-vid-end ]
Double VLAN IDs are specified for Ethernet packets in the NQA test flow
view.
– For the same test flow, a range can be specified only in one of the traffic-type,
vlan, pe-vid, udp destination-port, and udp source-port commands. In addition,
the difference between the start and end values cannot be more than 127, and the
end value must be greater than the start value.
– In the traffic-type command, the start MAC or IP address has only one different
octet from the end MAC or IP address. For example, the start IP address is set to
1.1.1.1, and the end IP address can only be set to an IP address in the network
segment 1.1.1.0.
----End
Prerequisites
An Ethernet service activation test has been configured and conducted.
NQA test results cannot be displayed automatically on the terminal. Run the display nqa
results command to view test results. By default, the command output shows the results of
the last five tests.
Procedure
● Run the display nqa results [ test-instance adminName testName ]
command on the initiator to view Ethernet service activation test results.
● Run the display nqa reflector [ reflector-id ] command on the reflector to
check reflector information.
----End
Usage Scenario
The result table of NQA test instances records the results of each test type. A
maximum of 5000 test result records are supported in total. When this limit is
reached, each new test result overwrites the earliest one, and if the NMS cannot
poll the results in time, they are lost. You can configure the device to send test
results to an FTP server, either when the results reach the capacity of the local
storage or periodically. This effectively prevents the loss of test results and
facilitates network management based on the analysis of results collected at
different times.
Pre-configuration Tasks
Before configuring test results to be sent to the FTP server, complete the following
tasks:
● Configure the FTP server.
● Configure a reachable route between the NQA client and the FTP server.
● Configure a test instance.
Data Preparation
Before configuring test results to be sent to the FTP server, you need the following
data.
No. Data
2 User name and password used for logging in to the FTP server
Context
Perform the following operations on the NQA client.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run nqa upload test-type { icmp | icmpjitter | jitter | udp } ftp ipv4 ipv4-address
file-name file-name [ vpn-instance vpn-instance-name ] [ port port-number ]
username user-name password password [ interval upload-interval ] [ retry
retry-times ]
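For example, based on the syntax above, the following command (with a hypothetical server address, file name, and credentials) configures jitter test results to be uploaded to an FTP server:
[~HUAWEI] nqa upload test-type jitter ftp ipv4 10.2.1.10 file-name nqa_jitter.txt username ftpuser password Ftp@1234
[*HUAWEI] commit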
----End
Context
Perform the following operations on the NQA client.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run nqa test-instance admin-name test-name
The NQA view is displayed.
Step 3 Run test-type { icmp | icmpjitter | jitter | udp }
A test instance type is set.
Step 4 Run destination-address ipv4 destAddress
A destination address is configured.
Step 5 (Optional) Run destination-port port-number
A destination port number is configured.
Step 6 Run start
An NQA test instance is started.
An NQA test instance can be started immediately, at a specified time, or after a
specified delay.
● Run start now [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second
| hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ]
The test instance is started immediately.
● Run start at [ yyyy/mm/dd ] hh:mm:ss [ end { at [ yyyy/mm/dd ] hh:mm:ss |
delay { seconds second | hh:mm:ss } | lifetime { seconds second |
hh:mm:ss } } ]
The test instance is started at a specified time.
● Run start delay { seconds second | hh:mm:ss } [ end { at [ yyyy/mm/dd ]
hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second |
hh:mm:ss } } ]
The test instance is started after a specified delay.
● Run start daily hh:mm:ss to hh:mm:ss [ begin { yyyy/mm/dd | yyyy-mm-
dd } ] [ end { yyyy/mm/dd | yyyy-mm-dd } ]
The test instance is started at a specified time every day.
----End
Prerequisites
Test results have been configured to be sent to the FTP server.
Procedure
Step 1 Run the display nqa upload file-info command to check information about files
that a device is uploading and has attempted to upload onto a server.
----End
Procedure
● Run the display nqa support-test-type command to check the supported test
types.
<HUAWEI> display nqa support-test-type
NQA support test type information:
----------------------------------------------------
Type Description
tcp TCP type NQA test
udp UDP type NQA test
jitter JITTER type NQA test
icmp ICMP type NQA test
snmp SNMP type NQA test
trace TRACE type NQA test
lspping LSPPING type NQA test
lsptrace LSPTRACE type NQA test
dns DNS type NQA test
pwe3ping PWE3PING type NQA test
pwe3trace PWE3TRACE type NQA test
macping MACPING type NQA test
lspjitter LSPJITTER type NQA test
----End
Prerequisites
Run the following commands in the NQA view to stop an NQA test instance.
Procedure
Step 1 Run system-view
Step 2 Run nqa test-instance admin-name test-name
Step 3 Run stop
The NQA test instance is stopped.
Step 4 Run commit
----End
Prerequisites
Run the following commands in the NQA view to restart an NQA test instance.
Context
NOTICE
Procedure
Step 1 Run system-view
----End
Procedure
● Run display cfm statistics lblt
----End
Prerequisites
Run the following commands in the user view to delete NQA statistics.
Context
NOTICE
Statistics cannot be restored after being deleted. Exercise caution when running
the reset command.
Procedure
Step 1 Run reset lspv statistics
----End
Prerequisites
Before running the clear-records command, run the stop and commit commands
to stop the NQA test instance first.
Context
NOTICE
Test records cannot be restored after being deleted. Exercise caution when running
the clear-records command.
Procedure
Step 1 Run system-view
Historical records and result records of the NQA test instance are deleted.
----End
Networking Requirements
On the network shown in Figure 4-4, Device A needs to access host A using the
domain name Server.com. A DNS test instance can be configured on Device A to
measure the performance of interaction between Device A and the DNS server.
Figure 4-4 Networking diagram for detecting the DNS resolution speed
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure reachable routes between Device A, the DNS server, and host A at
the network layer.
2. Configure a DNS test instance on Device A and start the test instance to
detect the DNS resolution speed on an IP network.
Data Preparation
To complete the configuration, you need the following data:
● IP address of the DNS server
● Domain name and IP address of host A
Procedure
Step 1 Configure reachable routes between Device A, the DNS server, and host A at the
network layer. (Omitted)
Step 2 Configure a DNS test instance and start it.
<HUAWEI> system-view
[~HUAWEI] sysname DeviceA
[*HUAWEI] commit
[~DeviceA] dns resolve
[*DeviceA] dns server 10.3.1.1
[*DeviceA] nqa test-instance admin dns
Step 3 Verify the test result. Min/Max/Average Completion Time indicates the delay
between the time when a DNS request packet is sent and the time when a DNS
response packet is received. In this example, the delay is 208 ms.
[~DeviceA-nqa-admin-dns] display nqa results test-instance admin dns
NQA entry(admin, dns) :testflag is inactive ,testtype is dns
1 . Test 1 result The test is finished
Send operation times: 1 Receive response times: 1
Completion:success RTD OverThresholds number:0
Attempts number:1 Drop operation number:0
Disconnect operation number:0 Operation timeout number:0
System busy operation number:0 Connection fail number:0
Operation sequence errors number:0 RTT Status errors number:0
Destination ip address:10.3.1.1
Min/Max/Average Completion Time: 208/208/208 Sum/Square-Sum Completion Time: 208/43264
Last Good Probe Time: 2018-01-25 09:18:22.6
Lost packet ratio: 0 %
----End
Configuration Files
Device A configuration file
#
sysname DeviceA
#
dns resolve
dns server 10.3.1.1
#
nqa test-instance admin dns
test-type dns
destination-address url Server.com
dns-server ipv4 10.3.1.1
#
return
Networking Requirements
On the network shown in Figure 4-5, the headquarters and a subsidiary of a
company often need to use TCP to exchange files with each other. The time taken
to respond to a TCP transmission request must be less than 800 ms. The NQA TCP
test can be configured to measure the TCP response time between Device A and
Device D that are connected to the IP backbone network.
Figure 4-5 Networking diagram for an NQA TCP test to measure the response
time on an IP network
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure Device D as the NQA client and Device A as the NQA server, and
create a TCP test instance.
2. Configure the test instance to start at 10:00 o'clock every day and start the
test instance.
Data Preparation
To complete the configuration, you need the following data:
● IP addresses of Device A and Device D that are connected to the IP backbone
network
● Number of the port used to monitor TCP services
Procedure
Step 1 Configure the NQA server Device A.
<DeviceA> system-view
[~DeviceA] nqa-server tcpconnect 10.1.1.1 4000
[*DeviceA] commit
Step 2 Configure the NQA client Device D. Create a TCP test instance. Set the destination
IP address to the IP address of Device A.
<DeviceD> system-view
[*DeviceD] nqa test-instance admin tcp
[*DeviceD-nqa-admin-tcp] test-type tcp
[*DeviceD-nqa-admin-tcp] destination-address ipv4 10.1.1.1
[*DeviceD-nqa-admin-tcp] destination-port 4000
[*DeviceD-nqa-admin-tcp] commit
[*DeviceD-nqa-admin-tcp] commit
Step 4 Verify the test result. Run the display nqa results test-instance admin tcp
command on Device D. The command output shows that the TCP response time is
less than 800 ms.
[~DeviceD-nqa-admin-tcp] display nqa results test-instance admin tcp
NQA entry(admin, tcp) :testflag is active ,testtype is tcp
1 . Test 1 result The test is finished
Send operation times: 3 Receive response times: 3
Completion:success RTD OverThresholds number:0
Attempts number:1 Drop operation number:0
Disconnect operation number:0 Operation timeout number:0
System busy operation number:0 Connection fail number:0
Operation sequence errors number:0 RTT Stats errors number:0
Destination ip address:10.1.1.1
Min/Max/Average Completion Time: 600/610/603
Sum/Square-Sum Completion Time: 1810/1092100
Last Good Probe Time: 2011-01-16 02:59:41.6
Lost packet ratio: 0 %
Step 5 Configure the test instance to start at 10:00 o'clock every day.
[~DeviceD-nqa-admin-tcp] stop
[*DeviceD-nqa-admin-tcp] start daily 10:00:00 to 10:30:00
[*DeviceD-nqa-admin-tcp] commit
----End
Configuration Files
● Device A configuration file
#
sysname DeviceA
#
nqa-server tcpconnect 10.1.1.1 4000
#
isis 1
network-entity 00.0000.0000.0001.00
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
isis enable 1
#
return
Networking Requirements
On the network shown in Figure 4-6, the headquarters and a subsidiary of a
company often need to use VoIP to hold teleconferences. The round-trip delay
time must be less than 250 ms, and the jitter time must be less than 20 ms. The
UDP jitter test can be configured to simulate VoIP services.
Figure 4-6 Networking diagram for an NQA UDP jitter test to monitor the VoIP
service jitter time
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure Device A as the NQA server and Device D as the NQA client, and
create a UDP jitter test instance on Device D.
2. Start the test instance.
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Configure the NQA server Device A.
<DeviceA> system-view
[~DeviceA] nqa-server udpecho 10.1.1.1 4000
[*DeviceA] commit
2. Create a UDP jitter test instance and set the destination IP address to the IP
address of Device A.
[*DeviceD] nqa test-instance admin udpjitter
[*DeviceD-nqa-admin-udpjitter] test-type jitter
[*DeviceD-nqa-admin-udpjitter] destination-address ipv4 10.1.1.1
[*DeviceD-nqa-admin-udpjitter] destination-port 4000
Step 4 Verify the test result. Run the display nqa results test-instance admin udpjitter
command on Device D. The command output shows that the round-trip delay
time is less than 250 ms, and the jitter time is less than 20 ms.
[~DeviceD-nqa-admin-udpjitter] display nqa results test-instance admin udpjitter
NQA entry(admin, udpjitter) :testflag is active ,testtype is jitter
----End
Configuration Files
● Device A configuration file
#
sysname DeviceA
#
nqa-server udpecho 10.1.1.1 4000
#
isis 1
network-entity 00.0000.0000.0001.00
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
isis enable 1
#
return
Networking Requirements
On the MPLS network shown in Figure 4-7, Device A and Device C are PEs. An
NQA LSP ping test can be configured to periodically monitor the connectivity
between these two PEs.
Figure 4-7 Networking diagram for an NQA LSP ping test to monitor MPLS
network connectivity
Configuration Roadmap
The configuration roadmap is as follows:
1. Create an LSP ping test instance on Device A.
2. Start the test instance.
Data Preparation
To complete the configuration, you need the IP addresses of Device A and Device
C.
Procedure
Step 1 Create an LSP ping test instance.
<DeviceA> system-view
[~DeviceA] nqa test-instance admin lspping
[*DeviceA-nqa-admin-lspping] test-type lspping
[*DeviceA-nqa-admin-lspping] lsp-type ipv4
[*DeviceA-nqa-admin-lspping] destination-address ipv4 3.3.3.9 lsp-masklen 32
Step 4 Configure the test instance to start at 10:00 o'clock every day.
[*DeviceA-nqa-admin-lspping] stop
[*DeviceA-nqa-admin-lspping] start daily 10:00:00 to 10:30:00
[*DeviceA-nqa-admin-lspping] commit
----End
Configuration Files
● Device A configuration file
#
sysname DeviceA
#
mpls lsr-id 1.1.1.9
#
mpls
#
mpls ldp
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 10.1.1.0 0.0.0.255
network 1.1.1.9 0.0.0.0
#
nqa test-instance admin lspping
test-type lspping
destination-address ipv4 3.3.3.9 lsp-masklen 32
start daily 10:00:00 to 10:30:00
#
return
Networking Requirements
Figure 4-8 illustrates a network on which PW connectivity between U-PE1 and U-PE2 needs to be monitored. CE-A and CE-B run PPP to access U-PE1 and U-PE2, respectively.
U-PE1 and U-PE2 are connected over an MPLS backbone network. A dynamic multi-segment PW between U-PE1 and U-PE2 is established over a label switched path (LSP), with an S-PE functioning as the transit node.
The PWE3 ping function can be configured to monitor the connectivity of the
multi-segment PW between U-PE1 and U-PE2.
Configuration Roadmap
The configuration roadmap is as follows:
1. Run an IGP on the backbone network to implement the connectivity of
routers on the backbone network.
2. Enable basic MPLS functions over the backbone and set up LSP tunnels.
Establish remote MPLS Label Distribution Protocol (LDP) peer relationship
between U-PE1 and S-PE, and between U-PE2 and S-PE.
3. Set up an MPLS Layer 2 virtual circuit (L2VC) connection between U-PEs.
4. Set up a switched PW on the switching node S-PE.
5. Configure the PWE3 Ping test on the multi-segment PW on U-PE1.
Data Preparation
To complete the configuration, you need the following data:
● Different L2VC IDs of U-PE1 and U-PE2
● MPLS LSR IDs of U-PE1, S-PE, and U-PE2
● IP address of the peer
● Encapsulation type of the switched PW
● Name of the PW template configured on the U-PEs and parameters of the
PW template
Procedure
Step 1 Configure a dynamic multi-segment PW.
Configure a dynamic multi-segment PW on the MPLS backbone network.
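The PWE3 ping can then be issued on U-PE1 to check the multi-segment PW. The following is a hedged sketch based on the ping vc syntax described later in this guide and on the S-PE switch-l2vc configuration shown below; verify the VC type, local VC ID, peer address, and peer VC ID against the actual configuration before use:
<U-PE1> ping vc ppp 100 control-word remote 5.5.5.9 200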
----End
Configuration Files
● CE-A configuration file
#
sysname CE-A
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 2.2.2.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 2.2.2.9 0.0.0.0
network 10.1.1.0 0.0.0.255
network 10.2.1.0 0.0.0.255
#
return
● S-PE configuration file
#
sysname S-PE
#
mpls lsr-id 3.3.3.9
mpls
#
mpls l2vpn
#
mpls switch-l2vc 5.5.5.9 200 between 1.1.1.9 100 encapsulation ppp
#
mpls ldp
#
mpls ldp remote-peer 1.1.1.9
remote-ip 1.1.1.9
#
mpls ldp remote-peer 5.5.5.9
remote-ip 5.5.5.9
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.2.1.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.3.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack0
undo shutdown
ip address 3.3.3.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 3.3.3.9 0.0.0.0
network 10.2.1.0 0.0.0.255
network 10.3.1.0 0.0.0.255
#
return
● P2 configuration file
#
sysname P2
#
mpls lsr-id 4.4.4.9
mpls
#
mpls ldp
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.3.1.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.4.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack0
undo shutdown
ip address 4.4.4.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 4.4.4.9 0.0.0.0
network 10.3.1.0 0.0.0.255
network 10.4.1.0 0.0.0.255
#
Networking Requirements
A generalflow test needs to be configured to monitor the performance of an
Ethernet virtual connection (EVC) between Device A and Device B on the network
shown in Figure 4-9.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure Device B as the reflector.
2. Configure Device A as the initiator and conduct throughput, latency, and packet loss rate tests.
Data Preparation
To complete the configuration, you need the following data:
● On the reflector (Device B): the reflector interface (GE 1/0/1), the reflection MAC address, and the VLAN ID
● On the initiator (Device A): the destination MAC address, the VLAN ID, and the traffic parameters used in each test (rate, packet size, precision, fail ratio, test duration, and interval)
Procedure
Step 1 Configure reachable Layer 2 links between the initiator and reflector and add
Layer 2 interfaces to VLAN 10. For configuration details, see "Configuration Files"
in this section.
Step 2 Configure the reflector.
<DeviceB> system-view
[~DeviceB] nqa reflector 1 interface gigabitethernet 1/0/1 mac 00-e0-fc-12-34-56 vlan 10
Step 3 Configure the initiator to conduct a throughput test and check the test results.
<DeviceA> system-view
[~DeviceA] nqa test-instance admin throughput
[*DeviceA-nqa-admin-throughput] test-type generalflow
[*DeviceA-nqa-admin-throughput] measure throughput
[*DeviceA-nqa-admin-throughput] destination-address mac 00-e0-fc-12-34-56
[*DeviceA-nqa-admin-throughput] forwarding-simulation inbound-interface gigabitethernet 1/0/1
[*DeviceA-nqa-admin-throughput] rate 10000 100000
[*DeviceA-nqa-admin-throughput] interval seconds 5
[*DeviceA-nqa-admin-throughput] precision 1000
[*DeviceA-nqa-admin-throughput] fail-ratio 81
[*DeviceA-nqa-admin-throughput] datasize 70
[*DeviceA-nqa-admin-throughput] duration 100
[*DeviceA-nqa-admin-throughput] vlan 10
[*DeviceA-nqa-admin-throughput] start now
[*DeviceA-nqa-admin-throughput] commit
[~DeviceA-nqa-admin-throughput] display nqa results test-instance admin throughput
NQA entry(admin, throughput) :testflag is inactive ,testtype is generalflow
1 . Test 1 result: The test is finished, test mode is throughput
ID Size Throughput(Kbps) Precision(Kbps) LossRatio Completion
1 70 100000 1000 0.00% success
Step 4 Configure the initiator to conduct a latency test and check the test results.
[*DeviceA] nqa test-instance admin delay
[*DeviceA-nqa-admin-delay] test-type generalflow
[*DeviceA-nqa-admin-delay] measure delay
[*DeviceA-nqa-admin-delay] destination-address mac 00-e0-fc-12-34-56
[*DeviceA-nqa-admin-delay] forwarding-simulation inbound-interface gigabitethernet 1/0/1
[*DeviceA-nqa-admin-delay] datasize 64
[*DeviceA-nqa-admin-delay] rate 99000
[*DeviceA-nqa-admin-delay] interval seconds 5
[*DeviceA-nqa-admin-delay] duration 100
[*DeviceA-nqa-admin-delay] vlan 10
[*DeviceA-nqa-admin-delay] start now
[*DeviceA-nqa-admin-delay] commit
[~DeviceA-nqa-admin-delay] display nqa results test-instance admin delay
NQA entry(admin, delay) :testflag is inactive ,testtype is generalflow
1 . Test 1 result: The test is finished, test mode is delay
ID Size Min/Max/Avg RTT(us) Min/Max/Avg Jitter(us) Completion
1 64 1/12/5 2/15/8 finished
Step 5 Configure the initiator to conduct a packet loss rate test and check the test results.
[*DeviceA] nqa test-instance admin loss
----End
Configuration Files
● Configuration file of Device A
#
sysname DeviceA
#
vlan 10
#
interface GigabitEthernet 1/0/1
portswitch
undo shutdown
port link-type trunk
port trunk allow-pass vlan 10
#
interface GigabitEthernet 1/0/2
portswitch
undo shutdown
port link-type trunk
port trunk allow-pass vlan 10
#
nqa test-instance admin throughput
test-type generalflow
duration 100
measure throughput
fail-ratio 81
destination-address mac 00e0-fc12-3456
datasize 70
rate 10000 100000
precision 1000
forwarding-simulation inbound-interface GigabitEthernet1/0/1
vlan 10
nqa test-instance admin loss
test-type generalflow
duration 100
measure loss
destination-address mac 00e0-fc12-3456
datasize 64
rate 99000
forwarding-simulation inbound-interface GigabitEthernet1/0/1
vlan 10
nqa test-instance admin delay
test-type generalflow
duration 100
measure delay
interval seconds 5
destination-address mac 00e0-fc12-3456
datasize 64
rate 99000
forwarding-simulation inbound-interface GigabitEthernet1/0/1
vlan 10
#
return
Usage Scenario
A generalflow test needs to be configured to monitor the performance of the Ethernet network between Device A and the IP gateway Device B shown in Figure 4-10.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure reflector Device A.
2. Configure initiator Device B and monitor the latency time.
Data Preparation
To complete the configuration, you need the following data:
● On reflector Device A: Set the simulated IP address to 10.1.1.1 (CE's IP
address) and the reflector interface to GE 1/0/1.
● On initiator Device B:
– Destination IP address: 10.1.1.1, the IP address of the CE connected to Device A's GE 1/0/1
– Source IP address: an address that resides on the same network segment
as the IP address of the initiator
– Latency test parameters: packet rate (99000 Kbit/s), packet loss ratio
(81%), test duration (100s), and interval (5s) at which the initiator sends
test packets
Procedure
Step 1 Configure Layer 2 devices so that Layer 3 routes between the CE and Device B are
reachable. For configuration details, see "Configuration Files" in this section.
Step 2 Configure the reflector.
[*DeviceA] vlan 10
[*DeviceA-vlan10] commit
[~DeviceA-vlan10] quit
[~DeviceA] nqa reflector 1 interface GigabitEthernet 1/0/1 ipv4 10.1.1.1 vlan 10
[*DeviceA] commit
Step 3 Configure the initiator to conduct a latency test and view test results.
[*DeviceB] vlan 10
[*DeviceB-vlan10] commit
[~DeviceB-vlan10] quit
[~DeviceB] interface gigabitethernet 1/0/2.1
[*DeviceB-GigabitEthernet1/0/2.1] vlan-type dot1q 10
[*DeviceB-GigabitEthernet1/0/2.1] ip address 10.1.1.2 24
[*DeviceB-GigabitEthernet1/0/2.1] quit
[*DeviceB] arp static 10.1.1.1 00e0-fc12-3456 vid 10 interface GigabitEthernet 1/0/2.1
[*DeviceB] nqa test-instance admin delay
[*DeviceB-nqa-admin-delay] test-type generalflow
[*DeviceB-nqa-admin-delay] measure delay
[*DeviceB-nqa-admin-delay] destination-address ipv4 10.1.1.1
[*DeviceB-nqa-admin-delay] source-address ipv4 10.1.1.2
[*DeviceB-nqa-admin-delay] source-interface gigabitethernet 1/0/2.1
[*DeviceB-nqa-admin-delay] rate 99000
[*DeviceB-nqa-admin-delay] interval seconds 5
[*DeviceB-nqa-admin-delay] datasize 64
[*DeviceB-nqa-admin-delay] duration 100
[*DeviceB-nqa-admin-delay] start now
[*DeviceB-nqa-admin-delay] commit
[~DeviceB-nqa-admin-delay] display nqa results test-instance admin delay
NQA entry(admin, delay) :testflag is inactive ,testtype is generalflow
1 . Test 1 result: The test is finished, test mode is delay
ID Size Min/Max/Avg RTT(us) Min/Max/Avg Jitter(us) Completion
1 64 1/12/5 2/15/8 finished
----End
Configuration Files
● Configuration file of Device A
#
sysname DeviceA
#
vlan 10
#
nqa reflector 1 interface GigabitEthernet 1/0/1 ipv4 10.1.1.1 vlan 10
#
interface GigabitEthernet 1/0/1
portswitch
undo shutdown
port link-type trunk
port trunk allow-pass vlan 10
#
interface GigabitEthernet 1/0/2
portswitch
undo shutdown
port link-type trunk
port trunk allow-pass vlan 10
#
return
● Configuration file of Device B
#
sysname DeviceB
#
nqa test-instance admin delay
test-type generalflow
destination-address ipv4 10.1.1.1
source-address ipv4 10.1.1.2
duration 100
measure delay
interval seconds 5
datasize 64
rate 99000
source-interface GigabitEthernet 1/0/2.1
#
return
Networking Requirements
Figure 4-11 shows how to check whether the Ethernet frame transmission performance between Device B and Device C complies with the SLA.
Configuration Roadmap
1. Configure a reflector (Device C) and flow-based traffic filtering with the
reflection port being set to GE1/0/1.
2. Configure an initiator (Device B) as well as configuration and performance
tests.
Data Preparation
To complete the configuration, you need the following data:
● Configurations of the reflector (Device C): The MAC address of GE1/0/1 on
Device D connected to UNI B is 2-2-2, and the reflector is configured to
implement reflection based on flows.
● Configurations of the initiator (Device B)
– Service flow configurations and characteristics:
▪ Destination MAC address: 2-2-2, that is, the MAC address of GE1/0/1
on Device D connected to UNI B
▪ Source MAC address: 1-1-1, that is, the MAC address of GE1/0/1 on
Device A connected to UNI A
Procedure
Step 1 Configure a reachable link between the initiator and reflector and add Layer 2
interfaces to VLAN 10.
Step 2 Configure the reflector.
[*DeviceC] nqa test-flow 1
[*DeviceC-nqa-testflow-1] vlan 10
[*DeviceC-nqa-testflow-1] udp destination-port 1234
[*DeviceC-nqa-testflow-1] udp source-port 5678
[*DeviceC-nqa-testflow-1] traffic-type mac destination 2-2-2
[*DeviceC-nqa-testflow-1] traffic-type mac source 1-1-1
[*DeviceC-nqa-testflow-1] quit
[*DeviceC] nqa reflector 1 interface GigabitEthernet 1/0/1 test-flow 1 exchange-port agetime 0
[*DeviceC] commit
Step 3 Configure the initiator to perform configuration and performance tests and view
test results.
[*DeviceB] nqa test-flow 1
[*DeviceB-nqa-testflow-1] vlan 10
[*DeviceB-nqa-testflow-1] udp destination-port 1234
[*DeviceB-nqa-testflow-1] udp source-port 5678
[*DeviceB-nqa-testflow-1] cir simple-test enable
[*DeviceB-nqa-testflow-1] bandwidth cir 10000 eir 10000
[*DeviceB-nqa-testflow-1] sac flr 1000 ftd 1000 fdv 1000
[*DeviceB-nqa-testflow-1] traffic-type mac destination 2-2-2
[*DeviceB-nqa-testflow-1] traffic-type mac source 1-1-1
[*DeviceB-nqa-testflow-1] traffic-policing test enable
[*DeviceB-nqa-testflow-1] color-mode 8021p green 0 7 yellow 0 7
[*DeviceB-nqa-testflow-1] quit
[*DeviceB] nqa test-instance admin ethernet
[*DeviceB-nqa-admin-ethernet] test-type ethernet-service
[*DeviceB-nqa-admin-ethernet] forwarding-simulation inbound-interface GigabitEthernet 1/0/1
[*DeviceB-nqa-admin-ethernet] test-flow 1
[*DeviceB-nqa-admin-ethernet] start now
[*DeviceB-nqa-admin-ethernet] commit
[~DeviceB-nqa-admin-ethernet] display nqa results test-instance admin ethernet
NQA entry(admin, ethernet) :testflag is inactive ,testtype is ethernet-service
1 . Test 1 result The test is finished
Status : Pass
Test-flow number : 1
Mode : Round-trip
Last step : Performance-test
Estimated total time :6
Real test time :6
1 . Configuration-test
Test-flow 1, CIR simple test
Begin : 2014-06-25 16:22:45.8
End : 2014-06-25 16:22:48.8
Status : Pass
Min/Max/Mean IR(kbit/s) : 9961/10075/10012
Min/Max/Mean FTD(us) : 99/111/104
Min/Max/Mean FDV(us) : 0/7/3
FL Count/FLR : 0/0.000%
Disorder packets :0
Unavail Count/AVAIL : 0/0.000%
Test-flow 1, CIR/EIR test, Green
Begin : 2014-06-25 16:23:15.8
End : 2014-06-25 16:23:18.8
Status : Pass
Min/Max/Mean IR(kbit/s) : 9979/10054/10012
Min/Max/Mean FTD(us) : 101/111/105
Min/Max/Mean FDV(us) : 0/10/3
FL Count/FLR : 0/0.000%
Disorder packets :0
Unavail Count/AVAIL : 0/0.000%
Test-flow 1, CIR/EIR test, Yellow
Begin : 2014-06-25 16:23:15.8
End : 2014-06-25 16:23:18.8
Status : --
Min/Max/Mean IR(kbit/s) : 9979/10057/10013
Min/Max/Mean FTD(us) : 98/111/104
Min/Max/Mean FDV(us) : 1/11/5
FL Count/FLR : 0/0.000%
Disorder packets :0
Unavail Count/AVAIL : 0/0.000%
Test-flow 1, Traffic policing test, Green
Begin : 2014-06-25 16:23:45.8
End : 2014-06-25 16:23:48.8
Status : Pass
Min/Max/Mean IR(kbit/s) : 10039/10054/10045
Min/Max/Mean FTD(us) : 96/110/104
Min/Max/Mean FDV(us) : 1/9/4
FL Count/FLR : 0/0.000%
Disorder packets :0
Unavail Count/AVAIL : 0/0.000%
Test-flow 1, Traffic policing test, Yellow
Begin : 2014-06-25 16:23:45.8
----End
Configuration Files
● Device A configuration file
#
sysname DeviceA
#
vlan batch 10
#
interface GigabitEthernet 1/0/1
portswitch
undo shutdown
port link-type trunk
port trunk allow-pass vlan 10
#
return
Networking Requirements
The network shown in Figure 4-12 requires a check of the Ethernet frame
transmission between DeviceB and DeviceC to determine whether the
performance parameters meet SLAs.
● In this example, Interface 1 and Interface 2 stand for GE 1/0/1 and GE 1/0/2,
respectively.
Configuration Roadmap
1. Configure Device C as the reflector and set filter criteria based on flows.
2. Configure Device B as the initiator and execute configuration tests and
performance tests.
Data Preparation
To complete the configuration, you need the following data:
● Configurations on the reflector (DeviceC)
– Service flow configurations and characteristics:
▪ Destination MAC address: 4-4-4, that is, the MAC address of GE 1/0/1
on DeviceD connected to UNI B
▪ Source MAC address: 3-3-3, that is, the MAC address of UNI B
▪ Destination MAC address: 2-2-2, that is, the MAC address of UNI A
▪ Source MAC address: 1-1-1, that is, the MAC address of GE 1/0/1 on
DeviceA connected to UNI A
The link between the two user networks must be reachable. Otherwise, static ARP entries
must be configured.
Procedure
Step 1 Configure the Layer 3 link reachability for the initiator and reflector.
Step 3 Configure the initiator to initiate a configuration test and a performance test and
view the test results.
[*DeviceB] nqa test-flow 1
[*DeviceB-nqa-testflow-1] bandwidth cir 500000 eir 20000
[*DeviceB-nqa-testflow-1] sac flr 1000 ftd 10000 fdv 10000000
[*DeviceB-nqa-testflow-1] traffic-type mac destination 2-2-2
[*DeviceB-nqa-testflow-1] traffic-type mac source 1-1-1
[*DeviceB-nqa-testflow-1] traffic-type ipv4 destination 10.1.3.2
[*DeviceB-nqa-testflow-1] traffic-type ipv4 source 10.1.1.1
[*DeviceB-nqa-testflow-1] traffic-policing test enable
[*DeviceB-nqa-testflow-1] color-mode 8021p green 0 7 yellow 0 7
[*DeviceB-nqa-testflow-1] quit
[*DeviceB] nqa test-instance admin ethernet
[*DeviceB-nqa-admin-ethernet] test-type ethernet-service
[*DeviceB-nqa-admin-ethernet] forwarding-simulation inbound-interface GigabitEthernet 1/0/1.1
[*DeviceB-nqa-admin-ethernet] test-flow 1
[*DeviceB-nqa-admin-ethernet] start now
[*DeviceB-nqa-admin-ethernet] commit
[~DeviceB-nqa-admin-ethernet] display nqa results test-instance admin ethernet
NQA entry(admin, ethernet) :testflag is inactive ,testtype is ethernet-service
1 . Test 1 result The test is finished
Status : Pass
Test-flow number : 1
Mode : Round-trip
Last step : Performance-test
Estimated total time :6
Real test time :6
1 . Configuration-test
Test-flow 1, CIR simple test
Begin : 2014-06-25 16:22:45.8
End : 2014-06-25 16:22:48.8
Status : Pass
Min/Max/Mean IR(kbit/s) : 9961/10075/10012
Min/Max/Mean FTD(us) : 99/111/104
Min/Max/Mean FDV(us) : 0/7/3
FL Count/FLR : 0/0.000%
Disorder packets :0
Unavail Count/AVAIL : 0/0.000%
Test-flow 1, CIR/EIR test, Green
Begin : 2014-06-25 16:23:15.8
End : 2014-06-25 16:23:18.8
Status : Pass
Min/Max/Mean IR(kbit/s) : 9979/10054/10012
Min/Max/Mean FTD(us) : 101/111/105
Min/Max/Mean FDV(us) : 0/10/3
FL Count/FLR : 0/0.000%
Disorder packets :0
Unavail Count/AVAIL : 0/0.000%
Test-flow 1, CIR/EIR test, Yellow
Begin : 2014-06-25 16:23:15.8
End : 2014-06-25 16:23:18.8
Status : --
Min/Max/Mean IR(kbit/s) : 9979/10057/10013
Min/Max/Mean FTD(us) : 98/111/104
Min/Max/Mean FDV(us) : 1/11/5
FL Count/FLR : 0/0.000%
Disorder packets :0
Unavail Count/AVAIL : 0/0.000%
Test-flow 1, Traffic policing test, Green
Begin : 2014-06-25 16:23:45.8
End : 2014-06-25 16:23:48.8
Status : Pass
Min/Max/Mean IR(kbit/s) : 10039/10054/10045
Min/Max/Mean FTD(us) : 96/110/104
Min/Max/Mean FDV(us) : 1/9/4
FL Count/FLR : 0/0.000%
Disorder packets :0
Unavail Count/AVAIL : 0/0.000%
Test-flow 1, Traffic policing test, Yellow
Begin : 2014-06-25 16:23:45.8
End : 2014-06-25 16:23:48.8
Status : --
Min/Max/Mean IR(kbit/s) : 12544/12566/12554
Min/Max/Mean FTD(us) : 101/111/105
Min/Max/Mean FDV(us) : 1/8/3
FL Count/FLR : 0/0.000%
Disorder packets :0
Unavail Count/AVAIL : 0/0.000%
2 . Performance-test
Test-flow 1, Performance-test
Begin : 2014-06-25 16:24:15.8
End : 2014-06-25 16:39:15.8
Status : Pass
Min/Max/Mean IR(kbit/s) : 9888/10132/10004
Min/Max/Mean FTD(us) : 101/111/105
Min/Max/Mean FDV(us) : 0/8/2
FL Count/FLR : 0/0.000%
Disorder packets :0
Unavail Count/AVAIL : 0/0.000%
----End
Configuration Files
● Device B configuration file
#
sysname DeviceB
#
interface GigabitEthernet 1/0/1
undo shutdown
#
interface GigabitEthernet 1/0/1.1
ip address 10.1.1.2 255.255.255.0
#
interface GigabitEthernet 1/0/2
undo shutdown
ip address 10.1.2.1 255.255.255.0
#
nqa test-flow 1
bandwidth cir 500000 eir 20000
sac flr 1000 ftd 10000 fdv 10000000
traffic-type mac destination 2-2-2
traffic-type mac source 1-1-1
traffic-type ipv4 destination 10.1.3.2
traffic-type ipv4 source 10.1.1.1
traffic-policing test enable
color-mode 8021p green 0 7 yellow 0 7
#
nqa test-instance admin ethernet
test-type ethernet-service
forwarding-simulation inbound-interface GigabitEthernet 1/0/1.1
test-flow 1
#
return
Networking Requirements
An Ethernet service activation test can be configured to help users learn the
performance and running status of existing deployed networks before rolling out
services. Information obtained can help make business proposals and promote
services.
In Figure 4-13, the EVPN VXLAN to be tested resides on UNIs on the network side.
Device B is configured as the initiator, and Device C is configured as the reflector.
A test instance is configured to test whether Ethernet frame transmission
performance between the initiator and reflector meets the SLA.
In this example, the destination MAC address is specified in a test instance to check the
network performance between CEs on both ends of a Layer 2 EVPN. In a Layer 3 scenario,
the destination IP address must be specified.
Interface 1 and interface 2 stand for GE 1/0/0 and GE 2/0/0, respectively.
Configuration Roadmap
The roadmap is as follows:
1. Configure Device C as the reflector.
2. Configure Device B as the initiator to simulate service traffic and start the Ethernet service activation test.
Data Preparation
To complete the configuration, you need the following data:
● Bandwidth profile: 10000 kbit/s for both the CIR and EIR
● Service acceptance criteria: 1000/100000 for the FLR; 1000 microseconds for both the FTD and the FDV
Procedure
Step 1 Assign an IP address and a loopback address to each interface.
For configuration details, see Configuration Files in this section.
Step 2 Configure an IGP on the backbone network. In this example, OSPF is used.
For configuration details, see Configuration Files in this section.
Step 3 Configure a VXLAN tunnel between Device B and Device C.
For configuration roadmap, see Configuring VXLAN. For configuration details, see
Configuration Files in this section.
After a VXLAN tunnel is established, run the display vxlan tunnel command on
Device B or Device C to check VXLAN tunnel information. The following example
uses the command output on Device B.
[~DeviceB] display vxlan tunnel
Number of vxlan tunnel : 1
Tunnel ID Source Destination State Type Uptime
-----------------------------------------------------------------------------------
4026531841 1.1.1.1 2.2.2.2 up dynamic 00:12:56
Step 4 Configure Device A and Device B to communicate and Device C and Device D to
communicate.
# Configure Device B.
[~DeviceB] interface gigabitethernet 2/0/0.1 mode l2
[*DeviceB-GigabitEthernet2/0/0.1] encapsulation dot1q vid 10
[*DeviceB-GigabitEthernet2/0/0.1] bridge-domain 10
[*DeviceB-GigabitEthernet2/0/0.1] commit
[~DeviceB-GigabitEthernet2/0/0.1] quit
Step 6 Configure Device B as the initiator to simulate and send service traffic.
[~DeviceB] nqa test-flow 1
[*DeviceB-nqa-testflow-1] vlan 10
[*DeviceB-nqa-testflow-1] udp destination-port 1234
[*DeviceB-nqa-testflow-1] udp source-port 5678
[*DeviceB-nqa-testflow-1] cir simple-test enable
[*DeviceB-nqa-testflow-1] bandwidth cir 10000 eir 10000
[*DeviceB-nqa-testflow-1] sac flr 1000 ftd 1000 fdv 1000
[*DeviceB-nqa-testflow-1] traffic-type mac destination 00-e0-fc-12-34-67
[*DeviceB-nqa-testflow-1] traffic-type mac source 00-e0-fc-12-34-65
[*DeviceB-nqa-testflow-1] traffic-policing test enable
[*DeviceB-nqa-testflow-1] color-mode 8021p green 0 7 yellow 0 7
[*DeviceB-nqa-testflow-1] commit
[~DeviceB-nqa-testflow-1] quit
Run the display nqa results test-instance admin ethernet command on Device
B. The command output shows that the test status is Pass, which indicates that
the test is successful.
[~DeviceB-nqa-admin-ethernet] display nqa results test-instance admin ethernet
NQA entry(admin, ethernet) :testflag is inactive ,testtype is ethernet-service
1 . Test 1 result The test is finished
Status : Pass
Test-flow number : 1
Mode : Round-trip
Last step : Performance-test
Estimated total time :6
Real test time :6
1 . Configuration-test
Test-flow 1, CIR simple test
Begin : 2014-06-25 16:22:45.8
End : 2014-06-25 16:22:48.8
Status : Pass
Min/Max/Mean IR(kbit/s) : 9961/10075/10012
Min/Max/Mean FTD(us) : 99/111/104
Min/Max/Mean FDV(us) : 0/7/3
FL Count/FLR : 0/0.000%
Disorder packets :0
Unavail Count/AVAIL : 0/0.000%
Test-flow 1, CIR/EIR test, Green
Begin : 2014-06-25 16:23:15.8
End : 2014-06-25 16:23:18.8
Status : Pass
Min/Max/Mean IR(kbit/s) : 9979/10054/10012
Min/Max/Mean FTD(us) : 101/111/105
Min/Max/Mean FDV(us) : 0/10/3
FL Count/FLR : 0/0.000%
Disorder packets :0
Unavail Count/AVAIL : 0/0.000%
Test-flow 1, CIR/EIR test, Yellow
Begin : 2014-06-25 16:23:15.8
----End
Configuration Files
● Device A configuration file
#
sysname DeviceA
#
interface GigabitEthernet2/0/0
undo shutdown
#
interface GigabitEthernet2/0/0.1
vlan-type dot1q 10
ip address 10.100.0.1 255.255.255.0
#
ospf 1
import-route direct
area 0.0.0.0
network 10.100.0.0 0.0.0.255
#
return
● Device B configuration file
#
sysname DeviceB
#
evpn vpn-instance evpna bd-mode
route-distinguisher 1:1
apply-label per-instance
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
ip vpn-instance evpna
ipv4-family
route-distinguisher 1:1
apply-label per-instance
vpn-target 1:1 export-extcommunity evpn
vpn-target 1:1 import-extcommunity evpn
vxlan vni 100
#
bridge-domain 10
vxlan vni 1 split-horizon-mode
evpn binding vpn-instance evpna
#
interface Vbdif10
ip binding vpn-instance evpna
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.0.0.1 255.255.255.0
#
interface GigabitEthernet2/0/0
undo shutdown
#
interface GigabitEthernet2/0/0.1 mode l2
encapsulation dot1q vid 10
rewrite pop single
bridge-domain 10
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
interface Nve1
source 1.1.1.1
vni 1 head-end peer-list protocol bgp
#
bgp 100
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 2.2.2.2 enable
#
l2vpn-family evpn
undo policy vpn-target
peer 2.2.2.2 enable
peer 2.2.2.2 advertise irb
peer 2.2.2.2 advertise encap-type vxlan
#
ospf 1
import-route direct
area 0.0.0.0
network 1.1.1.1 0.0.0.0
network 10.0.0.0 0.0.0.255
#
nqa test-flow 1
vlan 10
udp destination-port 1234
udp source-port 5678
cir simple-test enable
bandwidth cir 10000 eir 10000
sac flr 1000 ftd 1000 fdv 1000
traffic-type mac destination 00e0-fc12-3457
traffic-type mac source 00e0-fc12-3456
traffic-policing test enable
color-mode 8021p green 0 7 yellow 0 7
#
nqa test-instance admin ethernet
test-type ethernet-service
forwarding-simulation inbound-interface GigabitEthernet 2/0/0.1
test-flow 1
#
return
● Device C configuration file
#
sysname DeviceC
#
evpn vpn-instance evpna bd-mode
route-distinguisher 1:1
apply-label per-instance
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
ip vpn-instance evpna
ipv4-family
route-distinguisher 1:1
apply-label per-instance
vpn-target 1:1 export-extcommunity evpn
vpn-target 1:1 import-extcommunity evpn
vxlan vni 100
#
bridge-domain 10
vxlan vni 1 split-horizon-mode
evpn binding vpn-instance evpna
#
interface Vbdif10
ip binding vpn-instance evpna
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.0.0.2 255.255.255.0
#
interface GigabitEthernet2/0/0
undo shutdown
#
interface GigabitEthernet2/0/0.1 mode l2
encapsulation dot1q vid 10
rewrite pop single
bridge-domain 10
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
#
interface Nve1
source 2.2.2.2
vni 1 head-end peer-list protocol bgp
#
bgp 100
peer 1.1.1.1 as-number 100
peer 1.1.1.1 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 1.1.1.1 enable
#
l2vpn-family evpn
undo policy vpn-target
peer 1.1.1.1 enable
peer 1.1.1.1 advertise irb
peer 1.1.1.1 advertise encap-type vxlan
#
ospf 1
import-route direct
area 0.0.0.0
network 2.2.2.2 0.0.0.0
Networking Requirements
As shown in Figure 4-14, Device A serves as the client to perform an ICMP test
and send test results to the FTP server through FTP.
● Interface 1 and interface 2 in this example stand for GE 1/0/0 and GE 2/0/0, respectively.
Figure 4-14 Networking diagram of sending test results to the FTP server
Configuration Roadmap
The configuration roadmap is as follows:
1. Set parameters for configuring test results to be sent to the FTP server.
2. Start a test instance.
3. Verify the configurations.
Data Preparation
To complete the configuration, you need the following data:
● IP address of the FTP server
● User name and password used for logging in to the FTP server
● Name of a file in which test results are saved through FTP
● Interval at which test results are uploaded through FTP
Procedure
Step 1 Set parameters for configuring test results to be sent to the FTP server.
<DeviceA> system-view
[~DeviceA] nqa upload test-type icmp ftp ipv4 10.1.2.8 file-name test1 port 21 username ftp password
huawei-123 interval 600 retry 3
[*DeviceA] commit
FileName : NQA_38ba47987301_icmp_20171014112421710_test1.xml
Status : Uploading
RetryTimes : 3
UploadTime : --
----End
Configuration Files
● Device A configuration file
#
sysname DeviceA
#
interface GigabitEthernet 1/0/0
5 Ping/Tracert
Configuration Precautions
N/A
Context
Ping is a common debugging tool used to test the reachability of devices. It uses
ICMP Echo messages to determine the following:
● Whether the remote device is available.
● Round-trip delay of the communication with the remote host.
● Whether packet loss occurs.
The ping command labels each ICMP Echo Request message with a sequence ID that starts from 1 and is incremented by 1 for each subsequent message. The number of ICMP Echo Request messages to be sent is determined by the device; the default number is 5 and can be changed. If the destination is reachable, it returns an ICMP Echo Reply message for each request, with the sequence number identical to that of the corresponding ICMP Echo Request message.
Perform the following steps in any view on the client:
Procedure
Step 1 Check network connectivity. You can run different commands to display detailed or brief information, as shown in the example at the end of this step.
The jitter and delay observed in ping tests can be large. This is because the ICMP packets used in ping operations need to be processed by the CPUs of devices, and this processing introduces significant delays. The details are as follows:
● To minimize the impact of ping attacks on itself, the NetEngine 8000 reduces the ICMP packet processing priority to the lowest level.
● The NetEngine 8000 uses a distributed processing system. ARP and ICMP packets and routing information are processed on the interface board. In a ping operation, the interface board sends ICMP packets to the CPU for processing, and the CPU then returns the processed ICMP packets to the interface board. Due to their low processing priority, ICMP packets are always transmitted and processed after other packets, so their transmission is delayed.
To resolve ping delay and jitter issues, devices provide the ICMP fast reply function.
After this function is enabled, received ICMP request packets are not sent to the
CPU for processing. Instead, the PFE of the interface board responds to the source
end with ICMP reply packets, greatly shortening the ping delay.
After the undo icmp-reply fast command is run in the system or slot view, the fast ICMP reply function is disabled on the interface board. In this case, the function takes effect on the interface board again only after the icmp-reply fast command is run in both the system and slot views.
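The following is a minimal, illustrative sketch of enabling the fast ICMP reply function and then checking connectivity; the destination address 10.1.1.2 is an assumption for this example, and command output is omitted:
<HUAWEI> system-view
[~HUAWEI] icmp-reply fast
[*HUAWEI] commit
[~HUAWEI] quit
<HUAWEI> ping 10.1.1.2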
----End
Context
Multiple physical interfaces can be bundled into a logical trunk interface, and
these physical interfaces are trunk member interfaces. A specific transmission path
is used by each member interface. The path-specific service parameters, such as
delay time, jitter time, and packet loss ratio, are also different. Therefore, you
cannot determine which member interface is faulty when the quality of services
on a trunk interface deteriorates. To resolve this problem, perform a ping test to
detect each physical link to help locate the faulty link.
The ping test applies when two devices are directly connected through trunk interfaces or Eth-
Trunk sub-interfaces.
Procedure
Step 1 Enable the receive end to monitor Layer 3 trunk member interfaces.
1. Run system-view
The system view is displayed.
----End
Context
The tracert/tracert ipv6 command is used to discover the gateways through which a message passes from the source to the destination. The maximum TTL value for the UDP packets is 255. Each time the source does not receive a reply within the configured timeout, it displays the TTL of the UDP packet as expired and sends another UDP packet with the TTL increased by 1. If the TTL expires 255 times in succession, the source considers that the UDP packet cannot reach the destination and the tracert test fails.
Procedure
● On an IPv4 network:
a. (Optional) Configure the IP address of the loopback interface as the
source IP address of ICMP Port Unreachable or Time Exceeded messages.
i. Run system-view
The system view is displayed.
ii. Run interface loopback loopback-number
A loopback interface is created, and the loopback interface view is
displayed.
iii. (Optional) Run ip binding vpn-instance vpn-instance-name
The interface is bound to a VPN instance.
iv. Run ip icmp { ttl-exceeded | port-unreachable } source-address
The IP address of the loopback interface is configured as the source
IP address of ICMP Port Unreachable or Time Exceeded messages.
v. Run commit
The configuration is committed.
b. Run tracert [ -a source-ip-address | -f initTtl | -m maxTtl | -p port | -q
nqueries | -vpn-instance vpn-instance-name | -w timeout | -v | -name | -
s size | -tos tos-value | -nexthop nexthop-address | -passroute | -i
interface-type interface-number ] * host
The fault position is tested.
The following example uses the tracert command to analyze the
network.
<HUAWEI> tracert -m 10 10.1.1.1
traceroute to 10.1.1.1 (10.1.1.1), max hops: 10 ,packet length: 40,press CTRL_C to break
1 172.16.112.1 19 ms 19 ms 1 ms
2 172.16.216.1 39 ms 39 ms 19 ms
3 172.16.136.23 39 ms 40 ms 39 ms
4 172.16.168.22 39 ms 39 ms 39 ms
5 172.16.197.4 40 ms 59 ms 59 ms
6 172.16.221.5 59 ms 59 ms 59 ms
7 172.31.70.13 99 ms 99 ms 80 ms
8 172.31.71.6 139 ms 239 ms 319 ms
9 172.31.81.7 220 ms 199 ms 199 ms
10 10.1.1.1 239 ms 239 ms 239 ms
● On an IPv6 network:
----End
Prerequisites
Before you start a test, run the lspv mpls-lsp-ping echo enable or lspv mpls-lsp-
ping echo enable ipv6 command to enable the device to respond to MPLS echo
request/MPLS echo request IPv6 packets.
As NQA is deployed on the main control board of a device, both the initiator and
responder of an LSP ping test need to send LSP ping test packets to the main
control board for processing. If a large number of packets are sent to the main
control board, the CPU usage of the main control board increases, which adversely
affects device operation. To prevent this problem, run the lspv mpls-lsp-ping cpu-
defend cpu-defend command to set an upper limit for the rate of sending MPLS
Echo Request packets to the main control board.
If the MPLS packet length of an NQA test instance is greater than the MTU of a
specified MPLS tunnel, MPLS packets fail to pass through the tunnel. To allow the
packets to pass through the tunnel, run the fragment enable command to enable
MPLS packet fragmentation.
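For example, the responder can be enabled to reply to MPLS echo requests as follows (a minimal sketch; the system view is assumed here and should be verified against the command reference):
<HUAWEI> system-view
[~HUAWEI] lspv mpls-lsp-ping echo enable
[*HUAWEI] commit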
Context
Perform the following steps in any view on the NQA client:
Procedure
● To test the connectivity of an LDP LSP that carries IPv4 packets, run:
ping lsp [ -a source-ip | -c count | -exp exp-value | -h ttl-value | -m interval | -r reply-mode | -s
packet-size | -t time-out | -v | -g ] * ip destination-iphost mask-length [ ip-address ] [ nexthop
nexthop-address ] [ remote remote-address ]
For example:
<HUAWEI> ping lsp -v ip 3.3.3.3 32
LSP PING FEC: IPV4 PREFIX 3.3.3.3/32 : 100 data bytes, press CTRL_C to break
Reply from 3.3.3.3: bytes=100 Sequence=1 time = 4 ms Return Code 3, Subcode 1
Reply from 3.3.3.3: bytes=100 Sequence=2 time = 4 ms Return Code 3, Subcode 1
Reply from 3.3.3.3: bytes=100 Sequence=3 time = 4 ms Return Code 3, Subcode 1
Reply from 3.3.3.3: bytes=100 Sequence=4 time = 4 ms Return Code 3, Subcode 1
Reply from 3.3.3.3: bytes=100 Sequence=5 time = 5 ms Return Code 3, Subcode 1
--- FEC: IPV4 PREFIX 3.3.3.3/32/ ping statistics ---
5 packet(s) transmitted
5 packet(s) received
0.00% packet loss
round-trip min/avg/max = 4/4/5 ms
● To test the connectivity of a TE tunnel (RSVP-TE tunnel, static TE tunnel, or
dynamic TE tunnel) that carries IPv4 packets, run:
ping lsp [ -a source-ip | -c count | -exp exp-value | -h ttl-value | -m interval | -r reply-mode | -s
packet-size | -t time-out | -v | -g ] * te { tunnelName | ifType ifNum } [ hot-standby ] [ compatible-
mode ] | auto-tunnel auto-tunnelname }
For example:
<HUAWEI> ping lsp te Tunnel 1
LSP PING FEC: TE TUNNEL IPV4 SESSION QUERY Tunnel1 : 100 data bytes, press CTRL_C to break
Reply from 1.1.1.1: bytes=100 Sequence=1 time = 4 ms
Reply from 1.1.1.1: bytes=100 Sequence=2 time = 2 ms
Reply from 1.1.1.1: bytes=100 Sequence=3 time = 2 ms
Reply from 1.1.1.1: bytes=100 Sequence=4 time = 2 ms
Reply from 1.1.1.1: bytes=100 Sequence=5 time = 2 ms
--- FEC: RSVP IPV4 SESSION QUERY Tunnel1 ping statistics ---
5 packet(s) transmitted
5 packet(s) received
0.00% packet loss
round-trip min/avg/max = 2/2/4 ms
● Test SR-MPLS TE IPv4 tunnel connectivity.
– To test the connectivity of an SR-MPLS TE tunnel dynamically created,
run the ping lsp [ -a source-ip | -c count | -exp exp-value | -h ttl-value | -
--- FEC: AUTO TE TUNNEL IPV4 SESSION QUERY Tunnel10 ping statistics ---
3 packet(s) transmitted
3 packet(s) received
0.00% packet loss
round-trip min/avg/max = 6/8/11 ms
--- FEC: TE TUNNEL IPV4 SESSION QUERY Tunnel10 ping statistics ---
3 packet(s) transmitted
3 packet(s) received
0.00% packet loss
round-trip min/avg/max = 6/8/11 ms
--- FEC: BGP LABELED IPV4 PREFIX 4.4.4.4/32 ping statistics ---
2 packet(s) transmitted
2 packet(s) received
0.00% packet loss
round-trip min/avg/max = 2/24/46 ms
When testing the connectivity of an SR-MPLS BE tunnel connected with an LDP LSP,
specify a remote IP address using the remote remote-ip parameter.
You must run the lspv echo-reply fec-validation ldp disable command on the SR-
MPLS BE side to disable the LSPV response end from checking the LDP FEC.
<HUAWEI> ping lsp -v ip 3.3.3.3 32
LSP PING FEC: IPV4 PREFIX 3.3.3.3/32 : 100 data bytes, press CTRL_C to break
Reply from 3.3.3.3: bytes=100 Sequence=1 time = 4 ms Return Code 3, Subcode 1
Reply from 3.3.3.3: bytes=100 Sequence=2 time = 4 ms Return Code 3, Subcode 1
Reply from 3.3.3.3: bytes=100 Sequence=3 time = 4 ms Return Code 3, Subcode 1
Reply from 3.3.3.3: bytes=100 Sequence=4 time = 4 ms Return Code 3, Subcode 1
Reply from 3.3.3.3: bytes=100 Sequence=5 time = 5 ms Return Code 3, Subcode 1
--- FEC: IPV4 PREFIX 3.3.3.3/32/ ping statistics ---
5 packet(s) transmitted
5 packet(s) received
0.00% packet loss
round-trip min/avg/max = 4/4/5 ms
● Test the connectivity of an SR-MPLS BE tunnel connected with an LDP LSP
(the LDP end does not support interworking).
To test the connectivity of an SR-MPLS BE tunnel connected with an LDP LSP,
run the ping lsp [ -a source-ip | -c count | -exp exp-value | -h ttl-value | -m
interval | -s packet-size | -t time-out | -v | -g ] * segment-routing ip
destination-address mask-length version draft2 remote-fec { ldp
remoteipaddr remotemasklen | nil } command on the ingress to initiate a
ping test to the egress with the destination address being the LDP LSP.
<HUAWEI> ping lsp -c 3 segment-routing ip 5.5.5.9 32 version draft2 remote-fec ldp 5.5.5.9 32
LSP PING FEC: IPV4 PREFIX 5.5.5.9/32 : 100 data bytes, press CTRL_C to break
Reply from 5.5.5.9: bytes=100 Sequence=1 time=9 ms
Reply from 5.5.5.9: bytes=100 Sequence=2 time=2 ms
Reply from 5.5.5.9: bytes=100 Sequence=3 time=3 ms
----End
Follow-up Procedure
After the test is completed, you are advised to run the undo lspv mpls-lsp-ping
echo enable or undo lspv mpls-lsp-ping echo enable ipv6 command to disable
the device from responding to MPLS Echo Request/MPLS Echo Request IPv6
packets to prevent system resource occupation.
Prerequisites
Before you start a test, run the lspv mpls-lsp-ping echo enable/lspv mpls-lsp-
ping echo enable ipv6 command to enable the device to respond to MPLS echo
request/MPLS echo request IPv6 packets.
If the device interworks with a non-Huawei device, run the lspv echo-reply compatible fec
enable command to enable the device to respond to MPLS Echo Request packets with
MPLS Echo Reply packets that do not carry FEC information.
As NQA is deployed on the main control board of a device, both the initiator and
responder of an LSP ping test need to send LSP ping test packets to the main
control board for processing. If a large number of packets are sent to the main
control board, the CPU usage of the main control board increases, which adversely
affects device operation. To prevent this problem, run the lspv mpls-lsp-ping cpu-
defend cpu-defend command to set an upper limit for the rate of sending MPLS
Echo Request packets to the main control board.
Context
Perform the following steps in any view on the NQA client:
Procedure
● To test the path over which an LDP LSP that carries IPv4 packets is established
or locate the fault point on the path, run:
tracert lsp [ -a source-ip | -exp exp-value | -h ttl-value | -r reply-mode | -t time-out | -s size | -g ] * ip
destination-iphost mask-length [ ip-address ] [ nexthop nexthop-address ] [ detail ]
For example:
<HUAWEI> tracert lsp ip 1.1.1.1 32
LSP Trace Route FEC: IPV4 PREFIX 1.1.1.1/32 , press CTRL_C to break.
TTL Replier Time Type Downstream
0 Ingress 10.1.1.1/[3 ]
1 1.1.1.1 5 Egress
● To test the path over which a TE tunnel (RSVP-TE tunnel, static TE tunnel, or
dynamic TE tunnel) that carries IPv4 packets is established or locate the fault
point on the path, run:
tracert lsp [ -a source-ip | -exp exp-value | -h ttl-value | -r reply-mode | -t
time-out | -s size | -g ] * te { tunnelName | ifType ifNum } [ hot-standby ]
[ compatible-mode ] | auto-tunnel auto-tunnelname [ detail ]
<HUAWEI> tracert lsp te Tunnel 1
LSP Trace Route FEC: TE TUNNEL IPV4 SESSION QUERY Tunnel1 , press CTRL_C to break.
TTL Replier Time Type Downstream
0 Ingress 10.1.1.1/[3 ]
1 1.1.1.1 4 Egress
● Test the path over which an SR-MPLS TE IPv4 tunnel is established or locate
the fault point on the path.
– To test an SR-MPLS TE tunnel dynamically created, run the tracert lsp [ -
a source-ip | -exp exp-value | -h ttl-value | -t time-out | -s size | -g ] *
segment-routing auto-tunnel auto-tunnelname version { draft2 |
draft4 } [ hot-standby ] [ detail ] command and specify auto-
tunnelname on the ingress to initiate a tracert test to the egress.
<HUAWEI> tracert lsp segment-routing auto-tunnel Tunnel10 version draft4
LSP Trace Route FEC: AUTO TE TUNNEL IPV4 SESSION QUERY Tunnel10 , press CTRL_C to
break.
TTL Replier Time Type Downstream
0 Ingress 10.1.1.2/[284688 ]
1 10.1.1.2 7 ms Egress
– To test an SR-MPLS TE IPv4 tunnel manually configured, run the tracert
lsp [ -a source-ip | -exp exp-value | -h ttl-value | -t time-out | -s size | -g ]
* segment-routing te { tunnelName | ifType ifNum } [ draft2 ] [ hot-
0 Ingress 10.1.1.2/[284688 ]
1 10.1.1.2 7 ms Egress
● Test the path over which an SR-MPLS BE IPv4 tunnel is established or locate the fault point on the path.
To locate the fault point on an SR-MPLS BE IPv4 tunnel, run the tracert lsp [ -a
source-ip | -exp exp-value | -h ttl-value | -s size | -g ] * segment-routing ip
ip-address mask-length version draft2 [ remote remote-ip ] command.
<HUAWEI> tracert lsp segment-routing ip 2.2.2.2 32 version draft2
LSP Trace Route FEC: SEGMENT ROUTING IPV4 PREFIX 2.2.2.2/32 , press CTRL_C to break.
TTL Replier Time Type Downstream
0 Ingress 192.168.1.2/[1001 ]
1 192.168.1.2 6 ms Transit 192.168.2.2/[3 ]
2 192.168.2.2 6 ms Egress
● Test the BGP LSP carrying IPv4 packets or locate the fault point on the path.
You must run the lspv echo-reply fec-validation ldp disable command on the SR-
MPLS BE side to disable the LSPV response end from checking the LDP FEC.
<HUAWEI> tracert lsp ip 1.1.1.1 32
LSP Trace Route FEC: IPV4 PREFIX 1.1.1.1/32 , press CTRL_C to break.
TTL Replier Time Type Downstream
0 Ingress 10.1.1.1/[3 ]
1 1.1.1.1 5 Egress
● Test the path over which an SR-MPLS TE policy tunnel carrying IPv4 packets is
established or locate the fault point on the path.
To locate a fault point on an SR-MPLS TE policy tunnel, run the tracert lsp [ -
a source-ip | -exp exp-value | -h ttl-value | -s packet-size | -t time-out | -g ] *
sr-te policy { policy-name policyname | endpoint-ip endpoint-ip color
colorid | binding-sid bsid } command.
<HUAWEI> tracert lsp sr-te policy policy-name test
LSP Trace Route FEC: Nil FEC, press CTRL_C to break.
sr-te policy's segment list:
Preference : 300; Path Type: main; Protocol-Origin : local; Originator: 0, 0.0.0.0; Discriminator: 300;
Segment-List ID : 1; Xcindex : 1
TTL Replier Time Type Downstream
0 Ingress 10.1.2.1/[13312 12]
1 10.1.2.1 63 ms Transit 10.1.2.2/[12 ]
2 6.6.6.6 93 ms Egress
● Test the path over which an inter-AS E2E SR-MPLS TE tunnel is established or
locate the fault point on the path.
To locate a fault point on an inter-AS E2E SR-MPLS TE tunnel, run the tracert
lsp [ -a source-ip | -exp exp-value | -h ttl-value | -t time-out | -s packet-size | -
g | -r reply-mode ] * segment-routing { { auto-tunnel srAutoTunnelName
version { draft2 | draft4 } } | te { tunnelName | ifType ifNum } [ draft2 ] }
[ hot-standby ] [ detail ] command.
<HUAWEI> tracert lsp segment-routing te Tunnel 11 draft2
LSP Trace Route FEC: SEGMENT ROUTING TE TUNNEL IPV4 SESSION QUERY Tunnel11 , press
CTRL_C to break.
TTL Replier Time Type Downstream
0 Ingress 10.1.1.2/[48061 48120 2000 ]
1 10.1.1.2 393 ms Transit 10.2.1.2/[48120 2000 ]
2 10.2.1.2 18 ms Transit 10.3.1.2/[2000 ]
3 10.3.1.2 23 ms Transit 10.4.1.2/[3 ]
4 5.5.5.9 30 ms Egress
----End
Follow-up Procedure
After the test is completed, you are advised to run the undo lspv mpls-lsp-ping
echo enable/undo lspv mpls-lsp-ping echo enable ipv6 command to disable the
device from responding to MPLS Echo Request/MPLS Echo Request IPv6 packets to
prevent system resource occupation.
Usage Scenario
On a VPLS over P2MP network, ping can be used to check the following tunnels:
● P2MP label distribution protocol (LDP) label switched paths (LSPs)
● P2MP TE tunnels that are automatically generated
Pre-configuration Tasks
Before using ping to check the P2MP network connectivity, ensure that P2MP is
correctly configured.
Procedure
Step 1 Run the ping multicast-lsp command to check the connectivity of the following
tunnels on a P2MP network:
----End
Usage Scenario
On a VPLS over P2MP network, tracert can be used to check the following tunnels:
● P2MP label distribution protocol (LDP) label switched paths (LSPs)
● P2MP TE tunnels that are automatically generated
Pre-configuration Tasks
Before using tracert to check the P2MP network connectivity, ensure that P2MP is
correctly configured.
Procedure
Step 1 Run the tracert multicast-lsp command to check path information about the
following tunnels on a VPLS over P2MP network:
● VPLS over P2MP LDP LSPs
tracert multicast-lsp [ -a source-ip | -exp exp-value | -h ttl-value | -j jitter-
value | -r reply-mode | -t time-out | t-flag ] * mldp p2mp root-ip root-ip-
address { lsp lsp-id | opaque-value opaque-value } [ detail ]
● VPLS over P2MP TE tunnels that are automatically generated
tracert multicast-lsp [ -a source-ip | -exp exp-value | -h ttl-value | -j jitter-
value | -r reply-mode | -t time-out | t-flag ] * te-auto-tunnel auto-tunnel-
name [ leaf-destination leaf-destination ] [ detail ]
----End
Context
Perform the following steps on the PE of a Kompella VLL network to check
connectivity:
Procedure
Step 1 Run the following commands as required:
● To check connectivity of the VLL network through the control word channel,
run:
ping vc vpn-instance vpn-name local-ce-id remote-ce-id [ -c echo-number | -m time-value | -s data-
bytes | -t timeout-value | -v | -g ] * control-word
● To check connectivity of the VLL network through the MPLS Router Alert
channel, run:
Context
Perform the following steps on the PE of the Kompella VLL network to check
connectivity:
Procedure
Step 1 Run any of the following commands as required:
● To check connectivity of the VLL network through the control word channel,
run:
● To check connectivity of the VLL network through the label alert channel, run:
tracert vc -vpn-instance vpn-name local-ce-id remote-ce-id [ -exp exp-value | -f first-ttl | -m max-ttl
| -r reply-mode | -t timeout-value | -g ] * label-alert [ full-lsp-path ]
● To check connectivity of the VLL network through the ordinary channel, run:
tracert vc pw-type pw-id [ -exp exp-value | -f first-ttl | -m max-ttl | -r reply-mode | -t timeout-value |
-g ] * normal [ remote remote-ip-address ] [ full-lsp-path ]
The control word channel and the ordinary mode cannot be configured together.
For detailed information about each parameter and its description in the tracert
vc -vpn-instance command, refer to the HUAWEI NetEngine 8000 X Series Router
Command Reference.
----End
Prerequisites
Before testing pseudo wire (PW) connectivity using the ping vc command, ensure
that the VPWS network has been configured correctly.
Context
To check whether a PW on the VPWS network is faulty, run the ping vc command.
When the PW is Up, you can locate faults, such as forwarding entry loss or errors.
Procedure
● Control-word mode:
To monitor PW connectivity using the control-word mode, run the ping vc vc-
type pw-id [ peer-address ] [ -c echo-number | -m time-value | -s data-bytes |
-t timeout-value | -exp exp-value | -r reply-mode | -v | -g] * control-word
[ remote remote-ip-address peer-pw-id [ sender sender-address ] ] [ ttl ttl-
value ] [ pipe | uniform ] command.
To monitor multi-segment PW connectivity, specify an IP address and a PW ID
for the remote PE and a source IP address in the remote remote-ip-address
peer-pw-id [ sender sender-address ] command.
● Label-alert mode:
To monitor PW connectivity using the label-alert mode, run the ping vc vc-
type pw-id [ peer-address ] [ -c echo-number | -m time-value | -s data-bytes |
-t timeout-value | -exp exp-value | -r reply-mode | -v | -g ] * label-alert [ no-
control-word ] command.
● TTL mode:
To monitor PW connectivity using the TTL mode, run the ping vc vc-type pw-
id [ peer-address ] [ -c echo-number | -m time-value | -s data-bytes | -t
timeout-value | -exp exp-value | -r reply-mode | -v | -g ] * normal [ no-
control-word ] [ remote remote-ip-address peer-pw-id ] [ ttl ttl-value ]
[ pipe | uniform ] command.
For example:
<HUAWEI> ping vc ethernet 100 control-word
PW PING : FEC 128 PSEUDOWIRE (NEW). Type = ethernet, ID = 100 : 100 data bytes, press CTRL_C
to break
Reply from 10.10.10.10: bytes=100 Sequence=1 time = 140 ms
Reply from 10.10.10.10: bytes=100 Sequence=2 time = 40 ms
Reply from 10.10.10.10: bytes=100 Sequence=3 time = 30 ms
Reply from 10.10.10.10: bytes=100 Sequence=4 time = 50 ms
Reply from 10.10.10.10: bytes=100 Sequence=5 time = 50 ms
--- FEC: FEC 128 PSEUDOWIRE (NEW). Type = ethernet, ID = 100 ping statistics---
5 packet(s) transmitted
5 packet(s) received
0.00% packet loss
round-trip min/avg/max = 30/62/140 ms
----End
Prerequisites
Before you run the ping vpls command to check PW connectivity, ensure that the
VPLS network has been configured correctly.
Context
To check whether a PW on the VPLS network is faulty, run the ping vpls
command.
Procedure
Step 1 To locate the faulty node on the VPLS network, run either of the following
commands as required:
● In Kompella mode, run:
ping vpls [ -c echo-number | -m time-value | -s data-bytes | -t timeout-value | -exp exp-value | -r
reply-mode | -v | -g ] * vsi vsi-name local-site-id remote-site-id [ bypass -si interface-type interface-
number ]
● In Martini mode, run:
ping vpls [ -c echo-number | -m time-value | -s data-bytes | -t timeout-value | -exp exp-value | -r
reply-mode | -v | -g ] * vsi vsi-name peer peer-address [ negotiate-vc-id vc-id ] [ control-word
[ remote remote-address remote-pw-id [ sender sender-address ] ] ] [ bypass -si interface-type
interface-number ]
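For reference, a Martini-mode invocation consistent with the syntax above might look as follows (the VSI name and peer address are illustrative assumptions); the statistics that follow show the resulting output format:
<HUAWEI> ping vpls vsi company1 peer 10.2.2.2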
--- FEC: FEC 128 PSEUDOWIRE (NEW). Type = vlan, ID = 2 ping statistics ---
5 packet(s) transmitted
5 packet(s) received
0.00% packet loss
round-trip min/avg/max = 50/58/60 ms
----End
Context
Perform the following steps on the PE of a PWE3 network:
Procedure
Step 1 To locate the faulty node on a PWE3 network, run any of the following commands
as required:
● To monitor connectivity of the PWE3 network through the control word
channel, run:
tracert vc vc-type pw-id [ peer-address ] [ -exp exp-value | -f first-ttl | -m
max-ttl | -r reply-mode | -t timeout-value | -g ] * control-word [ ptn-mode |
full-lsp-path ] [ pipe | uniform ] [ detail ]
tracert vc vc-type pw-id [ peer-address ] [ -exp exp-value | -f first-ttl | -m
max-ttl | -r reply-mode | -t timeout-value | -g ] * control-word remote
remote-ip-address [ ptn-mode | full-lsp-path ] [ pipe | uniform ] [ detail ]
● To monitor connectivity of the PWE3 network through the label alert channel,
run:
tracert vc vc-type pw-id [ peer-address ] [ -exp exp-value | -f first-ttl | -m
max-ttl | -r reply-mode | -t timeout-value | -g ] * label-alert [ no-control-
word ] [ full-lsp-path ] [ pipe | uniform ] [ detail ]
● To check connectivity of the PWE3 network in ordinary mode, run:
tracert vc vc-type pw-id [ peer-address ] [ -exp exp-value | -f first-ttl | -m
max-ttl | -r reply-mode | -t timeout-value | -g ] * normal [ remote remote-ip-
address ] [ full-lsp-path ] [ pipe | uniform ] [ detail ]
The preceding command output contains information about each node along the
PW and the response time of each hop.
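For example, a control-word mode trace over a PW might be issued as follows (the PW type and ID are illustrative assumptions):
<HUAWEI> tracert vc ethernet 100 control-word full-lsp-path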
----End
Procedure
Step 1 To locate the faulty node on the VPLS network, run either of the following
commands as required:
● In Kompella mode, run:
tracert vpls [ -exp exp-value | -f first-ttl | -m max-ttl | -r reply-mode | -t timeout-value | -g ] * vsi vsi-
name local-site-id remote-site-id [ full-lsp-path ] [ detail ] [ bypass -si interface-type interface-
number ]
● In Martini mode, run:
tracert vpls [ -exp exp-value | -f first-ttl | -m max-ttl | -r reply-mode | -t timeout-value | -g ] * vsi vsi-
name peer peer-address [ negotiate-vc-id vc-id ] [ full-lsp-path ] [ control-word ] [ pipe |
uniform ] [ detail ] [ bypass -si interface-type interface-number ]
The preceding command output contains information about each node along the
PW and the response time of each hop.
----End
Prerequisites
The VPLS network has been correctly configured, and the specified virtual service
instance (VSI) is Up.
Context
Procedure
Step 1 Run the following command to monitor the connectivity between a PE and a CE:
ce-ping ip-address vsi vsi-name source-ip source-ip-address [ mac mac-address ]
[ interval interval | count count ] *
For example:
<HUAWEI> ce-ping 10.1.1.1 vsi abc source-ip 10.1.1.2 mac E024-7FA4-D2CB interval 2 count 5
Info: If the designated source IP address is in use, it could cause the abnormal data transmission in VPLS
network. Are you sure the source-ip is unused in this VPLS? [Y/N]:y
Ce-ping is in process...
----End
Prerequisites
The network has been correctly configured, and the specified BD is Up.
Context
An EVC model unifies the Layer 2 bearer service model and configuration model.
In an EVC model, you can use CE ping to check the link reachability between a PE
and a CE in a specified BD. For details about EVCs, see HUAWEI NetEngine 8000 X
Series Feature Description-Local Area Network. For configuration details, see EVC
Configuration.
When using CE ping to check the link reachability between a PE and a CE, you
must specify a source IP address that meets the following conditions:
● The source IP address must be on the same network segment as the CE's IP
address. If they are on different network segments, the CE considers received
CE Ping packets invalid and discards them.
● The source IP address must be an unused IP address in the specified BD. If you
specify a used IP address for the source IP address, CE Ping packets cannot be
properly forwarded. As a result, the user using the source IP address cannot
access the Internet. If you specify a gateway IP address as the source IP
address, none of the users can access the Internet.
To avoid this problem, do not specify a used IP address as the source IP address.
Procedure
Step 1 Run the ce-ping ip-address bd bd-id source-ip source-ip-address [ mac mac-
address ] [ interval interval | count count ] * command in any view to check the
link reachability between a PE and a CE.
<HUAWEI> ce-ping 10.1.1.1 bd 123 source-ip 10.1.1.2 mac e024-7fa4-d2cb interval 2 count 5
Info: If the designated source IP address is in use, it could cause the abnormal data transmission in EVC
network. Are you sure the source-ip is unused in this EVC? [Y/N]:y
Ce-ping is in process...
----End
Context
To manually monitor the connectivity between two devices, you can send test
packets and wait for a reply to test whether the destination device is reachable.
● For the network on which the MD, MA, and MEP are not configured, you can
implement GMAC ping to monitor the connectivity between two devices.
● For the network on which the MD, MA, and MEP are configured, you can
implement 802.1ag MAC ping to monitor the connectivity between MEPs at
the same maintenance level or between MEPs and MIPs at the same
maintenance level.
Pre-configuration Tasks
No pre-configuration tasks are needed to implement GMAC ping.
Context
GMAC ping has principles similar to those of 802.1ag MAC ping. The difference is
that a source device does not need to be a MEP, and a destination device does not
need to be a MEP or maintenance association intermediate point (MIP). In other
words, GMAC ping can be implemented without the need to configure an MD,
MA, or MEP on the source, intermediate, or destination device.
Enable the GMAC ping function on the source and destination devices. The
intermediate devices must have the bridge function to directly forward messages.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run ping mac enable
The GMAC ping function is enabled globally.
If the GMAC ping function is enabled:
● A source device starts the GMAC ping function by sending a loopback
message (LBM) to a destination device.
● After receiving the LBM, the destination device replies to the source device
with a loopback reply (LBR).
Step 3 Run commit
The configuration is committed.
Step 4 (Optional) In a VLAN scenario: Run ping mac mac-address vlan vlan-id
[ interface interface-type interface-number | -c count | -s packetsize | -t timeout |
-p priority-value ] *
The VLAN network connectivity is checked.
The following shows an example:
<HUAWEI> system-view
[~HUAWEI] ping mac enable
[*HUAWEI] commit
[~HUAWEI] ping mac 00e0-fc12-3456 vlan 10 -c 2 -s 112
Reply from 00e0-fc12-3456: bytes = 112 time < 1ms
Reply from 00e0-fc12-3456: bytes = 112 time < 1ms
Packets: Sent = 2, Received = 2, loss = 0 (0.00% loss)
Minimum = 1ms, Maximum = 1ms, Average = 1ms
Step 5 (Optional) In a VLL scenario: Run ping mac mac-address l2vc l2vc-id { raw |
tagged } [ interface interface-type interface-number | { pe-vid pe-vid ce-vid ce-
vid | dot1q-vlan vlan-id } -c count | -s packetsize | -t timeout | -p priority-value ] *
Step 6 (Optional) In a VPLS scenario: Run ping mac mac-address vsi vsi-name
[ interface interface-type interface-number | { pe-vid pe-vid ce-vid ce-vid |
dot1q-vlan vlan-id } -c count | -s packetsize | -t timeout | -p priority-value ] *
----End
Context
GMAC trace has principles similar to those of 802.1ag MAC trace. The difference is
that a source device does not need to be a MEP, and a destination device does not
need to be a MEP or MIP. In other words, GMAC trace can be implemented
without the need to configure an MD, MA, or MEP on the source, intermediate, or
destination device.
Enable the GMAC trace function on the source, intermediate, and destination
devices.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run trace mac enable
The GMAC trace function is enabled globally.
Step 3 Run commit
The configuration is committed.
Step 4 (Optional) In a VLAN scenario: Run trace mac mac-address vlan vlan-id
[ interface interface-type interface-number | -t timeout ] *
The VLAN network connectivity is checked.
Step 5 (Optional) In a VLL scenario: Run trace mac mac-address l2vc l2vc-id { raw |
tagged } [ interface interface-type interface-number | { [ pe-vid pe-vid ce-vid ce-
vid ] | [ dot1q-vlan vlan-id ] } | -t timeout | -h ] *
The VLL network connectivity is checked.
Step 6 (Optional) In a VPLS scenario: Run trace mac mac-address vsi vsi-name
[ interface interface-type interface-number | { [ pe-vid pe-vid ce-vid ce-vid ] |
[ dot1q-vlan vlan-id ] } | -t timeout | -h ] *
<HUAWEI> system-view
[~HUAWEI] trace mac enable
[*HUAWEI] commit
[~HUAWEI] trace mac 00e0-fc12-3458 vsi vsi1 -h
Tracing the route to 00e0-fc12-3458 over a maximum of 255 hops:
Hops Host Name (IP Address)
Mac Ingress Ingress Action Relay Action
Forwarded Egress Egress Action
1 HUAWEIA (10.10.10.16)
00e0-fc22-3459 GigabitEthernet2/0/1 IngOK RlyFDB
Forwarded GigabitEthernet1/0/1.1 EgrOK
2 HUAWEIB (10.10.10.13)
00e0-fc12-3458 GigabitEthernet3/0/1 IngOK RlyHit
Not Forwarded
Info: Succeeded in tracing the destination address 00e0-fc12-3458.
----End
Context
Similar to the ping operation, 802.1ag MAC ping checks whether the destination
device is reachable by sending test packets and receiving response packets. In
addition, the ping operation time can be calculated at the transmit end for
network performance analysis.
Before performing 802.1ag MAC ping, ensure that 802.1ag has been configured.
For more information, see Configuring Basic Ethernet CFM Functions.
Procedure
Step 1 A device is usually configured with multiple MDs and MAs. To monitor the
connectivity of a link between two or more devices, perform either of the
following steps on the router with a MEP on one end of the link to be monitored.
● In the MA view:
a. Run system-view
The system view is displayed.
b. Run cfm enable
CFM is globally enabled on the device.
c. Run cfm md md-name
The MD view is displayed.
d. Run ma ma-name
The MA view is displayed.
e. Run ping mac-8021ag mep mep-id mep-id [ md md-name ma ma-
name ] { mac mac-address | remote-mep mep-id mep-id } [ -c count | -s
packetsize | -t timeout | -p priority-value ] *
The connectivity between a MEP and an RMEP or between a MEP and a
MIP on other devices is monitored.
The intermediate device on the link to be tested only forwards LBMs and LBRs.
Therefore, the MD, MA, or MEP does not need to be configured on the
intermediate device.
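For example, the following is a minimal sketch of the MA-view procedure, assuming an MD named md1, an MA named ma1, a local MEP ID of 1, and an RMEP ID of 2 (the names, IDs, and view prompts are illustrative):
<HUAWEI> system-view
[~HUAWEI] cfm md md1
[~HUAWEI-md-md1] ma ma1
[~HUAWEI-md-md1-ma-ma1] ping mac-8021ag mep mep-id 1 remote-mep mep-id 2 -c 5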
----End
Context
Similar to traceroute or tracert, 802.1ag MAC trace tests the path between the
local device and a destination device or locates failure points by sending test
packets and receiving reply packets.
Before performing 802.1ag MAC trace, ensure that 802.1ag has been configured.
For more information, see Configuring Basic Ethernet CFM Functions.
Procedure
Step 1 A device is usually configured with multiple MDs and MAs. To determine the
forwarding path for sending packets from a MEP to another MEP or a MIP in an
MA or failure points, perform either of the following operations on the router with
a MEP at one end of the link to be tested.
● In the MA view:
a. Run system-view
The system view is displayed.
b. (Optional) Run cfm portid-tlv type { interface-name | local }
The portid-tlv type for trace packets is set.
c. Run cfm md md-name
The MD view is displayed.
d. Run ma ma-name
The MA view is displayed.
e. Run trace mac-8021ag mep mep-id mep-id [ md md-name ma ma-
name ] { mac mac-address | remote-mep mep-id mep-id } [ -t timeout |
ttl ttl ] *
The connectivity fault between the local router and the remote router is
located.
– Run the trace mac-8021ag command without md md-name ma ma-
name in the MA view to monitor a forwarding path or locate a failure
point in the current MA.
– Run the trace mac-8021ag md md-name ma ma-name command in the
MA view to monitor a forwarding path or locate a failure point in the
specified MA.
● In all views except the MA view:
a. (Optional) Run cfm portid-tlv type { interface-name | local }
The portid-tlv type for trace packets is set.
b. Run trace mac-8021ag mep mep-id mep-id md md-name ma ma-name
{ mac mac-address | remote-mep mep-id mep-id } [ -t timeout | ttl ttl ]
*
The connectivity fault between the local router and the remote router is
located.
● If the forwarding entry of the destination node does not exist in the MAC
address table, interface interface-type interface-number must be specified.
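For example, the following is a minimal sketch of the MA-view procedure, assuming an MD named md1, an MA named ma1, a local MEP ID of 1, and an RMEP ID of 2 (the names, IDs, and view prompts are illustrative):
<HUAWEI> system-view
[~HUAWEI] cfm md md1
[~HUAWEI-md-md1] ma ma1
[~HUAWEI-md-md1-ma-ma1] trace mac-8021ag mep mep-id 1 remote-mep mep-id 2 ttl 64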
----End
Procedure
● Check the connectivity of an EVPN over MPLS LDP or EVPN over MPLS TE
tunnel when the EVPN public network is MPLS.
Run ping evpn vpn-instance evpn-name mac mac-address [ -a source-ip | -c
count | -m interval | -s packet-size | -t time-out | -r reply-mode | -nexthop
nexthop-address ] *
The device is configured to check the connectivity of an EVPN over MPLS LDP
or EVPN over MPLS TE tunnel when the EVPN public network is MPLS.
<HUAWEI> ping evpn vpn-instance evpna mac 00e0-fc12-3456 -c 3 -s 200
Ping vpn-instance evpna mac 00e0-fc12-3456 : 200 data bytes, press CTRL_C to break
Reply from 1.1.1.1: bytes=200 sequence=1 time = 11ms
Reply from 1.1.1.1: bytes=200 sequence=2 time = 10ms
Reply from 1.1.1.1: bytes=200 sequence=3 time = 10ms
--- vpn-instance: evpna 00e0-fc12-3456 ping statistics ---
3 packet(s) transmitted
3 packet(s) received
0.00% packet loss
round-trip min/avg/max = 10/10/11 ms
● Check the connectivity of an EVPN over VXLAN tunnel when the EVPN public
network is VXLAN.
Run ping evpn bridge-domain bd-id mac mac-address [ -a source-ip | -c count |
-m interval | -s packet-size | -t time-out | -r reply-mode | -nexthop nexthop-
address ] *
The device is configured to check the connectivity of an EVPN over VXLAN
tunnel when the EVPN public network is VXLAN.
<HUAWEI> ping evpn bridge-domain 101 mac 00e0-fc12-3456 -c 3 -s 200
Ping bridge-domain 101 mac 00e0-fc12-3456 : 200 data bytes, press CTRL_C to break
Reply from 1.1.1.1: bytes=200 sequence=1 time = 11ms
Reply from 1.1.1.1: bytes=200 sequence=2 time = 10ms
Reply from 1.1.1.1: bytes=200 sequence=3 time = 10ms
--- bridge-domain: 101 00e0-fc12-3456 ping statistics ---
3 packet(s) transmitted
3 packet(s) received
0.00% packet loss
round-trip min/avg/max = 10/10/11 ms
● Check the connectivity of an EVPN over SR-MPLS BE or EVPN over SR-MPLS
TE tunnel when the EVPN public network is SR.
● Check the connectivity of an EVPN over BGP tunnel when the EVPN public
network is BGP.
----End
Context
After EVPN VPWS configurations are complete, perform the following operation in
any view of the client.
Procedure
● Check the EVPN VPWS network connectivity.
– When the tunnel type of the EVPN VPWS network is MPLS:
Run the ping evpn vpws local-ce-id remote-ce-id [ -a source-ip | -c count
| -exp exp-value | -m interval | -s packet-size | -t time-out | -r reply-mode
| -tc tc ] * command to check the EVPN VPWS status. If the EVPN VPWS
status is down, locate the fault.
– When the tunnel type of the EVPN VPWS network is SRv6:
Run the ping evpn vpws local-ce-id remote-ce-id [ end-op endOp ] [ -a
source-ip | -c count | -exp exp-value | -m interval | -s packet-size | -t
time-out | -r reply-mode | -tc tc ] * command to check the EVPN VPWS
status. If the EVPN VPWS status is down, locate the fault.
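For example, the following is a minimal sketch for the MPLS case, assuming a local CE ID of 1 and a remote CE ID of 2 (both values are illustrative):
<HUAWEI> ping evpn vpws 1 2 -c 5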
----End
Context
After EVPN VPWS configurations are complete, perform the following operation in
any view of the client.
Procedure
● Locate a forwarding fault on an EVPN VPWS network.
– When the tunnel type of the EVPN VPWS network is MPLS:
Run the tracert evpn vpws local-ce-id remote-ce-id [ -a source-ip | -exp
exp-value | -s packet-size | -t timeout | -h max-ttl | -r reply-mode | -tc tc ]
* [ pipe | uniform ] command to check the EVPN VPWS status. If the
EVPN VPWS status is down, locate the faulty node on the EVPN VPWS
path.
– When the tunnel type of the EVPN VPWS network is SRv6:
Run the tracert evpn vpws local-ce-id remote-ce-id [ end-op endOp ] [ -
a source-ip | -exp exp-value | -s packet-size | -t timeout | -h max-ttl | -r
reply-mode | -tc tc ] * command to check the EVPN VPWS status. If the
EVPN VPWS status is down, locate the faulty node on the EVPN VPWS
path.
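For example, the following is a minimal sketch for the MPLS case, assuming a local CE ID of 1 and a remote CE ID of 2 (both values are illustrative):
<HUAWEI> tracert evpn vpws 1 2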
----End
Context
After SRv6 configurations are complete, you can perform the following operations
in any view of the client.
Procedure
● Specify SIDs to test the connectivity of an SRv6 network.
To test the connectivity of an SRv6 network, run the ping ipv6-sid [ -a
source-ipv6-address | -c echo-number | -m wait-time | -s packetsize | -t
timeout | -tc traffic-class-value ] * [ segment-by-segment ] sid & <1-11>
command on the ingress to specify SRv6 SIDs to initiate a ping test to the
egress.
<HUAWEI> ping ipv6-sid A2::C31 A4::C50 A4::C52
PING ipv6-sid A2::C31 A4::C50 A4::C52 : 56 data bytes, press CTRL_C to break
Reply from A4::C52
bytes=56 Sequence=1 hop limit=64 time=2 ms
Reply from A4::C52
bytes=56 Sequence=2 hop limit=64 time=1 ms
Reply from A4::C52
bytes=56 Sequence=3 hop limit=64 time=1 ms
Reply from A4::C52
bytes=56 Sequence=4 hop limit=64 time=1 ms
Reply from A4::C52
bytes=56 Sequence=5 hop limit=64 time=1 ms
----End
Context
After configuring SRv6, you can perform the following configurations in any view
of a client.
Procedure
● Specify SIDs to test the path information of an SRv6 network or locate the
fault point on the path.
To test the fault point on an SRv6 network, run the tracert ipv6-sid [ -f first-
hop-limit | -m max-hop-limit | -p port-number | -q probes | -w timeout | -s
packetsize | -a source-ipv6-address ] * [ overlay ] sid & <1-11> command on
the ingress to specify SRv6 SIDs to initiate a tracert test to the egress.
<HUAWEI> tracert ipv6-sid A2::C31 A4::C50 A4::C52
traceroute ipv6-sid A2::C31 A4::C50 A4::C52 30 hops max,60 bytes packet
1 2001:DB8:1:2::21[SRH: A4::C52, A4::C50, A2::C31, SL=2] 5 ms 3 ms 2 ms
2 2001:DB8:2:3::31[SRH: A4::C52, A4::C50, A2::C31, SL=1] 5 ms 2001:DB8:2:3::32[SRH: A4::C52, A4::OP,
A2::C31, SL=1] 2ms 2ms
3 A4::C52[SRH: A4::C52, A4::C50, A2::C31, SL=1] 5 ms 10 ms 0.759 ms
● Test the path over which an SRv6 TE Policy is established or locate the fault
point on the path.
a. (Optional) Configure an End.OP SID on the remote endpoint of the SRv6
TE Policy.
An End.OP SID is an OAM SID that specifies the punt behavior to be
implemented for an OAM packet in ping and tracert scenarios. If the
bottom SID in the SRv6 TE Policy's segment list is an End.X or binding
SID, you must manually specify an End.OP SID when initiating a tracert
operation to the SRv6 TE Policy. Before specifying the end-op endop
parameter, you must configure the End.OP SID.
i. Run segment-routing ipv6
The SRv6 view is displayed.
ii. Run locator locator-name
The locator view is displayed.
Ensure that the locator has been created and advertised through IS-
IS. The locator is also used by the created SRv6 TE Policy.
iii. Run opcode func-opcode end-op
An opcode is configured for an End.OP SID.
iv. Run commit
The configuration is committed.
b. On the ingress of the SRv6 TE Policy, run the tracert srv6-te policy
{ policy-name policyname | endpoint-ip endpointipv6 color colorId |
binding-sid bsid } [ end-op endop ] [ -a sourceaddr6 | -f initHl | -m
maxHl | -s packetsize | -w timeout | -p destport | -tc tc ] * command with
the policy-name policyname, endpoint-ip endpointipv6 color colorId, or
binding-sid bsid to initiate tracert to the corresponding SRv6 TE Policy to
detect all intermediate nodes along the tunnel.
<HUAWEI> tracert srv6-te policy policy-name test end-op 2001:db8:2::1 -a 2001:db8:1::1 -q 5 -m
20 -tc 0
Trace Route srv6-te policy : 100 data bytes, press CTRL_C to break
srv6-te policy's segment list:
Preference: 200; Path Type: primary; Protocol-Origin: local; Originator: 0, 0.0.0.0; Discriminator: 200;
Segment-List ID: 1; Xcindex: 1; end-op: 2001:db8:2::1
TTL Replier Time Type SRH
0 Ingress [SRH: 2001:db8:1::F:1, 2001:db8:2::F:1, 2001:db8:2::1,
SL=2]
1 2001:db8:A::192:168:103:2 22 ms Transit [SRH: 2001:db8:1::F:1, 2001:db8:2::F:1,
2001:db8:2::1, SL=2]
2 2001:db8:A::192:168:106:2 10 ms Transit [SRH: 2001:db8:1::F:1, 2001:db8:2::F:1,
2001:db8:2::1, SL=1]
3 2001:db8:2::1 4 ms Egress
----End
Prerequisites
Before configuring an MTrace test instance, run the undo mtrace echo disable
command on each device along the multicast or RPF path to be detected to
enable the devices to respond to MTrace request and query messages.
Context
MTrace mainly has the following uses:
● The mtrace command can be used in multicast troubleshooting and routine
maintenance to locate a faulty device and reduce configuration errors.
● The mtrace command can be used to collect traffic statistics in path tracing
and calculate the multicast traffic rate in cyclic path tracing.
● The NMS analyzes faulty device information displayed in the mtrace
command output and generates alarms.
Procedure
Step 1 (Optional) Run reset mtrace statistics
NOTICE
After the reset mtrace statistics command is run, the statistics cleared cannot be
restored.
To ensure that the mtrace command is run successfully, the current device must have the
(S, G) entries and meet either of the following conditions:
● The current device is directly connected to the destination host.
● A ping test initiated from the current device to the last-hop device or destination host
succeeds.
● The current device is on the multicast path from the multicast source to the
destination host.
The following examples show some parameters. For detailed options and parameter
description, see mtrace.
● Run the mtrace source source-address command to trace the RPF path from
a multicast source to the current device.
<HUAWEI> mtrace source 10.1.0.1
Press Ctrl+C to break multicast traceroute facility
From the receiver(10.1.5.1), trace reverse path to source (10.1.0.1) according to RPF rules
-1 10.1.5.1
Incoming Interface Address: 10.1.5.1 Input packets rate: 0xffffffff
Outgoing Interface Address: 0.0.0.0 Output packets rate: 0xffffffff
Forwarding Cache (10.1.0.1, 225.0.0.1) Forwarding packets rate: 0
The packet loss rate of (10.1.0.1, 225.0.0.1) is 0.00%
-2 10.1.2.1
Incoming Interface Address: 10.1.2.1 Input packets rate: 0xffffffff
Outgoing Interface Address: 10.1.5.2 Output packets rate: 0xffffffff
Forwarding Cache (10.1.0.1, 225.0.0.1) Forwarding packets rate: 0
The packet loss rate of (10.1.0.1, 225.0.0.1) is 0.00%
-3 10.1.0.1
Incoming Interface Address: 10.1.0.1 Input packets rate: 0xffffffff
Outgoing Interface Address: 10.1.2.2 Output packets rate: 0xffffffff
Forwarding Cache (10.1.0.1, 225.0.0.1) Forwarding packets rate: 0
********************************************************
In calculating-rate mode, reach the demanded number of statistic,and multicast traceroute finished.
● Run the mtrace [ -gw last-hop-router | -d ] -r receiver source source-address
command to trace the RPF path from a multicast source to the destination
host.
<HUAWEI> mtrace -gw 10.1.6.3 -r 10.1.6.4 -v source 10.1.0.1
Press Ctrl+C to break multicast traceroute facility
From the receiver(10.1.6.4), trace reverse path to source (10.1.0.1) according to RPF rules
NOTICE
After the mtrace echo disable command is run, a device discards MTrace request
and query messages. As a result, MTrace detection is terminated on this device.
----End
6 Telemetry Configuration
As shown in Table 6-1, although SNMP trap and Syslog use the push mode, only alarms or
events are pushed. Monitoring data such as the interface traffic cannot be collected or sent.
Configuration Precautions
Restrictions Guidelines Impact
● ... (number of instances/200) x 10 seconds. However, the sampling task is
always completed 10 seconds earlier due to the actual number of instances
sampled at a time.
● Restriction: Shared policies cannot be used in conditional collection.
Guideline: Do not use shared policies.
Impact: Shared policies cannot be used in conditional collection.
● Restriction: For the following policies, the smallest sampling interval is 1s:
A. Policies that contain if-match any rules
B. Shared policies configured on boards equipped with multiple NPs
Impact: The interval at which packets are sent cannot be less than 1s even if
the interval is set to 100 ms.
Usage Scenario
The controller uses commands to configure devices that support telemetry,
subscribe to data sources, and collect data. The protocol used to send data can be
gRPC or UDP.
If the connection between a device and the collector is interrupted, the device
connects to the collector and sends data again. However, the data sampled when
the connection is being re-established is lost.
After an active/standby switchover is performed or the system saves telemetry
service configurations and restarts, the telemetry service reloads its configurations
and continues to run. However, the data sampled during the restart or active/
standby switchover is lost. In static subscription mode, the device also keeps
pushing sampled data continuously, which places high pressure on the device.
Therefore, static telemetry subscription is often used for coarse-grained data
collection.
Pre-configuration Tasks
Before configuring static telemetry subscription, configure a static or dynamic
routing protocol so that devices can communicate at the network layer.
Context
When static subscription is configured, a device functions as a client and a
collector functions as a server. To statically subscribe to the data sampled or a
customized event, you need to configure the IP address and port number of a
destination collector, and the protocol and encryption mode for data sending.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run telemetry
The telemetry view is displayed.
Step 3 Run destination-group destination-group-name
A destination group to which the sampled data is sent is created, and the
destination-group view is displayed.
Step 4 Run ipv4-address ip-address port port [ vpn-instance vpn-instance ]
[ protocol { grpc [ no-tls ] | udp } ]
An IP address and a port number are configured for the destination collector, and
a protocol and an encryption mode are configured for data sending.
This command can be run no more than five times for each destination group.
Both this command and the protocol command in the Subscription view can be run to
configure a protocol and encryption mode for the target collector. If the target collector is
associated with the subscription, command configurations take effect based on the
following rules:
● If the protocol command has been run in the Subscription view, the protocol and
encryption mode configured in the Subscription view take effect.
● If the protocol command is not run in the Subscription view, the protocol and
encryption mode configured in the Destination-group view take effect.
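The following is a minimal command sketch, assuming a destination group named destination1 and a gRPC collector at 10.20.2.1 port 10001 without TLS (the names and values are illustrative and match the configuration example later in this chapter):
<HUAWEI> system-view
[~HUAWEI] telemetry
[~HUAWEI-telemetry] destination-group destination1
[*HUAWEI-telemetry-destination-group-destination1] ipv4-address 10.20.2.1 port 10001 protocol grpc no-tls
[*HUAWEI-telemetry-destination-group-destination1] commit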
----End
Context
When static subscription is configured, a device functions as a client and a
collector functions as a server. To statically subscribe to the data sampled or a
customized event, you need to configure a source of the data to be sampled.
You can configure a telemetry customized event. If a performance indicator of a
resource object that telemetry monitors exceeds the user-defined threshold, the
customized event is reported to the collector in time for service policy
determination.
Procedure
● Configure the data to be sampled.
a. Run system-view
The system view is displayed.
b. Run telemetry
The telemetry view is displayed.
c. Run sensor-group sensor-name
A sampling sensor group is created, and the sensor-group view is
displayed.
d. Run sensor-path path
A sampling path is configured for the telemetry sensor group.
e. Run filter filter-name
A filter is created for the sampling path, and the filter view is displayed.
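The following is a minimal command sketch, assuming a sensor group named sensor1 and the sampling path huawei-debug:debug/cpu-infos/cpu-info (the name and path are illustrative and match the configuration example later in this chapter):
<HUAWEI> system-view
[~HUAWEI] telemetry
[~HUAWEI-telemetry] sensor-group sensor1
[*HUAWEI-telemetry-sensor-group-sensor1] sensor-path huawei-debug:debug/cpu-infos/cpu-info
[*HUAWEI-telemetry-sensor-group-sensor1-path] quit
[*HUAWEI-telemetry-sensor-group-sensor1] quit
[*HUAWEI-telemetry] commit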
----End
Context
When static subscription is configured, a device functions as a client and a
collector functions as a server. To statically subscribe to the data sampled or a
customized event, you need to create a subscription to set up a data sending
channel. The protocol used to send data can be gRPC or UDP.
To configure an SSL policy for a client so that the server and client can establish a
secure SSL connection, you must ensure that the SSL policy has been created. For
details about how to create an SSL policy, see "Configuring and Binding an SSL
Policy" in "Basic Configuration" in the configuration guide.
Procedure
● Create a static subscription based on gRPC.
a. Run system-view
The certificate to be loaded must be supported by both the client and server.
i. Run quit
Return to the system view.
ii. Run grpc
The gRPC view is displayed.
iii. Run grpc client
The gRPC client view is displayed.
iv. Run ssl-policy ssl-policy-name [ verify-cn cn-name ]
An SSL policy is configured for the client during static telemetry
subscription.
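The following is a minimal command sketch of a gRPC-based static subscription, assuming a subscription named subscription1 that references the sensor group sensor1 and the destination group destination1 (the names are illustrative and match the configuration example later in this chapter):
<HUAWEI> system-view
[~HUAWEI] telemetry
[~HUAWEI-telemetry] subscription subscription1
[*HUAWEI-telemetry-subscription-subscription1] sensor-group sensor1
[*HUAWEI-telemetry-subscription-subscription1] destination-group destination1
[*HUAWEI-telemetry-subscription-subscription1] commit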
Prerequisites
The static telemetry subscription functions have been configured.
Procedure
● Run the display telemetry sensor [ sensor-name ] command to check the
sampling sensor information.
● Run the display telemetry destination [ dest-name ] command to check
information about a destination group to which the data sampled is sent.
● Run the display telemetry subscription [ subscription-name ] command to
check the subscription information.
Usage Scenario
If a collector functioning as a client initiates a connection to a device functioning
as a server, the data sampled is dynamically subscribed to. In this case, you need
to configure the source IP address and number for a port to be listened, and
enable the gRPC service. The protocol used to send data is gRPC.
If the connection where dynamic telemetry subscription resides is interrupted, the
device automatically cancels the subscription and stops data sampling and
reporting. The configuration cannot be restored unless the collector sends a
connection request again. For example, if a user wants to monitor an interface for
a period of time, configure dynamic telemetry subscription. To stop monitoring,
tear down the connection. The subscription is automatically canceled and cannot
be restored. This avoids the long-term load on devices and simplifies the
interaction between users and devices.
Pre-configuration Tasks
Before configuring dynamic telemetry subscription, complete the following tasks:
● Configure a static route or dynamic routing protocol to ensure that devices
can communicate at the network layer.
● Create an ACL, if needed, for the gRPC service to control which clients can
connect to the server. For details about how to create an ACL, see "ACL
Configuration" in "IP Services" in the configuration guide.
● Create an SSL policy, if needed, for the gRPC service so that the server and
client can establish a secure SSL connection. For details about how to create
an SSL policy, see "Configuring and Binding an SSL Policy" in "Basic
Configuration" in the configuration guide.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run grpc
The gRPC view is displayed.
Step 3 Run grpc server
The gRPC server view is displayed.
An ACL is configured for the gRPC service during dynamic telemetry subscription.
An idle timeout period is configured for the gRPC service during dynamic telemetry
subscription.
An SSL policy is configured for the gRPC service during dynamic telemetry
subscription.
Step 11 (Optional) Configure a maximum usage for the main control board's CPU used
when telemetry collects data.
1. Run quit
Return to the gRPC view.
2. Run quit
Return to the system view.
3. Run telemetry
The telemetry view is displayed.
4. Run cpu-usage max-percent usage
A maximum usage is configured for the main control board's CPU used when
telemetry collects data.
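The following is a minimal sketch of the basic gRPC server setup for dynamic subscription, assuming a listening address of 192.168.1.1 and port 20000 (the values are illustrative; the ACL, idle timeout, SSL policy, and CPU usage steps are omitted):
<HUAWEI> system-view
[~HUAWEI] grpc
[~HUAWEI-grpc] grpc server
[~HUAWEI-grpc-server] source-ip 192.168.1.1
[*HUAWEI-grpc-server] server-port 20000
[*HUAWEI-grpc-server] server enable
[*HUAWEI-grpc-server] commit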
----End
Usage Scenario
In static telemetry subscription mode, a device initiates a connection to a collector
to send data collected.
● If the connection is interrupted, the device connects to the collector and sends
data again. However, the data sampled when the connection is being
established again is lost.
● After an active/standby switchover is performed or the system saves telemetry
service configurations and restarts, the telemetry service reloads its
configurations and continues to run. However, the data sampled during the
restart or active/standby switchover is lost.
Pre-configuration Tasks
Before configuring static telemetry subscription, configure a dynamic routing
protocol or static routes to ensure that devices can communicate at the network
layer.
Context
When static subscription is configured, a device functions as a client and an IPv6
collector functions as a server. To statically subscribe to the data sampled or a
customized event, you need to configure the IPv6 address and port number of an
IPv6 destination collector, and the protocol and encryption mode for data sending.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run telemetry
The telemetry view is displayed.
Step 3 Run destination-group destination-group-name
A destination group to which the sampled data is sent is created, and the
destination-group view is displayed.
Step 4 Run ipv6-address ip-address-ipv6 port port [ vpn-instance vpn-instance ]
[ protocol { grpc [ no-tls ] | udp } ]
An IPv6 address and a port number are configured for the destination collector,
and a protocol and an encryption mode are configured for data sending.
This command can be run no more than five times for each destination group.
Both this command and the protocol command in the subscription view can be run to
configure a protocol and encryption mode for the target collector. If the target collector is
associated with the subscription, command configurations take effect based on the
following rules:
● If the protocol command has been run in the subscription view, the protocol and
encryption mode configured in the subscription view take effect.
● If the protocol command is not run in the subscription view, the protocol and
encryption mode configured in the destination-group view take effect.
----End
Context
When static subscription is configured, a device functions as a client and a
collector functions as a server. To statically subscribe to the data sampled or a
customized event, you need to configure a source of the data to be sampled.
Procedure
● Configure the data to be sampled.
a. Run system-view
The system view is displayed.
b. Run telemetry
The telemetry view is displayed.
c. Run sensor-group sensor-name
A sampling sensor group is created, and the sensor-group view is displayed.
d. Run sensor-path path
A sampling path is configured for the telemetry sensor group.
e. Run filter filter-name
A filter is created for the sampling path, and the filter view is displayed.
Context
When static subscription is configured, a device functions as a client and a
collector functions as a server. To statically subscribe to the data sampled or a
customized event, you need to create a subscription to set up a data sending
channel. The protocol used to send data can be gRPC or UDP.
Before configuring an SSL policy on the client to establish a secure SSL connection
between the client and server, ensure that the SSL policy has been created. For
details about how to create an SSL policy, see "Configuring and Binding an SSL
Policy" in Configuration Guide - Basic Configuration.
Procedure
● Create a static subscription based on gRPC.
a. Run system-view
The system view is displayed.
b. Run telemetry
The telemetry view is displayed.
c. Run subscription subscription-name
A subscription is created, and the subscription view is displayed.
d. Run sensor-group sensor-name [ sample-interval sample-interval
{ [ suppress-redundant ] | [ heartbeat-interval heartbeat-interval ] } * ]
A sampling sensor group is associated with the subscription, and a
sampling interval, a heartbeat interval, and redundancy suppression are
configured for the sampling sensor group.
e. Run destination-group destination-group-name
A destination group is associated with the subscription.
f. (Optional) Run local-source-address ipv6 ipv6-address
A source IPv6 address is configured for gRPC-based data sending.
g. (Optional) Run dscp value
A DSCP value is set for data packets to be sent.
h. (Optional) Run encoding { json | gpb }
An encoding format is configured for data packets to be sent.
i. (Optional) Run protocol grpc [ no-tls ]
The protocol and encryption mode are configured for the destination
collector that is associated with this subscription.
The certificate to be loaded must be supported by both the client and server.
i. Run quit
Return to the system view.
ii. Run grpc
The gRPC view is displayed.
iii. Run grpc client
The gRPC client view is displayed.
iv. Run ssl-policy policy-name [ verify-cn cn-name ]
An SSL policy is configured for the client during static telemetry
subscription.
The protocol and encryption mode are configured for the destination
collector that is associated with this subscription.
j. (Optional) Configure a maximum usage for the main control board's CPU
used when telemetry collects data.
i. Run quit
Return to the telemetry view.
ii. Run cpu-usage max-percent usage
A maximum usage is configured for the main control board's CPU
used when telemetry collects data.
k. Run commit
The configuration is committed.
----End
Prerequisite
The static telemetry subscription functions have been configured.
Procedure
● Run the display telemetry sensor [ sensor-name ] command to check
sampling sensor information.
● Run the display telemetry destination [ dest-name ] command to check
information about a destination group to which the data sampled is sent.
● Run the display telemetry subscription [ subscription-name ] command to
check the subscription information.
Usage Scenario
If an IPv6 collector functioning as a client initiates a connection to a device
functioning as a server, the data sampled is dynamically subscribed to. In this case,
you need to configure the source IPv6 address and port number for listening, and
enable the gRPC service.
If the connection that carries dynamic telemetry subscription is interrupted, the
device automatically cancels the subscription and stops data sampling and
reporting. The configuration cannot be restored unless the collector sends a
connection request again. For example, if a user wants to monitor an interface for
a period of time, configure dynamic telemetry subscription. To stop monitoring,
tear down the connection. The subscription is automatically canceled and cannot
be restored. This avoids the long-term load on devices and simplifies the
interaction between users and devices.
Pre-configuration Tasks
Before configuring dynamic telemetry subscription, complete the following tasks:
Procedure
Step 1 Run system-view
An IPv6 ACL is configured for the gRPC service during dynamic telemetry
subscription.
An idle timeout period of the gRPC IPv6 service is set for dynamic telemetry
subscription.
An SSL policy is configured for the gRPC IPv6 service during dynamic telemetry
subscription.
----End
Networking Requirements
As the network scale increases, carriers need to optimize networks and rectify
faults based on device information. For example, if the CPU usage of a device
exceeds a specified threshold, the device reports data to a collector so that
network traffic can be monitored and optimized in a timely manner.
In this example, Interface1 and Interface2 represent GE 1/0/1 and GE 1/0/2, respectively.
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
● Collector's IP address 10.20.2.1 and port number 10001. Device A and the
collector must be routable.
Procedure
Step 1 Configure an IP address and a routing protocol for each interface so that all
devices can communicate at the network layer.
Step 2 Configure a destination collector.
<HUAWEI> system-view
[~HUAWEI] sysname DeviceA
[*HUAWEI] commit
[~DeviceA] telemetry
[~DeviceA-telemetry] destination-group destination1
[*DeviceA-telemetry-destination-group-destination1] ipv4-address 10.20.2.1 port 10001 protocol grpc no-
tls
If the device connects to the collector using the IPv6 address, run the ipv6-address ip-
address-ipv6 port port [ vpn-instance vpn-instance ] [ protocol { grpc [ no-tls ] | udp } ]
command to configure the IPv6 address and port number of the destination collector.
[*DeviceA-telemetry-destination-group-destination1] quit
Step 3 Configure the data to be sampled and a customized event. When the value of
osMemoryUsage in the sampling path huawei-debug:debug/memory-infos/
memory-info is greater than 50, a customized event is reported.
[*DeviceA-telemetry] sensor-group sensor1
[*DeviceA-telemetry-sensor-group-sensor1] sensor-path huawei-debug:debug/cpu-infos/cpu-info
[*DeviceA-telemetry-sensor-group-sensor1-path] filter cpuinfo
[*DeviceA-telemetry-sensor-group-sensor1-path-filter-cpuinfo] op-field systemCpuUsage op-type gt op-
value 40
[*DeviceA-telemetry-sensor-group-sensor1-path-filter-cpuinfo] quit
[*DeviceA-telemetry-sensor-group-sensor1-path] quit
[*DeviceA-telemetry-sensor-group-sensor1] sensor-path huawei-debug:debug/memory-infos/memory-
info self-defined-event
[*DeviceA-telemetry-sensor-group-sensor1-self-defined-event-path] filter meminfo
[*DeviceA-telemetry-sensor-group-sensor1-self-defined-event-path-filter-meminfo] op-field
osMemoryUsage op-type gt op-value 50
[*DeviceA-telemetry-sensor-group-sensor1-self-defined-event-path-filter-meminfo] quit
[*DeviceA-telemetry-sensor-group-sensor1-self-defined-event-path] quit
[*DeviceA-telemetry-sensor-group-sensor1] quit
----End
Configuration Files
Device A configuration file
#
sysname DeviceA
#
telemetry
#
sensor-group sensor1
sensor-path huawei-debug:debug/cpu-infos/cpu-info
filter cpuinfo
op-field systemCpuUsage op-type gt op-value 40
sensor-path huawei-debug:debug/memory-infos/memory-info self-defined-event
filter meminfo
op-field osMemoryUsage op-type gt op-value 50
#
destination-group destination1
ipv4-address 10.20.2.1 port 10001 protocol grpc no-tls
#
subscription subscription1
sensor-group sensor1
destination-group destination1
#
return
Networking Requirements
As the network scale increases, carriers need to optimize networks and rectify
faults based on device information. For example, if the CPU usage of a device
exceeds a specified threshold, the device reports data to a collector so that
network traffic can be monitored and optimized in a timely manner.
As shown in Figure 6-2, Device A supports telemetry and establishes a UDP
connection with the collector. When the CPU usage of Device A exceeds 40%, data
needs to be sent to the collector. When the system memory usage of Device A
exceeds 50%, a customized event needs to be sent to the collector.
In this example, Interface1 and Interface2 represent GE 1/0/0 and GE 2/0/0, respectively.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure a destination collector.
2. Configure the data to be sampled and a customized event.
3. Create a subscription.
Data Preparation
To complete the configuration, you need the following data:
● Collector's IP address 10.20.2.1 and port number 10001; IP address of Device
A's Interface1 192.168.1.1 and port number 11111
● Destination group name destination1
● Sampling sensor group name sensor1
● Subscription name subscription1
Procedure
Step 1 Configure an IP address and a routing protocol for each interface so that all
devices can communicate at the network layer.
Step 2 Configure a destination collector.
<HUAWEI> system-view
[~HUAWEI] sysname DeviceA
[*HUAWEI] commit
[~DeviceA] telemetry
[~DeviceA-telemetry] destination-group destination1
[*DeviceA-telemetry-destination-group-destination1] ipv4-address 10.20.2.1 port 10001 protocol udp
If the device connects to the destination collector using an IPv6 address, you must run the
ipv6-address ip-address port port [ vpn-instance vpn-instance ] [ protocol udp ]
command to configure the IPv6 address and port number of the destination collector.
[*DeviceA-telemetry-destination-group-destination1] quit
If the device connects to the destination collector using an IPv6 address, run the local-
source-address ipv6 ip-address port port command to configure the source IPv6 address
and source port number.
[*DeviceA-telemetry-subscription-subscription1] commit
----End
Configuration Files
Device A configuration file
#
sysname DeviceA
#
telemetry
#
sensor-group sensor1
sensor-path huawei-debug:debug/cpu-infos/cpu-info
filter cpuinfo
op-field systemCpuUsage op-type gt op-value 40
sensor-path huawei-debug:debug/memory-infos/memory-info self-defined-event
filter memoryinfo
op-field osMemoryUsage op-type gt op-value 50
#
destination-group destination1
ipv4-address 10.20.2.1 port 10001 protocol udp
#
subscription subscription1
sensor-group sensor1
destination-group destination1
#
return
Networking Requirements
As the network scale increases, carriers need to optimize networks and rectify
faults based on device information. For example, if a user wants to monitor an
interface for a period of time, configure dynamic telemetry subscription. To stop
monitoring, tear down the connection. The subscription is automatically canceled
and cannot be restored. This avoids the long-term load on devices and simplifies
the interaction between users and devices.
As shown in Figure 6-3, Device A supports telemetry and establishes a gRPC
connection with the collector. Interface1 of Device A needs to be monitored, and
data needs to be sent to the collector as required.
In this example, Interface1 and Interface2 represent GE 1/0/1 and GE 1/0/2, respectively.
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
● IP address of Interface1: 192.168.1.1; Name of the VPN instance to be bound
to Interface1: vpn1
● Number of the port to be listened for: 20000
Procedure
Step 1 Configure an IP address and a routing protocol for each interface so that all
devices can communicate at the network layer.
Step 2 Configure the source IP address to be listened on during dynamic telemetry
subscription and the name of the VPN instance to be bound to the source IP address.
<HUAWEI> system-view
[~HUAWEI] sysname DeviceA
[*HUAWEI] commit
[~DeviceA] grpc
[~DeviceA-grpc] grpc server
[~DeviceA-grpc-server] source-ip 192.168.1.1 vpn-instance vpn1
Step 3 Configure the number of the port to be listened on during dynamic telemetry
subscription.
[*DeviceA-grpc-server] server-port 20000
----End
Configuration Files
Device A configuration file
#
sysname DeviceA
#
grpc
#
grpc server
source-ip 192.168.1.1 vpn-instance vpn1
server-port 20000
server enable
#
return
Networking Requirements
As the network scale increases, carriers need to optimize networks and rectify
faults based on device information. For example, if a user wants to monitor an
interface for a period of time, configure dynamic telemetry subscription. To stop
monitoring, tear down the connection. The subscription is automatically canceled
and cannot be restored. This avoids the long-term load on devices and simplifies
the interaction between users and devices.
In this example, Interface1 and Interface2 represent GE1/0/1 and GE1/0/2, respectively.
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
● IPv6 address of Interface1: 2001:db8:4::1 (Interface1 on Device A and the
collector must be routable.)
● Number of the port to be listened for: 20000
Procedure
Step 1 Configure an IPv6 address and a routing protocol for each interface so that all
devices can communicate at the network layer.
Step 2 Set the source IPv6 address to be listened for during dynamic telemetry
subscription.
<HUAWEI> system-view
[~HUAWEI] sysname DeviceA
[*HUAWEI] commit
[~DeviceA] grpc
[~DeviceA-grpc] grpc server ipv6
[~DeviceA-grpc-server-ipv6] source-ip 2001:db8:4::1
Step 3 Configure the number of the port to be listened for during dynamic telemetry
subscription.
[*DeviceA-grpc-server-ipv6] server-port 20000
----End
Configuration Files
Device A configuration file
#
sysname DeviceA
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:db8:4::1/64
#
ipv6 route-static 2001:db8:3:: 64 GigabitEthernet1/0/0 2001:db8:4::2
#
grpc
#
grpc server ipv6
source-ip 2001:db8:4::1
server-port 20000
server enable
#
return
7 TWAMP Configuration
Background
As networks develop rapidly and applications become more widespread, various
services are deployed to meet requirements in different scenarios. Therefore, networks
Advantages
TWAMP has the following advantages over the traditional tools that collect
statistics about IP network performance:
● TWAMP is a standard protocol that has a unified measurement model and
packet format, facilitating deployment.
● Multiprotocol Label Switching Transport Profile (MPLS-TP) Operation,
Administration and Maintenance (OAM) can be deployed only on MPLS-TP
networks, whereas TWAMP can be deployed on IP networks, MPLS networks,
and Layer 3 virtual private networks (L3VPNs).
● Compared with IP Flow Performance Measurement (FPM), TWAMP boasts
stronger availability and easier deployment and requires no clock
synchronization.
Models
TWAMP uses the client/server mode and defines four logical entities, as shown in
Figure 7-1.
● Control-client: establishes, starts, and stops a test session and collects
statistics.
● Session-sender: proactively sends probes for performance statistics after being
notified by the control-client.
● Server: responds to the control-client's request for establishing, starting, or
stopping a test session.
● Session-reflector: replies to the probes sent by the session-sender with
response probes after being notified by the server.
In TWAMP, TCP packets are used as control signals, and UDP packets are used as
probes.
Configuration Precautions
Restrictions Guidelines Impact
Applicable Environment
TWAMP applies to scenarios in which statistics on the IP network performance
must be quickly obtained but do not need to be highly accurate.
Currently, a Huawei device can function only as the server and session-reflector. To
implement TWAMP, ensure that devices that can function as the control-client and session-
sender exist on the network.
Pre-configuration Tasks
Before configuring TWAMP, complete the following tasks:
● Ensure that some devices on the live network can function as the control-
client and session-sender and comply with relevant standards.
● Ensure that the control-client and server are routable and the links between
them work properly.
Data Preparation
To configure TWAMP, you need the following data.
No. Data
1 (Optional) TCP port number and inactive interval for a control session
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run nqa twamp
The TWAMP view is displayed.
Step 3 Run server
The server function is enabled, and the server view is displayed.
Step 4 (Optional) Run tcp port port-number [ all | vpn-instance vpn-instance-name ]
A TCP port is specified.
Step 5 (Optional) Run control-session inactive time-out
An inactive interval is configured for a control session.
Step 6 (Optional) Run client acl { aclnumBasic | aclnumAdv | aclname }
The ACL rule to be referenced is configured.
Step 7 Run commit
The configuration is committed.
----End
Context
After the session-reflector is configured, the session-reflector can reply to the
session-sender with timestamps and serial numbers to help collect statistics about
the delay, jitter, and packet loss rate.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run nqa twamp
The TWAMP view is displayed.
Step 3 Run reflector
The session-reflector function is enabled, and the session-reflector view is
displayed.
Step 4 (Optional) Run test-session inactive timeout
An inactive interval is configured for a test session.
Step 5 Run commit
The configuration is committed.
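The following is a minimal command sketch, assuming an inactive interval of 600 seconds for test sessions (the value and the view prompts are illustrative):
<HUAWEI> system-view
[~HUAWEI] nqa twamp
[~HUAWEI-twamp] reflector
[*HUAWEI-twamp-reflector] test-session inactive 600
[*HUAWEI-twamp-reflector] commit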
----End
Prerequisites
TWAMP has been configured.
Procedure
● Run the display twamp global-info command to check global information
about TWAMP.
● Run the display twamp control-session [ verbose | client-ip client-ip-
address client-port client-port-number [ vpn-instance vpn-instance-name ] ]
command to check information about control sessions on the server.
● Run the display twamp test-session [ verbose | reflector-ip reflector-ip-
address reflector-port reflector-port-number [ vpn-instance vpn-instance-
name ] ] command to check information about test sessions on the session-
reflector.
----End
Networking Requirements
As shown in Figure 7-2, DeviceA on an IP network functions as the server in a
TWAMP test. DeviceB functions as the control-client and specifies the IP address
of DeviceA to start collecting statistics. DeviceB sends statistics to the performance
management system.
Configuration Roadmap
The configuration roadmap for DeviceA is as follows:
1. Configure the server.
2. Configure the session-reflector.
Data Preparation
To complete the configuration, you need the following data:
● IP address of DeviceA
● TCP port number
● Inactive interval for a control session
● Inactive interval for a test session
Procedure
Step 1 Configure DeviceA, DeviceB, and the performance management system to be
routable. The configuration details are not provided here.
Step 2 Configure the server.
<DeviceA> system-view
[~DeviceA] nqa twamp
[~DeviceA-twamp] server
[*DeviceA-twamp-srv] tcp port 65530
[*DeviceA-twamp-srv] control-session inactive 600
[*DeviceA-twamp-srv] quit
----End
Configuration Files
● DeviceA configuration file
#
sysname DeviceA
#
nqa twamp
server
tcp port 65530
control-session inactive 600
reflector
test-session inactive 600
#
return
Networking Requirements
On the L3 VXLAN shown in Figure 7-3, DeviceB functions as the server in a
TWAMP test. DeviceA functions as the control-client and specifies the IP address
of DeviceB to start collecting statistics. DeviceA sends statistics to the performance
management system.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure a VXLAN tunnel between DeviceA and DeviceB.
2. Configure the server on DeviceB.
3. Configure the session-reflector on DeviceB.
Data Preparation
To complete the configuration, you need the following data:
● IP addresses of interfaces connecting devices
● TCP port number
Procedure
Step 1 Assign an IP address to each node interface, including the loopback interface.
For configuration details, see Configuration Files.
Step 2 Configure an IGP (IS-IS in this example) on the backbone network.
For configuration details, see Configuration Files.
Step 3 Configure a VXLAN tunnel between DeviceA and DeviceB.
For the configuration roadmap, see VXLAN Configuration. For configuration
details, see Configuration Files.
After a VXLAN tunnel is established, you can run the display vxlan tunnel
command on DeviceA to view VXLAN tunnel information. Use the command
output on DeviceA as an example.
[~DeviceA] display vxlan tunnel
Number of vxlan tunnel : 1
Tunnel ID Source Destination State Type Uptime
-----------------------------------------------------------------------------------
4026531841 1.1.1.1 2.2.2.2 up dynamic 00:12:56
Step 4 Set the forwarding mode of the VXLAN tunnel to hardware loopback.
# Configure DeviceA.
[~DeviceA] global-gre forward-mode loopback
# Configure DeviceB.
[~DeviceB] global-gre forward-mode loopback
----End
Configuration Files
● DeviceA configuration file
#
sysname DeviceA
#
evpn vpn-instance evrf3 bd-mode
route-distinguisher 10:1
apply-label per-instance
vpn-target 11:1 export-extcommunity
vpn-target 11:1 import-extcommunity
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 11:11
apply-label per-instance
vpn-target 1:1 export-extcommunity
vpn-target 11:1 export-extcommunity evpn
vpn-target 1:1 import-extcommunity
vpn-target 11:1 import-extcommunity evpn
vxlan vni 5010
#
bridge-domain 10
interface Vbdif20
ip binding vpn-instance vpn1
ip address 10.2.1.1 255.255.255.0
arp distribute-gateway enable
arp collect host enable
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 192.168.2.2 255.255.255.0
isis enable 1
#
interface LoopBack0
ip address 2.2.2.2 255.255.255.255
isis enable 1
#
interface Nve1
source 2.2.2.2
vni 20 head-end peer-list protocol bgp
#
bgp 100
peer 1.1.1.1 as-number 100
peer 1.1.1.1 connect-interface LoopBack0
#
ipv4-family unicast
undo synchronization
peer 1.1.1.1 enable
#
l2vpn-family evpn
undo policy vpn-target
peer 1.1.1.1 enable
peer 1.1.1.1 advertise irb
peer 1.1.1.1 advertise encap-type vxlan
#
nqa twamp
server
tcp port 65530 vpn-instance vpn1
reflector
#
global-gre forward-mode loopback
#
return
Networking Requirements
On the EVPN L3VPN shown in Figure 7-4, DeviceB functions as the server in a
TWAMP test. DeviceA functions as the control-client and specifies the IP address
of DeviceB to start collecting statistics. DeviceA sends statistics to the performance
management system.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure an EVPN L3VPN.
2. Configure the server.
3. Configure the session-reflector.
Data Preparation
To complete the configuration, you need the following data:
● IP addresses of interfaces connecting devices
● TCP port number
Procedure
Step 1 Assign an IP address to each node interface, including the loopback interface.
For configuration details, see Configuration Files.
Step 2 Configure an IGP (IS-IS in this example) on the backbone network.
For configuration details, see Configuration Files.
Step 3 Configure an IS-IS SR-MPLS BE tunnel between DeviceA and DeviceB.
For the configuration roadmap, see Configuring an IS-IS SR-MPLS BE Tunnel. For
configuration details, see Configuration Files.
Step 4 Configure an EVPN L3VPN between DeviceA and DeviceB.
For configuration details, see Configuring an EVPN to Carry Layer 3 Services. For
configuration details, see Configuration Files.
Step 5 Configure the server.
<DeviceB> system-view
[~DeviceB] nqa twamp
[*DeviceB-twamp] server
----End
Configuration Files
● DeviceA configuration file
#
sysname DeviceA
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 200:1
apply-label per-instance
tnl-policy SR-MPLS BE
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
mpls lsr-id 1.1.1.3
#
mpls
#
segment-routing
tunnel-prefer segment-routing
#
isis 1
is-level level-2
cost-style wide
network-entity 00.0005.0000.0000.0012.00
traffic-eng level-2
segment-routing mpls
segment-routing global-block 200000 201000
#
interface GigabitEthernet1/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 192.168.2.2 255.255.255.0
#
interface LoopBack0
ip address 1.1.1.3 255.255.255.255
isis enable 1
isis prefix-sid index 20
#
bgp 100
peer 1.1.1.2 as-number 100
peer 1.1.1.2 connect-interface LoopBack0
#
ipv4-family unicast
undo synchronization
import-route direct
peer 1.1.1.2 enable
#
ipv4-family vpnv4
policy vpn-target
peer 1.1.1.2 enable
#
ipv4-family vpn-instance vpna
import-route direct
peer 1.1.1.2 as-number 100
advertise l2vpn evpn
#
l2vpn-family evpn
undo policy vpn-target
peer 1.1.1.2 enable
#
tunnel-policy SR-MPLS BE
tunnel select-seq lsp load-balance-number 1
#
return
● DeviceB configuration file
#
sysname DeviceB
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 100:1
apply-label per-instance
tnl-policy SR-MPLS BE
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
mpls lsr-id 1.1.1.2
#
mpls
#
segment-routing
tunnel-prefer segment-routing
#
isis 1
is-level level-2
cost-style wide
network-entity 00.0005.0000.0000.0010.00
traffic-eng level-2
segment-routing mpls
segment-routing global-block 200000 201000
#
interface GigabitEthernet1/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 192.168.2.1 255.255.255.0
#
interface LoopBack0
ip address 1.1.1.2 255.255.255.255
isis enable 1
isis prefix-sid index 10
#
bgp 100
peer 1.1.1.3 as-number 100
peer 1.1.1.3 connect-interface LoopBack0
#
ipv4-family unicast
undo synchronization
import-route direct
peer 1.1.1.3 enable
#
ipv4-family vpnv4
policy vpn-target
peer 1.1.1.3 enable
#
ipv4-family vpn-instance vpna
import-route direct
peer 1.1.1.3 as-number 100
advertise l2vpn evpn
#
l2vpn-family evpn
undo policy vpn-target
peer 1.1.1.3 enable
#
nqa twamp
server
tcp port 65530 vpn-instance vpna
reflector
#
tunnel-policy SR-MPLS BE
tunnel select-seq lsp load-balance-number 1
#
return
The Two-Way Active Measurement Protocol (TWAMP) Light function rapidly and
flexibly measures the round-trip performance of an IP network.
Background
TWAMP is an IP performance monitoring (IPPM) protocol and has two versions:
standard version and light version. Different from standard TWAMP, TWAMP Light
moves the control plane from the Responder to the Controller so that TWAMP
control modules can be simply deployed on the Controller. Therefore, TWAMP
Light greatly relaxes its requirements on Responder performance, allowing the
Responder to be rapidly deployed.
Characteristic
TWAMP Light deploys the Session-Sender on the Controller, and deploys the
Session-Reflector on the Responder.
The Controller creates test sessions, collects performance statistics, and reports
statistics to the NMS using Performance Management (PM) or MIBs. After that,
the Controller parses NMS information and sends the results to the Responder
through private channels. The Responder merely responds to TWAMP-Test packets
received over test sessions.
Models
In Figure 8-1, TWAMP-Test packets function as probes and carry the IP address,
UDP port number, and fixed TTL value of 255 that are predefined for the test
session between the Controller and Responder. The Controller sends a TWAMP-
Test packet to the Responder, and the Responder replies to it. The Controller
collects TWAMP statistics.
TWAMP Light defines two types of TWAMP-Test packets: Test-request packets and
Test-response packets.
● Test-request packets are sent from the Controller to the Responder.
● Test-response packets are replied by the Responder to the Controller.
Configuration Precautions
Restrictions Guidelines Impact
Usage Scenario
As TWAMP Light simplifies deployment and supports plug-and-play, you can use
TWAMP Light to rapidly and flexibly measure the round-trip performance of an IP
network, such as the two-way packet loss rate, jitter, and delay.
Pre-configuration Tasks
Before configuring TWAMP Light functions, complete the following tasks:
● Ensure that devices on the live network support TWAMP Light and comply
with standard protocols.
● Ensure that the Controller and Responder are routable and IP links between
them work properly.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run nqa twamp-light
The TWAMP Light view is displayed.
Step 3 Run responder
The TWAMP Light Responder is enabled, and the TWAMP Light Responder view is
displayed.
● After a test session is configured, its parameters cannot be modified. To modify parameters
of a test session, delete the session and reconfigure it.
● The IP address configured for a test session must be a unicast address.
● The UDP port number of the Responder must be a port number not in use.
● The VPN instance configured for a test session must exist. If you attempt to delete the
VPN instance, the system displays a message indicating that it cannot be deleted because
it has been bound to a TWAMP test session.
● In a Layer 2 and Layer 3 hybrid network scenario where base stations are offline, static ARP
must be configured on Layer 3 virtual interfaces of devices at the edge of Layer 2 and Layer
3 networks.
----End
Context
To configure TWAMP Light, configure the Responder first and then the
Controller. If only the Controller is configured, the Responder sends a large
number of TWAMP Light packets received from the Controller to the LDM and
responds to the Controller with a large number of ICMP packets.
Procedure
Step 1 Configure the Control-Client and create a test session.
1. Run system-view
The system view is displayed.
2. Run nqa twamp-light
The TWAMP Light view is displayed.
3. Run client
The TWAMP Light Control-Client is enabled, and the TWAMP Light Control-
Client view is displayed.
4. Run test-session session-id { sender-ip sender-ip-address reflector-ip
reflector-ip-address | sender-ipv6 sender-address-v6 reflector-ipv6 reflector-
address-v6 } sender-port sender-port reflector-port reflector-port [ vpn-
instance vpn-instance-name ] [ dscp dscp-value | padding padding-length |
padding-type padding-type | description description ] *
A TWAMP Light test session is created.
– After a test session is configured, its parameters cannot be modified. To
modify parameters of a test session, delete the session and reconfigure it.
– The IP address configured for a test session must be a unicast address.
– The UDP port number of a Controller must be an unused port number.
– The VPN instance configured for a test session must exist. If you
attempt to delete the VPN instance, the system displays a message
indicating that it cannot be deleted because it has been bound to a
TWAMP test session.
– To test packets with different DSCP values, you can specify different UDP
port numbers for test sessions. For example:
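(The session IDs, IP addresses, port numbers, and DSCP values below are illustrative only.)
[*DeviceA-twamp-light-client] test-session 1 sender-ip 1.1.1.1 reflector-ip 2.2.2.2 sender-port 2001 reflector-port 2010 dscp 46
[*DeviceA-twamp-light-client] test-session 2 sender-ip 1.1.1.1 reflector-ip 2.2.2.2 sender-port 2002 reflector-port 2011 dscp 26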
When configuring a TWAMP Light client to send IPv6 packets, ensure that the length
of the IPv6 packets to be sent is smaller than the smallest MTU configured on
interfaces along the path. Otherwise, packets are discarded.
Before the configuration, perform the ping test. Ensure that the source address,
destination address, and packet length of the ping packet are the same as those of the
TWAMP Light IPv6 packet. Then run the display ipv6 pathmtu command to check the
PMTU value of each interface along the path. For details, see Path MTU Test.
5. (Optional) Run test-session session-id bind interface { interface-type
interface-number | interface-name }
An interface is bound to the TWAMP Light test session.
After a TWAMP Light test session is bound to an interface and valid statistics
are collected, the statistics are advertised to the bound interface. Other
function modules can obtain the statistics from the interface. For example, an
IGP obtains the statistics from the interface and reports the statistics to the
Controller for path calculation.
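For example, to bind test session 1 to an interface (the session ID and interface are illustrative):
[*DeviceA-twamp-light-client] test-session 1 bind interface GigabitEthernet 1/0/1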
Step 2 Configure the TWAMP Light Session-Sender and start the TWAMP Light test
session.
1. Run system-view
The system view is displayed.
2. Run nqa twamp-light
The TWAMP Light view is displayed.
3. Run sender
The TWAMP Light Session-Sender is enabled, and the TWAMP Light Session-
Sender view is displayed.
4. Start the TWAMP Light function.
– To perform on-demand measurement, run the test start test-session
session-id { duration duration | packet-count packet-count } [ period
{ 10 | 100 | 1000 | 30000 } ] [ time-out time-out ] command.
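For example, to start an on-demand test that sends 100 packets with a period of 1000, or to start continual measurement with a period of 10 (the session ID and parameter values are illustrative; the continual command is the one used in the configuration examples in this chapter):
[*DeviceA-twamp-light-sender] test start test-session 1 packet-count 100 period 1000
[*DeviceA-twamp-light-sender] test start-continual test-session 1 period 10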
----End
Prerequisites
The TWAMP Light statistics collection function has been configured.
Procedure
● Run the display twamp-light test-session [ verbose | session-id ] command
to check real-time statistics about a specified TWAMP Light test session.
● Run the display twamp-light statistic-type { twoway-delay | twoway-loss }
test-session session-id command to check statistics about the two-way
packet delay or loss of a specified TWAMP Light test session.
----End
Context
NOTICE
TWAMP Light session statistics cannot be restored after being cleared. Exercise
caution when clearing the statistics.
Procedure
● Run the reset twamp-light statistics { all | test-session session-id }
command to clear TWAMP Light session statistics.
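A minimal usage sketch (the session ID is illustrative):
<DeviceA> reset twamp-light statistics test-session 1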
----End
Networking Requirements
On the IP network shown in Figure 8-2, DeviceA functions as the Controller, and
DeviceB functions as the Responder.
● DeviceA: sends and receives packets over a test session, collects and calculates
performance statistics, and reports the statistics to the NMS.
● DeviceB: responds to the packets received over a test session.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure an IP address and a routing protocol for each interface so that all
devices can communicate at the network layer.
2. Configure the TWAMP Light Responder on DeviceB.
3. Configure the TWAMP Light Controller on DeviceA.
Data Preparation
To complete the configuration, you need the following data:
● DeviceB (Responder)
– IP address 2.2.2.2
– UDP port number 2010
● DeviceA (Controller)
– IP address 1.1.1.1
– UDP port number 2001
Procedure
Step 1 Configure an IP address and a routing protocol for each interface so that all
devices can communicate at the network layer. The configuration details are not
provided here.
Step 2 Configure the TWAMP Light Responder.
<DeviceB> system-view
[~DeviceB] nqa twamp-light
[~DeviceB-twamp-light] responder
[~DeviceB-twamp-light-responder] test-session 1 local-ip 2.2.2.2 remote-ip 1.1.1.1 local-port 2010
remote-port 2001
Step 3 Configure the TWAMP Light Controller through TWAMP Light Client configuration.
<DeviceA> system-view
[~DeviceA] nqa twamp-light
[~DeviceA-twamp-light] client
[~DeviceA-twamp-light-client] test-session 1 sender-ip 1.1.1.1 reflector-ip 2.2.2.2 sender-port 2001
reflector-port 2010
Step 4 Configure the TWAMP Light Controller through TWAMP Light Sender
configuration.
<DeviceA> system-view
[~DeviceA] nqa twamp-light
[~DeviceA-twamp-light] sender
[~DeviceA-twamp-light-sender] test start-continual test-session 1 period 10
# Check statistics about the two-way delay of a TWAMP Light session on DeviceA.
[~DeviceA] display twamp-light statistic-type twoway-delay test-session 1
Latest two-way delay statistics(usec):
--------------------------------------------------------------------------------
Index Delay(Avg) Jitter(Avg)
--------------------------------------------------------------------------------
15 170 3
16 170 3
17 171 3
18 170 3
19 170 3
20 170 3
21 170 3
22 170 3
23 170 3
24 170 3
25 170 3
26 170 3
27 170 3
28 170 3
29 170 3
30 170 3
31 170 3
32 170 3
33 170 3
34 170 3
35 170 3
36 170 3
37 170 3
38 170 3
39 170 3
40 170 3
41 170 3
42 170 3
43 170 3
44 170 3
--------------------------------------------------------------------------------
Average Delay : 170 Average Jitter : 3
Maximum Delay : 187 Maximum Jitter : 22
Minimum Delay : 162 Minimum Jitter : 0
# Check statistics about the two-way packet loss of a TWAMP Light session on
DeviceA.
[~DeviceA] display twamp-light statistic-type twoway-loss test-session 1
Latest two-way loss statistics:
--------------------------------------------------------------------------------
Index Loss count Loss ratio
--------------------------------------------------------------------------------
26 0 0.0000%
27 0 0.0000%
28 0 0.0000%
29 0 0.0000%
30 0 0.0000%
31 0 0.0000%
32 0 0.0000%
33 0 0.0000%
34 0 0.0000%
35 0 0.0000%
36 0 0.0000%
37 0 0.0000%
38 0 0.0000%
39 0 0.0000%
40 0 0.0000%
41 0 0.0000%
42 0 0.0000%
43 0 0.0000%
44 0 0.0000%
45 0 0.0000%
46 0 0.0000%
47 0 0.0000%
48 0 0.0000%
49 0 0.0000%
50 0 0.0000%
51 0 0.0000%
52 0 0.0000%
53 0 0.0000%
54 0 0.0000%
55 0 0.0000%
--------------------------------------------------------------------------------
Average Loss Count : 0 Average Loss Ratio : 0.0000%
Maximum Loss Count : 0 Maximum Loss Ratio : 0.0000%
Minimum Loss Count : 0 Minimum Loss Ratio : 0.0000%
----End
Configuration Files
DeviceA configuration file
#
sysname DeviceA
#
nqa twamp-light
client
test-session 1 sender-ip 1.1.1.1 reflector-ip 2.2.2.2 sender-port 2001 reflector-port 2010
sender
#
Networking Requirements
On the L3 VXLAN network shown in Figure 8-3, DeviceA functions as the Responder and
DeviceB functions as the Controller.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure a VXLAN tunnel between DeviceA and DeviceB.
2. Configure the TWAMP Light Responder on DeviceA.
3. Configure the TWAMP Light Controller on DeviceB.
Data Preparation
To complete the configuration, you need the following data:
● IP addresses of interfaces connecting devices
● UDP port number
Procedure
Step 1 Assign an IP address to each node interface, including the loopback interface.
For configuration details, see Configuration Files.
Step 2 Configure an IGP (IS-IS in this example) on the backbone network.
For configuration details, see Configuration Files.
Step 3 Configure a VXLAN tunnel between DeviceA and DeviceB.
For the configuration roadmap, see VXLAN Configuration. For configuration
details, see Configuration Files.
After a VXLAN tunnel is established, you can run the display vxlan tunnel
command on DeviceA to display VXLAN tunnel information. The following
example uses the command output on DeviceA.
[~DeviceA] display vxlan tunnel
Number of vxlan tunnel : 1
Tunnel ID Source Destination State Type Uptime
-----------------------------------------------------------------------------------
4026531841 1.1.1.1 2.2.2.2 up dynamic 00:12:56
Step 4 Set the forwarding mode of the VXLAN tunnel to hardware loopback.
# Configure DeviceA.
[~DeviceA] global-gre forward-mode loopback
# Configure DeviceB.
[~DeviceB] global-gre forward-mode loopback
--------------------------------------------------------------------------------
Average Delay : 170 Average Jitter : 3
Maximum Delay : 187 Maximum Jitter : 22
Minimum Delay : 162 Minimum Jitter : 0
----End
Configuration Files
● DeviceA configuration file
#
sysname DeviceA
#
evpn vpn-instance evrf3 bd-mode
route-distinguisher 20:1
apply-label per-instance
vpn-target 11:1 export-extcommunity
vpn-target 11:1 import-extcommunity
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 22:22
apply-label per-instance
vpn-target 2:2 export-extcommunity
vpn-target 11:1 export-extcommunity evpn
vpn-target 2:2 import-extcommunity
#
bridge-domain 10
vxlan vni 10 split-horizon-mode
evpn binding vpn-instance evrf3
#
isis 1
network-entity 10.0000.0000.0001.00
#
interface Vbdif10
ip binding vpn-instance vpn1
ip address 10.1.1.1 255.255.255.0
arp distribute-gateway enable
arp collect host enable
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 192.168.2.1 255.255.255.0
isis enable 1
#
interface LoopBack0
ip address 1.1.1.1 255.255.255.255
isis enable 1
#
interface Nve1
source 1.1.1.1
vni 10 head-end peer-list protocol bgp
#
bgp 100
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack0
#
ipv4-family unicast
undo synchronization
peer 2.2.2.2 enable
#
l2vpn-family evpn
undo policy vpn-target
peer 2.2.2.2 enable
peer 2.2.2.2 advertise irb
peer 2.2.2.2 advertise encap-type vxlan
#
nqa twamp-light
client
test-session 1 sender-ip 192.168.2.1 reflector-ip 192.168.2.2 sender-port 2001 reflector-port 2010
sender
#
global-gre forward-mode loopback
#
return
Networking Requirements
On the EVPN L3VPN shown in Figure 8-4, DeviceB functions as the Responder and
DeviceA functions as the Controller.
● DeviceA: sends and receives packets over a test session, collects and calculates
performance statistics, and reports the statistics to the performance
management system.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure an EVPN L3VPN.
2. Configure the TWAMP Light Controller.
3. Configure the TWAMP Light Responder.
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Assign an IP address to each node interface, including the loopback interface.
550 0 0.000%
551 0 0.000%
552 0 0.000%
553 0 0.000%
554 0 0.000%
555 0 0.000%
556 0 0.000%
557 0 0.000%
558 0 0.000%
559 0 0.000%
560 0 0.000%
561 0 0.000%
562 0 0.000%
563 0 0.000%
564 0 0.000%
565 0 0.000%
566 0 0.000%
567 0 0.000%
--------------------------------------------------------------------------------
Average Loss Count : 0 Average Loss Ratio : 0.000%
Maximum Loss Count : 0 Maximum Loss Ratio : 0.000%
Minimum Loss Count : 0 Minimum Loss Ratio : 0.000%
----End
Configuration Files
● DeviceA configuration file
#
sysname DeviceA
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 200:1
tnl-policy SR-MPLS-BE evpn
vpn-target 111:1 export-extcommunity evpn
vpn-target 111:1 import-extcommunity evpn
#
mpls lsr-id 1.1.1.3
#
mpls
#
segment-routing
tunnel-prefer segment-routing
#
isis 1
is-level level-2
cost-style wide
network-entity 00.0005.0000.0000.0012.00
traffic-eng level-2
segment-routing mpls
segment-routing global-block 200000 201000
#
interface GigabitEthernet1/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 192.168.2.2 255.255.255.0
#
interface LoopBack0
ip address 1.1.1.3 255.255.255.255
isis enable 1
isis prefix-sid index 20
#
bgp 100
peer 1.1.1.2 as-number 100
peer 1.1.1.2 connect-interface LoopBack0
#
ipv4-family unicast
undo synchronization
import-route direct
peer 1.1.1.2 enable
#
ipv4-family vpnv4
policy vpn-target
peer 1.1.1.2 enable
#
ipv4-family vpn-instance vpna
import-route direct
peer 1.1.1.2 as-number 100
advertise l2vpn evpn
#
l2vpn-family evpn
undo policy vpn-target
peer 1.1.1.2 enable
#
nqa twamp-light
client
test-session 100 sender-ip 192.168.2.2 reflector-ip 192.168.2.1 sender-port 2000 reflector-port 3000
vpn-instance vpna padding 1454
sender
#
tunnel-policy SR-MPLS-BE
tunnel select-seq lsp load-balance-number 1
#
return
● DeviceB configuration file
#
sysname DeviceB
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 100:1
tnl-policy SR-MPLS-BE evpn
vpn-target 111:1 export-extcommunity evpn
vpn-target 111:1 import-extcommunity evpn
#
mpls lsr-id 1.1.1.2
#
mpls
#
segment-routing
tunnel-prefer segment-routing
#
isis 1
is-level level-2
cost-style wide
network-entity 00.0005.0000.0000.0010.00
traffic-eng level-2
segment-routing mpls
segment-routing global-block 200000 201000
#
interface GigabitEthernet1/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 192.168.2.1 255.255.255.0
#
interface LoopBack0
ip address 1.1.1.2 255.255.255.255
isis enable 1
isis prefix-sid index 10
#
bgp 100
peer 1.1.1.3 as-number 100
peer 1.1.1.3 connect-interface LoopBack0
#
ipv4-family unicast
undo synchronization
import-route direct
peer 1.1.1.3 enable
#
ipv4-family vpnv4
policy vpn-target
peer 1.1.1.3 enable
#
ipv4-family vpn-instance vpna
import-route direct
peer 1.1.1.3 as-number 100
advertise l2vpn evpn
#
l2vpn-family evpn
undo policy vpn-target
peer 1.1.1.3 enable
#
nqa twamp-light
responder
test-session 100 local-ip 192.168.2.1 remote-ip 192.168.2.2 local-port 3000 remote-port 2000 vpn-instance vpna
#
tunnel-policy SR-MPLS-BE
tunnel select-seq lsp load-balance-number 1
#
return
Networking Requirements
On the IP network shown in Figure 8-5, DeviceA functions as the Controller, and
DeviceB functions as the Responder.
● DeviceA: sends and receives packets over a test session, collects and calculates
performance statistics, and reports the statistics to the NMS.
● DeviceB: responds to the packets received over a test session.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure an IP address and a routing protocol for each interface so that all
the devices can communicate at the network layer.
2. Configure the TWAMP Light Responder on DeviceB.
3. Configure the TWAMP Light Controller on DeviceA.
Data Preparation
To complete the configuration, you need the following data:
● DeviceB (Responder)
– IP address 2::2
– UDP port number 2010
● DeviceA (Controller)
– IP address 1::1
– UDP port number 2001
Procedure
Step 1 Configure an IP address and a routing protocol for each interface so that all the
devices can communicate at the network layer. The configuration procedure is not
provided here.
Step 2 Configure the TWAMP Light Responder.
<DeviceB> system-view
[~DeviceB] nqa twamp-light
[*DeviceB-twamp-light] responder
[*DeviceB-twamp-light-responder] test-session 1 local-ipv6 2::2 remote-ipv6 1::1 local-port 2010 remote-port 2001
Step 3 Configure the TWAMP Light Controller through TWAMP Light Client configuration.
<DeviceA> system-view
[~DeviceA] nqa twamp-light
[*DeviceA-twamp-light] client
[*DeviceA-twamp-light-client] test-session 1 sender-ipv6 1::1 reflector-ipv6 2::2 sender-port 2001
reflector-port 2010
Step 4 Configure the TWAMP Light Controller through TWAMP Light Sender
configuration.
<DeviceA> system-view
[~DeviceA] nqa twamp-light
[*DeviceA-twamp-light] sender
[*DeviceA-twamp-light-sender] test start-continual test-session 1 period 10
# Check statistics about the two-way delay of a TWAMP Light session on DeviceA.
[~DeviceA] display twamp-light statistic-type twoway-delay test-session 1
Latest two-way delay statistics(usec):
--------------------------------------------------------------------------------
Index Delay(Avg) Jitter(Avg)
--------------------------------------------------------------------------------
15 170 3
16 170 3
17 171 3
18 170 3
19 170 3
20 170 3
21 170 3
22 170 3
23 170 3
24 170 3
25 170 3
26 170 3
27 170 3
28 170 3
29 170 3
30 170 3
31 170 3
32 170 3
33 170 3
34 170 3
35 170 3
36 170 3
37 170 3
38 170 3
39 170 3
40 170 3
41 170 3
42 170 3
43 170 3
44 170 3
--------------------------------------------------------------------------------
Average Delay : 170 Average Jitter : 3
Maximum Delay : 187 Maximum Jitter : 22
Minimum Delay : 162 Minimum Jitter : 0
# Check statistics about the two-way packet loss of a TWAMP Light session on
DeviceA.
[~DeviceA] display twamp-light statistic-type twoway-loss test-session 1
Latest two-way loss statistics:
--------------------------------------------------------------------------------
Index Loss count Loss ratio
--------------------------------------------------------------------------------
26 0 0.0000%
27 0 0.0000%
28 0 0.0000%
29 0 0.0000%
30 0 0.0000%
31 0 0.0000%
32 0 0.0000%
33 0 0.0000%
34 0 0.0000%
35 0 0.0000%
36 0 0.0000%
37 0 0.0000%
38 0 0.0000%
39 0 0.0000%
40 0 0.0000%
41 0 0.0000%
42 0 0.0000%
43 0 0.0000%
44 0 0.0000%
45 0 0.0000%
46 0 0.0000%
47 0 0.0000%
48 0 0.0000%
49 0 0.0000%
50 0 0.0000%
51 0 0.0000%
52 0 0.0000%
53 0 0.0000%
54 0 0.0000%
55 0 0.0000%
--------------------------------------------------------------------------------
Average Loss Count : 0 Average Loss Ratio : 0.0000%
Maximum Loss Count : 0 Maximum Loss Ratio : 0.0000%
Minimum Loss Count : 0 Minimum Loss Ratio : 0.0000%
----End
Configuration Files
DeviceA configuration file
#
sysname DeviceA
#
nqa twamp-light
client
test-session 1 sender-ipv6 1::1 reflector-ipv6 2::2 sender-port 2001 reflector-port 2010
sender
#
Networking Requirements
On the VLL+L3VPN networks shown in Figure 8-6, DeviceA functions as the
Responder and is deployed on the last hop of the link connecting to a base
station. DeviceB functions as the Controller and is deployed on the aggregation
node.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure VLL and L3VPN networks.
2. Configure the TWAMP Light Responder.
3. Configure devices at the edge of Layer 2 and Layer 3 networks.
4. Configure the TWAMP Light Controller.
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Assign an IP address to each node interface, including the loopback interface.
Step 2 Configure an IGP on the backbone network. OSPF is used in this example.
Step 3 Configure an MPLS tunnel between DeviceA and DeviceC, and between DeviceC
and DeviceB.
After an MPLS tunnel is established, you can run the display mpls ldp command
to check LDP information. The following example uses the command output on
DeviceA:
[~DeviceA] display mpls ldp
LDP Global Information
------------------------------------------------------------------------------
Protocol Version : V1 Neighbor Liveness : 600 Sec
Graceful Restart : Off FT Reconnect Timer : 300 Sec
MTU Signaling : On Recovery Timer : 300 Sec
Capability-Announcement : On Longest-match : Off
mLDP P2MP Capability : Off mLDP MBB Capability : Off
mLDP MP2MP Capability : Off mLDP Recursive-fec : Off
When DeviceA functions as the reflector, the local IP address in the command for creating a
session is the IP address of the base station, and the remote IP address is the IP address of
DeviceB.
Step 6 When the base station is offline, you need to configure static ARP on DeviceC to
specify the mapping between the IP address and the MAC address of the base
station.
<DeviceC> system-view
[~DeviceC] arp static 192.168.1.1 00e0-fc12-3456 vid 26 interface Virtual-Ethernet1/0/1.26
[*DeviceC] commit
26 0 0.0000%
27 0 0.0000%
28 0 0.0000%
29 0 0.0000%
30 0 0.0000%
31 0 0.0000%
32 0 0.0000%
33 0 0.0000%
34 0 0.0000%
35 0 0.0000%
36 0 0.0000%
37 0 0.0000%
38 0 0.0000%
39 0 0.0000%
40 0 0.0000%
41 0 0.0000%
42 0 0.0000%
43 0 0.0000%
44 0 0.0000%
45 0 0.0000%
46 0 0.0000%
47 0 0.0000%
48 0 0.0000%
49 0 0.0000%
50 0 0.0000%
51 0 0.0000%
52 0 0.0000%
53 0 0.0000%
54 0 0.0000%
55 0 0.0000%
--------------------------------------------------------------------------------
Average Loss Count : 0 Average Loss Ratio : 0.0000%
Maximum Loss Count : 0 Maximum Loss Ratio : 0.0000%
Minimum Loss Count : 0 Minimum Loss Ratio : 0.0000%
----End
Configuration Files
● DeviceA configuration file
#
sysname DeviceA
#
mpls lsr-id 10.0.0.1
#
mpls
#
mpls ldp
outbound peer all split-horizon
accept target-hello all
#
ipv4-family
#
mpls ldp remote-peer 10.0.0.2
mpls ldp timer hello-hold 45
mpls ldp timer keepalive-hold 45
remote-ip 10.0.0.2
#
ospf 1 router-id 10.0.0.1
area 0.0.0.1
network 3.0.0.0 0.0.0.3
network 10.0.0.1 0.0.0.0
#
interface loopback0
ip address 10.0.0.1 255.255.255.255
#
interface GigabitEthernet1/0/1.31
vlan-type dot1q 31
mtu 9500
ip address 3.0.0.1 255.255.255.252
mpls
mpls ldp
#
interface GigabitEthernet1/0/0.1
vlan-type dot1q 1
mtu 9500
mpls l2vc 10.0.0.2 26 control-word raw
#
nqa twamp-light
responder
test-session 1 local-ip 192.168.1.1 remote-ip 192.168.2.2 local-port 6000 remote-port 6000 interface
GigabitEthernet1/0/0.1
#
return
● DeviceC configuration file
#
sysname DeviceC
#
mpls lsr-id 10.0.0.2
#
mpls
#
mpls ldp remote-peer 10.0.0.1
mpls ldp timer hello-hold 45
mpls ldp timer keepalive-hold 45
remote-ip 10.0.0.1
#
ospf 1
stub-router on-startup
area 0.0.0.1
network 3.0.0.0 0.0.0.3
network 10.0.0.2 0.0.0.0
#
isis 1
cost-style wide
network-entity 10.0000.0000.0002.00
#
ip vpn-instance CDMA-RAN
ipv4-family
route-distinguisher 4134:3060
apply-label per-instance
arp vlink-direct-route advertise
vpn-target 4134:306000 export-extcommunity
vpn-target 4134:306000 import-extcommunity
#
interface loopback0
ip address 10.0.0.2 255.255.255.255
isis enable 1
#
interface GigabitEthernet1/0/1.31
vlan-type dot1q 31
mtu 9500
ip address 3.0.0.2 255.255.255.252
mpls
mpls ldp
#
interface GigabitEthernet1/0/0.31
vlan-type dot1q 31
mtu 9500
ip address 3.0.0.5 255.255.255.252
isis enable 1
mpls
mpls ldp
#
interface Virtual-Ethernet1/0/0
ve-group 1 l3-access
#
interface Virtual-Ethernet1/0/0.2
mtu 9500
ip binding vpn-instance CDMA-RAN
ip address 192.168.1.3 255.255.0.0
encapsulation dot1q-termination
dot1q termination vid 26 to 50
arp broadcast enable
arp-proxy enable
arp-proxy inter-sub-vlan-proxy enable
arp-proxy inner-sub-vlan-proxy enable
ipv6 nd ns multicast-enable
ipv6 nd na glean
ipv6 nd proxy inter-access-vlan enable
#
interface Virtual-Ethernet1/0/1
ve-group 1 l2-terminate
#
interface Virtual-Ethernet1/0/1.26
vlan-type dot1q 26
mtu 9500
mpls l2vc 10.0.0.1 26 control-word raw ignore-standby-state
#
arp static 192.168.1.1 00e0-fc12-3456 vid 26 interface Virtual-Ethernet1/0/1.26
9 sFlow Configuration
This chapter provides an overview of Sampled Flow (sFlow) and describes how to
configure this traffic monitoring technology.
Definition
Sampled Flow (sFlow) is a traffic monitoring technology that collects and analyzes
traffic statistics based on packet sampling.
Purpose
Enterprise networks are generally smaller and more flexible than carrier networks.
However, they are often prone to attacks and service exceptions. To help ensure
network stability, enterprises require a traffic monitoring technique that can
promptly identify traffic anomalies and the source of attack traffic, allowing them
to quickly rectify faults.
Benefits
sFlow is comparable to NetStream. In NetStream, network devices collect and
analyze traffic statistics. The devices save these statistics to a buffer and export
them when they expire or when the buffer overflows. sFlow does not require a
flow table. In sFlow, network devices only sample packets, and a remote collector
collects and analyzes traffic statistics.
● Fewer resources and lower costs. sFlow requires no flow table and consumes
only a small amount of resources on network devices, lowering costs.
● Flexible collector deployment. The collector can be deployed flexibly, enabling
traffic statistics to be collected and analyzed according to various traffic
characteristics.
Configuration Precautions
Prerequisites
Before configuring sFlow, complete the following tasks:
● Ensure that there are reachable routes between the sFlow agent and sFlow
collector.
● Create a virtual private network (VPN) instance if the sFlow agent and
collector are located on a private network.
Context
The sFlow collector's address is the destination address of sFlow packets.
Procedure
Step 1 Run system-view
The system view is displayed.
The specified IP address of an sFlow agent must be a valid unicast address that has been
configured on the device interface. If an IPv6 address is specified, the IPv6 address must be
a global unicast address, but not a link-local address.
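For example, the following sketch specifies 10.1.10.1, an address assumed to be already configured on an interface of the device, as the sFlow agent address (the commands match those used in the configuration example later in this chapter):
<DeviceA> system-view
[~DeviceA] sflow
[~DeviceA-sflow] sflow agent ip 10.1.10.1
[*DeviceA-sflow] commit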
----End
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run slot slot-id
The slot view is displayed.
Step 3 Run sflow enable
sFlow is enabled on the board in the slot.
Step 4 Run commit
The configuration is committed.
Step 5 Run quit
Return to the system view.
Step 6 Run interface interface-type interface-number
The interface view is displayed.
----End
Prerequisites
sFlow has been configured.
Procedure
Step 1 Run the display sflow configuration command to check global sFlow
configurations.
Step 2 Run the display sflow interface interface-type interface-number command to
check the sFlow configuration on a specified interface.
Step 3 Run the display sflow packet statistics [ interface interface-type interface-
number ] slot slot-id command to check statistics about sFlow packets sent on a
specified interface or sFlow packets sent and received on a specified board.
----End
Networking Requirements
As shown in Figure 9-1, traffic between network 1 and network 2 is exchanged
through device A. Maintenance personnel need to monitor the traffic on interface
2 of device A to identify traffic anomalies and ensure that network 1 operates
properly.
In this example:
● Interface 1 is GE 1/0/1.
● Interface 2 is GE 1/0/2.
● Interface 3 is GE 1/0/3.
Configuration Roadmap
To configure sFlow, configure device A as an sFlow agent and enable flow
sampling on interface 2 so that the agent collects traffic statistics. The agent
encapsulates traffic statistics into sFlow packets and sends the sFlow packets from
interface 1 to the sFlow collector. The collector displays the traffic statistics based
on information in the received sFlow packets.
The configuration roadmap is as follows:
1. Assign an IP address to each interface.
2. Configure sFlow agent and collector information on the device.
3. Configure flow sampling on interface 2.
Procedure
Step 1 Assign an IP address to each interface of device A.
<DeviceA> system-view
[~DeviceA]interface GigabitEthernet 1/0/1
[~DeviceA-GigabitEthernet1/0/1]ip address 10.1.10.1 24
[*DeviceA-GigabitEthernet1/0/1]commit
[~DeviceA]interface GigabitEthernet 1/0/2
[~DeviceA-GigabitEthernet1/0/2]ip address 10.1.20.1 24
[*DeviceA-GigabitEthernet1/0/2]commit
[~DeviceA]interface GigabitEthernet 1/0/3
[~DeviceA-GigabitEthernet1/0/3]ip address 10.1.30.1 24
[*DeviceA-GigabitEthernet1/0/3]commit
[~DeviceA-GigabitEthernet1/0/3]quit
[~DeviceA]sflow
[~DeviceA-sflow]sflow agent ip 10.1.10.1
[*DeviceA-sflow]commit
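The collector information and the flow sampling on interface 2 (roadmap steps 2 and 3) appear in the configuration file below but are not shown as commands above. The following is a minimal sketch; the views and prompts are assumed, and the collector ID, server address, board slot, and sampling rate are taken from the configuration file:
[~DeviceA-sflow] sflow collector 2
[*DeviceA-sflow] sflow server ip 10.1.10.2
[*DeviceA-sflow] quit
[*DeviceA] slot 1
[*DeviceA-slot-1] sflow enable
[*DeviceA-slot-1] quit
[*DeviceA] interface GigabitEthernet 1/0/2
[*DeviceA-GigabitEthernet1/0/2] sflow flow-sampling collector 2 inbound
[*DeviceA-GigabitEthernet1/0/2] sflow flow-sampling rate 4000 inbound
[*DeviceA-GigabitEthernet1/0/2] commit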
----End
Configuration Files
DeviceA configuration file
#
sysname DeviceA
#
interface GigabitEthernet1/0/1
ip address 10.1.10.1 255.255.255.0
#
interface GigabitEthernet1/0/2
ip address 10.1.20.1 255.255.255.0
#
interface GigabitEthernet1/0/3
ip address 10.1.30.1 255.255.255.0
#
sflow
sflow agent ip 10.1.10.1
sflow collector 2
sflow server ip 10.1.10.2
slot 1
sflow enable
#
interface GigabitEthernet1/0/2
sflow flow-sampling collector 2 inbound
sflow flow-sampling rate 4000 inbound
#
return
Configuration Precautions
11 iFIT Configuration
iFIT and IP FPM are mutually exclusive. In VS mode, iFIT is supported only by the admin VS.
Background
Currently, OAM performance measurement methods are classified into two types:
out-of-band measurement and in-band measurement.
Basic Concepts
The iFIT model describes how service flows are measured to obtain packet loss
and latency. Specifically, iFIT measures the packet loss and latency of service flows
on the ingress and egress of the transit network, and summarizes desired
performance indicators. The iFIT model is composed of three objects: target flow,
transit network, and measurement system. Figure 11-1 shows the iFIT model.
● Target flow
Target flows are key elements in iFIT statistics. A target flow must be specified
before each statistics collection operation. Target flows can be classified as
static or dynamic flows based on the generation mode.
– One or more fields in IP headers can be specified to identify a static flow.
The fields that can be specified are the source IP address and prefix,
destination IP address and prefix, protocol type, source port number,
destination port number, and type of service (ToS). Currently, iFIT
supports measurement based on source and destination IP addresses,
source and destination port numbers, and protocol number.
– Dynamic flows are triggered based on packets with iFIT headers. If a
dynamic flow instance does not collect traffic statistics for a specified
length of time, it automatically ages.
The default aging time of a dynamic flow instance is 10 times the measurement
period, but cannot be less than 10 minutes.
● Transit network
The transit network only bears target flows. The target flows are not
generated or terminated on the transit network. The transit network can be a
Layer 2 (L2), Layer 3 (L3), or L2+L3 hybrid network. Each node on the transit
network must be reachable at the network layer.
● Measurement system
The measurement system consists of the ingress (on which iFIT and its
measurement parameters are configured) and multiple iFIT-capable devices. Packet loss is the
difference between the number of packets entering the network and the
number of packets leaving the network within a specified measurement
interval.
The number of packets entering the network is the sum of all packets moving
in the ingress direction: PI = PI1 + PI2 + PI3
The number of packets leaving the network is the sum of all packets moving
in the egress direction: PE = PE1 + PE2 + PE3
Latency is the difference between the time a service flow enters the network
and the time the service flow leaves the network within a specified
measurement interval.
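For example, with illustrative figures: if PI = PI1 + PI2 + PI3 = 1,000,000 packets and PE = PE1 + PE2 + PE3 = 999,900 packets are counted in the same measurement interval, the packet loss in that interval is PI - PE = 100 packets.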
Measurement Modes
iFIT supports end-to-end and hop-by-hop measurement.
● End-to-end measurement: To monitor network performance in real time,
configure end-to-end measurement.
● Hop-by-hop measurement: To diagnose faults, configure hop-by-hop
measurement.
Configuration Precautions
● Restriction: iFIT does not support multi-VS, and can be configured only in Admin-VS. The VSn does not support iFIT detection.
Guidelines: Properly plan services.
Impact: iFIT measurement results are not available.
● Restriction: iFIT does not support broadcast or multicast flow measurement.
Guidelines: Properly plan services.
Impact: iFIT measurement results are not available.
● Restriction: iFIT does not support GRE, VXLAN, or SRv6 tunnels.
Guidelines: Properly plan services.
Impact: iFIT measurement results are not available.
● Restriction: Before configuring static capability negotiation, ensure that the egress PE can remove the iFIT extension header. Otherwise, traffic forwarding is affected.
Guidelines: Properly plan services.
Impact: The egress node does not identify packets, which causes traffic interruption.
Context
On the network shown in Figure 11-2, the target flow enters the transit network
through DeviceA, traverses DeviceB, and leaves the transit network through
DeviceC. To monitor transit network performance in real time or perceive faults,
configure iFIT end-to-end measurement on both DeviceA and DeviceC.
Pre-configuration Tasks
Before configuring end-to-end measurement, complete the following tasks:
● Configure a dynamic routing protocol or static routes so that devices are
reachable at the network layer.
● Configure 1588v2 so that all device clocks can be synchronized.
Procedure
Static flows are manually configured, whereas dynamic flows are triggered based
on packets with iFIT headers. To configure a static flow, perform the following
steps:
1. Run system-view
The system view is displayed.
2. Run ifit
iFIT is globally enabled, and the iFIT view is displayed.
Only steps 1 and 2 need to be configured on DeviceC that functions as the traffic
egress.
3. Run encapsulation nexthop ip-address
The device is enabled to encapsulate the iFIT header in packets destined for a
specified next hop IP address.
4. Run node-id node-id
A node ID is configured.
5. Run instance instance-name
An iFIT instance is created, and its view is displayed.
6. Run flow unidirectional source source-ip [ source-mask ] destination
destination-ip [ destination-mask ] [ protocol { { tcp | udp | sctp | protocol-
number4 | protocol-number5 | protocol-number6 } [ source-port source-
port ] [ destination-port destination-port ] | { protocol-number | protocol-
To configure a dynamic flow, perform the following steps:
1. Run system-view
The system view is displayed.
2. Run ifit
iFIT is globally enabled, and the iFIT view is displayed.
Only steps 1 and 2 need to be configured on DeviceC that functions as the traffic
egress.
3. (Optional) Run dynamic-flow age interval-multiplier multi-value
An aging time is set for dynamic flows.
4. (Optional) Run reset dynamic flow { flowId | all }
All configured dynamic flow instances or a specified one is deleted.
5. Run commit
The configuration is committed.
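Putting the static-flow steps earlier in this procedure together, the following is a minimal ingress-side sketch. The device name, view prompts, next-hop address, node ID, instance name, and flow addresses are illustrative placeholders rather than values from a specific example.
<DeviceA> system-view
[~DeviceA] ifit
[*DeviceA-ifit] encapsulation nexthop 172.16.1.2
[*DeviceA-ifit] node-id 1
[*DeviceA-ifit] instance 1
[*DeviceA-ifit-instance-1] flow unidirectional source 10.1.1.1 destination 10.2.1.1
[*DeviceA-ifit-instance-1] commit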
Context
On the network shown in Figure 11-3, the target flow enters the transit network
through DeviceA, traverses DeviceB, and leaves the transit network through
DeviceC. To measure hop-by-hop packet loss and latency when troubleshooting
deterioration of network performance, configure hop-by-hop measurement on
DeviceA, DeviceB, and DeviceC.
Procedure
Static flows are manually configured, whereas dynamic flows are triggered based
on packets with iFIT headers. To configure a static flow, perform the following
steps:
1. Run system-view
The system view is displayed.
2. Run ifit
iFIT is globally enabled, and the iFIT view is displayed.
Only steps 1 and 2 need to be configured on DeviceC that functions as the traffic
egress.
3. (Optional) Run dynamic-flow age interval-multiplier multi-value
An aging time is set for iFIT dynamic flows.
4. (Optional) Run reset dynamic flow { flowId | all }
All configured dynamic flow instances or a specified one is deleted.
5. Run commit
The configuration is committed.
Prerequisites
iFIT has been configured.
Procedure
Step 1 Run the display ifit command to check information about iFIT flows.
Step 2 Run the display ifit { source src-ip-address [ destination dest-ip-address ] |
destination dest-ip-address } command to check information about an iFIT flow
based on specified IP addresses.
Step 3 Run the display ifit static command to check information about iFIT static flows.
Step 4 Run the display ifit dynamic-hop command to check information about iFIT
dynamic flows on the transit node and egress.
----End
Networking Requirements
IPTV, video conferencing, voice over IP (VoIP), and other value-added services are
widely used on networks. Because these services rely heavily on high speed and
robust networks, link connectivity and network performance are essential to
service transmission.
● For voice services, users do not detect any change in voice quality if the
packet loss rate on links is lower than 5%. If the packet loss rate is higher
than 10%, voice quality deteriorates significantly.
● For real-time services, such as VoIP, online games, and video conferencing, a
latency lower than 100 ms, or even 50 ms, is required. As the latency
increases, user experience worsens.
To meet service quality requirements, you can measure packet loss and latency so
that you can quickly respond to network issues if service quality deteriorates.
The IP RAN shown in Figure 11-4 transmits voice services. Voice flows are
symmetrical and bidirectional, and therefore one voice flow can be divided into
two unidirectional service flows. The forward service flow enters the network
through the UPE, traverses SPE1, and leaves the network through the NPE. The
reverse service flow enters the network through the NPE, also traverses SPE1, and
leaves the network through the UPE.
To monitor the packet loss rate and latency of the links between the UPE and NPE
in real time, configure end-to-end measurement. This enables you to respond to
network issues if service quality deteriorates and therefore meet users' service
quality requirements.
Configuration Roadmap
The configuration roadmap is as follows:
1. Perform the following steps on the UPE, SPE1, SPE2, and NPE to carry IP RAN
services.
a. Configure an IP address and a routing protocol for each interface so that
all devices can communicate at the network layer. This example uses
Open Shortest Path First (OSPF) as the routing protocol.
b. Configure Multiprotocol Label Switching (MPLS) and public network
tunnels to carry L3VPN services. In this example, SR-MPLS TE tunnels are
established between the UPE and each SPE, between SPEs, and between
each SPE and the NPE.
c. Create a VPN instance on the UPE and NPE and import the local direct
routes on the UPE and NPE to their respective VPN instance routing
tables.
d. Establish Multiprotocol Interior Border Gateway Protocol (MP-IBGP) peer
relationships between the UPE and SPEs and between the NPE and SPEs.
e. Configure the SPEs as route reflectors (RRs) and specify the UPE and NPE
as RR clients.
f. Configure VPN FRR on the UPE and NPE.
2. Configure 1588v2 to synchronize the clocks of the UPE, SPEs, and NPE.
3. Configure packet loss and latency measurement on the UPE and NPE to
collect packet loss and latency statistics at intervals.
● For upstream traffic in the HoVPN scenario, an SPE functions as the ingress, and a
VPN instance needs to be configured for the iFIT flow.
● For downstream traffic in the HoVPN scenario or upstream/downstream traffic in
the H-VPN scenario, an SPE functions as the ingress, and no VPN instance needs to
be configured for the iFIT flow.
Data Preparation
To complete the configuration, you need the following data:
● IP address of each interface listed in Table 1
● IGP type: OSPF; process ID: 1; area ID: 0
● LSR IDs of the UPE, SPE1, and SPE2: 1.1.1.1, 2.2.2.2, and 3.3.3.3
● Tunnel interface names, tunnel IDs, and tunnel interface addresses for the
bidirectional tunnels between the UPE and SPE1: Tunnel11, 100, and loopback
interface address
● Tunnel interface names, tunnel IDs, and tunnel interface addresses for the
bidirectional tunnels between the UPE and SPE2: Tunnel12, 200, and loopback
interface address
● Tunnel policy names for the bidirectional tunnels between the UPE and SPEs:
policy1; tunnel selector names on the SPEs: BindTE
● Names, RDs, and VPN targets of the VPN instances on the UPE and NPE:
vpna, 100:1, and 1:1
● iFIT instance ID: 1; measurement interval: 10s
● Target flow's source IP address: 10.1.1.1; destination IP address: 10.2.1.1
Procedure
Step 1 Configure IP addresses, routing protocols, L3VPNs, and public network tunnels on
the UPE, SPE1, SPE2, and NPE. For configuration details, see Configuration Files.
Step 2 Configure 1588v2 to synchronize the clocks of the UPE, SPE1, and NPE.
1. # Import BITS0 signals to SPE1.
[~SPE1] clock bits-type bits0 2mhz
[*SPE1] clock source bits0 synchronization enable
[*SPE1] clock source bits0 priority 1
[*SPE1] commit
Step 3 Configure iFIT for a link between the UPE and NPE.
# Run the display ifit static command to check the configuration and status of
the UPE.
[~UPE] display ifit static instance 1
2019-01-14 17:24:39.28 +08:00
-------------------------------------------------------------------------
Flow Type : static
Instance Id : 10
Instance-name :1
Flow Id : 2099183618
Direct Type : unidirectional
Source IP Address/Mask Length : 10.1.1.1/32
Destination IP Address/Mask Length : 10.2.1.1/32
Protocol : any
Source Port : any
Destination Port : any
Interface : GigabitEthernet1/0/0
vpn-instance : vpna
Loss Measure : enable
Delay Measure : enable
Test Type : e2e
Interval : 10(s)
# Run the display ifit dynamic-hop command to view the configuration and
status of the NPE.
[~NPE] display ifit dynamic-hop
2019-01-14 17:24:39.28 +08:00
-------------------------------------------------------------------------
Flow Classification : dynamic-hop
Instance Id : 514
Flow Id : 2099183618
Flow Type : unidirectional
Interface : GigabitEthernet1/0/3
Direction : egress
Loss Measure : enable
Delay Measure : enable
Interval : 10(s)
----End
Configuration Files
● UPE configuration file
#
sysname UPE
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 100:1
apply-label per-instance
tnl-policy policy1
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
segment-routing
mpls lsr-id 1.1.1.1
mpls
mpls te
#
ptp enable
ptp domain 1
ptp device-type bc
ptp clock-source local clock-class 185
ptp clock-source bits0 on
#
clock source ptp synchronization enable
clock source ptp priority 1
#
interface GigabitEthernet1/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 192.168.2.1 255.255.255.0
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 172.16.1.1 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 172.16.2.1 255.255.255.0
mpls
mpls te
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
ospf enable area 0.0.0.0
ospf prefix-sid absolute 16100
#
explicit-path spe1
next sid label 16300 type adjacency
#
explicit-path spe2
next sid label 16400 type adjacency
#
interface Tunnel11
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 2.2.2.2
mpls te tunnel-id 100
mpls te reserved-for-binding
mpls te signal-protocol segment-routing
mpls te path explicit-path spe1
#
interface Tunnel12
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.3
destination 1.1.1.1
mpls te tunnel-id 100
mpls te reserved-for-binding
mpls te signal-protocol segment-routing
mpls te path explicit-path upe
#
interface Tunnel12
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 4.4.4.4
mpls te tunnel-id 200
mpls te reserved-for-binding
mpls te signal-protocol segment-routing
mpls te path explicit-path npe
#
bgp 100
router-id 3.3.3.3
peer 1.1.1.1 as-number 100
peer 1.1.1.1 connect-interface LoopBack1
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack1
peer 4.4.4.4 as-number 100
peer 4.4.4.4 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 1.1.1.1 enable
peer 2.2.2.2 enable
peer 4.4.4.4 enable
#
ipv4-family vpnv4
undo policy vpn-target
tunnel-selector bindTE
peer 1.1.1.1 enable
peer 1.1.1.1 reflect-client
peer 1.1.1.1 next-hop-local
peer 2.2.2.2 enable
peer 4.4.4.4 enable
peer 4.4.4.4 reflect-client
peer 4.4.4.4 next-hop-local
#
ospf 1
opaque-capability enable
segment-routing mpls
segment-routing global-block 16000 20000
area 0.0.0.0
network 3.3.3.3 0.0.0.0
network 172.16.2.0 0.0.0.255
network 172.16.3.0 0.0.0.255
network 172.16.5.0 0.0.0.255
mpls-te enable
#
tunnel-policy policy1
tunnel binding destination 4.4.4.4 te Tunnel11
tunnel binding destination 1.1.1.1 te Tunnel12
#
return
● NPE configuration file
#
sysname NPE
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 100:1
apply-label per-instance
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
mpls lsr-id 4.4.4.4
mpls
mpls te
#
ptp enable
ptp domain 1
ptp device-type bc
ptp clock-source local clock-class 185
ptp clock-source bits0 on
#
clock source ptp synchronization enable
clock source ptp priority 1
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 172.16.5.2 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 172.16.4.2 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/3
undo shutdown
ip binding vpn-instance vpna
ip address 192.168.2.2 255.255.255.0
#
interface LoopBack1
ip address 4.4.4.4 255.255.255.255
ospf enable area 0.0.0.0
ospf prefix-sid absolute 16300
#
explicit-path spe1
next sid label 16100 type adjacency
#
explicit-path spe2
next sid label 16200 type adjacency
#
interface Tunnel11
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 2.2.2.2
mpls te signal-protocol segment-routing
mpls te tunnel-id 100
mpls te path explicit-path spe1
#
interface Tunnel12
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.3
mpls te signal-protocol segment-routing
mpls te tunnel-id 200
mpls te path explicit-path spe2
#
bgp 100
router-id 4.4.4.4
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack1
peer 3.3.3.3 as-number 100
peer 3.3.3.3 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 2.2.2.2 enable
peer 3.3.3.3 enable
#
ipv4-family vpnv4
policy vpn-target
peer 2.2.2.2 enable
peer 3.3.3.3 enable
#
ipv4-family vpn-instance vpna
import-route direct
auto-frr
#
ospf 1
opaque-capability enable
segment-routing mpls
segment-routing global-block 16000 20000
area 0.0.0.0
network 4.4.4.4 0.0.0.0
network 172.16.4.0 0.0.0.255
network 172.16.5.0 0.0.0.255
mpls-te enable
#
tunnel-policy policy1
tunnel binding destination 2.2.2.2 te Tunnel11
tunnel binding destination 3.3.3.3 te Tunnel12
#
ifit
#
return
Networking Requirements
IPTV, video conferencing, voice over IP (VoIP), and other value-added services are
widely used on networks. Because these services rely heavily on high speed and
robust networks, link connectivity and network performance are essential to
service transmission.
● For voice services, users do not detect any change in voice quality if the
packet loss rate on links is lower than 5%. If the packet loss rate is higher
than 10%, voice quality deteriorates significantly.
● For real-time services, such as VoIP, online games, and video conferencing, a
latency lower than 100 ms, or even 50 ms, is required. As the latency
increases, user experience worsens.
To facilitate troubleshooting when network performance deteriorates, you can
configure hop-by-hop measurement.
The IP RAN shown in Figure 11-5 transmits video services. A unidirectional service
flow enters the network through the UPE, traverses SPE1, and leaves the network
through the NPE.
To monitor the packet loss and latency segment-by-segment, for example,
between the UPE and NPE, configure hop-by-hop measurement. This enables you
to rapidly and accurately locate faulty points.
Configuration Roadmap
The configuration roadmap is as follows:
1. Perform the following steps on the UPE, SPE1, SPE2, and NPE to carry IP RAN
services.
a. Configure an IP address and a routing protocol for each interface so that
all devices can communicate at the network layer. This example uses
OSPF as the routing protocol.
b. Configure MPLS and public network tunnels to carry L3VPN services. In
this example, SR-MPLS TE tunnels are established between the UPE and
each SPE, between SPEs, and between each SPE and the NPE.
c. Create a VPN instance on the UPE and NPE and import the local direct
routes on the UPE and NPE to their respective VPN instance routing
tables.
d. Establish MP-IBGP peer relationships between the UPE and SPEs and
between the NPE and SPEs.
e. Configure the SPEs as route reflectors (RRs) and specify the UPE and NPE
as RR clients.
f. Configure VPN FRR on the UPE and NPE.
2. Configure 1588v2 to synchronize the clocks of the UPE, SPEs, and NPE.
3. Configure packet loss and latency measurement on the UPE and NPE to
collect packet loss and latency statistics at intervals.
● For upstream traffic in the HoVPN scenario, an SPE functions as the ingress, and a
VPN instance needs to be configured for the iFIT flow.
● For downstream traffic in the HoVPN scenario or upstream/downstream traffic in
the H-VPN scenario, an SPE functions as the ingress, and no VPN instance needs to
be configured for the iFIT flow.
Data Preparation
To complete the configuration, you need the following data:
● IP address of each interface listed in Table 1
● IGP type: OSPF; process ID: 1; area ID: 0
● LSR IDs of the UPE, SPE1, and SPE2: 1.1.1.1, 2.2.2.2, and 3.3.3.3
● Tunnel interface names, tunnel IDs, and tunnel interface addresses for the
bidirectional tunnels between the UPE and SPE1: Tunnel11, 100, and loopback
interface address
● Tunnel interface names, tunnel IDs, and tunnel interface addresses for the
bidirectional tunnels between the UPE and SPE2: Tunnel12, 200, and loopback
interface address
● Tunnel policy names for the bidirectional tunnels between the UPE and SPEs:
policy1; tunnel selector names on the SPEs: BindTE
● Names, RDs, and VPN targets of the VPN instances on the UPE and NPE:
vpna, 100:1, and 1:1
● iFIT instance ID: 1; measurement interval: 10s
● Forward target flow's source IP address: 10.1.1.1; destination IP address:
10.2.1.1
Procedure
Step 1 Configure IP addresses, routing protocols, L3VPNs, and public network tunnels on
the UPE, SPE1, SPE2, and NPE. For configuration details, see Configuration Files.
Step 2 Configure 1588v2 to synchronize the clocks of the UPE, SPE1, and NPE.
1. # Import BITS0 signals to SPE1.
[~SPE1] clock bits-type bits0 2mhz
[*SPE1] clock source bits0 synchronization enable
[*SPE1] clock source bits0 priority 1
[*SPE1] commit
2. # Enable 1588v2 globally.
# Configure SPE1.
[~SPE1] ptp enable
[*SPE1] ptp domain 1
[*SPE1] ptp device-type bc
[*SPE1] ptp clock-source local clock-class 185
[*SPE1] clock source ptp synchronization enable
[*SPE1] clock source ptp priority 1
[*SPE1] commit
# Configure the UPE.
[~UPE] ptp enable
[*UPE] ptp domain 1
[*UPE] ptp device-type bc
[*UPE] ptp clock-source local clock-class 185
[*UPE] clock source ptp synchronization enable
[*UPE] clock source ptp priority 1
[*UPE] commit
# Configure the NPE.
[~NPE] ptp enable
[*NPE] ptp domain 1
[*NPE] ptp device-type bc
[*NPE] ptp clock-source local clock-class 185
[*NPE] clock source ptp synchronization enable
[*NPE] clock source ptp priority 1
[*NPE] commit
3. Enable 1588v2 on interfaces.
# Configure SPE1.
[~SPE1] interface gigabitethernet 1/0/1
[~SPE1-GigabitEthernet1/0/1] ptp enable
[*SPE1-GigabitEthernet1/0/1] commit
[~SPE1-GigabitEthernet1/0/1] quit
[~SPE1] interface gigabitethernet 1/0/2
[~SPE1-GigabitEthernet1/0/2] ptp enable
[*SPE1-GigabitEthernet1/0/2] commit
[~SPE1-GigabitEthernet1/0/2] quit
[~SPE1] interface gigabitethernet 1/0/4
Step 3 Configure iFIT hop-by-hop measurement for a link between the UPE and NPE.
# Run the display ifit static and display ifit dynamic-hop commands on the UPE.
The two command outputs show the configurations of iFIT static and dynamic
flows on the UPE, respectively.
[~UPE] display ifit static instance 1
2019-01-14 17:24:39.28 +08:00
-------------------------------------------------------------------------
Flow Type : static
Instance Id : 10
Instance-name :1
Flow Id : 2099183617
Direct Type : unidirectional
Source IP Address/Mask Length : 10.1.1.1/32
Destination IP Address/Mask Length : 10.2.1.1/32
Protocol : any
Source Port : any
Destination Port : any
Interface : GigabitEthernet1/0/0
vpn-instance : vpna
Loss Measure : enable
Delay Measure : enable
Test Type : trace
Interval : 10(s)
[~UPE] display ifit dynamic-hop
2019-01-14 17:24:39.28 +08:00
-------------------------------------------------------------------------
Flow Classification : dynamic-hop
Instance Id : 514
Flow Id : 2099183617
Flow Type : unidirectional
Interface : GigabitEthernet1/0/0
Direction : transitOutput
# Configure SPE1.
<SPE1> system-view
[~SPE1] ifit
[*SPE1] encapsulation nexthop 4.4.4.4
[*SPE1] commit
# Run the display ifit dynamic-hop command to check the SPE1 configuration
and status.
[~SPE1] display ifit dynamic-hop
2019-01-14 17:24:39.28 +08:00
-------------------------------------------------------------------------
Flow Classification : dynamic-hop
Instance Id : 514
Flow Id : 2099183617
Flow Type : unidirectional
Interface : GigabitEthernet1/0/2
Direction : transitOutput
Loss Measure : enable
Delay Measure : enable
Interval : 10(s)
-------------------------------------------------------------------------
Flow Classification : dynamic-hop
Instance Id : 513
Flow Id : 2099183617
Flow Type : unidirectional
Interface : GigabitEthernet1/0/1
Direction : transitInput
Loss Measure : enable
Delay Measure : enable
Interval : 10(s)
# Run the display ifit dynamic-hop command to check the configuration and
status of the NPE.
[~NPE] display ifit dynamic-hop
2019-01-14 17:24:39.28 +08:00
-------------------------------------------------------------------------
Flow Classification : dynamic-hop
Instance Id : 514
Flow Id : 2099183617
Flow Type : unidirectional
Interface : GigabitEthernet1/0/3
Direction : egress
Loss Measure : enable
Delay Measure : enable
Interval : 10(s)
-------------------------------------------------------------------------
Flow Classification : dynamic-hop
Instance Id : 513
Flow Id : 2099183617
Flow Type : unidirectional
Interface : GigabitEthernet1/0/2
Direction : transitInput
Loss Measure : enable
Delay Measure : enable
Interval : 10(s)
----End
Configuration Files
● UPE configuration file
#
sysname UPE
#
ptp enable
ptp domain 1
ptp device-type bc
ptp clock-source local clock-class 185
ptp clock-source bits0 on
#
clock source ptp synchronization enable
clock source ptp priority 1
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 100:1
apply-label per-instance
tnl-policy policy1
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
segment-routing
mpls lsr-id 1.1.1.1
mpls
mpls te
label advertise non-null
#
interface GigabitEthernet1/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 192.168.2.1 255.255.255.0
ptp enable
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 172.16.1.1 255.255.255.0
mpls
mpls te
ptp enable
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 172.16.2.1 255.255.255.0
mpls
mpls te
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
ospf enable area 0.0.0.0
ospf prefix-sid absolute 16100
#
explicit-path spe1
next sid label 16300 type adjacency
#
explicit-path spe2
next sid label 16400 type adjacency
#
interface Tunnel11
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 2.2.2.2
mpls te tunnel-id 100
mpls te reserved-for-binding
mpls te signal-protocol segment-routing
mpls te path explicit-path spe1
#
interface Tunnel12
#
return
● NPE configuration file
#
sysname NPE
#
ptp enable
ptp domain 1
ptp device-type bc
ptp clock-source local clock-class 185
ptp clock-source bits0 on
#
clock source ptp synchronization enable
clock source ptp priority 1
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 100:1
apply-label per-instance
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
segment-routing
mpls lsr-id 4.4.4.4
mpls
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 172.16.5.2 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 172.16.4.2 255.255.255.0
mpls
mpls te
ptp enable
#
interface GigabitEthernet1/0/3
undo shutdown
ip binding vpn-instance vpna
ip address 192.168.2.2 255.255.255.0
mpls
mpls te
ptp enable
#
interface LoopBack1
ip address 4.4.4.4 255.255.255.255
ospf enable area 0.0.0.0
ospf prefix-sid absolute 16300
#
explicit-path spe1
next sid label 16100 type adjacency
#
explicit-path spe2
next sid label 16200 type adjacency
#
interface Tunnel11
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 2.2.2.2
mpls te signal-protocol segment-routing
mpls te tunnel-id 100
mpls te path explicit-path spe1
#
interface Tunnel12
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.3
mpls te signal-protocol segment-routing
mpls te tunnel-id 200
mpls te path explicit-path spe2
#
bgp 100
router-id 4.4.4.4
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack1
peer 3.3.3.3 as-number 100
peer 3.3.3.3 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 2.2.2.2 enable
peer 3.3.3.3 enable
#
ipv4-family vpnv4
policy vpn-target
peer 2.2.2.2 enable
peer 3.3.3.3 enable
#
ipv4-family vpn-instance vpna
import-route direct
auto-frr
#
ospf 1
area 0.0.0.0
segment-routing mpls
segment-routing global-block 16000 20000
network 4.4.4.4 0.0.0.0
network 172.16.4.0 0.0.0.255
network 172.16.5.0 0.0.0.255
mpls-te enable
#
tunnel-policy policy1
tunnel binding destination 2.2.2.2 te Tunnel11
tunnel binding destination 3.3.3.3 te Tunnel12
#
ifit
#
return
12 eMDI Configuration
Background
Multicast video services, such as IPTV, are being gradually deployed on carrier
networks and have become a major opportunity for carriers to improve
profitability. Quality monitoring of video services is therefore of vital importance.
Packet loss rate, packet out-of-order rate, and jitter are three major factors that
may affect video quality. Even a packet loss rate or packet out-of-order rate lower
than 0.01% may cause erratic display or mosaic on terminals, and jitter may lead
to a black screen. These problems severely affect the quality of experience (QoE)
of services as well as the profitability and brand awareness of carriers. A video
quality monitoring and fault locating solution is therefore needed so that carriers
can monitor and maintain service quality in real time, quickly demarcate faults,
and clarify responsibilities.
The eMDI detection solution is a quality monitoring and fault locating solution
designed for multicast video services such as IPTV. This solution supports real-time
detection of quality indicators (such as packet loss rate, packet out-of-order rate,
and jitter) of real service packets, featuring high statistical precision and reliable
data support. In addition, this solution can be deployed on all network nodes from
edge devices to core devices. The detection results on multiple nodes help to
rapidly locate the faulty network segment.
Detection Principles
The eMDI detection solution is a distributed board detection solution. During
solution deployment, the channels to be detected are added to a channel group,
the eMDI-capable boards are added to a board group, and the channel group is
bound to the board group. With detection on the board NP, the video streams of
specified channels can be detected in distributed mode.
This solution supports detection only of UDP-based Real-time Transport Protocol
(RTP) video streams. The NP of the board to be detected performs validity checks
and RTP checks on the IP header, UDP header, and RTP header of RTP packets and
calculates the packet loss rate and packet out-of-order rate based on the sequence
number in the RTP header. The NP then calculates jitter based on the timestamp
in the RTP header, achieving real-time monitoring of video quality.
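As a simplified illustration, if the RTP sequence numbers received in a monitoring
period jump from 100 to 103, the NP can infer that the packets with sequence
numbers 101 and 102 were lost or arrived out of order in that period; the exact
counting algorithm is internal to the board.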
The implementation process of eMDI detection is as follows:
1. The NMS delivers eMDI monitoring instructions to a device.
2. The device monitors eMDI indicators in real time.
3. The device periodically reports the monitored eMDI indicators and alarms to
the NMS.
4. The NMS displays eMDI indicators in GUI mode and supports analysis on fault
demarcation and locating.
Indicator Collection
eMDI can obtain monitoring data from a device on a regular basis and periodically
send the data to the NMS in various modes such as telemetry. After analysis on
the NMS, the monitoring data can be displayed in various forms, such as a trend
chart.
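For example, in the configuration examples later in this chapter, the devices report
eMDI statistics to the monitoring platform through a gRPC-based telemetry
subscription similar to the following excerpt (the sensor-group and destination-group
names and the collector address 10.1.7.2 port 10001 are example values):
#
telemetry
#
sensor-group emdi-monitor
sensor-path huawei-emdi:emdi/emdi-telem-reps/emdi-telem-rep
sensor-path huawei-emdi:emdi/emdi-telem-rtps/emdi-telem-rtp
#
destination-group Monitor
ipv4-address 10.1.7.2 port 10001 protocol grpc no-tls
#
subscription EMDI
sensor-group emdi-monitor
destination-group Monitor
#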
eMDI also supports reporting of alarms to the NMS. The alarm thresholds and the
number of alarm suppressions can be configured as required.
Indicators
The detection indicators supported by eMDI include the packet loss rate (RTP-LR),
packet out-of-order rate (RTP-SER), and jitter. The packet loss rate and packet out-
of-order rate are calculated based on the sequence number in an RTP packet
header. The jitter is calculated based on the timestamp in an RTP packet header.
For details, see eMDI Detection Indicators.
Configuration Precautions
Usage Scenario
As a distributed board detection solution, eMDI depends on the creation of a
channel group and a board group and on the binding between them. In addition,
if the jitter indicator needs to be detected or eMDI detection needs to be
supported on P nodes, jitter detection and P detection must be configured,
respectively.
Context
Before configuring the multicast channels to be detected, create a channel group
and add the multicast channels to the channel group.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run emdi
eMDI detection is enabled and the eMDI view is displayed.
Step 3 Run emdi channel-group channel-group-name
An eMDI channel group is created or the view of an existing channel group is
displayed.
----End
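The following excerpt, taken from the configuration examples later in this chapter,
shows a channel group named IPtv-channel to which one multicast channel is
added (the group name, channel ID, source and group addresses, payload type,
and clock rate are example values):
#
emdi
emdi channel-group IPtv-channel
emdi channel 1 source 10.1.4.100 group 225.1.1.1 pt 33 clock-rate 90kHz
#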
Context
As a distributed board detection solution, the eMDI detection solution requires the
configuration of boards for eMDI detection. Create a board group and then bind
the eMDI-capable boards to the board group so that eMDI detection can be
performed on the boards.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run emdi
The eMDI view is displayed.
Step 3 Run emdi lpu-group lpu-group-name
An eMDI board group is created or the view of an existing board group is
displayed.
Step 4 Run emdi bind slot { all | slot-id }
The specified boards are bound to the board group.
Step 5 Run commit
The configuration is committed.
----End
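For example, the configuration examples later in this chapter create a board group
named IPtv-lpu and bind all boards to it (the group name is an example value):
#
emdi
emdi lpu-group IPtv-lpu
emdi bind slot all
#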
Context
As a distributed board detection solution, eMDI requires the binding of a channel
group and a board group. After the channel group and board group are bound,
the board NP in the board group performs real-time monitoring of the video
streams of a specified channel in order to obtain detection indicators such as the
packet loss rate and packet out-of-order rate.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run emdi
The eMDI view is displayed.
Step 3 Run emdi bind channel-group channel-group-name lpu-group lpu-group-name
[outbound]
An eMDI channel group is bound to an eMDI board group.
Step 4 Run quit
Return to the system view.
Step 5 (Optional) Run interface { interface-name | interface-type interface-number }
The interface view is displayed.
Step 6 (Optional) Run emdi channel channel-name outbound
An eMDI channel is bound to the outbound interface.
Before binding an eMDI channel to the outbound interface, bind an eMDI channel group to
an eMDI board group in the downstream direction.
----End
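For example, the configuration examples later in this chapter bind the channel
group IPtv-channel to the board group IPtv-lpu in both the upstream and
downstream (outbound) directions (the group names are example values):
#
emdi
emdi bind channel-group IPtv-channel lpu-group IPtv-lpu
emdi bind channel-group IPtv-channel lpu-group IPtv-lpu outbound
#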
Context
By default, the eMDI detection solution detects only the packet loss rate and
packet out-of-order rate. If the jitter indicator also needs to be detected, eMDI
jitter detection needs to be configured.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run emdi
The eMDI view is displayed.
Step 3 Run emdi rtp-jitter enable
Jitter detection is enabled.
----End
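A minimal command sequence might look as follows (the device name and prompts
are illustrative):
<HUAWEI> system-view
[~HUAWEI] emdi
[*HUAWEI-emdi] emdi rtp-jitter enable
[*HUAWEI-emdi] commit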
Context
Only NG MVPN networks support eMDI detection on video streams passing
through Ps. eMDI detection is disabled by default. To enable eMDI detection on Ps,
perform the following steps.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run emdi
The eMDI view is displayed.
Step 3 Run emdi match-mpls-lable enable
eMDI detection on Ps is enabled.
Step 4 Run emdi channel source source-address group group-address transit
A multicast channel is added to an eMDI channel group to enable eMDI detection
on Ps.
eMDI detection takes effect on Ps only after both the emdi match-mpls-lable enable and
emdi channel source source-address group group-address transit commands are run.
----End
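A minimal command sequence might look as follows (the device name, prompts,
source address 10.1.4.100, and group address 225.1.1.1 are illustrative; the channel
must belong to an eMDI channel group as described in the preceding sections):
<HUAWEI> system-view
[~HUAWEI] emdi
[*HUAWEI-emdi] emdi match-mpls-lable enable
[*HUAWEI-emdi] emdi channel source 10.1.4.100 group 225.1.1.1 transit
[*HUAWEI-emdi] commit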
Usage Scenario
After basic eMDI detection functions are configured, configure eMDI-related
attributes. The monitoring period determines the frequency of eMDI detection.
The alarm thresholds and the number of alarm suppression times determine the
frequency of reporting eMDI alarms. Configuring detection only on the rate of
video streams prevents detection offset when the transit and bud nodes overlap
on an NG MVPN network.
Context
With eMDI, monitoring data can be obtained from a device on a regular basis and
periodically sent to uTraffic in various modes such as telemetry. After analysis on
uTraffic, the monitoring data can be displayed in various forms, such as a trend
chart. To change the monitoring period, perform the following operations.
Procedure
Step 1 Run system-view
----End
Context
In addition to monitoring various indicators of video streams, the eMDI detection
solution allows alarms to be reported to the NMS. eMDI alarm triggering is
determined by an alarm threshold and the number of alarm suppression times. If
the alarm threshold is M and the number of alarm suppression times is N, the
device reports an eMDI alarm to the NMS when an indicator reaches M for N
consecutive times. Therefore, to control the frequency at which eMDI alarms are
reported, configure a proper alarm threshold and number of alarm suppression
times.
If statistics are all below the threshold within 60 consecutive detection intervals, the alarm
is automatically cleared.
Procedure
Step 1 Run system-view
----End
Context
On an NG MVPN network, if the transit and bud nodes overlap, the same traffic is
detected twice on the node, skewing the detection results of the packet loss rate,
packet out-of-order rate, and jitter. To avoid this detection offset and ensure
accurate results when the transit and bud nodes overlap, enable eMDI detection
only on the rate of the video streams that pass through the node instead of the
packet loss rate, packet out-of-order rate, and jitter.
Procedure
Step 1 Run system-view
----End
Usage Scenario
After eMDI detection on video traffic is performed on a device, you can view the
detection results of a specified channel or all channels. To prevent obsolete
records from interfering with the analysis, you can clear historical statistics about
a specified channel or all channels.
Procedure
● In the user view, run the display emdi statistics history channel [ channel-
name ] [ start start-index end end-index | latest-record record-number ]
command to view historical statistics about a specified channel or all channels
in the upstream direction.
● In the user view, run the display emdi statistics history outbound channel
[ channel-name ] [ start start-index end end-index | latest-record record-
number ] slot slot-id command to view historical statistics about a specified
channel or all channels in the downstream direction.
● In the user view, run the reset emdi statistics history channel [ channel-
name ] command to clear historical statistics about a specified channel or all
channels in the upstream direction.
● In the user view, run the reset emdi statistics history outbound channel
[ channel-name ] slot slot-id command to clear historical statistics about a
specified channel or all channels in the downstream direction.
----End
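For example, to view historical records 3 to 5 of channel 1 in the upstream
direction (the channel name and record range are example values):
<HUAWEI> display emdi statistics history channel 1 start 3 end 5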
Networking Requirements
On the network shown in Figure 12-2, IPTV programs are provided for host users
in multicast mode. eMDI detection is deployed on Device A, Device B, Device C,
and Device D to monitor the quality of IPTV service packets. Network O&M
personnel can check the detection results reported by the devices through
telemetry in real time on the monitor platform, quickly demarcating and locating
faults.
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Assign IP addresses to router interfaces and configure a unicast routing protocol.
For configuration details, see Configuration Files in this section.
# Configure Device D.
[~DeviceD] interface gigabitethernet 1/0/1
[~DeviceD-GigabitEthernet1/0/1] igmp enable
[*DeviceD-GigabitEthernet1/0/1] igmp static-group 225.1.1.1
[*DeviceD-GigabitEthernet1/0/1] quit
[*DeviceD] commit
After completing the configuration, run the following commands to check whether
the multicast service is configured successfully.
● Run the display pim interface command to check the PIM-DM configuration
and status of each router interface. The following example uses the command
output on Device B.
<DeviceB> display pim interface
VPN-Instance: public net
Interface State NbrCnt HelloInt DR-Pri DR-Address
GE1/0/0 up 1 30 1 10.1.1.2 (local)
GE1/0/1 up 1 30 1 10.1.2.2
GE1/0/2 up 1 30 1 10.1.3.2
● Run the display pim neighbor command to check the PIM-DM neighbor
relationship between routers. The following example uses the command
output on Device B.
<DeviceB> display pim neighbor
VPN-Instance: public net
Total: 3
● Run the display pim routing-table command to check the PIM routing table
of each router. Assume that both user A and user B need to receive
information about multicast group G (225.1.1.1/24). When multicast source S
(10.1.4.100/24) sends multicast data to multicast group G, a multicast
distribution tree (MDT) is generated through flooding. Each router on the
MDT path has (S, G) entries. When user A and user B join multicast group G,
Device C and Device D generate (*, G) entries. The command output on each
router is as follows:
<DeviceA> display pim routing-table
VPN-Instance: public net
Total 0 (*, G) entry; 1 (S, G) entry
(10.1.4.100, 225.1.1.1)
Protocol: pim-dm, Flag: LOC ACT
UpTime: 00:08:18
Upstream interface: GigabitEthernet1/0/1
Upstream neighbor: NULL
RPF prime neighbor: NULL
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet1/0/0
Protocol: pim-dm, UpTime: 00:08:18, Expires: never
<DeviceB> display pim routing-table
VPN-Instance: public net
Total 0 (*, G) entry; 1 (S, G) entry
(10.1.4.100, 225.1.1.1)
Protocol: pim-dm, Flag: ACT
UpTime: 00:10:25
Upstream interface: GigabitEthernet1/0/0
Upstream neighbor: 10.1.1.1
RPF prime neighbor: 10.1.1.1
Downstream interface(s) information:
Total number of downstreams: 2
1: GigabitEthernet1/0/1
Protocol: pim-dm, UpTime: 00:06:48, Expires: never
2: GigabitEthernet1/0/2
Protocol: pim-dm, UpTime: 00:05:53, Expires: never
<DeviceC> display pim routing-table
VPN-Instance: public net
Total 1 (*, G) entry; 1 (S, G) entry
(*, 225.1.1.1)
Protocol: pim-dm, Flag: WC
UpTime: 00:11:47
Upstream interface: NULL
Upstream neighbor: NULL
RPF prime neighbor: NULL
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet1/0/0
Protocol: static, UpTime: 00:11:47, Expires: never
(10.1.4.100, 225.1.1.1)
Protocol: pim-dm, Flag: ACT
UpTime: 00:17:13
Upstream interface: GigabitEthernet1/0/1
Upstream neighbor: 10.1.2.1
RPF prime neighbor: 10.1.2.1
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet1/0/0
Protocol: pim-dm, UpTime: 00:11:47, Expires: -
<DeviceD> display pim routing-table
VPN-Instance: public net
Total 1 (*, G) entry; 1 (S, G) entry
(*, 225.1.1.1)
Protocol: pim-dm, Flag: WC
UpTime: 00:05:26
Upstream interface: NULL
Upstream neighbor: NULL
RPF prime neighbor: NULL
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet1/0/1
Protocol: static, UpTime: 00:05:26, Expires: never
(10.1.4.100, 225.1.1.1)
Protocol: pim-dm, Flag: ACT
UpTime: 00:09:58
Upstream interface: GigabitEthernet1/0/2
Upstream neighbor: 10.1.3.1
RPF prime neighbor: 10.1.3.1
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet1/0/1
Protocol: pim-dm, UpTime: 00:05:26, Expires: -
After completing the configuration, run the display emdi statistics history
command to check the detection result when multicast traffic passes through
DeviceA.
● Check the detection result in the inbound direction.
<DeviceA> display emdi statistics history channel 1 start 3 end 5
Channel Name : 1
Total Records : 3
--------------------------------------------------------------------------------------------------------------------------------------------
Record  Record               Monitor    Monitor  Received  Rate    Rate        RTP-LC  RTP-SE  RTP-LR      RTP-SER     RTP
Index   Time                 Period(s)  Status   Packets   pps     bps                         (1/100000)  (1/100000)  Jitter(ms)
After completing the configuration, check the real-time detection result reported
through telemetry on the monitor platform.
----End
Configuration Files
● Device A configuration file
#
sysname DeviceA
#
multicast routing-enable
#
interface GigabitEthernet1/0/0
undo portswitch
undo shutdown
ip address 10.1.1.1 255.255.255.0
pim dm
#
interface GigabitEthernet1/0/1
undo portswitch
undo shutdown
ip address 10.1.4.1 255.255.255.0
pim dm
#
ospf 1
area 0.0.0.0
network 10.1.1.0 0.0.0.255
network 10.1.4.0 0.0.0.255
#
emdi
emdi channel-group IPtv-channel
emdi channel 1 source 10.1.4.100 group 225.1.1.1 pt 33 clock-rate 90kHz
emdi lpu-group IPtv-lpu
emdi bind slot all
emdi bind channel-group IPtv-channel lpu-group IPtv-lpu
emdi bind channel-group IPtv-channel lpu-group IPtv-lpu outbound
#
telemetry
#
sensor-group emdi-monitor
sensor-path huawei-emdi:emdi/emdi-telem-reps/emdi-telem-rep
sensor-path huawei-emdi:emdi/emdi-telem-rtps/emdi-telem-rtp
sensor-path huawei-emdi:emdi/out-telem-reps/out-telem-rep
#
destination-group Monitor
ipv4-address 10.1.7.2 port 10001 protocol grpc no-tls
#
subscription EMDI
sensor-group emdi-monitor
destination-group Monitor
#
return
● Device B configuration file
#
sysname DeviceB
#
multicast routing-enable
#
interface GigabitEthernet1/0/0
undo portswitch
undo shutdown
ip address 10.1.1.2 255.255.255.0
pim dm
#
interface GigabitEthernet1/0/1
undo portswitch
undo shutdown
ip address 10.1.2.1 255.255.255.0
pim dm
#
interface GigabitEthernet1/0/2
undo portswitch
undo shutdown
ip address 10.1.3.1 255.255.255.0
pim dm
#
ospf 1
area 0.0.0.0
network 10.1.1.0 0.0.0.255
network 10.1.2.0 0.0.0.255
network 10.1.3.0 0.0.0.255
#
emdi
emdi channel-group IPtv-channel
emdi channel 1 source 10.1.4.100 group 225.1.1.1 pt 33 clock-rate 90kHz
emdi lpu-group IPtv-lpu
emdi bind slot all
emdi bind channel-group IPtv-channel lpu-group IPtv-lpu
emdi bind channel-group IPtv-channel lpu-group IPtv-lpu outbound
#
telemetry
#
sensor-group emdi-monitor
sensor-path huawei-emdi:emdi/emdi-telem-reps/emdi-telem-rep
sensor-path huawei-emdi:emdi/emdi-telem-rtps/emdi-telem-rtp
#
destination-group Monitor
ipv4-address 10.1.7.2 port 10001 protocol grpc no-tls
#
subscription EMDI
sensor-group emdi-monitor
destination-group Monitor
#
return
● Device C configuration file
#
sysname DeviceC
#
multicast routing-enable
#
interface GigabitEthernet1/0/0
undo portswitch
undo shutdown
ip address 10.1.5.1 255.255.255.0
pim dm
igmp enable
igmp static-group 225.1.1.1
#
interface GigabitEthernet1/0/1
undo portswitch
undo shutdown
ip address 10.1.2.2 255.255.255.0
pim dm
#
ospf 1
area 0.0.0.0
network 10.1.2.0 0.0.0.255
network 10.1.5.0 0.0.0.255
#
emdi
emdi channel-group IPtv-channel
emdi channel 1 source 10.1.4.100 group 225.1.1.1 pt 33 clock-rate 90kHz
emdi lpu-group IPtv-lpu
emdi bind slot all
emdi bind channel-group IPtv-channel lpu-group IPtv-lpu
Networking Requirements
On the network shown in Figure 12-3, a BGP/MPLS IP VPN over an MPLS LDP LSP
is deployed to carry unicast services, and an NG MVPN over an mLDP P2MP LSP is
deployed to carry multicast services. In addition, eMDI is deployed on the network
to monitor multicast service quality. Network maintenance personnel can check
real-time detection results reported through telemetry on the monitor platform,
quickly demarcating and locating faults.
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
● Public network OSPF process ID: 1; area ID: 0. OSPF multi-instance process ID:
2; area ID: 0
● VPN instance name on PE1, PE2, and PE3: VPNA
● CE1: loopback address 1.1.1.1; AS number 65001
● PE1: loopback and MVPN address 2.2.2.2; RD 200:1; VPN targets 3:3 and 4:4; AS number 100
● PE2: loopback and MVPN address 3.3.3.3; RD 300:1; VPN target 3:3; AS number 100
● PE3: loopback and MVPN address 4.4.4.4; RD 400:1; VPN target 4:4; AS number 100
Procedure
Step 1 Configure a BGP/MPLS IP VPN.
1. Assign an IP address to each interface of devices on the backbone network
and VPN sites.
Assign an IP address to each interface according to Figure 12-3. For
configuration details, see Configuration Files in this section.
2. Configure an IGP to interconnect devices on the backbone network.
OSPF is used in this example. For configuration details, see Configuration
Files in this section.
3. Configure basic MPLS functions and MPLS LDP on the backbone network to
establish LDP LSPs.
– # Configure PE1.
[~PE1] mpls lsr-id 2.2.2.2
[*PE1] mpls
[*PE1-mpls] quit
[*PE1] mpls ldp
[*PE1-mpls-ldp] quit
[*PE1] interface gigabitethernet1/0/0
[*PE1-GigabitEthernet1/0/0] mpls
[*PE1-GigabitEthernet1/0/0] mpls ldp
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] interface gigabitethernet1/0/2
[*PE1-GigabitEthernet1/0/2] mpls
[*PE1-GigabitEthernet1/0/2] mpls ldp
[*PE1-GigabitEthernet1/0/2] quit
[*PE1] commit
– # Configure PE2.
[~PE2] mpls lsr-id 3.3.3.3
[*PE2] mpls
[*PE2-mpls] quit
[*PE2] mpls ldp
[*PE2-mpls-ldp] quit
[*PE2] interface gigabitethernet1/0/2
[*PE2-GigabitEthernet1/0/2] mpls
[*PE2-GigabitEthernet1/0/2] mpls ldp
[*PE2-GigabitEthernet1/0/2] quit
[*PE2] commit
– # Configure PE3.
[~PE3] mpls lsr-id 4.4.4.4
[*PE3] mpls
[*PE3-mpls] quit
[*PE3] mpls ldp
[*PE3-mpls-ldp] quit
[*PE3] interface gigabitethernet1/0/0
[*PE3-GigabitEthernet1/0/0] mpls
[*PE3-GigabitEthernet1/0/0] mpls ldp
[*PE3-GigabitEthernet1/0/0] quit
[*PE3] commit
[*CE3] commit
● # Configure PE2.
[~PE2] mpls ldp
[*PE2-mpls-ldp] mldp p2mp
[*PE2-mpls-ldp] commit
[~PE2-mpls-ldp] quit
● # Configure PE3.
[~PE3] mpls ldp
[*PE3-mpls-ldp] mldp p2mp
[*PE3-mpls-ldp] commit
[~PE3-mpls-ldp] quit
– # Configure PE3.
[~PE3] multicast mvpn 4.4.4.4
[*PE3] ip vpn-instance VPNA
[*PE3-vpn-instance-VPNA] ipv4-family
[*PE3-vpn-instance-VPNA-af-ipv4] multicast routing-enable
[*PE3-vpn-instance-VPNA-af-ipv4] mvpn
[*PE3-vpn-instance-VPNA-af-ipv4-mvpn] c-multicast signaling bgp
[*PE3-vpn-instance-VPNA-af-ipv4-mvpn] rpt-spt mode
[*PE3-vpn-instance-VPNA-af-ipv4-mvpn] quit
[*PE3-vpn-instance-VPNA-af-ipv4] quit
[*PE3-vpn-instance-VPNA] quit
[*PE3] commit
The command output shows that an mLDP P2MP LSP has been
established, with PE1 as the root node and PE2 and PE3 as leaf nodes.
● Configure PIM.
– # Configure PE1.
[*PE1] interface gigabitethernet1/0/1
[*PE1-GigabitEthernet1/0/1] pim sm
[*PE1-GigabitEthernet1/0/1] quit
[*PE1] commit
– # Configure CE1.
[~CE1] multicast routing-enable
[*CE1] interface gigabitethernet1/0/0
[*CE1-GigabitEthernet1/0/0] pim sm
[*CE1-GigabitEthernet1/0/0] quit
[*CE1] interface gigabitethernet1/0/1
[*CE1-GigabitEthernet1/0/1] pim sm
[*CE1-GigabitEthernet1/0/1] quit
[*CE1] commit
– # Configure PE2.
[*PE2] interface gigabitethernet1/0/1
[*PE2-GigabitEthernet1/0/1] pim sm
[*PE2-GigabitEthernet1/0/1] quit
[*PE2] commit
– # Configure CE2.
[~CE2] multicast routing-enable
[*CE2] interface gigabitethernet1/0/0
[*CE2-GigabitEthernet1/0/0] pim sm
[*CE2-GigabitEthernet1/0/0] quit
[*CE2] interface gigabitethernet1/0/1
[*CE2-GigabitEthernet1/0/1] pim sm
[*CE2-GigabitEthernet1/0/1] quit
[*CE2] commit
– # Configure PE3.
[*PE3] interface gigabitethernet1/0/1
[*PE3-GigabitEthernet1/0/1] pim sm
[*PE3-GigabitEthernet1/0/1] quit
[*PE3] commit
– # Configure CE3.
[~CE3] multicast routing-enable
[*CE3] interface gigabitethernet1/0/0
[*CE3-GigabitEthernet1/0/0] pim sm
[*CE3-GigabitEthernet1/0/0] quit
[*CE3] interface gigabitethernet1/0/1
[*CE3-GigabitEthernet1/0/1] pim sm
[*CE3-GigabitEthernet1/0/1] quit
[*CE3] commit
● Configure IGMP.
– # Configure CE2.
[~CE2] interface gigabitethernet1/0/1
[*CE2-GigabitEthernet1/0/1] pim sm
[*CE2-GigabitEthernet1/0/1] igmp enable
[*CE2-GigabitEthernet1/0/1] igmp version 3
[*CE2-GigabitEthernet1/0/1] commit
[~CE2-GigabitEthernet1/0/1] quit
– # Configure CE3.
[~CE3] interface gigabitethernet1/0/1
[*CE3-GigabitEthernet1/0/1] pim sm
[*CE3-GigabitEthernet1/0/1] igmp enable
[*CE3-GigabitEthernet1/0/1] igmp version 3
[*CE3-GigabitEthernet1/0/1] commit
[~CE3-GigabitEthernet1/0/1] quit
● Configure a static RP.
– # Configure CE1.
[~CE1] pim
[*CE1-pim] static-rp 1.1.1.1
[*CE1-pim] commit
[~CE1-pim] quit
– # Configure CE2.
[~CE2] pim
[*CE2-pim] static-rp 1.1.1.1
[*CE2-pim] commit
[~CE2-pim] quit
– # Configure CE3.
[~CE3] pim
[*CE3-pim] static-rp 1.1.1.1
[*CE3-pim] commit
[~CE3-pim] quit
– # Configure PE1.
[~PE1] pim vpn-instance VPNA
[*PE1-pim-VPNA] static-rp 1.1.1.1
[*PE1-pim-VPNA] commit
[~PE1-pim-VPNA] quit
– # Configure PE2.
[~PE2] pim vpn-instance VPNA
[*PE2-pim-VPNA] static-rp 1.1.1.1
[*PE2-pim-VPNA] commit
[~PE2-pim-VPNA] quit
– # Configure PE3.
[~PE3] pim vpn-instance VPNA
[*PE3-pim-VPNA] static-rp 1.1.1.1
[*PE3-pim-VPNA] commit
[~PE3-pim-VPNA] quit
The NG MVPN configuration is now complete. If CE2 or CE3 has attached users,
CE1 can forward multicast data to these users over the NG MVPN. Configure users
on CE2 or CE3 to send IGMPv3 Report messages and the multicast source 10.1.3.1
to send multicast data. Then, check the multicast routing entries to verify whether
the NG MVPN is configured successfully.
Run the display pim routing-table command on CE2, CE3, and CE1 to check the
PIM routing table. Run the display pim vpn-instance routing-table command on
PE2, PE3, and PE1 to check the PIM routing table of the VPN instance.
[~CE2] display pim routing-table
VPN-Instance: public net
Total 0 (*, G) entry; 1 (S, G) entry
(10.1.3.1, 225.1.1.1)
RP:1.1.1.1
Protocol: pim-sm, Flag: SPT SG_RCVR ACT
UpTime: 00:54:11
Upstream interface: GigabitEthernet1/0/0
Upstream neighbor: 192.168.2.1
RPF prime neighbor: 192.168.2.1
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet1/0/1
Protocol: igmp, UpTime: 00:54:11, Expires: -
[~CE3] display pim routing-table
VPN-Instance: public net
Total 0 (*, G) entry; 1 (S, G) entry
(10.1.3.1, 226.1.1.1)
RP:1.1.1.1
Protocol: pim-sm, Flag: SPT SG_RCVR ACT
UpTime: 00:01:57
Upstream interface: GigabitEthernet1/0/0
Upstream neighbor: 192.168.3.1
RPF prime neighbor: 192.168.3.1
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet1/0/1
Protocol: igmp, UpTime: 00:01:57, Expires: -
[~PE2] display pim vpn-instance VPNA routing-table
VPN-Instance: VPNA
Total 0 (*, G) entry; 1 (S, G) entry
(10.1.3.1, 225.1.1.1)
RP:1.1.1.1
Protocol: pim-sm, Flag: SPT ACT
UpTime: 00:48:18
Upstream interface: through-BGP
Upstream neighbor: 2.2.2.2
RPF prime neighbor: 2.2.2.2
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet1/0/1
Protocol: pim-sm, UpTime: 00:48:18, Expires: 00:03:12
[~PE3] display pim vpn-instance VPNA routing-table
VPN-Instance: VPNA
Total 0 (*, G) entry; 1 (S, G) entry
(10.1.3.1, 226.1.1.1)
RP:1.1.1.1
Protocol: pim-sm, Flag: SPT ACT
UpTime: 00:02:06
Upstream interface: through-BGP
Upstream neighbor: 2.2.2.2
RPF prime neighbor: 2.2.2.2
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet1/0/1
Protocol: pim-sm, UpTime: 00:02:06, Expires: 00:03:26
[~PE1] display pim vpn-instance VPNA routing-table
VPN-Instance: VPNA
Total 0 (*, G) entry; 2 (S, G) entries
(10.1.3.1, 225.1.1.1)
RP:1.1.1.1
Protocol: pim-sm, Flag: SPT SG_RCVR ACT
UpTime: 00:46:58
Upstream interface: GigabitEthernet1/0/1
Upstream neighbor: 192.168.1.1
RPF prime neighbor: 192.168.1.1
Downstream interface(s) information:
Total number of downstreams: 1
1: pseudo
Protocol: BGP, UpTime: 00:46:58, Expires: -
(10.1.3.1, 226.1.1.1)
RP:1.1.1.1
Protocol: pim-sm, Flag: SPT SG_RCVR ACT
UpTime: 00:00:23
Upstream interface: GigabitEthernet1/0/1
Upstream neighbor: 192.168.1.1
RPF prime neighbor: 192.168.1.1
Downstream interface(s) information:
Total number of downstreams: 1
1: pseudo
Protocol: BGP, UpTime: 00:00:26, Expires: -
[~CE1] display pim routing-table
VPN-Instance: public net
Total 0 (*, G) entry; 2 (S, G) entries
(10.1.3.1, 225.1.1.1)
RP:1.1.1.1
Protocol: pim-sm, Flag: SPT LOC ACT
UpTime: 00:47:29
Upstream interface: GigabitEthernet1/0/0
Upstream neighbor: NULL
RPF prime neighbor: NULL
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet1/0/1
Protocol: pim-sm, UpTime: 00:47:29, Expires: 00:03:03
(10.1.3.1, 226.1.1.1)
RP:1.1.1.1
Protocol: pim-sm, Flag: SPT LOC ACT
UpTime: 00:00:54
Upstream interface: GigabitEthernet1/0/0
Upstream neighbor: NULL
RPF prime neighbor: NULL
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet1/0/1
Protocol: pim-sm, UpTime: 00:00:54, Expires: 00:02:36
The command outputs show that CE1 connecting to the multicast source has
received PIM Join messages from CE2 and CE3 connecting to multicast receivers
and that CE1 has generated PIM routing entries.
– # Configure PE2.
[~PE2] emdi
[*PE2-emdi] emdi channel-group PE2
[*PE2-emdi-channel-group-PE2] emdi channel 1 source 10.1.3.1 group 225.1.1.1 vpn-instance
VPNA pt 33 clock-rate 90kHz
[*PE2-emdi-channel-group-PE2] quit
[*PE2-emdi] quit
[*PE2] commit
– # Configure PE3.
[~PE3] emdi
[*PE3-emdi] emdi channel-group PE3
[*PE3-emdi-channel-group-PE3] emdi channel 2 source 10.1.3.1 group 226.1.1.1 vpn-instance
VPNA pt 33 clock-rate 90kHz
[*PE3-emdi-channel-group-PE3] quit
[*PE3-emdi] quit
[*PE3] commit
After completing the configuration, run the display emdi statistics history
channel command to check the detection result when multicast traffic passes
through PE1.
[~PE1] display emdi statistics history channel 1 start 3 end 5
Channel Name : 1
Total Records : 3
--------------------------------------------------------------------------------------------------------------------------------------------
Record  Record               Monitor    Monitor  Received  Rate    Rate        RTP-LC  RTP-SE  RTP-LR      RTP-SER     RTP
Index   Time                 Period(s)  Status   Packets   pps     bps                         (1/100000)  (1/100000)  Jitter(ms)
--------------------------------------------------------------------------------------------------------------------------------------------
3       2019-02-02:08-31-00  60         Normal   4388218   438821  4865656118  6700    6633    152         151         0
4       2019-02-02:08-32-00  60         Normal   4388533   438853  4866005390  6700    6633    152         151         0
5       2019-02-02:08-33-00  60         Normal   4393232   439323  4871215641  6700    6633    152         151         0
--------------------------------------------------------------------------------------------------------------------------------------------
After completing the configuration, check the eMDI detection result reported
through telemetry on the monitor platform.
----End
Configuration Files
● CE1 configuration file
#
sysname CE1
#
multicast routing-enable
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.3.2 255.255.255.0
pim sm
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 192.168.1.1 255.255.255.0
pim sm
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
ospf 2
area 0.0.0.0
network 1.1.1.1 0.0.0.0
network 10.1.3.0 0.0.0.255
network 192.168.1.0 0.0.0.255
#
pim
static-rp 1.1.1.1
#
return
● PE2 configuration file
#
sysname PE2
#
multicast mvpn 3.3.3.3
#
ip vpn-instance VPNA
ipv4-family
route-distinguisher 300:1
vpn-target 3:3 export-extcommunity
vpn-target 3:3 import-extcommunity
multicast routing-enable
mvpn
c-multicast signaling bgp
rpt-spt mode
#
mpls lsr-id 3.3.3.3
mpls
#
mpls ldp
mldp p2mp
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.6.1 255.255.255.0
#
interface GigabitEthernet1/0/1
undo shutdown
ip binding vpn-instance VPNA
ip address 192.168.2.1 255.255.255.0
pim sm
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 10.1.1.2 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
#
bgp 100
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 2.2.2.2 enable
#
ipv4-family vpnv4
policy vpn-target
peer 2.2.2.2 enable
#
ipv4-family mvpn
policy vpn-target
peer 2.2.2.2 enable
#
ipv4-family vpn-instance VPNA
import-route ospf 2
#
ospf 1
area 0.0.0.0
network 3.3.3.3 0.0.0.0
network 10.1.1.0 0.0.0.255
network 10.1.6.0 0.0.0.255
#
ospf 2 vpn-instance VPNA
import-route bgp
area 0.0.0.0
network 192.168.2.0 0.0.0.255
#
pim vpn-instance VPNA
static-rp 1.1.1.1
#
emdi
emdi channel-group PE2
emdi channel 1 source 10.1.3.1 group 225.1.1.1 vpn-instance VPNA pt 33 clock-rate 90kHz
emdi lpu-group PE2
emdi bind slot all
emdi bind channel-group PE2 lpu-group PE2
#
telemetry
#
sensor-group emdi-monitor
sensor-path huawei-emdi:emdi/emdi-telem-reps/emdi-telem-rep
sensor-path huawei-emdi:emdi/emdi-telem-rtps/emdi-telem-rtp
#
destination-group Monitor
ipv4-address 10.1.6.2 port 10001 protocol grpc no-tls
#
subscription PE2
sensor-group emdi-monitor
destination-group Monitor
#
return
● CE3 configuration file
#
sysname CE3
#
multicast routing-enable
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 192.168.3.2 255.255.255.0
pim sm
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.1.5.1 255.255.255.0
pim sm
igmp enable
igmp version 3
#
interface LoopBack1
ip address 6.6.6.6 255.255.255.255
#
ospf 2
area 0.0.0.0
network 6.6.6.6 0.0.0.0
network 10.1.5.0 0.0.0.255
network 192.168.3.0 0.0.0.255
#
pim
static-rp 1.1.1.1
#
return
● PE3 configuration file
#
sysname PE3
#
multicast mvpn 4.4.4.4
#
ip vpn-instance VPNA
ipv4-family
route-distinguisher 400:1
sensor-group emdi-monitor
sensor-path huawei-emdi:emdi/emdi-telem-reps/emdi-telem-rep
sensor-path huawei-emdi:emdi/emdi-telem-rtps/emdi-telem-rtp
#
destination-group Monitor
ipv4-address 10.1.6.2 port 10001 protocol grpc no-tls
#
subscription PE3
sensor-group emdi-monitor
destination-group Monitor
#
return
13 ESQM Configuration
Purpose
Traditional communication networks are unable to "perceive" services, which
prevents them from responding to customers' ever-changing service requirements
in real time. To solve this problem, ESQM has been developed to help devices
monitor the quality of services on networks. This technology integrates network
deployment with service requirements and provides the data that is the
foundation for automatic and intelligent network lifecycle management.
Benefits
ESQM offers the following benefits:
● Helps communication networks perceive service quality, and enables devices
to proactively detect services with poor QoE for fault diagnosis, demarcation,
and service optimization, thereby effectively shortening the duration of
network interruptions and reducing customers' OPEX.
● Helps customers perceive networks according to multiple metrics, including
service quality, forwarding path, and load, providing data support for routine
maintenance and network optimization.
Configuration Precautions
● Restriction: ESQM does not support flow identification based on VPN and
inbound/outbound interface information. A device regards the flows with the
same quintuple information (source and destination IP addresses, source and
destination port numbers, and protocol number) and monitoring direction
(inbound or outbound) as one flow for statistics collection, even if the
inbound/outbound interfaces and VPNs of the flows are different.
Guideline: Prevent this scenario during service planning.
Impact: The statistics about the flows in this scenario are combined. As a
result, the monitoring results are inaccurate.
● Restriction: ESQM flow entries share resources with MAC address entries. If the
number of MAC address entries created reaches the upper limit of the
resources, ESQM flow entries cannot be created. If the number of ESQM flow
entries created reaches the upper limit of the resources, the maximum number
of MAC address entries cannot be created.
Guideline: Prevent this scenario during service planning.
Impact: If both ESQM and MAC address services are deployed, the maximum
numbers of ESQM flow entries and MAC address entries cannot be reached at
the same time.
Context
Traditional communication networks are unable to "perceive" services, which
prevents them from responding to customers' ever-changing service requirements
in real time. To solve this problem, ESQM has been developed to help devices
monitor the quality of services on networks. This technology integrates network
deployment with service requirements and provides the data that is the
foundation for automatic and intelligent network lifecycle management.
Procedure
1. Run system-view
The system view is displayed.
2. Run esqm
The ESQM view is displayed.
3. (Optional) Run esqm session aging-time sctp tmval
An aging time is set for SCTP flow tables.
The configured aging time takes effect only for subsequently created SCTP flow tables.
4. (Optional) Run esqm protocol { tcp | sctp | gtp } disable
The device is disabled from creating flow tables for sampled TCP, SCTP, or GTP
protocol packets.
5. (Optional) Run esqm filter permit ip ip-addr mask masklen
The function of filtering sampled packets is enabled.
6. Run any of the following commands:
– To perform ESQM for inbound or outbound packets on all the interfaces
to which a VPN instance is bound, run the esqm service-stream
{ inbound | outbound } vpn-instance vpn-instance-name command in
the ESQM view.
– To perform ESQM for inbound or outbound packets on all the interfaces
to which no VPN instance is bound, run the esqm service-stream
{ inbound | outbound } command in the ESQM view.
– To perform ESQM for inbound or outbound packets on an interface, run
the following commands:
i. Run quit
Exit from the ESQM view.
ii. Run interface interface-type interface-num
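As an interface-level example, after ESQM has been enabled globally, the
configuration example later in this chapter monitors inbound service flows on a
single interface as follows (the device name, prompts, and interface name are
illustrative):
<HUAWEI> system-view
[~HUAWEI] interface GigabitEthernet2/0/0
[~HUAWEI-GigabitEthernet2/0/0] esqm service-stream inbound
[*HUAWEI-GigabitEthernet2/0/0] commit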
Networking Requirements
As networks rapidly develop and applications become diversified, various value-
added services are widely used. Link connectivity and network performance
influence network quality. Therefore, performance monitoring is especially
important for service transmission.
● For example, users do not perceive any change in voice quality if the packet
loss rate on voice links is lower than 5%. However, if the packet loss rate is
higher than 10%, user experience obviously degrades.
● Real-time services such as Voice over Internet Protocol (VoIP), online gaming,
and online video require a delay lower than 100 ms. Some delay-sensitive
services even require a delay lower than 50 ms. Otherwise, user experience
degrades.
To meet the high requirements of voice, online gaming, and online video services,
carriers must be able to monitor the packet loss and delay of the links and adjust
the links if service quality decreases.
As shown in Figure 13-1, an access network is deployed between a UPE and an
SPE, and an aggregation network is deployed between an SPE and an NPE. The
forward service flow enters the network through the UPE, travels across the SPE,
and leaves the network through the NPE. The backward service flow enters the
network through the NPE, also travels across the SPE, and leaves the network
through the UPE.
Configuration Roadmap
The configuration roadmap is as follows:
1. Deploy IGPs between the UPE and SPE and between the SPE and NPE. In this
example, OSPF runs between the UPE and SPE, and IS-IS runs between the
SPE and NPE.
Data Preparation
To complete the configuration, you need the following data:
Procedure
1. Configure an L3VPN HoVPN with an L3EVPN on the UPE, SPE, and NPE. For
configuration details, see Configuration Files.
2. Configure ESQM measurement on the UPE and NPE, and inject unidirectional
traffic from the UPE to the NPE.
# Configure inbound ESQM on the user side of the UPE.
<UPE> system-view
[~UPE] esqm
[*UPE-esqm] commit
[~UPE-esqm] interface GigabitEthernet2/0/0
[~UPE-GigabitEthernet2/0/0] esqm service-stream inbound
[*UPE-GigabitEthernet2/0/0] commit
Configuration Files
● UPE configuration file
#
sysname UPE
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 100:1
apply-label per-instance
vpn-target 2:2 export-extcommunity evpn
vpn-target 2:2 import-extcommunity evpn
evpn mpls routing-enable
#
mpls lsr-id 1.1.1.1
#
mpls
#
mpls ldp
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpn1
ip address 192.168.20.1 255.255.255.0
esqm service-stream inbound
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
bgp 100
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 2.2.2.2 enable
#
ipv4-family vpn-instance vpn1
import-route direct
advertise l2vpn evpn
#
l2vpn-family evpn
undo policy vpn-target
peer 2.2.2.2 enable
#
ospf 1
area 0.0.0.0
network 1.1.1.1 0.0.0.0
network 10.1.1.0 0.0.0.255
#
esqm
#
return
● SPE configuration file
#
sysname SPE
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 100:1
apply-label per-instance
vpn-target 2:2 export-extcommunity evpn
vpn-target 2:2 import-extcommunity evpn
Background
In the radio and television industry, especially in TV stations or media centers, IP-
based production and broadcasting networks are gaining in popularity. Related IP
standards are being formulated, which is an important step in the development of
IP-based production and broadcasting.
Implementation
On the topology shown in Figure 14-1, traffic enters the device through the
inbound interface. The device extracts the septuple information from the traffic
and generates matching rules based on the septuple information. Each flow that
matches the septuple information has a statistical ID. The device collects statistics
about the numbers of packets and bytes for each statistical ID. The device sends
the statistics of each statistical ID to the controller over telemetry. The controller
calculates the flow rate based on the current and last data records to identify the
video or audio flow.
To calculate the flow rate, the controller needs to receive the data information of
each flow. Table 14-1 describes the flow data fields collected and sent by the
device to the controller.
If the device consecutively collects statistics about a flow twice, the flow's rate is
calculated as follows:
● Number of packets forwarded per second = (packetNum2 - packetNum1)/
(timeStampSec2 - timeStampSec1)
● Number of bytes forwarded per second = (bytesNum2 - bytesNum1)/
(timeStampSec2 - timeStampSec1)
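For example, assuming two consecutive records with packetNum1 = 1000,
packetNum2 = 4000, bytesNum1 = 1,500,000, bytesNum2 = 6,000,000, and
timestamps 10 seconds apart (hypothetical values), the controller calculates
(4000 - 1000)/10 = 300 packets per second and (6,000,000 - 1,500,000)/10 =
450,000 bytes per second for the flow.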
Context
Figure 14-2 shows the typical networking of flow recognition. Target flows enter
the transport network from the multimedia terminal and then reach the device
through interface 1. After flow recognition is enabled on the device, the device
collects data and then sends the data to the controller over telemetry.
Pre-configuration Tasks
Before configuring flow recognition, complete the following tasks:
Procedure
Step 1 Run system-view
----End
Prerequisites
Flow recognition has been configured.
Procedure
Step 1 Run the display flow-recognition cache command to check the flow table
information of a slot in the flow cache.
----End
Networking Requirements
Figure 14-3 shows a typical media network. The functions of each node are
described as follows:
● Controller: delivers control instructions to the device to control, manage, and
monitor the system.
● Device: provides functions such as forwarding, replication, scheduling, clean
switching, and flow recognition of media traffic.
● Multimedia terminal A: functions as the transmit end of media signals and
transmits traffic to the device.
● Multimedia terminal B: functions as the receive end of media signals and
receives traffic from the device.
On the network:
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure an IP address and a routing protocol for each interface so that all
the nodes can communicate at the network layer.
2. Configure static telemetry subscription.
3. Configure flow recognition.
Data Preparation
To complete the configuration, you need the following data:
● Interface 1's IP address: 10.1.1.1
● Controller IP address: 10.1.1.2; port number: 10001
● Telemetry sampling path: huawei-flow-recognition:flowrecognition/
streaminfos/streaminfo
● Proto file used for flow recognition: huawei-flow-recognition.proto
Procedure
Step 1 Configure an IP address and a routing protocol for each interface so that all the
nodes can communicate at the network layer. For configuration details about the
device, see Configuration Files.
Step 2 Configure static telemetry subscription. For configuration details, see
Configuration Files.
Step 3 Configure flow recognition.
<HUAWEI> system-view
[~HUAWEI] interface GigabitEthernet 1/0/1
[~HUAWEI-GigabitEthernet1/0/1] flow-recognition inbound
[*HUAWEI-GigabitEthernet1/0/1] commit
SrcMac : 0000-0201-0102
DstMac : 0030-4567-8058
SrcPort : 10
DstPort : 30
SrcAddr : 2.1.1.3
DstAddr : 3.1.1.2
FirstTimestamp : 2019-05-20 14:32:46
LastTimestamp : 2019-05-20 14:52:47
PacketsCount : 399564
BytesCount : 51144192
----------------------------------------------------------
----End
Configuration Files
#
telemetry
#
sensor-group sensor1
sensor-path huawei-flow-recognition:flowrecognition/streaminfos/streaminfo condition express op-field
systemCpuUsage op-type gt op-value 40
#
destination-group destination1
ipv4-address 10.1.1.2 port 10001 protocol grpc no-tls
#
subscription subscription1
sensor-group sensor1
destination-group destination1
#
#
interface GigabitEthernet 1/0/1
flow-recognition inbound
#