Cisco Express Forwarding Overview: Benefits
Cisco Express Forwarding (CEF) is an advanced, Layer 3 IP switching technology. CEF optimizes network
performance and scalability for networks with large and dynamic traffic patterns, such as the Internet,
and for networks characterized by intensive Web-based applications or interactive sessions.
Procedures for configuring CEF or distributed CEF (dCEF) are provided in the “Configuring Cisco
Express Forwarding” chapter later in this publication.
This chapter describes CEF. It contains the following sections:
• Benefits
• Restrictions
• CEF Components
• Supported Media
• CEF Operation Modes
• TMS and CEF Nonrecursive Accounting
• Network Services Engine
• Virtual Profile CEF
Benefits
CEF offers the following benefits:
• Improved performance—CEF is less CPU-intensive than fast switching route caching. More CPU
processing power can be dedicated to Layer 3 services such as quality of service (QoS) and
encryption.
• Scalability—CEF offers full switching capacity at each line card when dCEF mode is active.
• Resilience—CEF offers an unprecedented level of switching consistency and stability in large
dynamic networks. In dynamic networks, fast-switched cache entries are frequently invalidated due
to routing changes. These changes can cause traffic to be process switched using the routing table,
rather than fast switched using the route cache. Because the Forwarding Information Base (FIB)
lookup table contains all known routes that exist in the routing table, it eliminates route cache
maintenance and the fast-switch or process-switch forwarding scenario. CEF can switch traffic more
efficiently than typical demand caching schemes.
Although you can use CEF in any part of a network, it is designed for high-performance, highly resilient
Layer 3 IP backbone switching. For example, Figure 8 shows CEF being run on Cisco 12000 series
Gigabit Switch Routers (GSRs) at aggregation points at the core of a network where traffic levels are
dense and performance is critical.
[Figure 8: CEF running at the network core, with peripheral routers and switches at the edge]
In a typical high-capacity Internet service provider (ISP) environment, Cisco 12012 GSRs as aggregation
devices at the core of the network support links to Cisco 7500 series routers or other feeder devices. CEF
in these platforms at the network core provides the performance and scalability needed to respond to
continued growth and steadily increasing network traffic. CEF is a distributed switching mechanism that
scales linearly with the number of interface cards and the bandwidth installed in the router.
Restrictions
• The Cisco 12000 series Gigabit Switch Routers operate only in distributed CEF mode.
• Distributed CEF switching cannot be configured on the same VIP card as distributed fast switching.
• Distributed CEF is not supported on Cisco 7200 series routers.
• If you enable CEF and then create an access list that uses the log keyword, the packets that match
the access list are not CEF switched. They are fast switched. Logging disables CEF.
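This caveat can be illustrated with a hypothetical configuration (the interface name and access-list number are placeholders, not taken from this document). Because access list 101 uses the log keyword, packets matching it are fast switched rather than CEF switched:

```
ip cef
!
! The log keyword forces matching packets out of the CEF path.
access-list 101 permit ip any any log
!
interface FastEthernet0/0
 ip access-group 101 in
```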
CEF Components
Information conventionally stored in a route cache is stored in several data structures for CEF switching.
The data structures provide optimized lookup for efficient packet forwarding. The two main components
of CEF operation are described in the following sections:
• Forwarding Information Base
• Adjacency Tables
Adjacency Tables
Nodes in the network are said to be adjacent if they can reach each other with a single hop across a link
layer. In addition to the FIB, CEF uses adjacency tables to prepend Layer 2 addressing information.
The adjacency table maintains Layer 2 next-hop addresses for all FIB entries.
Adjacency Discovery
The adjacency table is populated as adjacencies are discovered. Each time an adjacency entry is created
(such as through ARP), a link-layer header for that adjacent node is precomputed and stored in the
adjacency table. Once a route is determined, it points to a next hop and corresponding adjacency entry,
which is subsequently used for encapsulation during CEF switching of packets.
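The relationship between a FIB entry and its precomputed link-layer rewrite can be sketched as follows. This is an illustrative model only, not IOS code; the class and field names are invented for the example:

```python
# Illustrative sketch (not IOS code): a FIB entry points to an adjacency
# whose link-layer header was precomputed when the adjacency was
# discovered (e.g., through ARP), so no per-packet resolution is needed.

class Adjacency:
    def __init__(self, next_hop_ip, next_hop_mac, local_mac, ethertype=0x0800):
        self.next_hop_ip = next_hop_ip
        # Precomputed Layer 2 header: dst MAC + src MAC + EtherType,
        # built once at adjacency-discovery time, not per packet.
        self.l2_header = next_hop_mac + local_mac + ethertype.to_bytes(2, "big")

class FibEntry:
    def __init__(self, prefix, adjacency):
        self.prefix = prefix          # e.g., "10.1.0.0/16"
        self.adjacency = adjacency    # pointer to the precomputed rewrite

def cef_switch(fib_entry, ip_packet):
    """Encapsulate by prepending the cached link-layer header."""
    return fib_entry.adjacency.l2_header + ip_packet

adj = Adjacency("10.1.1.1",
                bytes.fromhex("00aabbccddee"),
                bytes.fromhex("001122334455"))
entry = FibEntry("10.1.0.0/16", adj)
frame = cef_switch(entry, b"ip-payload")  # header prepended, no ARP per packet
```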
Adjacency Resolution
A route might have several paths to a destination prefix, such as when a router is configured for
simultaneous load balancing and redundancy. For each resolved path, a pointer is added for the
adjacency corresponding to the next hop interface for that path. This mechanism is used for load
balancing across several paths.
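One way to picture this mechanism is a hash over the packet's source and destination addresses selecting one of the resolved adjacency pointers, so a given flow always follows the same path. This is a simplified sketch under that assumption, not the actual IOS algorithm:

```python
# Illustrative sketch: a route with several resolved paths keeps one
# adjacency pointer per path; hashing the source and destination
# addresses picks a path deterministically per flow.
import zlib

def select_adjacency(adjacencies, src_ip, dst_ip):
    key = (src_ip + dst_ip).encode()
    index = zlib.crc32(key) % len(adjacencies)  # deterministic per flow
    return adjacencies[index]

paths = ["adj-via-10.0.0.1", "adj-via-10.0.0.2"]
a = select_adjacency(paths, "192.0.2.10", "198.51.100.5")
b = select_adjacency(paths, "192.0.2.10", "198.51.100.5")
# Same flow -> same adjacency, so packets of one flow stay on one path.
```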
Unresolved Adjacency
When a link-layer header is prepended to packets, the FIB requires the prepend to point to an adjacency
corresponding to the next hop. If an adjacency was created by the FIB and not discovered through a
mechanism, such as ARP, the Layer 2 addressing information is not known and the adjacency is
considered incomplete. Once the Layer 2 information is known, the packet is forwarded to the Route
Processor (RP), and the adjacency is determined through ARP.
Supported Media
CEF currently supports ATM/AAL5snap, ATM/AAL5mux, ATM/AAL5nlpid, Frame Relay, Ethernet,
FDDI, PPP, HDLC, and tunnels.
[Figure: Cisco 7500 series router running CEF, showing the Route Processor and interface cards connected to Cisco Catalyst switches]
[Figure: Cisco 12000 series router, showing the Route Processor communicating over IPC with line cards (OC-12, OC-3, FE, Serial, T3, FDDI)]
In this Cisco 12000 series router, the line cards perform the switching. In other routers, where you can
mix various types of cards in the same chassis, not all of the cards in use may support CEF. When a line
card that does not support CEF receives a packet, the line card forwards the packet to the next higher
switching layer (the RP) or forwards the packet to the next hop for processing. This structure allows
legacy interface processors to exist in the router alongside newer interface processors.
Note The Cisco 12000 series GSRs operate only in dCEF mode; dCEF switching cannot be configured on
the same VIP card as distributed fast switching, and dCEF is not supported on Cisco 7200 series
routers.
TMS Data
The TMS feature allows an administrator to gather the following data:
• The number of packets and bytes that travel across the backbone from internal and external sources.
The packets and bytes are called traffic matrix statistics and are useful for determining how much
traffic a backbone handles. You can analyze the traffic matrix statistics using the following methods:
– Collecting and viewing the TMS data with the Network Data Analyzer (NDA) application.
– Reading the TMS data that resides on the backbone router.
The following sections explain how to collect and view the traffic matrix statistics using the
command-line interface (CLI) and the NDA. For detailed instructions on using the NDA, see the
Network Data Analyzer Installation and User Guide.
• The neighbor autonomous systems of a BGP destination. You can view the neighbor autonomous
systems of a BGP destination by reading the tmasinfo_ascii file that resides on the backbone router.
Figure 11 shows a sample backbone, represented by darkly shaded routers and bold links. The lighter
shaded and unshaded routers are outside the backbone. The traffic that travels through the backbone is
the area of interest for TMS collection.
[Figure 11: A sample backbone with EBGP links toward ISP 2 and the Atlanta POP; the legend distinguishes backbone routers, edge routers, and other routers]
Figure 12 shows an exploded view of the backbone router that links the Los Angeles point of presence
(POP) in Figure 11 to the Atlanta POP. The bold line represents the backbone link going to the Atlanta
POP.
[Figure 12: Exploded view of the backbone router, showing traffic paths labeled A through D]
The following types of traffic travel through the backbone router shown in Figure 12:
• The dotted line marked A represents traffic entering the backbone from a router that is not part of
the backbone. This is called external traffic.
• The dotted lines marked B and D represent traffic that is exiting the backbone. The router interprets
traffic from paths B and D as being generated from within the backbone. This is called internal
traffic.
• The dotted line marked C represents traffic that is not using the backbone and is not of interest to
TMS.
You can determine the amount of traffic the backbone handles by enabling a backbone router to track the
number of packets and bytes that travel through it. You can separate the traffic into the categories
“internal” and “external.” You separate the traffic by designating incoming interfaces on the backbone
router as internal or external.
Once you enable a backbone router to collect traffic matrix statistics, it starts free-running counters,
which dynamically update when network traffic passes through the backbone router. You can retrieve a
snapshot of the traffic matrix statistics, either through a command to the backbone router or through the
NDA.
External traffic (path A) is the most important for determining the amount of traffic. Internal traffic
(paths B and D) is useful for ensuring that you are capturing all the TMS data. When you receive a
snapshot of the traffic matrix statistics, the packets and bytes are displayed in internal and external
categories.
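The accounting model described above can be sketched as per-prefix counters that are split into internal and external categories according to how the incoming interface was designated. This is an illustrative model only; the interface names and data layout are invented for the example:

```python
# Illustrative sketch: free-running per-prefix counters, split into
# "internal" and "external" according to the designation of the
# incoming interface on the backbone router.
from collections import defaultdict

# Hypothetical interface designations (set by the administrator).
interface_role = {"POS0/0": "internal", "FastEthernet1/0": "external"}

# counters[prefix][role] = [packets, bytes]
counters = defaultdict(lambda: {"internal": [0, 0], "external": [0, 0]})

def account(prefix, in_interface, packet_len):
    role = interface_role[in_interface]
    counters[prefix][role][0] += 1            # packet count
    counters[prefix][role][1] += packet_len   # byte count

# Traffic arriving on an external interface vs. an internal one.
account("10.1.0.0/16", "FastEthernet1/0", 100)
account("10.1.0.0/16", "FastEthernet1/0", 50)
account("10.1.0.0/16", "POS0/0", 200)
```

A snapshot of `counters` then corresponds to the internal and external categories displayed when you retrieve the traffic matrix statistics.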
Viewing the TMS Data by Reading the Virtual Files That Reside on the Backbone Router
You can read the TMS data that resides on the backbone router and is stored in the following virtual files:
• tmstats_ascii—TMS data in ASCII (human readable) format.
• tmstats_binary—TMS data in binary (space-efficient) format.
To view statistics in the ASCII file, enter the following command on the backbone router:
Router# more system:/vfiles/tmstats_ascii
Each file displayed consists of header information and records. A blank line follows the header and
each record. A bar (|) separates consecutive fields within a header or record. The first field in a record
specifies the type of record. The following example shows a sample tmstats_ascii file:
VERSION 1|ADDR 172.27.32.24|AGGREGATION TrafficMatrix.ascii|SYSUPTIME 41428|routerUTC
3104467160|NTP unsynchronized|DURATION 1|
p|10.1.0.0/16|242|1|50|2|100
p|172.27.32.0/22|242|0|0|0|0
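Because the fields are bar-delimited, the file is straightforward to process offline. The following sketch (not part of IOS or the NDA) parses the header and the "p" records of a sample in the format shown above:

```python
# Illustrative parser for the tmstats_ascii format: bar-delimited
# fields, with "p" records carrying internal/external packet and
# byte counts for a destination prefix.

sample = """VERSION 1|ADDR 172.27.32.24|AGGREGATION TrafficMatrix.ascii|SYSUPTIME 41428|routerUTC 3104467160|NTP unsynchronized|DURATION 1|

p|10.1.0.0/16|242|1|50|2|100

p|172.27.32.0/22|242|0|0|0|0
"""

def parse_tmstats(text):
    lines = [l for l in text.splitlines() if l.strip()]
    # Header fields are "NAME value" pairs separated by bars.
    header = dict(f.split(" ", 1) for f in lines[0].rstrip("|").split("|"))
    records = []
    for line in lines[1:]:
        f = line.split("|")
        if f[0] == "p":
            records.append({
                "destPrefix/Mask": f[1],
                "creationSysUpTime": int(f[2]),
                "internalPackets": int(f[3]),
                "internalBytes": int(f[4]),
                "externalPackets": int(f[5]),
                "externalBytes": int(f[6]),
            })
    return header, records

header, records = parse_tmstats(sample)
```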
The following sections describe the header and the various types of records you can display.
File Header
The ASCII file header provides the address of the backbone router and information about how much time
the router used to collect and export the TMS data. The header occupies one line and uses the following
format:
VERSION 1|ADDR <address>|AGGREGATION TrafficMatrix.ascii|SYSUPTIME <seconds>|
routerUTC <routerUTC>|NTP <synchronized|unsynchronized>|DURATION <aggregateTime>|
Table 5 describes the fields in the file header of the TMSTATS_ASCII file.
Maximum Field Length   Field         Description
10                     VERSION       File format version.
21                     ADDR          The IP address of the router.
32                     AGGREGATION   The type of data being aggregated.
21                     SYSUPTIME     The time of export (in seconds) since the router booted.
21                     routerUTC     The time of export (in seconds) since 1900-01-01
                                     (Coordinated Universal Time (UTC)), as determined by
                                     the router.
19                     NTP           Whether Coordinated Universal Time (UTC) of the router
                                     has been synchronized by the Network Time Protocol
                                     (NTP).
20                     DURATION      The time needed to capture the data (in seconds).
Maximum Field Length   Field               Description
2                      <recordType>        p means that the record represents dynamic label
                                           switching data or traffic engineered (TE) tunnel
                                           traffic data.
19                     destPrefix/Mask     The IP prefix address/mask (a.b.c.d/len format) for
                                           this IGP route.
11                     creationSysUpTime   The sysUpTime when the record was first created.
21                     internalPackets     Internal packet count.
21                     internalBytes       Internal byte count.
21                     externalPackets     External packet count.
20                     externalBytes       External byte count (no trailing |).
Maximum Field Length   Field                   Description
2                      <recordType>            t means that the record represents TE tunnel
                                               midpoint data.
27                     headAddr<space>tun_id   The IP address of the tunnel head and tunnel
                                               interface number.
11                     creationSysUpTime       The sysUpTime when the record was first created.
21                     internalPackets         Internal packet count.
21                     internalBytes           Internal byte count.
21                     externalPackets         External packet count.
20                     externalBytes           External byte count (no trailing |).
The binary file tmstats_binary contains the same information as the ASCII file, except in a
space-efficient format. You can copy this file from the router and read it with any utility that accepts files
in binary format.
Each file consists of header information and a number of records. A blank line follows the header and
each record. A bar (|) separates consecutive fields within a header or a record.
Header Format
The file header provides the address of the router and indicates how much time the router used to collect
and export the data. The file header uses the following format:
VERSION 1|ADDR <address>|AGGREGATION ASList.ascii|SYSUPTIME <seconds>|
routerUTC <routerUTC>|DURATION <aggregateTime>|
Maximum Field Length   Field                     Description
18                     nonrecursivePrefix/Mask   The IP prefix address/mask (a.b.c.d/len format)
                                                 for this IGP route.
5                      AS                        The neighbor autonomous system.
18                     destinationPrefix/Mask    The prefix/mask for the FIB entry (typically a
                                                 BGP route).
Note Before enabling the PXF processor, you must have IP routing and IP CEF switching turned on.
For information on configuring NSE, see the “Configuring Cisco Express Forwarding” chapter later in
this publication.
Network Services Engine benefits and requirements are as follows:
• Accelerated services—The following features are accelerated on the NSE: Network Address
Translation (NAT), weighted fair queueing (WFQ), and NetFlow for both enterprise and service
provider customers.
• PXF field upgradable—PXF is based on microcode and can be upgraded with new software features
in future Cisco IOS releases.
The PXF processor enables IP parallel processing functions that work with the primary processor to
provide accelerated IP Layer 3 feature processing. The PXF processor off-loads IP packet
processing and switching functions from the RP to provide accelerated and highly consistent
switching performance when coupled with one or more of several IP services features, such as
Access Control Lists (ACLs), address translation, quality of service (QoS), flow accounting, and
traffic shaping.
PXF offers the advantage of hardware-based switching power, plus the flexibility of a programmable
architecture. The PXF architecture provides future-proofing—if additional features are added, an
application-specific integrated circuit (ASIC) will not be required. New features for accelerated
services can be added by reprogramming the PXF processor.
• System requirements—An NSE-1 can be used on existing Cisco 7200 VXR series routers with
Cisco IOS Release 12.1(1)E or a later version of Cisco IOS Release 12.1 E, and with
Cisco IOS Release 12.1(5)T or a later version of Cisco IOS Release 12.1 T.
• High performance—Network-layer services such as traffic management, security, and QoS benefit
significantly from the high performance of the NSE-1. The NSE-1 is the first Cisco processing
engine to offer integrated hardware acceleration, increasing Cisco 7200 VXR series system
performance by 50 to 300 percent for combined “high-touch” WAN edge services.