One Way Trip To The CCNA Routing & Switching Certification
By Randall Keith
Table of Contents
Chapter 1: Understanding Networks And Their Building Blocks
Introduction To Networks
Networking Types
Internetworking Models
OSI Reference Model
TCP/IP Model
Transmission Control Protocol (TCP)
User Datagram Protocol (UDP)
Internet Protocol (IP)
Routing Protocols
Internet Control Message Protocol (ICMP)
Ethernet Technologies & Cabling
Collision Detection in Ethernet
Half and Full Duplex Ethernet
Ethernet at the Data Link Layer
Ethernet at the Physical Layer
Ethernet Cabling
Cisco Three Layer Model
Summary
Chapter 2: IP Addressing & Subnets
IP Addresses: Composition, Types and Classes
Private And Public IP Addresses
Subnetting
Variable Length Subnet Masks (VLSM)
Route Summarization
Troubleshooting IP Addressing
Summary
Chapter 3: Introduction To Cisco Routers, Switches, and IOS
Routers, Switches, and the Boot Process
Using the Command Line Interface (CLI)
Basic Configuration Of Routers And Switches
Configuring Router Interfaces
Gathering Information And Verifying Configuration
Configuring DNS & DHCP
Managing The Configuration And IOS Files
Password Recovery On A Cisco Router
Cisco Discovery Protocol (CDP)
Using Telnet On IOS
Lab #1
Chapter 4: Introduction To IP Routing
Understanding IP Routing
Types Of Routing
Static Routing
Default Routing
Dynamic Routing
Administrative Distance & Routing Metrics
Understanding Routing Protocols
Classes Of Routing Protocols
Routing Loops
Maximum Hop Count
Split Horizon
Route Poisoning
Hold Downs
Route Redistribution
Static & Default Route Lab
Solution:
Verification:
Summary
Chapter 5: Routing Protocols
RIPv1 & RIPv2
Configuring RIPv1 & RIPv2
Verifying & Troubleshooting RIP
Enhanced Interior Gateway Routing Protocol (EIGRP)
Configuring EIGRP
Verifying & Troubleshooting EIGRP
Open Shortest Path First (OSPF)
Configuring OSPF
Verifying & Troubleshooting OSPF
EIGRP & OSPF Summary and Redistribution Routes
Lab 5-1: RIP
Solution
Verification
Lab 5-2: EIGRP
Solution
Solution
Lab 5-3: OSPF
Solution
Solution
Summary
Chapter 6: Switching & Spanning Tree Protocol (STP)
Understanding Switches & Switching
Initial Configuration Of A Catalyst Switch
Port Security
Spanning Tree Protocol (STP)
Cisco’s Additions To STP (Portfast, BPDUGuard, BPDUFilter, UplinkFast, BackboneFast)
Rapid Spanning Tree Protocol (RSTP) – 802.1w
PVST+ & Rapid-PVST
Etherchannel
Lab 6-1: Port Security
Lab 6-2: STP
Summary
Chapter 7: VLANs & VTP
MAC Address Table
Virtual LANs (VLANs)
Types Of Switch Ports
VLAN Trunking: ISL & 802.1Q
VLAN Trunking Protocol (VTP)
Inter-VLAN Routing
VLAN Configuration
Inter-VLAN Routing Configuration
VTP Troubleshooting
Voice VLAN Configuration
Summary
Chapter 8: Network Security
Network Security
Cisco Firewalls
Layer 2 Security
AAA Security Services
Secure Device Management
Secure Communications
Summary
Chapter 9: Access Lists
Introduction To Access Lists
Standard Access Lists
Extended Access Lists
Access Lists: Remote Access, Switchport, Modifying & Helpful Hints
Cisco Configuration Professional Initial Setup and Access List Lab
Summary
Chapter 10: Network Address Translation (NAT)
Introduction To NAT
Static NAT Configuration & Verification
Dynamic NAT Configuration
NAT Overloading aka Port Address Translation (PAT)
NAT Troubleshooting
NAT Configuration With Cisco Configuration Professional
Summary
Chapter 11: Wide Area Networks (WANs)
Introduction To Wide Area Networks
Point-To-Point WANs: Layer 1
Point-To-Point WANs: Layer 2
PPP Concepts
PPP Configuration
Troubleshooting Serial Links
Frame Relay
LMI & Encapsulation Types
Frame Relay Congestion Control
Frame Relay Encapsulation
Frame Relay Addressing
Frame Relay Topology Approaches
Frame Relay Configuration
Other WAN Technologies
Summary
Chapter 12: Virtual Private Networks (VPNs)
VPN Concepts
Types Of VPNs
Encryption
IPSec VPNs
SSL VPNs & Tunneling Protocols
GRE Tunnels
Summary
Chapter 13: IPv6
IPv6 Introduction
IPv6 Address Configuration
OSPF Version 3
EIGRP For IPv6
Summary
Chapter 14: IP Services
High Availability: VRRP, HSRP, GLBP
Cisco IOS Netflow
Summary
Chapter 1: Understanding Networks And Their Building Blocks
This chapter covers the following topics:
Introduction to Networks
Networking Types
OSI Reference Model
TCP/IP Model
Ethernet Technologies and Cabling
Cisco 3 Layer Model
Summary
Welcome to the amazing world of computer networking as you prepare for your Cisco CCNA
Routing & Switching 200-120 exam. This chapter will help you get your feet wet by explaining
what a network is, the different types of networks, and different devices used in them.
Once you know the basics of networking, this chapter will help you understand both the OSI
reference model and the TCP/IP model. Both of these models are very important for you to
understand, not only from the CCNA exam perspective but also for the rest of your
networking career. Most of this chapter is dedicated to these two reference models.
This chapter also covers network applications and Ethernet technologies, and ends with a
discussion of the Cisco Three Layer Model, which was created by Cisco to help design,
implement, and troubleshoot networks.
Introduction To Networks
Before you learn Cisco internetworking, it is important to understand what a network is and the
importance of networks themselves. Simply put, a network is a collection of interconnected
devices (such as computers, printers, etc.). To understand the importance of networks, let us look
at how things worked before networks were created. For this, consider a large multinational
company that sells food products in a time when networks did not exist.
Let us call this company ABC Inc. Imagine the amount of information such as sales, inventory, etc.
required by the management of the company to make everyday decisions. To get this information
they will need to call their local offices. Their local offices will need to mail (postal!) or fax
printed reports or even send media (floppies!) through the postal service. By the time the mail is
received, the data is already days old. Even if reports are faxed, it will be a cumbersome task to
consolidate all reports. This task also increases the chance of human error since large numbers of
reports are manually collated. This is just one part of the equation. You also need to consider the
information required by the local offices. They also need various data from the head office and
other offices around the world.
Now consider the same company, but in the present time with all their offices interconnected.
They would use a single application around the world that takes advantage of their global
network. The data from all offices would be instantly stored at the central location and with a
single click, the management team can see data from around the world in any format they like.
This data would also be real-time. This means that they see it as it is happening. Since the data is
centralized, any office location can see data pertaining to any location.
As you can see, the cost, time and effort involved in transferring data was much higher without
networks. So networks decrease cost, time, and effort and thereby increase productivity. They
also help in resource optimization by helping to share resources. A simple example of resource
sharing is a printer in a typical office. Without networks, each computer would require a
dedicated printer. However with a network, the printer can be shared between many different
computers.
Now that you know how beneficial networks are, it's time to look at how networks work. Figure
1-1 shows the most basic form of a network. This figure shows two hosts (end-user devices such
as computers are commonly called hosts in networking terms) directly connected to each other
using a networking cable. Today every host has a Network Interface Card (NIC) that is used to
connect it to a network.
Figure 1-1 Most basic form of Network
One end of the network cable connects to the NIC on a host and the other connects to the network.
In this case, the cable directly connects to another host. At this stage do not worry about network
cables and how the hosts communicate across the network. This will be covered in detail later in
the chapter. At this stage it is important to understand how hosts connect to a network.
In Figure 1-1, the hosts are “networked” and can share information. This network is effective, but
not scalable. If you want to add more than 2 hosts to this “network”, it will not work without a separate
NIC for each connection, and that is neither scalable nor realistic. For more than 2 hosts to be
networked, you require a network device such as a hub. Figure 1-2 shows three hosts connected
to a hub.
Figure 1-2 Network with a Hub
A hub is a network device that repeats information received from a host to all other connected
hosts. In Figure 1-2 the hub will relay any information received from HostA to HostB and HostC.
This means that all the three hosts can communicate with each other. Communication between
hosts can be classified into three types:
Unicast – Communication from one host to another host only.
Broadcast – Communication from one host to all the hosts in the network.
Multicast – Communication from one host to a group of hosts only.
When a hub is used to network hosts, there are two problems that arise:
1. A hub repeats information received from one host to all the other hosts. To understand this,
consider HostA in Figure 1-2 sending a unicast message to HostB. When the hub receives
this message, it will relay the message to both HostB and HostC. Even though the message
was a unicast intended only for HostB, HostC also receives it. It is up to HostC to read the
message and discard it after seeing that the message was not intended for it.
2. A hub creates a shared network medium where only a single host can send packets at a
time. If another host attempts to send packets at the same time, a collision will occur. Then
each device will need to resend its packets and hope not to have a collision again. This
shared network medium is called a single collision domain. Imagine the impact of having a
single collision domain where 50 or 100 hosts are connected to hubs that are
interconnected and they are all trying to send data. That is just a recipe for many collisions
and an inefficient network.
The problems associated with hubs can cause severe degradation of a network. To overcome
these, switches are used instead of hubs. Like hubs, switches are used to connect hosts in a
network, but switches break up collision domains by providing a separate collision domain for every
port. This means that every host (one host connects to one port on the switch) gets its own
collision domain thereby eliminating the collisions in the network. With switches, each host can
transmit data anytime. Switches simply “switch” the data from one port to another in the switched
network. Also, unlike hubs, switches do not flood every packet out all ports. They switch a
unicast packet to the port where the destination host resides. They only flood out a broadcast
packet. Figure 1-3 shows a switched network.
Figure 1-3 A switched network
Remember that each host in Figure 1-3 is in its own collision domain and if HostA sends a packet
to HostC, HostB will not receive it.
Figure 1-4 and 1-5 show two networks. See if you can figure out how many collision domains
exist in them.
Figure 1-4 Collision Domains – 1
Figure 1-5 Collision Domains – 2
If you answered 5 for Figure 1-4, then you are absolutely correct since each port of the switches
represents a single collision domain. If you answered more than 5 then you need to remember that
a hub does not break collision domains. Similarly, Figure 1-5 has 7 collision domains.
Now that you know how a switch works and improves a network, consider the one problem
associated with a switched network. Earlier, you learned that hubs flood out all packets, even the
unicast ones. A switch does not flood out unicast packets but it does flood out a broadcast packet.
All hosts connected to a switched network are said to be in the same broadcast domain. All hosts
connected to it will receive any broadcast sent out in this domain. While broadcasts are useful
and essential for network operations, in a large switched network too many broadcasts will slow
down the network. To remedy this situation, networks are broken into smaller sizes and these
separate networks are interconnected using routers. A router does not allow broadcasts to be
transmitted across the different networks it interconnects and hence effectively breaks up broadcast
domains. Figure 1-6 shows three switched networks interconnected by a router.
Figure 1-6 Router in an Internetwork
In the network shown in Figure 1-6, broadcasts from hosts connected to Switch1 will not reach
hosts connected to Switch2 or Switch3. This is because the router will drop the broadcast on its
receiving interface.
In addition to breaking up broadcast domains, routers also perform the following four essential
functions in your network:
Packet Switching – At the barest minimum, routers are like switches because they
essentially switch packets between networks.
Communication between Networks – As shown in Figure 1-6, routers allow
communication between networks connected to it.
Path Selection – Routers can talk to each other to learn about all the networks connected
to various routers and then select the best path to reach a network. This function is
discussed in detail later in the book.
Packet Filtering – Routers can drop or forward packets based on certain criteria like
their source and destination. This is also discussed in detail later in the book.
Exam Alert: Remember that switches break collision domains and routers break broadcast
domains. In addition, it is important to remember the functions of a router for your CCNA
certification exam.
Now that you know what a network is and what various network devices do, it's time to learn
about various network types followed by networking models.
Networking Types
As you know a network is a collection of devices connected together. Networks are further
classified into various types depending on their size, expanse, security, purpose and many other
parameters. While covering all these classifications is beyond the scope of the CCNA exam, there
are two important network classifications that you need to know about for the exam. In fact a large
part of the CCNA exam revolves around these two types of networks:
Local Area Network (LAN) – This is a term used to describe a network covering a
limited geographical area such as a floor, building or a campus. A LAN usually has a high
data-transfer rate. The Ethernet standard is the most commonly used technology in LANs.
Ethernet is so common that it is almost synonymous with LAN today. Of late, wireless
technology has also become increasingly common in LANs. Both these standards
are covered in depth further in the book.
Wide Area Network (WAN) – This is a term used to describe a network covering a large
geographical area such as multiple cities, a country, or even the entire world. WANs are
used to connect LANs across the area they cover. A typical example would be the LANs at
various offices of a company connected by a WAN. Various technology standards used in
WAN will be covered later in the book.
Internetworking Models
As the importance of computers grew, vendors recognized the need for networking them. They
created various protocols whose specifications were not made public. Hence each vendor had
different ways of networking computers, and these ways were not compatible with each other. This
meant that computers of one vendor could not be networked with another vendor’s computers.
Slowly these specifications were made public and some inter-vendor compatibility was created
but this still represented too many complications. In 1977 the International Organization for
Standardization (ISO) started working on an open standard networking model that all vendors
would support to promote inter-operability. This standard was published in 1984 and was known
as the Open Systems Interconnection (OSI). During the same time period (1973 to 1985)
another effort by the Defense Advanced Research Projects Agency (DARPA) was underway to
create an open standard network model. This network model came to be known as the TCP/IP
Model. By 1985, the TCP/IP model started gaining more prominence and support from vendors
and eventually replaced the OSI model.
This section starts by discussing the OSI Reference model in some depth before moving into a
deep discussion on the TCP/IP model and its protocols.
OSI Reference Model
As discussed earlier, the OSI model was created to promote communication between devices of
various vendors. It also promotes communication between disparate hosts such as hosts using
different operating platforms (Windows, OSX, Linux, etc.). Remember that you are very unlikely
to ever work on a system that uses protocols conforming to the OSI reference model. But it is
essential to know the model and its terminology because other models such as the TCP/IP model
are often compared to the OSI reference model. Hence the discussion on this model will be
limited compared to the discussion on the TCP/IP model.
The OSI reference model, like most other network models, divides the functions, protocols, and
devices of a network into various layers. The layered approach provides many benefits, some of
which are:
Communication is divided into smaller and simpler components. This makes designing,
developing and troubleshooting easier.
Since it is a layered approach, the vendors write to a common input and output
specification per layer. The guts of their products function in between the input and
output code of that layer.
Changes in one layer do not affect other layers. Hence development in one layer is not
bound by limitations of other layers. For example, wireless technologies are new, but old
applications run seamlessly over them without any changes.
It is easier to standardize functions when they are divided into smaller parts like this.
It allows various types of hardware and software, both new and old to communicate with
each other seamlessly.
The OSI reference model has seven such layers that can be divided into two groups. The upper
layers (Layers 7, 6 and 5) define how applications interact with the host interface, with each
other, and the user. The lower four layers (Layers 4, 3, 2 and 1) define how data is transmitted
between hosts in a network. Figure 1-7 shows the seven layers and a summary of their functions.
Figure 1-7 Seven Layers of OSI Reference Model
The sections below discuss each layer in detail.
Application Layer
The Application Layer provides the interface between the software application on a system and
the network. Remember that this layer does not include the application itself, but provides
services that an application requires. One of the easiest ways to understand this layer’s function is
to look at how a Web Browser such as Internet Explorer or Firefox works. IE or FF is the
application. When it needs to fetch a webpage, it uses the HTTP protocol to send the request and
receive the page contents. This protocol resides at the application layer and can be used by an
application such as IE or FF to get webpages from web servers across the network. On the other
side, the web server application such as Apache or IIS interacts with the HTTP protocol on the
Application layer to receive the HTTP request and send the response back.
Presentation Layer
As the name suggests, this layer presents data to the Application layer. The Presentation Layer is
responsible for data translation and encoding. It will take the data from the Application layer and
translate it into a generic format for transfer across the network. At the receiving end the
Presentation layer takes in generically formatted data and translates it into the format recognized by
the Application layer. An example of this is an EBCDIC to ASCII translation. The OSI model
has protocol standards that define how data should be formatted. This layer is also involved in
data compression, decompression, encryption, and decryption.
Session Layer
In a host, different applications or even different instances of the same application might request
data from across the network. It is the Session layer’s responsibility to keep the data from each
session separate. It is responsible for setting up, managing and tearing down sessions. It also
provides dialog control and coordinates communication between the systems.
Transport Layer
Where the upper layers are related to applications and data within the host, the transport layer is
concerned with the actual end-to-end transfer of the data across the network. This layer
establishes a logical connection between the two communicating hosts and provides reliable or
unreliable data delivery and can provide flow control and error recovery. Although not
developed under the OSI Reference Model and not strictly conforming to the OSI definition of the
Transport Layer, typical examples of Layer 4 are the Transmission Control Protocol
(TCP) and User Datagram Protocol (UDP). These protocols will be discussed in great detail
later in this chapter.
Network Layer
To best understand what the Network layer does, consider what happens when you write a letter
and use the postal service to send the letter. You put the letter in an envelope and write the
destination address as well as your own address so that an undelivered letter can be returned
back to you. In network terms, this address is called a logical address and is unique in the
network. Each host has a logical address. When the post office receives this letter, it has to
ascertain the best path for this letter to reach the destination. Similarly in a network, a router
needs to determine the best path to a destination address. This is called path determination.
Finally the post office sends the letter out the best path and it moves from post office to post
office before finally being delivered to the destination address. Similarly, data is moved across
networks mainly by routers before finally being delivered to the destination.
All these three functions – logical addressing, path determination and forwarding – are done at
the Network Layer. Two types of protocols are used for these functions – routed protocols are
used for logical addressing and forwarding while routing protocols are used for path
determination. There are many routed protocols and routing protocols available. Some of the
common ones are discussed in great detail later in the book. Routers function at this layer.
Remember that routers only care about the destination network. They do not care about the
destination host itself. The task of delivery to the destination host lies with the Data Link Layer.
Data Link Layer
While the Network layer deals with data moving across networks using logical addresses, the Data
Link layer deals with data moving within a local network using physical addresses. Each host has
a logical address and a physical address. The physical address is only locally significant and is
not used beyond the network boundaries (across a router). This layer also defines protocols that
are used to send and receive data across the media. You will remember from earlier in the
chapter that only a single host can send data at a time in a collision domain or else packets will
collide and cause a host to back off for some time. The Data Link layer determines when the media
is ready for the host to send the data and also detects collisions and other errors in received data.
Switches function at this layer.
Physical Layer
This layer deals with the physical transmission medium itself. It activates, maintains and
deactivates the physical link between systems (host and switch for example). This is where the
connectors, pin-outs, cables, electrical currents etc. are defined. Essentially this layer puts the
data on the physical media as bits and receives it in the same way. Hubs work at this layer.
Data Encapsulation
In the previous sections you learned about various layers of the OSI reference model. Each layer
has its distinct function and it interacts with the corresponding layer at the remote end. For
example, the transport layer at the source will interact with the transport layer of the destination.
For this interaction, each layer adds a header in front of the data from the previous layer. This
header contains control information related to the protocol being used at that layer. This process
is called encapsulation. This header and the data being sent from one layer to the next lower
layer is called a Protocol Data Unit (PDU). Figure 1-8 shows how data gets encapsulated as it
travels from layer 7 down to layer 1.
Figure 1-8 Encapsulation in OSI Reference Model
As shown in Figure 1-8, the Application layer adds its protocol-dependent header to the data and
creates the Layer 7 PDU which is then passed down to the Presentation Layer. This layer then
adds its header to the Layer 7 PDU to create the Layer 6 PDU and sends it down to the Session
layer. This goes on till Layer 2 receives the Layer 3 PDU. Layer 2 adds a header and a trailer to
the Layer 3 PDU to create the Layer 2 PDU that is then sent to Layer 1 for transmission.
At the receiving end, Layer 1 takes the data off the wire and sends it to Layer 2. Here the Layer 2
header and trailer are examined and removed. The resulting Layer 3 PDU is sent to Layer 3.
Layer 3 in turn examines the header in the PDU and removes it. The resulting Layer 4 PDU is sent
to Layer 4. Similarly, each layer removes the header added by the corresponding layer at the
source before sending the data to the upper layer. Finally the Application layer removes the Layer
7 header and sends the data to the application. This process of examining, processing and
removing the header is known as decapsulation.
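The nesting of headers is easy to see if you build it by hand. The following Python sketch is purely illustrative: the "headers" below are toy byte strings, not real protocol formats, and the port and address values are made up for the example.

import struct

app_data = b"GET / HTTP/1.1\r\n\r\n"                            # Layer 7 data handed down the stack
l4_pdu = struct.pack("!HH", 49152, 80) + app_data                # toy transport header: source/destination ports
l3_pdu = bytes([10, 0, 0, 1]) + bytes([10, 0, 0, 2]) + l4_pdu    # toy network header: source/destination addresses
frame = b"\xAA" * 6 + b"\xBB" * 6 + l3_pdu + b"\x00" * 4         # toy data link header (two MAC-like fields) plus trailer
print(len(app_data), len(l4_pdu), len(l3_pdu), len(frame))       # each PDU grows as a header (and trailer) is added

Decapsulation at the receiver is simply the reverse: each layer slices its own header (and trailer) off before passing the rest up.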
Exam Alert: It is very important to remember the Layer names, their functions and the
encapsulation process. You can use a common mnemonic to remember the layer names and their
sequence – All People Seem To Need Data Processing. This is an important concept on your
CCNA exam.
TCP/IP Model
As mentioned earlier, the OSI reference model and the TCP/IP model are two open standard
networking models that are very similar. However, the latter has found more acceptance today
and the TCP/IP protocol suite is more commonly used. Just like the OSI reference model, the
TCP/IP model takes a layered approach. In this section we will look at all the layers of the
TCP/IP model and various protocols used in those layers.
The TCP/IP model is a condensed version of the OSI reference model consisting of the following
4 layers:
Application Layer
Transport Layer
Internet Layer
Network Access Layer
The functions of these four layers are comparable to the functions of the seven layers of the OSI
model. Figure 1-9 shows the comparison between the layers of the two models.
The following sections discuss each of the four layers and protocols in those layers in detail.
Figure 1-9 Comparison between TCP/IP and OSI models
Application Layer
The Application Layer of the TCP/IP Model consists of various protocols that perform all the
functions of the OSI model’s Application, Presentation and Session layers. This includes
interaction with the application, data translation and encoding, dialogue control and
communication coordination between systems.
The following are few of the most common Application Layer protocols used today:
Telnet – Telnet is a terminal emulation protocol used to access the resources of a remote host. A
host, called the Telnet server, runs a telnet server application (or daemon in Unix terms) that
receives a connection from a remote host called the Telnet client. This connection is presented to
the operating system of the telnet server as though it were a terminal connected directly
(using a keyboard and mouse). It is a text-based connection and usually provides access to the
command line interface of the host. Remember that the application used by the client is usually
named telnet also in most operating systems. You should not confuse the telnet application with
the Telnet protocol.
HTTP – The Hypertext Transfer Protocol is the foundation of the World Wide Web. It is used to
transfer webpages and similar resources from the Web Server or HTTP server to the Web Client or
the HTTP client. When you use a web browser such as Internet Explorer or Firefox, you are using
a web client. It uses HTTP to transfer web pages that you request from the remote servers.
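As a rough sketch of that interaction, the following Python snippet opens a TCP connection to a web server and sends a minimal HTTP request. It assumes the placeholder host example.com is reachable and answering on the well-known HTTP port 80.

import socket

with socket.create_connection(("example.com", 80)) as s:
    # A bare-bones HTTP/1.1 request for the root page.
    s.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    response = b""
    while True:
        chunk = s.recv(4096)          # read until the server closes the connection
        if not chunk:
            break
        response += chunk
print(response.split(b"\r\n")[0].decode())   # status line, e.g. "HTTP/1.1 200 OK"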
FTP – File Transfer Protocol is a protocol used for transferring files between two hosts. Just like
telnet and HTTP, one host runs the FTP server application (or daemon) and is called the FTP
server while the FTP client runs the FTP client application. A client connecting to the FTP server
may be required to authenticate before being given access to the file structure. Once authenticated,
the client can view directory listings, get and send files, and perform some other file related
functions. Just like telnet, the FTP client application available in most operating systems is
called ftp. So the protocol and the application should not be confused.
SMTP – Simple Mail Transfer Protocol is used to send e-mails. When you configure an email
client to send e-mails you are using SMTP. The mail client acts as an SMTP client here. SMTP is
also used between two mail servers to send and receive emails. However, the end client does not
receive emails using SMTP. The end clients use the POP3 protocol to do that.
TFTP – Trivial File Transfer Protocol is a stripped down version of FTP. Where FTP allows a
user to see a directory listing and perform some directory related functions, TFTP only allows
sending and receiving of files. It is a small and fast protocol, but it does not support
authentication. Because of this inherent security risk, it is not widely used.
DNS – Every host in a network has a logical address called the IP address (discussed later in the
chapter). These addresses are a bunch of numbers. When you go to a website such as
www.cisco.com you are actually going to a host which has an IP address, but you do not have to
remember the IP address of every website you visit. This is because Domain Name Service
(DNS) helps map a name such as www.cisco.com to the IP address of the host where the site
resides. This obviously makes it easier to find resources on a network. When you type in the
address of a website in your browser, the system first sends out a DNS query to its DNS server to
resolve the name to an IP address. Once the name is resolved, an HTTP session is established with
the IP Address.
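You can watch this resolution happen from any host with Python installed; the short sketch below assumes the machine has a reachable DNS server configured and simply asks the operating system's resolver to do the lookup.

import socket

# The OS sends the DNS query (port 53) on our behalf and returns the answer.
print(socket.gethostbyname("www.cisco.com"))   # prints the resolved IP address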
DHCP – As you know, every host requires a logical address such as an IP address to
communicate in a network. The host gets this logical address either by manual configuration or by
a protocol such as Dynamic Host Configuration Protocol (DHCP). Using DHCP, a host can be
provided with an IP address automatically. To understand the importance of DHCP, imagine
having to manage 5000 hosts in a network and assigning them IP addresses manually! Apart from the
IP address, a host needs other information such as the address of the DNS server it needs to
contact to resolve names, gateways, subnet masks, etc. DHCP can be used to provide all this
information along with the IP address.
Transport Layer
The protocols discussed above are a few of the protocols available in the Application layer. There
are many more protocols available. All of them take the user data, add a header, and pass it
down to the Transport layer to be sent across the network to the destination. The TCP/IP Transport
layer’s function is the same as that of the OSI model’s Transport layer. It is concerned with end-to-end
transportation of data and sets up a logical connection between the hosts.
Two protocols available in this layer are Transmission Control Protocol (TCP) and User
Datagram Protocol (UDP). TCP is a connection-oriented and reliable protocol that
uses windowing to control the flow and provides ordered delivery of the data in segments. On
the other hand, UDP simply transfers the data without the bells and whistles. Though these two
protocols are different in many ways, they perform the same function of transferring data and they
use a concept called port numbers to do this. The following sections cover port numbers before
looking into TCP and UDP in detail.
Port Numbers
A host in a network may send traffic to or receive traffic from multiple hosts at the same time. The
system would have no way to know which data belongs to which application. TCP and UDP solve
this problem by using port numbers in their headers. Common application layer protocols have
been assigned port numbers in the range of 0 to 1023. These ports are known as well-known
ports. Applications implementing these protocols listen on these port numbers. TCP and UDP on
the receiving host know which application to send the data to based on the port numbers received
in the headers.
On the source host, each TCP or UDP session is assigned a random port number above 1023
so that returning traffic from the destination can be identified as belonging to the
originating application. The combination of the IP address, the protocol (TCP or UDP) and the port
number forms a socket at both the receiving and sending hosts. Since each socket is unique, an
application can send and receive data to and from multiple hosts. Figure 1-10 shows two hosts
communicating using TCP. Notice that the hosts on the left and right are sending traffic to the host
in the center and both of them are sending traffic destined to Port 80, but from different source
ports. The host in the center is able to handle both the connections simultaneously because the
combination of IP address, Port numbers and Protocols makes each connection different.
Figure 1-10 Multiple Sessions using Port Numbers
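The socket concept is easy to see from a host's point of view. The following Python sketch (which assumes the placeholder host example.com is reachable and answering on TCP port 80) shows the operating system picking an ephemeral source port while the destination uses the well-known port.

import socket

s = socket.create_connection(("example.com", 80))
src_ip, src_port = s.getsockname()     # source socket: our IP plus an ephemeral port above 1023
dst_ip, dst_port = s.getpeername()     # destination socket: server IP plus well-known port 80
print(f"({src_ip}, TCP, {src_port})  ->  ({dst_ip}, TCP, {dst_port})")
s.close()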
Table 1-1 shows the transport layer protocol and port numbers used by different common
application layer protocols.
Table 1-1 Well-known Port Numbers
Application Protocol    Transport Protocol    Port Number
HTTP                    TCP                   80
HTTPS                   TCP                   443
FTP (control)           TCP                   21
FTP (data)              TCP                   20
SSH                     TCP                   22
Telnet                  TCP                   23
DNS                     TCP, UDP              53
SMTP                    TCP                   25
TFTP                    UDP                   69

Exam Alert: It is important to remember the well-known port numbers and which application layer protocol they are assigned to, as you will see this on your CCNA exam in a multiple choice question or an access-list question.
Transmission Control Protocol (TCP)
TCP is one of the original protocols designed in the TCP/IP suite and hence the name of the
model. When the application layer needs to send a large amount of data, it sends the data down to
the transport layer for TCP or UDP to transport it across the network. TCP first sets up a virtual-
circuit between the source and the destination in a process called the three-way handshake. Then it
breaks down the data into chunks called segments, adds a header to each segment and sends them
to the Internet layer.
The TCP header is 20 to 24 bytes in size and the format is shown in Figure 1-11. It is not
necessary to remember all fields or their size but most of the fields are discussed below.
Figure 1-11 TCP header
When the Application layer sends data to the transport layer, TCP sends the data across using the
following sequence:
Connection Establishment – TCP uses a process called the three-way handshake to establish a
connection or virtual-circuit with the destination. The three-way handshake uses
the SYN and ACK flags in the Code Bits section of the header. This process is necessary to
initialize the sequence and acknowledgement number fields. These fields are important for TCP
and will be discussed below.
Figure 1-12 TCP three-way handshake
As shown in Figure 1-12, the source starts the three-way handshake by sending a TCP header to
the destination with the SYN flag set. The destination responds with the SYN and ACK flags
set. Notice in the figure that the destination uses the received sequence number plus 1 as the
Acknowledgement number. This is because the SYN is treated as if it contained 1 byte of data in the
exchange. In the final step, the source responds with only the ACK bit set. After this, the data
flow can commence.
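Applications never build the SYN and ACK segments themselves; the operating system performs the handshake when a TCP connection is requested. A minimal Python sketch, with 192.0.2.10 used purely as a placeholder address:

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(3)
try:
    # connect() triggers the SYN / SYN-ACK / ACK exchange; returning without an
    # exception means the three-way handshake completed and data can flow.
    s.connect(("192.0.2.10", 80))
    print("handshake complete, virtual circuit established")
except OSError as exc:
    print("no connection established:", exc)
finally:
    s.close()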
Data Segmentation – The size of data that can be sent across in a single Internet layer PDU is
limited by the protocol used in that layer. This limit is called the maximum transmission unit
(MTU). The application layer may send data much larger than this limit; hence TCP has to break
down the data into smaller chunks called segments. Each segment is limited to the MTU in size.
Sequence numbers are used to identify each byte of data. The sequence number in each header
signifies the byte number of the first byte in that segment.
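Conceptually, the segmentation and numbering work like the small Python sketch below. The 1460-byte segment size is only an assumption for the example (a typical payload size on Ethernet); real TCP negotiates the value.

def segment(data: bytes, mss: int = 1460, initial_seq: int = 0):
    """Split data into chunks of at most mss bytes, tagging each chunk with the
    sequence number of its first byte, the way TCP numbers the byte stream."""
    segments = []
    seq = initial_seq
    for offset in range(0, len(data), mss):
        chunk = data[offset:offset + mss]
        segments.append((seq, chunk))
        seq += len(chunk)
    return segments

for seq, chunk in segment(b"x" * 4000):
    print(seq, len(chunk))     # prints 0/1460, 1460/1460, 2920/1080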
Flow Control – The source starts sending data in groups of segments. The Window field in the
header determines the amount of data that can be sent at a time. This is done to avoid
overwhelming the destination. At the start of the session the window is small, but it increases over
time. The destination host can also decrease the window to slow down the flow. Hence the
window is called the sliding window. When the source has sent the number of segments allowed
by the window, it cannot send any further segments till an acknowledgement is received from the
destination. Figure 1-13 shows how the window increases during the session. Notice the
Destination host increasing the Window from 1000 to 1100 and then to 1200 when it sends an
ACK back to the source.
Figure 1-13 TCP Sliding Window and Reliable delivery
Reliable Delivery with Error recovery – When the destination receives the last segment in the
agreed window, it has to send an acknowledgement to the source. It sets the ACK flag in the
header and the acknowledgement number is set as the sequence number of the next byte expected.
If the destination does not receive a segment, it does not send an acknowledgement back. This
tells the source that some segments have been lost and it will retransmit the segments. Figure 1-13
shows how windowing and acknowledgement are used by TCP. Notice that when the source does not
receive an acknowledgement for the segment with sequence number 2000, it retransmits the data.
Once it receives the acknowledgement, it sends the next sequence according to the window size.
Ordered Delivery – TCP transmits data in the order it is received from the application layer and
uses sequence numbers to mark the order. The data may be received at the destination in the wrong
order due to network conditions. Thus TCP at the destination orders the data according to the
sequence numbers before sending it to the application layer at its end. This ordered delivery is part
of the benefit of TCP and one of the purposes of the Sequence Number.
Connection Termination – After all data has been transferred, the source initiates a four-way
handshake to close the session. To close the session, the FIN and ACK flags are used.
Exam Alert: TCP is one of the most important protocols you will learn about while preparing for
the CCNA exam. Understanding how TCP works is very important and you will more than likely
see an ACK question on the exam!
User Datagram Protocol (UDP)
The only thing common between TCP and UDP is that they use port numbers to transport traffic.
Unlike TCP, UDP neither establishes a connection nor does it provide reliable delivery. UDP
is a connectionless and unreliable protocol that delivers data without the overhead associated with
TCP. The UDP header contains only four parameters (Source port, Destination Port, Length and
Checksum) and is 8 bytes in size.
At this stage you might think that TCP is a better protocol than UDP since it is reliable. However
you have to consider that networks now are far more stable than when these protocols were
conceived. TCP has a higher overhead with a larger header and acknowledgements. The source
also holds data till it receives acknowledgement. This creates a delay. Some applications,
especially those that deal with voice and video, require fast transport and take care of the
reliability themselves at the application layer. Hence in a lot of cases UDP is a better choice than
TCP.
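The difference shows up clearly in code. Sending with UDP is a single call with no connection setup and no acknowledgement; the address and port below are placeholders for the example.

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # SOCK_DGRAM selects UDP
sock.sendto(b"hello", ("192.0.2.50", 5005))                # fire and forget: no handshake, no retransmission
sock.close()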
Internet Layer
Once TCP and UDP have segmented the data and have added their headers, they send the segment
down to the Internet layer. The destination host may reside in a different network, far from the
source and separated from it by multiple routers. It is the task of the Internet Layer to ensure that the
segment is moved across the networks to the destination network.
The Internet layer of the TCP/IP model corresponds to the Network layer of the OSI reference
model in function. It provides logical addressing, path determination and forwarding.
The Internet Protocol (IP) is the most common protocol that provides these services. Also
working at this layer are routing protocols which help routers learn about different networks they
can reach and the Internet Control Message Protocol (ICMP), which is used to send error
messages at this layer.
Almost half of the book is dedicated to IP and routing protocols, so they will be discussed in detail
in later chapters, but the following sections discuss these protocols in brief.
Internet Protocol (IP)
The Internet layer in the TCP/IP model is dominated by IP with other protocols supporting its
purpose. Each host in a network and all interfaces of a router have a logical address called the IP
address. All hosts in a network are grouped in a single IP address range similar to a street
address with each host having a unique address from that range similar to a house or mailbox
address. Each network has a different address range and routers that operate on layer 3 connect
these different networks.
As IP receives segments from TCP or UDP, it adds a header with source IP address and
destination IP address amongst other information. This PDU is called a packet. When a router
receives a packet, it looks at the destination address in the header and forwards it towards the
destination network. The packet may need to go through multiple routers before it reaches the
destination network. Each router it has to go through is called a hop.
Figure 1-14 Packet flow in internetwork
Consider the Internetwork shown in Figure 1-14 to understand the routing process better. When
Host1 needs to send data to Host2, it does not get routed because the hosts are in the same
network range. The Data Link layer takes care of this. Now consider Host1 sending data to Host3.
Host1 will recognize that it needs to reach a host in another network and will forward the packet
to Router1. Router1 checks the destination address and knows that the destination network is
toward Router2 and hence forwards it to Router2. Similarly Router 2 forwards the packet to
Router3. Router3 is directly connected to the destination network. Here the data link layer takes
care of the delivery to the destination host. As you can see, the IP address fields in the IP header
play a very important role in this process. In fact, IP addresses are so important in a network that
the next chapter is entirely dedicated to them!
Figure 1-15 IPv4 Header
There are various versions of the Internet Protocol. Version 4 is the one used today, and version 6
is slowly starting to replace it, which is why its presence has increased on the CCNA Routing &
Switching 200-120 exam compared to previous CCNA exam versions. Figure 1-15 shows the
header structure of IPv4. The following fields make up the header:
Version – IP version number. For IPv4 this value is 4.
Header Length – This specifies the size of the header itself. The minimum size is 20 bytes. The
figure does not show the rarely used options field that is of a variable length. Most IPv4 headers
are 20 bytes in length.
DS Field – The Differentiated Services field is used for marking packets. Different Quality-of-
Service (QoS) levels can be applied to different markings. For example, data belonging to voice
and video protocols has no tolerance for delay. The DS field is used to mark packets carrying
data belonging to these protocols so that they get priority treatment through the network. On the
other hand, peer-to-peer traffic is considered a major problem and can be marked down to give it
best-effort treatment.
Total Length – This field specifies the size of the packet. This means the size of the header plus
the size of the data.
Identification – When IP receives a segment from TCP or UDP, it may need to break the segment
into chunks called fragments before sending it out to the network. The Identification field serves to
identify the fragments that make up the original segment. Each fragment of a segment will have the
same identification number.
Flags – Used in the fragmentation process.
Fragment Offset – This field identifies the fragment number and is used by hosts to reassemble
the fragments in the correct order.
Time to Live – The Time to Live (TTL) value is set at the originating host. Each router that the
packet passes through reduces the TTL by one. If the TTL reaches 0 before reaching the
destination, the packet is dropped. This is done to prevent the packet from moving around the
network endlessly.
Protocol – This field identifies the protocol to which the data it is carrying belongs. For example
a value of 6 implies that the data contains a TCP segment while a value of 17 signifies a UDP
segment. Apart from TCP and UDP there are many protocols whose data can be carried in an IP
packet.
Header Checksum – This field is used to check for errors in the header. At each router and at
the destination, a checksum is calculated on the header and the result should match
the value stored in this field. If the values do not match, the packet is discarded.
Source IP address – This field stores the IP address of the source of the packet.
Destination IP address – This field stores the IP address of the destination of the packet.
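The fields described above occupy fixed positions in the first 20 bytes of the header, so they are easy to pick apart. The Python sketch below parses a hand-built sample header (10.0.0.1 to 10.0.0.2, TTL 64, protocol 6, checksum left at zero for simplicity); it is only an illustration of the field layout, not a full IP implementation.

import struct

def parse_ipv4_header(raw: bytes):
    """Unpack the main fields of a 20-byte IPv4 header (no options)."""
    version_ihl, dscp_ecn, total_length, identification, flags_frag, \
        ttl, protocol, checksum, src, dst = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": version_ihl >> 4,
        "header_length_bytes": (version_ihl & 0x0F) * 4,
        "total_length": total_length,
        "ttl": ttl,
        "protocol": protocol,                        # 6 = TCP, 17 = UDP
        "source": ".".join(str(b) for b in src),
        "destination": ".".join(str(b) for b in dst),
    }

sample = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0, 64, 6, 0,
                     bytes([10, 0, 0, 1]), bytes([10, 0, 0, 2]))
print(parse_ipv4_header(sample))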
Figure 1-16 Source and Destination IP address
Figure 1-16 shows how the Source and Destination IP addresses are used in an IP packet. Notice how the
source and destination addresses change during the exchange between HostA and HostB.
Routing Protocols
In Figure 1-14, Router1 knew that it needed to send the packet destined to Host3 toward Router2.
Router2 in turn knew that the packet needed to go toward Router3. To make these decisions, the
routers need to build their routing tables. A routing table lists all the networks known to a router
across the internetwork. The table also lists the next router toward each destination network.
To build this table dynamically, routers use routing protocols. There are many routing protocols
and their sole purpose is to ensure that routers know about all the networks and the best path to
any network. Chapter 4 and Chapter 5 discuss the routing process and some routing protocols in
detail.
Internet Control Message Protocol (ICMP)
ICMP is essentially a management protocol and messaging service for IP. Whenever IP encounters
an error, it sends ICMP data as an IP packet. Some of the reasons why an ICMP message can be
generated are:
Destination Network Unreachable – If a packet cannot be routed to the network in which the
destination address resides, the router will drop the packet and generate an ICMP message back
to the source informing that the destination network is unreachable.
Time Exceeded – If the TTL of a packet expires (reduces to zero), the router will drop it and
generate an ICMP message back to the source informing it that the time was exceeded and the packet
could not be delivered.
Echo Reply – ICMP can be used to check network connectivity. The popular utility called Ping is
used to send Echo Requests to a destination. In reply to the request, the destination will send
an Echo Reply back to the source. Successful receipt of the Echo Reply shows that the destination host
is available and reachable from the source.
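In practice you rarely craft ICMP packets yourself; you simply call the ping utility. A quick Python wrapper, assuming a Unix-like system where "-c 1" sends a single Echo Request (Windows uses "-n 1") and using 192.168.1.1 only as a placeholder address:

import subprocess

result = subprocess.run(["ping", "-c", "1", "192.168.1.1"],
                        capture_output=True, text=True)
print("Echo Reply received" if result.returncode == 0 else "no Echo Reply")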
Network Access Layer
The Network Access layer of the TCP/IP model corresponds with the Data Link and Physical
layers of the OSI reference model. It defines the protocols and hardware required to connect a
host to a physical network and to deliver data across it. Packets from the Internet layer are sent
down to the Network Access layer for delivery within the physical network. The destination can be
another host in the network, itself, or a router for further forwarding. So the Internet layer has a
view of the entire Internetwork whereas the Network Access layer is limited to the physical layer
boundary that is often defined by a layer 3 device such as a router.
The Network Access layer consists of a large number of protocols. When the physical network is
a LAN, Ethernet and its many variations are the most common protocols used. On the other hand
when the physical network is a WAN, protocols such as the Point-to-Point Protocol
(PPP) and Frame Relay are common. In this section we take a deep look at Ethernet and its
variations. WAN protocols are covered in detail in Chapter 11.
Before we explore Ethernet remember that:
The Network Access layer uses a physical address to identify hosts and to deliver data.
The Network Access layer PDU is called a frame. It contains the IP packet as well as a
protocol header and trailer from this layer.
The Network Access layer header and trailer are only relevant in the physical network.
When a router receives a frame, it strips off the header and trailer and adds a new header
and trailer before sending it out the next physical network towards the destination.
Ethernet Technologies & Cabling
Ethernet is the term used for a family of standards that define the Network Access layer of the
most common type of LAN used today. The various standards differ in terms of speeds supported,
cable types and the length of cables. The Institute of Electrical and Electronics Engineers
(IEEE) is responsible for defining the various standards since it took over the process in 1980.
To make it easier to understand Ethernet, its functions will be discussed in terms of the OSI
reference model’s Data Link and Physical layers. (Remember that the Network Access Layer is a
combination of these two layers).
IEEE defines various standards at the physical layer while it divides the Data Link functions into
the following two sublayers:
The 802.3 Media Access Control (MAC) sublayer
The 802.2 Logical Link Control (LLC) sublayer
Even though the various physical layer standards are different and require changes at that layer, each
of them uses the same 802.3 header and the 802.2 LLC sublayer.
The following sections look at the collision detection mechanism used by Ethernet and how
Ethernet functions at both the layers.
Collision Detection in Ethernet
Ethernet is a contention media access method that allows all hosts in a network to share the
available bandwidth. This means that multiple hosts try to use the media to transfer traffic. If
multiple hosts send traffic at the same time, a collision can occur resulting in loss of the frames
that collided. Ethernet cannot prevent such collisions, but it can detect them and take corrective
action to resolve them. It uses the Carrier Sense Multiple Access with Collision Detection
(CSMA/CD) protocol to do so. This is how CSMA/CD works (a short simulation sketch follows these steps):
1. Hosts looking to transmit a frame listen until Ethernet is not busy.
2. When Ethernet is not busy, hosts start sending the frame.
3. The source listens to make sure no collision occurred.
4. If a collision occurs, the source hosts send a jamming signal to notify all hosts of the
collision.
5. Each source host randomizes a timer and waits that long before resending the frame that
collided.
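The five steps above can be sketched as a small simulation. The helper functions below (medium_busy, collision_occurred, transmit, send_jam_signal) are hypothetical stand-ins for what the network interface hardware actually does; only the retry-and-backoff logic is the point of the example.

import random
import time

def medium_busy():        return random.random() < 0.3    # pretend: someone else is transmitting
def collision_occurred(): return random.random() < 0.2    # pretend: our frame collided
def transmit(frame):      pass                            # pretend: put the bits on the wire
def send_jam_signal():    pass                            # pretend: jam the medium

def send_with_csma_cd(frame, max_attempts=16):
    """Listen, transmit, detect a collision, jam, then back off a random time and retry."""
    for attempt in range(max_attempts):
        while medium_busy():                       # steps 1-2: wait for an idle medium, then send
            time.sleep(0.001)
        transmit(frame)
        if not collision_occurred():               # step 3: listen for a collision
            return True
        send_jam_signal()                          # step 4: notify all hosts of the collision
        time.sleep(random.uniform(0, 0.001 * 2 ** min(attempt, 10)))   # step 5: random backoff
    return False

print(send_with_csma_cd(b"some frame"))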
CSMA/CD works well but it does create some performance issues because:
1. Hosts must wait till the Ethernet media is not busy before sending frames. This means
only one host can send frames at a time in a collision domain (such as in the case of a
network connected to a hub). This also means that a host can either send or receive at one
time. This logic is called half-duplex.
2. During a collision, no frame makes it across the network. Also, the offending hosts must
wait a random time before they can start to resend the frames.
Many networks suffered this sort of performance degradation due to the use of hubs until switches
became affordable. In fact, statistics showed that anything over 30 percent utilization caused
performance degradation in Ethernet.
Remember that switches break collision domains by providing a dedicated port to each host. This
means that hosts connected to a switch only need to wait if the switch is sending frames destined
to the host itself.
Half and Full Duplex Ethernet
In the previous section, you learned about the logic called Half Duplex in which a host can only
send or receive at one time. In a hub-based network, hosts are connected in a half-duplex mode
because they must be able to detect collisions.
When hosts are connected to a switch, they can operate at Full duplex. This means they can send
and receive at the same time without worrying about collisions. This is possible because full
duplex uses two pairs of wire instead of one pair. Using the two pairs, a point-to-point connection
is created between the transmitter of the host and the receiver of the switch and vice versa. So the
host sends and receives frames via different pairs of wires and hence does not need to listen to the
media before sending frames. You should note that CSMA/CD is disabled at both ends when full duplex is
used.
Figure 1-17 Full Duplex
Apart from eliminating collisions, each device actually gets to use twice the bandwidth available
because it now has the same bandwidth on both pairs of wire and each pair is used separately for
sending and receiving.
Figure 1-17 shows how the transmitter on the host’s interface card is connected to the receiver on
the switch interface while the receiver on the host interface is connected to the transmitter on the
switch interface. Now traffic sent by the host and traffic sent to the host both have a dedicated
path with equal bandwidth. If each path has a bandwidth of 100Mbps, the host gets 200Mbps of
dedicated bandwidth to the switch. In case of half-duplex, there would have been only a single
path of 100Mbps that would have been used for both receiving and sending traffic.
Ethernet at the Data Link Layer
Ethernet at the Data Link layer is responsible for addressing as well as framing the packets received
from the Network layer and preparing them for the actual transmission.
Ethernet Addressing
Ethernet Addressing identifies either a single device or a group of devices on a LAN and is
called a MAC address. A MAC address is 48 bits (6 bytes) long and is written in hexadecimal
format. Cisco devices typically write it in groups of four hex digits separated by periods, while
most operating systems write it in groups of two digits separated by colons. For example, Cisco
devices would write a MAC address as 5022.ab5b.63a9 while most operating systems would
write it as 50:22:ab:5b:63:a9.
A unicast address identifies a single device. This address is used to identify the source and
destination in a frame. Each LAN interface card has a globally unique MAC address. The IEEE
defines the format and the assignment of addresses.
Figure 1-18 48bit MAC address
To keep addresses unique, each manufacturer of LAN cards is assigned a code called
the organizationally unique identifier (OUI). The first half of every MAC address is the OUI of
the manufacturer. The manufacturer assigns the second half of the address while ensuring that the
number is not used for any other card. The complete MAC address is then encoded into a ROM
chip in the card. Figure 1-18 shows the composition of a MAC address.
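A small Python helper makes the two halves and the two common written formats easy to see; the MAC address used below is just the sample value from the text.

def mac_details(mac: str):
    """Return the OUI, the vendor-assigned half, and the two common text formats."""
    digits = "".join(c for c in mac.lower() if c in "0123456789abcdef")
    if len(digits) != 12:
        raise ValueError("a MAC address is 48 bits, i.e. 12 hex digits")
    return {
        "oui": digits[:6],                    # first 24 bits: manufacturer code
        "vendor_assigned": digits[6:],        # last 24 bits: unique per card
        "cisco_format": ".".join(digits[i:i + 4] for i in range(0, 12, 4)),
        "colon_format": ":".join(digits[i:i + 2] for i in range(0, 12, 2)),
    }

print(mac_details("50:22:ab:5b:63:a9"))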
A MAC address can also identify a group of devices. These are called group addresses. The IEEE
defines the following two types of group addresses:
Broadcast Address – This address has a value of FFFF.FFFF.FFFF and means that all
devices in the network should process the frame.
Multicast Address – Multicast addresses are used when a frame needs to go to a group of
hosts in the network. When IP multicast packets need to travel over Ethernet a multicast
address of 0100.5exx.xxxx is used where xx.xxxx can be any value.
Ethernet Framing
When the Data Link layer receives a packet from the Network layer for transmission, it has to
encapsulate the packet in frames. These frames are used by the switch to identify the source and
destination devices. The frame also tells the receiving host how to interpret the bits received by the
physical layer.
Figure 1-19 IEEE Frame (1997)
The framing used by Ethernet has changed a few times over the years. Xerox defined the original
frame. When IEEE took over Ethernet in the early 1980s, it defined a new frame. In 1997 IEEE
finalized the Ethernet frame that took a few components from the Xerox definition and a few from
IEEE’s original frame. The finalized frame is shown in Figure 1-19. Table 1-2 lists the fields in
the frame, their size and a brief description.
Table 1-2 Frame Fields
Preamble (7 bytes) – Used for synchronization. It tells the receiving device where the header starts.
SFD (1 byte) – The Start Frame Delimiter tells the receiving device that the next byte is the destination address.
Destination Address (6 bytes) – Identifies the intended destination of the frame.
Source Address (6 bytes) – Identifies the source of the frame.
Length (2 bytes) – Contains the length of the data field of the frame. (This field can be either length or type but not both.)
Type (2 bytes) – Identifies the Network layer protocol whose data is contained in the frame. (This field can be either length or type but not both.)
Data (46-1500 bytes) – The Network layer data.
FCS (4 bytes) – Stores the CRC value, which is used to check for errors in transmission.
The Length/Type field deserves a little more attention. The Type value is important because it tells the receiving end which protocol's data is contained in the frame. If the value of the field is less than hex 0600 (decimal 1536), the field is being used as a length field in that frame. When the field is used as a length field, one or two additional headers are added after the Ethernet 802.3 header but before the Layer 3 header. When IP packets are being carried, the Ethernet frame has the following two additional headers:
An IEEE 802.2 Logical Link Control (LLC) header.
An IEEE Subnetwork Access Protocol (SNAP) header.
Figure 1-20 shows an Ethernet frame with these two additional headers.
Figure 1-20 802.3 Frame with LLC and SNAP header
Exam Alert: It is not necessary to remember the fields of the frame. Just remember why LLC and
SNAP headers are used for your CCNA exam.
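If it helps to see the 0600 rule from the previous section in action, here is a tiny Python illustration. It is only a study aid; no Cisco device runs anything like this.

# A receiver decides how to treat the Length/Type field by comparing it
# with 0x0600 (decimal 1536): smaller values are a length, larger values
# are an EtherType identifying the Network layer protocol.
def length_or_type(value):
    if value < 0x0600:
        return f"Length field: {value} bytes of data follow"
    return f"Type field: EtherType 0x{value:04x}"

print(length_or_type(0x0800))   # Type field (0x0800 is the EtherType for IPv4)
print(length_or_type(46))       # Length field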
Ethernet at the Physical Layer
Ethernet was originally implemented by a group made up of Digital, Intel and Xerox (DIX). The IEEE then took over and created the 802.3 standard. This was a 10Mbps Ethernet that used coaxial cables.
Exam Alert: Ethernet is used to describe the family of standards that includes FastEthernet, Gigabit Ethernet, etc. It is also used to describe the original 10Mbps variant, which is simply noted as Ethernet.
The IEEE then extended the 802.3 committee to two new committees known as 802.3u (FastEthernet) and 802.3ab (Gigabit Ethernet on category 5 cable), and later created another committee known as 802.3ae (10Gbps Ethernet over fiber).
The Electronic Industries Alliance and the newer Telecommunications Industry Association (EIA/TIA) form the standards body that creates the Physical layer specifications for Ethernet. It specifies that a registered jack (RJ45) connector should be used with Ethernet over unshielded twisted-pair (UTP) cabling. UTP cable comes in categories, and the higher the category, the less the cable suffers from the following two problems:
Attenuation – This is the loss of signal strength as it travels the length of the cable. It is
measured in decibels.
Crosstalk – This is the unwanted signal interference from adjacent pairs in the cable.
What this means is that category 5 cable has less attenuation and crosstalk than category 3 cable.
Now that you know about the standards bodies involved and what they have done, it is time to
look at the various Ethernet standards. Table 1-3 lists the original three standards. Remember that each standard differs in terms of speed, cable type and maximum cable length.
Table 1-3 Original Ethernet Standards
10Base2 – 10Mbps over coaxial cable, maximum segment length 185 meters, AUI connector. Known as thinnet, it can support up to 30 hosts in a single segment. A single collision domain across the network.
10Base5 – 10Mbps over coaxial cable, maximum segment length 500 meters, AUI connector. Known as thicknet, it can support up to 100 users in a single segment. A single collision domain across the network.
10BaseT – 10Mbps over UTP cable, maximum cable length 100 meters, RJ45 connector. The first standard to use UTP cable with RJ45. A single host can be connected to a segment or wire, so it required the use of hubs to connect multiple hosts.
Table 1-4 shows the extended Ethernet Standards.
Table 1-4 Extended Ethernet Standards
100BaseTX (IEEE 802.3u) – 100Mbps over UTP cat. 5, 6 or 7 two-pair wiring, maximum cable length 100 meters, RJ45 connector.
100BaseFX (IEEE 802.3u) – 100Mbps over multimode fiber, maximum cable length 412 meters, ST or SC connector.
1000BaseCX (IEEE 802.3z) – 1000Mbps over copper twisted pair called twinax, maximum cable length 25 meters, DE-9 or 8P8C connector.
1000BaseSX (IEEE 802.3z) – 1000Mbps over multimode fiber, maximum cable length 220 meters, ST or SC connector.
1000BaseLX (IEEE 802.3z) – 1000Mbps over single-mode fiber, maximum cable length 5 km, ST or SC connector.
1000BaseT (IEEE 802.3ab) – 1000Mbps over Cat 5 UTP, maximum cable length 100 meters, RJ45 connector.
Ethernet Cabling
When connecting different kinds of devices to each other, different kinds of cabling are used. The following three types of Ethernet cabling exist:
Straight-through cable (a normal patch cable)
Crossover cable
Rolled cable
The three cabling types are discussed below:
Straight-Through – A UTP cable has 8 wires; a straight-through cable uses 4 of them. Figure 1-21 shows the configuration of the wires on both ends of a straight-through cable. Notice that only pins 1, 2, 3 and 6 are used, and each connects straight through to the same pin number on the other end. Straight-through cables are used to connect unlike devices, such as a host to a switch or a switch to a router.
Figure 1-21 Wire configuration in Straight-Through cable
Note: If you are wondering why the wire configuration is important remember that the transmitter
on one end needs to connect to the receiver on the other end. If wiring configuration is incorrect,
bits sent from one end will not be received at the other end.
Crossover – A crossover cable uses the same four wires as a straight-through cable, but the pins are crossed: pin 1 on one end connects to pin 3 on the other end, and pin 2 connects to pin 6. Crossover cables are used to connect like devices, such as a switch to another switch or a host directly to another host. Figure 1-22 shows the configuration of the wires in a crossover cable.
Figure 1-22 Wire configuration in Crossover cable
Rolled – A rolled (rollover) cable reverses all eight wires, so pin 1 on one end connects to pin 8 on the other, pin 2 to pin 7, and so on. It does not carry Ethernet traffic; instead, it is used to connect a PC's serial port to the console port of a router or switch, as you will see in Chapter 3.
Exam Alert: Cable types and where they are used is a very important topic, not only for the CCNA exam, where you will see questions on it, but for your networking career as well.
Data Encapsulation in TCP/IP Model
The last thing you need to know about the TCP/IP model is the data encapsulation process and PDUs. As in the OSI reference model, the data is encapsulated in a header (and, at the Network Access layer, a trailer as well) to create a Protocol Data Unit (PDU), which is passed down to the next layer. Though you are already aware of the process, you must know the names of each layer's PDU. The PDUs in the TCP/IP model are:
Transport Layer -> Segment
Internet Layer -> Packet
Network Access Layer -> Frame
Figure 1-24 shows the encapsulation process in TCP/IP model.
Figure 1-24 Data encapsulation in TCP/IP Model
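The following toy Python sketch mirrors the same idea: each layer simply wraps the PDU handed down from the layer above. The "headers" here are just labels, not real protocol formats, so treat it only as a picture of the nesting.

# Toy illustration of TCP/IP encapsulation and PDU names.
application_data = b"GET /index.html"
segment = b"[TCP header]" + application_data        # Transport layer -> segment
packet  = b"[IP header]" + segment                  # Internet layer  -> packet
frame   = b"[Ethernet header]" + packet + b"[FCS]"  # Network Access  -> frame
print(frame)   # b'[Ethernet header][IP header][TCP header]GET /index.html[FCS]'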
Cisco Layer 3 Model
In a large organization it is common to see large and complicated networks consisting of many
locations, devices, services, and protocols. It can be cumbersome to manage and troubleshoot such networks. In addition, as technologies evolve, the network has to evolve as well, and making changes to a complex network is often difficult. Cisco, with its years of experience in building network equipment as well as managing its own network, has defined a three-layer hierarchical model. This model provides a hierarchical and modular method of building networks that makes it easy to implement, manage, scale and troubleshoot them.
The model breaks an internetwork down to the following three layers:
The Core layer
The Distribution layer
The Access layer
These layers are logical and not physical. They have specific functions in an internetwork which
are discussed below:
The Core Layer – This layer is the backbone of an internetwork. It is the simplest yet most critical layer, whose sole function is to transport large amounts of data quickly. It gets data from the distribution layer and sends it back to the distribution layer after transporting it. Speed and fault tolerance are the two major requirements of this layer, because it has to transport large amounts of data and any fault at this layer will impact every user. Considering the functions of this layer, the following should be avoided here:
Anything that can slow down traffic, for example packet filtering, inter-VLAN routing, etc.
Direct user connections
Direct server connections
Complex service policies
While designing the core, the following should be kept in mind:
Routing protocols used should have low convergence times.
Network Access layer technologies should be fast, with low latency.
Redundancy should be built into this layer.
The Distribution Layer – This layer acts as an interface between the Core and the Access layers.
The primary function of the distribution layer is to provide routing, filtering, and WAN access and to determine how packets can access the core, if needed. Path determination is the most important function at this layer: it has to select the fastest way in which an access request can be completed. This layer also acts as the convergence point for all access layer switches, and hence it is generally the best place to apply most policies. The following are generally done at this layer:
Routing between subnets and VLANs and route distribution between routing protocols
Implementation of security policies, including firewalls, address translations, packet
filtering, etc.
Breaking broadcast domains
The Access Layer – This layer is the edge of the network, where a wide variety of devices such as PCs, printers, and iPads connect to the network. Common resources needed by users are available at this layer, while access requests to remote resources are sent to the distribution layer.
This layer is also known as the desktop layer. The following are generally done at this layer:
Access control and policies in addition to what exists in the distribution layer.
Dynamic configuration mechanisms
Breaking collision domains
Ethernet switching and static routing
Summary
Though this chapter was long, it helped lay the foundation of your CCNA networking knowledge.
The importance of understanding every topic in this chapter cannot be stressed enough. I would
strongly suggest going through the chapter again to reinforce the basics.
The chapter started off with the importance of networks, basic network devices, network types, and collision and broadcast domains.
Then the seven-layered OSI model was discussed. It is important to remember the functions of all
the layers and how they map to the TCP/IP model. Remember that hubs work at Physical Layer,
switches at Data-Link Layer and routers at the Network Layer of the OSI model.
The chapter then covered a long discussion on the TCP/IP model and its many protocols.
Remember that TCP/IP and Ethernet form a major part of the CCNA exam and have a few
chapters dedicated to them.
Lastly, the chapter covered the Cisco three-layer hierarchical model and how it is designed to
help implement and manage a complex network.
The next chapter looks at IP addressing. Before heading to it, we suggest you review the
CCNA Exam Alerts scattered through this chapter to recap the various important concepts.
Chapter 2: IP Addressing & Subnets
Chapter 1 introduced you to the various layers of the TCP/IP model. The CCNA exam is almost
entirely about the Internet and the Network Access layer. So this chapter will cover one of the
most important subjects of networking – IP Addresses. As you already know, each host in the
network has a logical address called the IP address. This address helps in routing packets from
source to destination across internetworks. This chapter delves deep into IP addresses, subnet masks, subnetting and Variable Length Subnet Masks (VLSM). Finally, this chapter looks at some troubleshooting techniques that are used to solve IP address related problems. The two current
versions of IP addresses in use today are IPv4 and IPv6. This chapter focuses on IPv4. IPv6 is
discussed in Chapter 12.
2-1 IP Addresses – Composition, Types and Classes
2-2 Private and Public IP addresses
2-3 Subnetting
2-4 Variable Length Subnet Masks (VLSM)
2-5 Route Summarization
2-6 Troubleshooting IP Addressing
2-7 Summary
IP Addresses: Composition, Types and Classes
Before heading deeper into IP addresses, you should be aware of the following terms:
Bit – A bit is a single digit with a value of 0 or 1.
Byte – A byte is composed of 8 bits.
Octet – An octet is also made up of 8 bits. Throughout this chapter the terms byte and
octet are interchangeable.
Network Address – This refers to a remote network in terms of routing. All hosts in the
remote network fall within this address. For example, 10.0.0.0, 172.16.0.0 and
192.168.1.0
Broadcast Address – This is the address used to send data to all hosts in a network. The
broadcast address 255.255.255.255 refers to all hosts in all networks while an address
such as 192.168.1.255 refers to all hosts in a particular network.
An IP address is 32 bits in length. To make the address easier to read, it is divided into four sections of 8 bits each, separated by periods. Each section is therefore 1 byte (also called an octet) long. To make it even easier to read and remember, the binary numbers are converted to decimal. For example, an IP address such as 11000000100000000000110000000001 is divided to make it 11000000.10000000.00001100.00000001. When this address is converted to decimal, it becomes 192.128.12.1. This format of IP address is called the dotted decimal format. Some applications also convert the address to hexadecimal format instead of decimal format. However, this is not commonly seen and, as far as the CCNA exam is concerned, you only need to work with the dotted decimal format.
Topics in this chapter require binary to decimal conversions. Table 2-1 shows the decimal value
of each bit location in a byte. To easily convert from binary to decimal, add up the decimal values corresponding to the bit positions that are "on" (1). For example, a binary value of 10110000 can be easily converted to decimal by adding the decimal value of each bit that is 1. That gives us 128 + 32 + 16 = 176.
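If you want to double-check a manual conversion, a couple of lines of Python reproduce the calculation. This is purely a study aid; the exam expects you to do it by hand.

# Convert the binary octet 10110000 to decimal in two equivalent ways.
print(int("10110000", 2))                                      # 176

place_values = [128, 64, 32, 16, 8, 4, 2, 1]
bits = "10110000"
print(sum(v for b, v in zip(bits, place_values) if b == "1"))  # 176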
Table 2-2 shows the decimal value for the most common binary numbers you will encounter in
this chapter.
Table 2-1 Decimal Value for each bit place in a byte
128 64 32 16 8 4 2 1
Table 2-2 Decimal Values for common binary numbers
Exam Alert: Class of addresses and their address range is a very important topic. You will have
to remember the range associated with each class.
Before moving ahead, spend some time to figure out the class of some addresses given below.
Also try to figure out which portion is the network and which portion is the host part:
1. 9.140.2.87 – This is a Class A address because the first octet lies in the 1-126 range. 9 is the network part while 140.2.87 is the host part because class A addresses have a network.host.host.host format.
2. 172.30.4.190 – This is a Class B address because the first octet lies in the 128-191 range. 172.30 is the network part while 4.190 is the host part because class B addresses have a network.network.host.host format.
3. 194.144.5.10 – This is a Class C address because the first octet lies in the 192-223 range. 194.144.5 is the network part while 10 is the host part because class C addresses have a network.network.network.host format.
4. 45.22.187.1 – This is again a class A address, with 45 being the network part and 22.187.1 being the host part.
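A small Python function can be used to double-check this kind of classification. It simply applies the first-octet ranges discussed above (classes D and E are included only for completeness).

# Determine the classful class of an IPv4 address from its first octet.
def address_class(ip):
    first = int(ip.split(".")[0])
    if 1 <= first <= 126:
        return "A"
    if 128 <= first <= 191:
        return "B"
    if 192 <= first <= 223:
        return "C"
    if 224 <= first <= 239:
        return "D (multicast)"
    if 240 <= first <= 255:
        return "E (experimental)"
    return "reserved"           # 0 and 127 are special cases

for ip in ("9.140.2.87", "172.30.4.190", "194.144.5.10", "45.22.187.1"):
    print(ip, "-> Class", address_class(ip))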
Some IP addresses, such as 127.0.0.1, have a special meaning. Table 2-4 lists such addresses and what they represent.
Table 2-4 Reserved IP addresses
Exam Alert: It is important to remember that if all host bits in an address are set to 0 then it is a
network address. On the other hand if all host bits are set to 1 then it is a broadcast address.
These addresses cannot be assigned to a host.
Private And Public IP Addresses
As you already know, every host on a network requires a unique IP address. This is easily manageable in a small network but not in a network as large as the Internet. The Internet Assigned Numbers Authority (IANA) is responsible for managing and distributing IP addresses. The IANA has created five regional address registries covering five regions of the world. ISPs and large organizations purchase addresses from these registries, and the end user in turn gets IP addresses from the ISP. These purchasable IP addresses are called public addresses and are routable on the Internet. In theory, every host on the Internet has one of these addresses.
The IANA also designated a range of addresses in classes A, B and C for use in private networks. These addresses can be used by anyone within their own network without any permission required, but they are not routable on the Internet. Your ISP or your organization usually assigns you one of these addresses and later translates it to a public address when you want to get out to the Internet. The designated ranges for private IP addresses are:
Class A – 10.0.0.0 to 10.255.255.255 (1 network)
Class B – 172.16.0.0 to 172.31.255.255 (16 networks)
Class C – 192.168.0.0 to 192.168.255.255 (256 networks)
Address translation and private IP addresses are discussed in detail in Chapter 10.
Exam Alert: It is very important to remember the range of private IP addresses as you will more
than likely see a question about them on your CCNA exam.
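Python's standard ipaddress module already knows these ranges, so it can be used to check your answers while studying. Note that is_private also flags a few other reserved blocks (loopback, link-local and so on), not just the three RFC 1918 ranges listed above.

import ipaddress

for ip in ("10.1.2.3", "172.20.0.1", "192.168.1.50", "8.8.8.8"):
    kind = "private" if ipaddress.ip_address(ip).is_private else "public"
    print(ip, "is", kind)      # the first three are private, 8.8.8.8 is public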
Subnetting
In the case of class A and B IP addresses, each provides for a very large number of hosts. For class A, the total number of hosts available is 2^24 - 2, or 16,777,214 (class A has 24 bits available for the host component and each bit can have two values, 0 and 1; out of the total, one address is the network address and another is the broadcast address, so two addresses are deducted). Similarly, a class B address provides for 2^16 - 2, or 65,534, hosts. In the first chapter you learned about the disadvantages of large networks and why it becomes necessary to divide them into smaller networks joined by routers. So creating a single network with the total number of hosts allowed by a class A or B address would cause a lot of problems, while creating small networks with class A or B addresses would waste a lot of addresses.
To overcome this problem with class based addressing, subnetting was introduced. Subnetting
allows you to borrow some host bits and use them to create more networks. These networks are
commonly called subnets and are smaller in size. But since each network has a network address
and a broadcast address, some addresses get wasted.
To further understand how subnetting is useful, consider a Class C address. Each class C address has 2^8 - 2, or 254, host addresses available. If you wanted 2 networks with 100 hosts each and used 2 class C networks, you would waste 308 addresses. Instead of using two class C networks, you can subnet one to provide two networks of 126 host addresses each. This way far fewer addresses are wasted.
While some of the benefits of subnetting are discussed above, the following list discusses all the
benefits associated with it:
Reduced broadcasts – While broadcasts are necessary, too many of them can bring down a network, and the number of broadcasts is proportional to the size of the network. Subnetting a network into smaller subnetworks helps reduce broadcasts, since routers do not forward broadcasts.
Increased Network Performance – The direct result of reduced broadcasts is a network that has more bandwidth available to its hosts. More bandwidth and fewer hosts result in better network performance.
Easier Management – Managing and troubleshooting a large network is cumbersome and
difficult. Subnetting breaks a network into smaller subnetworks, making it easier to
manage each of them.
Scalability – A single large network spanning a large geographical area will be more difficult and costly to manage. WAN links connecting different locations are expensive, and having broadcasts choke the network can result in wasted money. Hence, breaking down a large network makes it easier to scale across geographical locations.
Now that you understand the concept and benefits of subnetting, consider the problem that arises with it. In class-based addressing, the first octet of the dotted decimal address tells which part of the address is the network component and which is the host component. But when host bits are borrowed for subnetting, the class-based boundaries no longer apply and it is not possible to say which bits are network bits. To overcome this, a third component was added to IP addressing: the subnet mask.
Subnet masks, like IP addresses, are 32 bits long. The value of the subnet mask indicates which bits of the IP address belong to the network component and which belong to the host component. A value of 1 in the subnet mask shows that the corresponding bit in the IP address is a network bit, while a value of 0 shows that the corresponding bit is a host bit. The following examples will help clarify this further:
1. An IP address of 192.168.10.1 with a subnet mask of 255.255.255.0
(11111111.11111111.11111111.00000000) shows that the first three octets of the IP address
are the network component while the last octet is the host component.
2. An IP address of 172.16.100.1 with a subnet mask of 255.255.128.0 (11111111.11111111.10000000.00000000) shows that one bit of the third octet has been borrowed from the host component. Hence the network component is now 17 bits long instead of the default 16 bits of a class B address.
3. An IP address of 10.1.1.1 with a subnet mask of 255.255.0.0 (11111111.11111111.00000000.00000000) shows that the entire second octet has been borrowed from the host component, so the network component is now 16 bits long instead of the default 8 bits of a class A address.
One restriction that applies to subnet masks is that all network bits (1) and all host bits (0) should
be contiguous. So a subnet mask of 11001100.11110000.11110000.00001111 is not valid because
the network and host bits are not contiguous. Table 2-5 shows the valid subnet mask values in an octet.
Table 2-5 Valid subnet mask values in an octet
Exam Alert: It is very important to be able to understand subnet masks with both the dotted
decimal as well as the CIDR format. Also remember that any mask not given in Table 2-5 is not
valid for an octet.
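A quick way to test whether a full mask is valid is to check that no 0 bit is ever followed by a 1 bit. The sketch below does exactly that; it is only a study aid.

# A subnet mask is valid only if its 1 bits are contiguous from the left,
# i.e. the 32-bit pattern never contains a 0 followed by a 1.
def is_valid_mask(mask):
    bits = "".join(f"{int(octet):08b}" for octet in mask.split("."))
    return "01" not in bits

print(is_valid_mask("255.255.255.224"))   # True
print(is_valid_mask("255.255.0.255"))     # False - the 1 bits are not contiguous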
By now you may have figured out that the default subnet mask of class A is 255.0.0.0 or /8, the default mask of class B is 255.255.0.0 or /16, and the default mask of class C is 255.255.255.0 or /24. Table 2-6 shows the default masks of each class. These default masks cannot be shortened. For example, you cannot use a mask of 255.255.0.0 for a class C address; if you try to use an invalid mask such as this, the device will produce an error. For each class, the minimum mask is the default mask: class A has to have a minimum mask of 255.0.0.0, class B a minimum mask of 255.255.0.0 and class C a minimum mask of 255.255.255.0.
Table 2-6 Default Subnet Masks
Class A – 255.0.0.0 (/8)
Class B – 255.255.0.0 (/16)
Class C – 255.255.255.0 (/24)
The following table of powers of 2 will come in handy when calculating subnets and hosts:
2^1 = 2
2^2 = 4
2^3 = 8
2^4 = 16
2^5 = 32
2^6 = 64
2^7 = 128
2^8 = 256
2^9 = 512
2^10 = 1024
2^11 = 2048
2^12 = 4096
2^13 = 8192
2^14 = 16384
Now that you know what subnetting is and how subnet masks are used, it is time to create subnets.
When planning to subnet, you need to know three things:
1. Total number of subnets that you need
2. Total number of hosts per subnet that you need
3. Available network and subnet mask (which will be subnetted)
Armed with answers to this, you need to find the following:
1. Subnet Mask to be used across the network
2. Valid subnets
3. Network address for each subnet
4. Broadcast address for each subnet
5. Valid host addresses in each subnet.
For this section I will take a sample requirement of 8 networks with 30 hosts each, with one class C network of 192.168.10.0 255.255.255.0 available. Now that you have the requirement, the first thing you need to find is the new subnet mask that can satisfy it. To find the subnet mask, follow the steps given below:
1. Find the power of 2 whose value is greater than or equal to the number of subnets required. Let's call this 2^sn. For our example, we need 8 subnets and 2^3 equals 8, so our 2^sn is 2^3 (sn = 3).
2. Find the power of 2 whose value minus 2 is greater than or equal to the maximum number of hosts required in a subnet. Let's call this 2^h - 2. For our example, we need a maximum of 30 hosts in a subnet and 2^5 - 2 gives us 30 hosts per subnet (h = 5).
3. Make sure sn + h from the above two steps does not exceed the number of host bits available in the given network. If the sum of sn and h exceeds the available host bits, you will require another network of the same class or a network of a higher class. In our example we have 8 host bits available in the 192.168.10.0 255.255.255.0 network, and sn + h is 3 + 5, which gives us 8.
4. Convert the available mask to CIDR notation and add sn to it to get the new subnet mask. For our example, the mask 255.255.255.0 converts to /24; adding 3 gives a mask of /27. Converting from /27 to the dotted decimal format is easy. /24 is 255.255.255.0 or 11111111.11111111.11111111.00000000, so /27 will be 11111111.11111111.11111111.11100000. You need not worry about the first 3 octets since they are already known to be 255.255.255. For the last octet, add the decimal value of each network bit, which in our case gives 128 + 64 + 32 = 224. So the new subnet mask is 255.255.255.224. Table 2-7 also provides a list of dotted decimal values and the corresponding number of network bits.
The most difficult part is now over. To find the remaining four answers, follow the steps given below:
1. Valid subnets – To find the valid subnets, deduct the interesting octet value from 256. Interesting octets are those that contain host bits. The available subnets will be multiples of the resulting value, up to 256. In our case the fourth octet is the interesting octet. Deducting 224 from 256 gives us 32, so the available subnets are 0, 32, 64, 96, 128, 160, 192 and 224.
2. Network Address of each subnet – The network address is the very first address of each
subnet. So for our valid subnets, the network address would be 192.168.10.0,
192.168.10.32, 192.168.10.64, 192.168.10.96, 192.168.10.128, 192.168.10.160,
192.168.10.192 and 192.168.10.224
Exam Alert: Some time back, Cisco used to discard the first and the last subnet (the first subnet is also called subnet zero), so the number of usable subnets used to be 2^n - 2. Starting with IOS version 12.0 the ip subnet-zero command is enabled by default and in Cisco
exams the first and last subnets are considered unless specified otherwise. Be on
the lookout for questions on your CCNA exam that ask you not to consider subnet
zero. In such cases, leave out the first and the last subnet. To fully understand
how the command affects the calculation, consider a Class C network with a
mask of /26. It will give you subnets 0, 64, 128 and 192 if subnet-zero is
allowed, else it will only give you subnets 64 and 128.
3. Broadcast Address of each subnet – The last address of a subnet is the broadcast address.
Simply deduct 1 from the next network address to find the broadcast address of a subnet.
For our example subnets the valid broadcast addresses are:
Network Address – Broadcast Address
192.168.10.0 – 192.168.10.31
192.168.10.32 – 192.168.10.63
192.168.10.64 – 192.168.10.95
192.168.10.96 – 192.168.10.127
192.168.10.128 – 192.168.10.159
192.168.10.160 – 192.168.10.191
192.168.10.192 – 192.168.10.223
192.168.10.224 – 192.168.10.255
4. Valid host addresses in each subnet – For every subnet, the valid host addresses lie between the network address and the broadcast address. For our example, the valid host addresses are 192.168.10.1-30, 192.168.10.33-62, 192.168.10.65-94, 192.168.10.97-126, 192.168.10.129-158, 192.168.10.161-190, 192.168.10.193-222 and 192.168.10.225-254.
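You can verify all of these results at once with the ipaddress module from Python's standard library. This is only a way to check your work; on the exam you must be able to produce the same numbers by hand.

import ipaddress

# List every /27 subnet of 192.168.10.0/24 with its broadcast address and
# valid host range, matching the table above.
for subnet in ipaddress.ip_network("192.168.10.0/24").subnets(new_prefix=27):
    hosts = list(subnet.hosts())
    print(subnet.network_address, subnet.broadcast_address,
          hosts[0], "-", hosts[-1])
# First line printed: 192.168.10.0 192.168.10.31 192.168.10.1 - 192.168.10.30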
Exam Alert: Subnetting is one of the most important topics in the CCNA exam. Subnetting
related questions will not be as straightforward as what you just learned. Mostly you will be given an IP address with a subnet mask and will need to find out whether it is a host, subnet or broadcast address. The following examples show how to approach such questions.
In the following sections, you will encounter variations of subnetting questions. For all of them
the process is similar to what you just learned. The steps you need to follow are summarized
below:
1. Find the interesting octet in the given subnet mask. Remember that the octet with a value of
less than 255 will be the interesting octet.
2. Deduct the value of the interesting octet from 256 to find the increment by which the network numbers increase. These are also your subnet addresses.
3. Write down the subnet address and broadcast address for each subnet
4. Write down the host addresses of each subnet
5. Once you have all the above information, you will find the answer to the given question.
Subnetting Class C Addresses
The subnetting technique remains the same irrespective of the class of address. The difference the class makes is the number of bits available for subnetting. Class C starts with a mask of /24 and can have a maximum mask of /30. We cannot use /31 or /32 because at least 2 host bits are required for the network and broadcast addresses, and /31 and /32 leave only 1 and 0 host bits respectively. In the examples below, you get to practice subnetting class C addresses.
Subnetting Class C Address – Example #1
Problem: Is 192.168.1.193/26 a host address?
Solution:
1. Converting /26 to dotted decimal format gives 255.255.255.192. The fourth octet is the interesting octet.
2. Deducting 192 from 256 gives us 64, so the subnet addresses are 0, 64, 128 and 192.
3. The network and broadcast addresses of each subnet are:
192.168.1.0 – 192.168.1.63
192.168.1.64 – 192.168.1.127
192.168.1.128 – 192.168.1.191
192.168.1.192 – 192.168.1.255
4. The address 192.168.1.193 lies between the network address 192.168.1.192 and the broadcast address 192.168.1.255, so yes, it is a valid host address in the 192.168.1.192/26 subnet.
Exam Alert: A /30 or 255.255.255.252 is the highest mask that can be practically used in a network. It gives 2 host addresses and is ideal for point-to-point links in a network. Point-to-point links are usually found on routers terminating WAN links.
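The same check can be scripted if you want to confirm your manual answer. Again, this is only a study aid.

import ipaddress

# Is 192.168.1.193/26 a usable host address? It must not be the network
# or the broadcast address of the subnet it falls in.
addr = ipaddress.ip_interface("192.168.1.193/26")
subnet = addr.network
print(subnet)                                        # 192.168.1.192/26
print(addr.ip not in (subnet.network_address,
                      subnet.broadcast_address))     # True - a valid host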
Subnetting Class A address – Example #2
Problem: This is a different kind of problem. Your network number is 21.0.0.0. You need to
have as many subnets as possible without exceeding 1000 subnets while at the same time having
at least 500 hosts per subnet. What subnet mask would you use?
Solution:
Since 21.0.0.0 is a Class A network, the default mask is /8, so you have 24 host bits that can be borrowed for subnetting. Looking back at the table of powers of 2, you will see that 2^10 gives us 1024 while 2^9 gives us 512. Since 1024 exceeds the limit of 1000 subnets, you will need to use 2^9, which means 9 bits will be borrowed for the network part, leaving 15 bits for the host part (2^15 - 2 = 32,766 hosts per subnet, comfortably more than the required 500). The default mask of 255.0.0.0 (/8) therefore becomes a new mask of 255.255.128.0 (/17).
Variable Length Subnet Masks (VLSM)
Earlier, it was required to use the same subnet mask across the entire network. This was called classful networking. With the increase in complexity of networks and the decrease in available IP addresses, it became obvious that classful networking wastes valuable IP addresses. To understand
how, consider Figure 2-1. The largest subnet requires 30 host addresses. So across the network a
mask of /27 is used, which gives 30 hosts per subnet. You will notice that in every subnet except
the subnet attached to RouterD, some host addresses will remain unused. In particular, 28 host
addresses are wasted for each link between the routers. In total this network wastes 118
addresses and uses 92 addresses.
To avoid wasting IP addresses, classless networking was introduced by way of VLSM. VLSM allows you to use different subnet masks across the network for the same class of addresses. For
example, a /30 subnet mask, which gives 2 host addresses per subnet, can be used for point-to-
point links between routers. Figure 2-2 shows how VLSM can be used to save address space in
the network shown in Figure 2-1.
Figure 2-2 Classless Network with VLSM
In Figure 2-2, notice the different masks used for each subnet. The first network, with 13 hosts, is using a mask of /28, which provides 16 addresses (14 usable host addresses). The point-to-point links between the routers are using a /30 mask, which gives 2 host addresses. In total the network is still using 92 addresses
but is wasting only 22 addresses. Now that you know the benefit of VLSM, take a look at how you
can use it in a network.
There are a few restrictions you need to consider when planning to use VLSM:
1. You need to use routing protocols that support classless routing such as Enhanced Interior
Gateway Routing Protocol (EIGRP), Open Shortest Path First (OSPF), Border Gateway
Protocol (BGP) or Routing Information Protocol (RIP) version 2. Classful protocols such
as RIPv1 cannot be used with VLSM. While routing protocols are covered in detail in
Chapter 4, you should understand that a routing protocol is classful because it does not
advertise the subnet mask along with the network address in its updates. Hence, routers running these protocols do not know the subnet mask and strictly follow the class of the network. Classless protocols, on the other hand, advertise and understand subnet masks.
2. You need to use fixed block sizes. You have come across these block sizes during
subnetting practice and these are listed in Table 2-9. You cannot use any block sizes apart
from these. For example in Figure 2-2, for the networks connected to RouterB and
RouterC, a block size of 32 was used even though the total addresses required were 21 in
each subnet.
Table 2-9 Block Sizes for VLSM
To design the VLSM solution, follow the 5 steps discussed earlier:
1. The largest segment requires 125 host addresses, so a mask of /25 can be used. This gives two subnets – 192.168.1.0/25 and 192.168.1.128/25. The first subnet can be assigned to this segment.
2. The second largest segment requires 60 host addresses. You can take the second available subnet – 192.168.1.128/25 – and divide it further using a /26 mask to give you subnets 192.168.1.128/26 and 192.168.1.192/26. Assign the first one to this segment.
3. The third largest segment requires 29 host addresses (28 host addresses and 1 for the router interface). You will need to use a block of 32 and a mask of /27. Take the remaining subnet from the previous step and divide it further using a /27 mask. This will give you subnets 192.168.1.192/27 and 192.168.1.224/27. Assign the first one to this segment.
4. The fourth largest segment requires 14 host addresses (13 hosts plus one for the router interface). You can use a block of 16 and a mask of /28. Take the remaining subnet from the previous step and divide it further using a mask of /28. This will give you subnets 192.168.1.224/28 and 192.168.1.240/28. Assign the first one to this segment.
5. Now you are left with 3 point-to-point links between the routers. These links require two host addresses and a mask of /30. Take the remaining subnet from the previous step and divide it using a mask of /30. This will give you subnets 192.168.1.240/30, 192.168.1.244/30, 192.168.1.248/30 and 192.168.1.252/30. Use the first three of these for the point-to-point links. The remaining subnet can be kept for future use.
Figure 2-4 shows the solution derived in the above steps.
Figure 2-4 VLSM – Solution for Example #2
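For practice, the whole VLSM plan can be reproduced with a few lines of Python. The sketch below assumes the base block is 192.168.1.0/24 and takes the host counts from the five steps (125, 60, 29, 14 and three point-to-point links needing 2 hosts each); allocating the largest requirements first keeps every block aligned.

import ipaddress

def vlsm_plan(base, host_counts):
    base = ipaddress.ip_network(base)
    cursor = int(base.network_address)
    plan = []
    for hosts in sorted(host_counts, reverse=True):
        # Smallest block that holds the hosts plus network and broadcast addresses.
        prefix = 32 - (hosts + 1).bit_length()
        subnet = ipaddress.ip_network(f"{ipaddress.ip_address(cursor)}/{prefix}")
        plan.append((hosts, subnet))
        cursor += subnet.num_addresses
    return plan

for hosts, subnet in vlsm_plan("192.168.1.0/24", [125, 60, 29, 14, 2, 2, 2]):
    print(hosts, "hosts ->", subnet)
# Prints 192.168.1.0/25, 192.168.1.128/26, 192.168.1.192/27, 192.168.1.224/28
# and the three /30 links starting at 192.168.1.240, matching the steps above.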
Route Summarization
You already know from the previous chapter that a router functions by building a table of all the networks it knows about. This table is called the routing table, and routers use routing protocols to tell each other about the networks they know of. As networks grow, so does the number of entries in the routing table. Large routing tables cause increased processing and slower response times in a router. To reduce the size of routing tables, networks can be grouped together, or summarized, using a mask that incorporates them all. For example, in Figure 2-5, a 192.168.10.0/24 subnet has been divided into smaller subnets with a /27 mask. All of these networks connect to RouterA, which in turn advertises these routes to RouterB. Without summarization, RouterB would come to know of 8 networks available via RouterA. Since these networks are contiguous subnets that have been subnetted from a /24 address, they can be summarized back into the 192.168.10.0/24 network by RouterA while advertising to RouterB. This way, RouterB comes to know of only one large /24 network instead of 8 smaller /27 networks.
Figure 2-5 Summarization
Summarization is similar to VLSM but in the opposite direction. When using VLSM you move to
the right in terms of the bits (/24 to /25, /25 to /26, so on and so forth) while during summarization
you move to the left (example /27 to /24).
Summarization is fairly simple if you remember the following:
1. You can only summarize in the block sizes you learned about in VLSM – 128, 64, 32, 16, 8 and 4 – and the summarized block must start on a multiple of the block size.
2. The network address used for the summarized address is the first network address in the block.
For example, if you want to summarize networks 192.168.8.0 through 192.168.15.0, first find the
block size you can use. There are 8 networks so the block size of 8 can be used. The first network
address in the block is 192.168.8.0. Now to find the mask of the summarized route, remember the
mask used for a block of 8 – 248. You can also deduct the block size from 256 to find the mask.
Since we are summarizing the third octet the subnet mask for the summary address will be
255.255.248.0.
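Python's ipaddress module can verify a summary like this one. The collapse_addresses() function merges contiguous networks into the smallest set of aligned blocks, which is exactly what route summarization does.

import ipaddress

# Summarize 192.168.8.0/24 through 192.168.15.0/24.
networks = [ipaddress.ip_network(f"192.168.{n}.0/24") for n in range(8, 16)]
summary = list(ipaddress.collapse_addresses(networks))
print(summary)              # [IPv4Network('192.168.8.0/21')]
print(summary[0].netmask)   # 255.255.248.0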
Take another example, 172.16.0.0 through 172.16.35.0. This one is not as simple as the first one.
Notice that you have 36 networks to summarize which does not conform to the block sizes. There
are two things that you can do here:
1. Summarize in block size of 32 (mask of 224). This will give you a summary address of
172.16.0.0 255.255.224.0 but will only summarize networks 172.16.0.0 through
172.16.31.0. The rest of the 4 networks will be advertised as individual routes.
2. Summarize in block of 64 (mask of 192). This will give you a summary address of
172.16.0.0 255.255.192.0 but will summarize networks 172.16.0.0 through 172.16.63.0.
The correct answer depends on the network. If you are planning to add networks 36 through 63 later, then the second option works. Otherwise, the first option is the better one.
Take a third example where you are given a summary address of 172.10.32.0 with a mask of 255.255.224.0 and need to find which networks are being summarized. This is really easy. The third octet is the interesting octet and gives a block size of 32, and the summarized block starts at the summary address itself. This means the networks 172.10.32.0 through 172.10.63.0 have been summarized.
As a final example, consider the following networks:
192.168.1.0/25
192.168.1.128/25
192.168.2.0/24
192.168.3.0/24
192.168.4.0/26
192.168.4.64/26
192.168.4.128/26
192.168.4.192/26
Try to figure out the summary address that can be used for these networks. If you look carefully, the third octets run from 1 through 4. These cannot fit into a single block of 4 because a summarized block must start on a multiple of its size (0, 4, 8 and so on), so the smallest block that contains all of them is a block of 8 starting at 0. The summary address is therefore 192.168.0.0 255.255.248.0, or 192.168.0.0/21.
In the last example notice that we summarized a contiguous block of class C networks using a mask shorter than the default class mask. This is called supernetting. Supernetting is an extension of VLSM and summarization: in summarization you summarize subnets back toward their classful network, while in supernetting you summarize a contiguous block of entire Class A, B or C networks. Supernetting is usually practiced by ISPs to reduce the size of the Internet routing table.
Troubleshooting IP Addressing
As you know by now, IP addressing is an integral part of networking, and given the complexity of addressing and subnetting, it is common to find IP addressing errors in a network. So it is essential for you to be able to troubleshoot common problems related to IP addressing. Before troubleshooting a network, you should understand the following common protocols and utilities used in troubleshooting:
Packet InterNet Groper (PING) – Ping is one of the most commonly used utilities for troubleshooting addressing and connectivity problems. It is available in almost all operating systems, including Cisco IOS, and can be accessed from the command line interface using the ping command. It uses the ICMP protocol to check whether the destination host is reachable.
Traceroute – Traceroute is another common utility that is available with all operating systems. In some operating systems the utility can be accessed using the tracert or traceroute command on the CLI. It finds each hop between the source and destination hosts and is useful for seeing the path taken by a packet.
ARP table – Sometimes it is useful to look at the ARP table of a system. This table
contains the MAC address to IP address bindings learned by the system. On most operating
systems the ARP table can be viewed using the arp -a command. On a Cisco device the ARP table can be viewed using the show ip arp command.
IP configuration – Sometimes you need to verify the IP address, subnet mask, default gateway and DNS addresses a host is using. On a Windows machine all of this information can be seen in the output of the ipconfig /all command. On a Unix-based system, this information can be seen using the ifconfig command.
For the following section consider the network shown in Figure 2-6. In this network, HostA is
trying to reach ServerA and ServerB but is not able to.
Before looking at the IP addressing, you should quickly check network connectivity using four
steps that Cisco recommends:
1. Ping 127.0.0.1, the loopback address, from the host. You will need to open a terminal window in your operating system to use the ping utility. If you get output similar to the following, it shows that the IP stack on the host is working well:
ping 127.0.0.1
PING 127.0.0.1 (127.0.0.1): 56 data bytes
64 bytes from 127.0.0.1: icmp_seq=0 ttl=64 time=0.073 ms
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.095 ms
64 bytes from 127.0.0.1: icmp_seq=3 ttl=64 time=0.145 ms
Figure 2-6 Troubleshooting IP Addressing Scenario
2. Ping the IP address of the host itself. If it is successful, it shows that the host's NIC is working well.
>ping 192.168.1.50
PING 192.168.1.50 (192.168.1.50): 56 data bytes
64 bytes from 192.168.1.50: icmp_seq=0 ttl=64 time=0.075 ms
64 bytes from 192.168.1.50: icmp_seq=1 ttl=64 time=0.096 ms
64 bytes from 192.168.1.50: icmp_seq=2 ttl=64 time=0.155 ms
64 bytes from 192.168.1.50: icmp_seq=3 ttl=64 time=0.151 ms
3. Ping the default gateway from the host. If the ping works it shows that your host is able to
communicate with the network and the default gateway.
>ping 192.168.1.1
PING 192.168.1.1 (192.168.1.1): 56 data bytes
64 bytes from 192.168.1.1: icmp_seq=0 ttl=64 time=0.075 ms
64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=0.096 ms
64 bytes from 192.168.1.1: icmp_seq=2 ttl=64 time=0.155 ms
64 bytes from 192.168.1.1: icmp_seq=3 ttl=64 time=0.151 ms
4. Finally, ping the remote host, ServerA or ServerB in our case. If the ping is successful but the application still fails, it means there is a DNS or application layer protocol problem between the host and the server. However, in our case the ping fails.
>ping 192.168.2.65
PING 192.168.2.65 (192.168.2.65): 56 data bytes
Request timeout for icmp_seq 0
Request timeout for icmp_seq 1
Request timeout for icmp_seq 2
Request timeout for icmp_seq 3
Now that you have used the Cisco recommended way to determine that the problem lies in the
network, it is time to look at the addressing. In this exercise, you need to look at the IP address,
subnet mask and default gateway configured (as shown in Figure 2-6) to see if they are correctly
configured. You can simply look at the subnet mask, work out the valid host addresses in each subnet, and check whether valid IP addresses have been assigned. Take a step-by-step approach as shown below to narrow down the problem area:
1. The Host has an IP address of 192.168.1.50/25. A mask of /25 shows that the host lies in
the 192.168.1.0/25 subnet (/25 = 255.255.255.128, which gives two subnets – 0 and 128).
So the IP address given to the host is a valid host address.
2. The Gateway address on the host is 192.168.1.1 and that is the IP address on the Router
interface connected to the network. The IP address lies in the same subnet range as the host
address. Steps 1 and 2 eliminate addressing problems in the network segment to which the host is connected.
3. The next network segment is the point-to-point link between RouterA and RouterB. The
subnet mask of /30 gives subnets 0,4,8,12….128. The valid host addresses in the network
192.168.1.128/30 are 192.168.1.129 and 192.168.1.130. So the point-to-point links have
valid addresses.
4. The next network segment is the one to which ServerA is connected. /26 mask converts to
255.255.255.192. 192 deducted from 256 leaves 64. This means the valid subnets are
192.168.2.0, 192.168.2.64, 192.168.2.128, 192.168.2.192. ServerA’s address is a valid
address in the 192.168.2.64 subnet but the default gateway and the router’s address is in
the 192.168.2.0 subnet. So ServerA’s address is in the wrong subnet and needs to be
changed to a valid address in the 192.168.2.0 subnet. This explains why HostA is not able
to reach ServerA.
5. The final segment is the one to which ServerB connects. From the calculations done in the
previous step, you can see that ServerB’s address lies in the 192.168.2.128 subnet. The
valid host addresses in this subnet are 129 to 190. 191 is the broadcast address of the
subnet. While the router (default gateway) is configured with a valid address, ServerB has
been assigned the broadcast address, which needs to be changed. This explains why
HostA is not able to reach ServerB.
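A quick scripted check of the same idea: a host and its default gateway must fall inside the same subnet. HostA's addresses below are taken from the scenario above; ServerA's gateway address is not shown in this excerpt, so 192.168.2.1 is only an assumed value used for illustration.

import ipaddress

def same_subnet(host, gateway, prefix):
    # A host can only use a gateway that shares its subnet.
    return (ipaddress.ip_interface(f"{host}/{prefix}").network ==
            ipaddress.ip_interface(f"{gateway}/{prefix}").network)

print(same_subnet("192.168.1.50", "192.168.1.1", 25))   # True  - HostA and its gateway agree
print(same_subnet("192.168.2.65", "192.168.2.1", 26))   # False - ServerA sits in the wrong subnet
                                                        # (gateway address assumed for this example)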
If you are careful about going step by step and finding the valid addresses in each subnet, you can figure out any addressing problem in no time. Let's take a look at two more examples. For these examples, we will use the network shown in Figure 2-7.
Figure 2-7 Troubleshooting IP Address – Example #2 & #3
Example #2
Problem: HostB is able to reach HostD but it is not able to reach HostA
Solution: The question tells us two things. First, HostB is able to reach HostD, which means the network from HostB all the way to HostD is working fine. Second, HostB is not able to reach HostA, so there is most likely a problem at HostA. To find the problem, take a look at the IP address information given for HostA:
1. A subnet mask of /27 converts to 255.255.255.224.
2. Deducting 224 from 256 gives us 32, so the valid subnets are 0, 32, 64 and so on.
3. HostB's and RouterA's addresses are in the 192.168.1.0/27 subnet, which has a valid host range of .1 to .30. The broadcast address for this subnet is 192.168.1.31.
4. You will notice that HostA has an IP address of 192.168.1.31/27, which is the broadcast
address of this subnet and not a valid host address. Hence, HostA cannot be reached from
the network.
Example #3
Problem: HostD is able to reach HostB but not HostC.
Solution: Again this problem statement tells us that the network from HostD to HostB is working
well. So the problem requires a look at HostC’s addressing:
1. Again, a mask of /27 gives us subnets 0, 32, 64, 96, 128 and so on.
2. HostD and RouterB’s addresses lie in the 192.168.1.64/27 network. The valid host
addresses for this subnet are 192.168.1.65-94. The broadcast address for the subnet is
192.168.1.95.
3. The next subnet is 192.168.1.96/27, which has a valid host range of 192.168.1.97 to 192.168.1.126 and a broadcast address of 192.168.1.127.
4. You will notice that the IP address of HostC lies in the 192.168.1.96/27 subnet and not in the 192.168.1.64/27 subnet. It lies in a different subnet than its default gateway (RouterB) and HostD. Hence, HostD is not able to reach HostC.
Exam Alert: Expect a lot of questions on the exam, in different forms, in which such IP addressing errors are hidden. Each time, you will need to patiently work out the subnets and the valid host addresses.
Broadcast Addresses
Broadcast and broadcast addresses are discussed many times in Chapter 1 and Chapter 2.
Broadcast is a generic term meaning a message or data sent to all hosts in a network, while a broadcast address is the address to which such broadcasts are sent. It is important to understand that not all broadcasts are the same. They can be divided into two different types:
Layer 2 broadcasts – These broadcasts are sent at layer 2 and are limited to a LAN.
These do not cross the boundary of a LAN, which is defined by a router.
Layer 3 broadcasts – These broadcasts are sent at layer 3 and go to all hosts in a network.
You already know what unicast and multicast are but just to put them into perspective of
broadcasts, these terms are defined below again:
Unicast – Messages or data sent to a single host are called unicast.
Multicast – Messages or data sent to a group of devices are called multicast.
Like broadcasts, broadcast addresses also differ based on the layer. The different types are
discussed below:
Layer 2 Broadcast Address – Layer 2 addresses are 48-bit values written in hexadecimal. An example of a layer 2 address is a3.4c.56.ea.f5.aa. A layer 2 broadcast address is a hexadecimal value of all Fs, or a binary value of all 1s – FF.FF.FF.FF.FF.FF.
Layer 3 Broadcast Address – This chapter showed you that the last address of a subnet is
a broadcast address such as 192.168.1.255/24. These addresses have all host bits on and
refer to all hosts in that subnet. An address with all its bits turned on – 255.255.255.255 –
is a special broadcast address that refers to all hosts in all networks.
As a good example of how broadcast addresses are used, consider how a host requests an IP address from a DHCP server:
When a host boots up and needs to get an IP address from the DHCP server, it does not know whether the DHCP server is in the same LAN segment or across a router. So it sends a DHCP request with the destination IP address set to 255.255.255.255 and the destination MAC address set to FF.FF.FF.FF.FF.FF.
The layer 2 broadcast goes out to the LAN, and if a DHCP server is connected to the segment, it will respond.
If the DHCP server is not on the segment, the router will see the packet, convert it into a unicast message and send it to the DHCP server. The router needs to be configured for this, though.
The DHCP server will reply with a unicast.
As the above example demonstrates, broadcasts are very useful and can be converted to unicast when required.
Summary
This chapter is one of the most important chapters in this book and covers the most fundamental building blocks of a network. IP address classes, private and public addresses, and subnetting are very important both for the CCNA exam and for understanding the rest of the topics coming up. I cannot stress enough the importance of these topics and would strongly suggest you go through the chapter again and clarify any doubts you might have before moving ahead.
Chapter 3: Introduction To Cisco Routers, Switches, and IOS
3-1 Introduction to Cisco Routers, Switches, IOS & the Boot Process
3-2 Using the Command-Line Interface (CLI)
3-3 Basic Configuration of Router and Switches
3-4 Configuring Router Interfaces
3-5 Gathering Information and Verifying Configuration
3-6 Configuring DNS & DHCP
3-7 Saving, Erasing, Restoring and Backing up Configuration & IOS File
3-8 Password Recovery on a Cisco Router
3-9 Cisco Discovery Protocol (CDP)
3-10 Using Telnet on IOS
3-11 CCNA Lab #1
Routers, Switches, and the Boot Process
The previous two chapters helped you learn the basics of networking. You are aware of various
layers of the OSI and TCP/IP models and the devices that work on these layers, especially routers
and switches. The rest of the book focuses on various functions of Cisco routers and switches. So
before moving to the various functions, it is necessary to know what makes them tick. This
chapter is dedicated to Cisco Internetwork Operating System (IOS). Cisco IOS is a
proprietary operating system that Cisco routers and switches run on. This chapter looks at the
boot process, connectivity options, ways to configure the devices, and basic configuration and verification commands.
Cisco Integrated Services Router (ISR)
Cisco provides various series and models of routers geared towards different types of customers and requirements. Some of them just do routing, whereas others provide additional functions such as wireless connectivity, security features and Voice-over-IP services. Cisco's ISR series routers are examples of routers that provide multiple services.
The earlier CCNA exams used to focus on Cisco 2500 and 2600 routers that have been replaced
by ISR 1800 and 2800/2900 series routers. 2500 and 2600 routers are End-of-Life now and
cannot be bought from Cisco anymore. Figure 3-1 shows a part of the backplane of a Cisco 1841
router with important parts labeled. These parts are described in Table 3-1. Figure 3-2 shows the
front panel of the router.
Exam Alert: CCNA is not a device specific exam. You can practice using a 2500 or 2600 router
or even a 3800 series ISR router. Every command and concept discussed in this book holds true
for all of these routers. The only differences you need to be aware of are in the reported memory, the interface types (Ethernet or FastEthernet) and the number of interfaces.
Figure 3-1 Rear view of a Cisco 1800 Series ISR
Exam Alert: As with routers, you can use any switch model as long as it runs IOS when studying
for your CCNA exam. I suggest practicing with either a 2950 or a 2960 switch. If your budget can
afford one, a 3550 or 3560 Layer 3 switch can be used with its enhancements. But stay away
from the 4000 or 6000 series switches.
Each model in the 2960 series switch is different in terms of the number of physical network
interfaces it has but overall each model looks similar. Figure 3-3 shows the front faceplate of the
switch. The back of the switch only consists of the AC power input.
Table 3-2 describes the important components shown in Figure 3-3.
Figure 3-3 Front plane of a Cisco Catalyst 2960 Switch
Console Port – A port used to connect to the switch to configure, monitor and troubleshoot it. More on connecting to the switch is discussed shortly.
Status LEDs – These LEDs show the status of various components of the switch. Apart from these, there is an LED over each interface showing the status of that interface. Each LED can be either off, amber or green.
Cisco Internetwork Operating System (IOS)
Cisco IOS (different from Apple’s iOS) is a proprietary kernel which controls all functions of a
Cisco router and most switches. Cisco IOS is based on the operating system created by William
Yeager at Stanford University between 1980 and 1986. Cisco licensed Yeager’s work and created
the IOS out of it. The Cisco kernel allocates resources and manages things such as low-level
hardware interfaces and security.
Some important items that the Cisco router IOS is responsible for include:
Carrying network protocols and functions
Connecting high-speed traffic between devices
Adding security to control access and stop unauthorized network use
Providing scalability for ease of network growth and redundancy
Supplying network reliability for connecting to network resources
Apart from the routing, switching, telecommunications and security functions, the IOS also
provides a Command Line Interface (CLI) for configuration, management, monitoring and
troubleshooting. The CLI can be accessed using the console port, the auxiliary port (if available), or Telnet or SSH. Telnet and SSH access require IP connectivity, so the initial configuration requires you to access the device using the console port.
The rest of the chapter is dedicated to connecting to the CLI and basic configuration.
Connecting to the CLI using Console port
To get to the CLI of a Cisco router or switch, you need to connect your PC to the console port of the device. The console port on a Cisco router or switch is an RJ45 port. You use a UTP rollover cable (discussed in Chapter 1) with an RJ45 connector on one end, which plugs into the router or switch's console port, and a 9-pin serial connector on the other end, which plugs into a 9-pin serial port on your computer. Cisco ships a blue console cable with almost every device. Note: Many computers today do not come with a 9-pin serial port, so you will need to purchase a 9-pin serial to USB converter and put it on the end of your Cisco console kit to make the physical connection.
Connect the serial connector end to the serial port of your PC and the RJ45 connector to the
console port of the router or switch. After the physical connection, you will need to use software
known as a Terminal Emulator to connect to the CLI. HyperTerminal is an example of a Terminal
Emulator that comes pre-installed on some Windows systems. If you do not have HyperTerminal
on your Windows PC, you may want to download PuTTY which is a free terminal emulator.
Minicom is a free terminal emulator for Unix based operating systems.
Figure 3-4 Hyperterminal configuration to connect to IOS CLI
Launch your terminal emulator and configure it to connect to the serial interface using the
following settings:
9600 bits/second
8 data bits
Parity None
1 stop bit
No flow control
Figure 3-4 shows Hyperterminal configured to use the above settings.
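Most readers will simply use a terminal emulator such as HyperTerminal or PuTTY, but the same settings can also be applied from a script. The sketch below uses the third-party pyserial library, which is an assumption on my part rather than anything the CCNA requires, and the port name COM3 is only a placeholder for whatever serial port your machine exposes.

import serial   # third-party pyserial package: pip install pyserial

# Open the console connection with the settings listed above (9600 8N1,
# no flow control). Adjust the port name for your system, e.g. /dev/ttyUSB0.
console = serial.Serial(
    port="COM3",
    baudrate=9600,
    bytesize=serial.EIGHTBITS,
    parity=serial.PARITY_NONE,
    stopbits=serial.STOPBITS_ONE,
    xonxoff=False,            # no software flow control
    rtscts=False,             # no hardware flow control
    timeout=1,
)
console.write(b"\r\n")                             # wake up the console prompt
print(console.read(200).decode(errors="ignore"))   # show whatever the device sends back
console.close()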
Booting Up a Router or a Switch
When you power up a Cisco router or a switch, it first runs the Power-On Self-Test (POST). After POST, the device looks for and loads the Cisco IOS from flash memory. Flash memory is an Electrically Erasable Programmable Read-Only Memory (EEPROM). When the IOS loads, it looks for the configuration file in the non-volatile RAM, or NVRAM. Take a look at the booting process of a Cisco router shown below. The following output is from an 1841 router.
Exam Alert: The boot up sequence and the type of messages will be similar across all routers. The only noticeable differences will be the reported sizes of RAM, NVRAM and flash, and the number/type of interfaces. Expect to see this type of output in your CCNA simulation exam questions.
The boot process of a Cisco Catalyst switch is similar. The following outputs show the messages that appear when a 2950 switch is booted up.
C2950 Boot Loader (C2950-HBOOT-M) Version 12.1(11r)EA1, RELEASE SOFTWARE (fc1)
Compiled Mon 22-Jul-02 17:18 by antonino
WS-C2950G-24-EI starting…
The above message shows the bootstrap program running. The output below shows the IOS being
decompressed and then loaded into the RAM.
[output truncated]
Loading “flash:/c2950-i6q4l2-mz.121-22.EA6.bin”…################################
File “flash:/c2950-i6q4l2-mz.121-22.EA6.bin” uncompressed and installed, entry point:
0x80010000
executing…
After the IOS is decompressed, the IOS version is displayed. Note the version displayed below is
12.1(22)EA6.
[output truncated]
Cisco Internetwork Operating System Software
IOS ™ C2950 Software (C2950-I6Q4L2-M), Version 12.1(22)EA6, RELEASE SOFTWARE (fc1)
Copyright (c) 1986-2005 by cisco Systems, Inc.
Compiled Fri 21-Oct-05 01:59 by yenanh
After IOS loads, it runs POST on various components of the switch as can be seen below.
[output truncated]
POST: System Board Test : Passed
POST: Ethernet Controller Test : Passed
ASIC Initialization Passed
POST: FRONT-END LOOPBACK TEST : Passed
After the last POST test passes, IOS completes loading and displays the information learned during POST. The output is similar to the one displayed when a router completes booting and provides information regarding the device.
cisco WS-C2950G-24-EI (RC32300) processor (revision L0) with 21013K bytes of memory.
Processor board ID FOC1028Y1TA
Last reset from system-reset
Running Enhanced Image
24 FastEthernet/IEEE 802.3 interface(s)
2 Gigabit Ethernet/IEEE 802.3 interface(s)
32K bytes of flash-simulated non-volatile configuration memory.
[output truncated]
The output above shows that the 2950 switch has about 20 MB of RAM and 32 KB of NVRAM (simulated in flash) for the configuration. There are 24 FastEthernet interfaces and 2 Gigabit Ethernet interfaces in the switch. Just as in the case of the router, once IOS has loaded, it will copy the startup config into RAM as the running config.
In the case of both the router and the switch, if a startup config is not present, the device will go into setup mode and start the System Configuration Dialog. This is a step-by-step process to help you with basic configuration. You can tell that the device has gone into setup mode if you see the following output after IOS loads:
— System Configuration Dialog —
Would you like to enter the initial configuration dialog? [yes/no]:
% Please answer ‘yes’ or ‘no’.
You will not be going through the setup mode since CCNA is all about configuring the switches
and the routers using the CLI.
Table 3-3 sums up all the components and their functions that you learned about in this section.
Table 3-3 Important components used during boot
Component – Function
Bootstrap – A small program that runs the POST and then loads the IOS on bootup.
Flash Memory – An EEPROM where the IOS file is stored. The bootstrap looks for the IOS file here first.
RAM – The working memory of the device. A copy of the configuration is also stored here after bootup.
NVRAM – Non-volatile RAM that stores a copy of the configuration. On bootup, IOS reads the configuration file from here.
Note: The rest of the chapter is dedicated to basic configuration using the CLI and the commands
and concepts apply to both a router and a switch, unless specifically mentioned otherwise. The
CCNA exam only uses the CLI and no GUI at this time.
Using the Command Line Interface (CLI)
Once IOS has finished loading up, it will ask you to press Return to continue. While waiting for
you to press return, it will display the status of every interface as shown below.
Press RETURN to get started!
*Mar 1 00:09:01.271: %LINK-5-CHANGED: Interface FastEthernet0/0, changed state to
administratively down
*Mar 1 00:09:01.583: %LINK-5-CHANGED: Interface FastEthernet0/1, changed state to
administratively down
*Mar 1 00:09:02.271: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/0,
changed state to down
*Mar 1 00:09:02.583: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/1,
changed state to down
Once you press enter, you will arrive at the Router> prompt. If the router has a startup config with authentication configured, as is the case with most brand new ISRs, you will be prompted for a username and/or password before you arrive at the prompt. For new ISRs, cisco is both the username and the password. We will cover authentication later in the chapter. For now, consider the prompt that you see. The text before the greater-than sign (>) is the hostname of the device, which is Router or Switch by default, depending on the device.
IOS modes
The CLI of the IOS is divided into different modes or levels. Each mode serves a different
purpose and has different sets of commands. It is important to be familiar with different modes
that you will encounter in this book. Covering all the modes is out of the scope of CCNA.
The character after the hostname of the device tells you which mode you are in. When you first start a router and press enter, you are at the Router> prompt. The greater-than sign (>) tells you that you are in the user exec mode, or level 1. This mode is mostly used to view statistics. You cannot view or edit the configuration of the device from this mode. This mode also serves as the stepping-stone to the next mode, the privileged exec mode, or level 15. At this level the prompt character changes to the hash sign (#). To go to the privileged exec mode from the user exec mode, type the enable command at the prompt and press enter as shown below. Notice the change in prompt after the command is entered.
Router>enable
Router#
Congratulations! You just entered your first command on an IOS device.
To go back to the user exec mode, you can use the disable command as shown below:
Router#disable
Router>
To close the CLI session, use the logout command in any mode.
At the privileged exec mode you can view the configuration and statistics related to every
component and process of the device but cannot make changes to the configuration. To be able to
make changes to the configuration of the device, you will need to go to the global configuration
mode using the configure terminal command in the privileged exec mode as shown below.
Notice that the prompt changes to Router(config)# after you enter the command. (config)# tells
you that you are in the global configuration mode.
Router#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
Router(config)#
In this mode, you can make changes to the configuration of the device. You must remember three
things about the global configuration mode:
1. All changes affect the running config. These changes are not persistent after a reboot
unless running config is saved to the startup config.
2. All changes have an immediate effect on the device.
3. The global configuration mode has sub-modes. While some changes can be made in the
global configuration mode, changes to specific components, such as interfaces, must be
done in dedicated sub-modes.
From the global configuration mode you can go to different sub modes to configure specific
components. While most of the sub modes are beyond the scope of CCNA, a few of the modes
that you will come across in the book are discussed in Table 3-4.
Table 3-4 IOS Sub-modes
Interface Configuration
Purpose: Configure individual interfaces of the device, including the protocol, layer 3 addressing, etc.
Prompt: Router(config-if)#
Command to enter sub-mode: interface <interface-name>
Example:
Router(config)#interface fastEthernet 0/0
Router(config-if)#

Line Configuration
Purpose: Configure the console, telnet and auxiliary lines, which are used for exec sessions.
Prompt: Router(config-line)#
Command to enter sub-mode: line {con | vty | aux} number
Example:
Router(config)#line console 0
Router(config-line)#

Routing Configuration
Purpose: Configure the routing protocols.
Prompt: Router(config-router)#
Command to enter sub-mode: router protocol [number]
Example:
Router(config)#router rip
Router(config-router)#
IOS Editing and Help Features
While configuring a device running IOS, using the CLI is mostly about remembering the different
commands and options. Cisco makes it easier to do this by providing various editing and help
features. The help feature is a lifesaver. You can use a question mark (?) at any place to see a list
of available commands or options, as shown below.
Router#configure ?
confirm Confirm replacement of running-config with a new config
file
memory Configure from NV memory
network Configure from a TFTP network host
overwrite-network Overwrite NV memory from TFTP network host
replace Replace the running-config with a new config file
terminal Configure from the terminal
<cr>
In the above output when a question mark (?) is entered after the configure command, a list of
available options is displayed. Notice that terminal is one of the options. Another example is
given below.
Router#?
Exec commands:
access-enable Create a temporary Access-List entry
access-profile Apply user-profile to interface
access-template Create a temporary Access-List entry
alps ALPS exec commands
archive manage archive files
audio-prompt load ivr prompt
auto Exec level Automation
beep Blocks Extensible Exchange Protocol commands
bfe For manual emergency modes setting
call Voice call
ccm-manager Call Manager Application exec commands
cd Change current directory
clear Reset functions
clock Manage the system clock
cns CNS agents
configure Enter configuration mode
connect Open a terminal connection
copy Copy from one file to another
credential load the credential info from file system
crypto Encryption related commands.
ct-isdn Run an ISDN component test command
–More–
In the above output, there are more options than can fit on the screen, so the output pauses and you see the –More– text. At this point you can press the space bar to see the rest of the output or press q to quit back to the prompt. A final example of the help feature is given below.
Router(config)#i?
identity interface ip ipc
iphc-profile ipv6 ipx irec-agent
isis iua ivr ixi
In the above output notice that a question mark was entered after a single character. This causes IOS to display a list of commands starting with that character. You can also enter a question mark after multiple characters to see a list of commands starting with those characters. For example, typing in? at the above prompt would show a list consisting of the interface option only. This brings up an interesting feature of the CLI: if you type a few characters that are unique to a command and press the Tab key, IOS will complete the command for you. In fact, if you type the first few unique characters of a command, you need not press Tab or complete the command at all; IOS will understand which command you mean. For example, if you type int and press Tab, IOS will complete the command. Another example is shown below.
Router#conf t
Enter configuration commands, one per line. End with CNTL/Z.
Router(config)#
Notice that the configure terminal command is executed with just conf t. IOS sees that the only command which starts with conf is configure, while terminal is the only option which starts with t.
Apart from these help features, the IOS provides some meaningful messages when you enter an
incomplete or wrong command. Take a look at few of these messages shown below.
Router#confguire terminal
^
% Invalid input detected at ‘^’ marker.
The above message tells you that there is an error in the command at the position marked by the caret sign (^). Because of the marker, it is easy to spot the typing mistake in the command.
Router(config)#interface
% Incomplete command.
The above message tells you that you have entered an incomplete command; more options are needed with it. In such a situation, you can use the question mark after the command to see the available options.
Router(config)#s
% Ambiguous command: “s”
The above message shows that you have not typed enough unique characters. There are multiple
commands that start with the characters that you have entered.
While using the CLI, these help features and messages are immensely useful, but you also need to
know about a few key combinations that you can use while typing commands. Table 3-5 shows a
list of these key combinations.
Table 3-5 IOS editing key combinations
Key or Combination – Purpose
Left Arrow or Ctrl+b – Move the cursor one character back
Right Arrow or Ctrl+f – Move the cursor one character forward
Esc+b – Move the cursor one word back
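The output that follows assumes an enable password or secret of test123 has already been configured, for example with a command along these lines (the password value is only illustrative):
myRouter(config)#enable secret test123
Once an enable secret or password is set, moving from the user exec mode to the privileged exec mode prompts for it: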
myRouter>en
Password: test123 (the password will not be shown when typed on the device)
myRouter#
To configure a line password for console, you will first need to enter the line configuration mode
for the console using the line console command in the global configuration mode as shown below:
myRouter(config)#line console ?
<0-0> First Line number
myRouter(config)#line console 0
myRouter(config-line)#
In the above output, I used a question mark at the end of the first line. The help output shows that 0 is the only option available. The first thing to know here is that there can be multiple lines of a kind (for example, multiple telnet lines). Second, you will need to specify the line number that you want to configure. In the case of the console, there is always only a single line, zero, available. So the command line console 0 will bring you to the line configuration mode for the console line (notice the change in the router prompt to (config-line)#).
In the line config mode, use the password password command to set a password for the line.
After that you will need to use the login command to enable login with the password you just
configured. The output below shows an example.
myRouter(config)#line console 0
myRouter(config-line)#password test
myRouter(config-line)#login
Now when someone tries to connect using the console, they will be prompted for this password before reaching the user exec mode. The auxiliary line, if the device has one, is secured in exactly the same way:
myRouter(config)#line aux 0
myRouter(config-line)#password test
myRouter(config-line)#login
Configuring the password for the telnet lines is no different, but you need to know a few things before doing that:
1. Telnet lines are called vty lines because they are virtual unlike console and auxiliary
2. Each IOS device has a minimum of 5 vty lines (0 to 4). Some of them can have 15 or
more.
3. You can configure all the vty lines together, in a group or one at a time. They need not have
the same configuration.
4. A new telnet or SSH session will use the lowest available vty line, so with the default five vty lines there can be up to 5 simultaneous telnet or SSH sessions to the device at any time.
5. Telnet or SSH sessions to the device will not be allowed unless a password has been
configured and login is enabled.
To configure a password on the vty lines, you need to use the password and login commands in the line configuration mode. You can enter the vty line configuration mode using the line vty first-line last-line command. The following example shows the available number of vty lines:
myRouter(config)#line vty ?
<0-4> First Line number
myRouter(config)#line vty 0 ?
<1-4> Last Line number
<cr>
myRouter(config)#line vty 0 4
myRouter(config-line)#
The line vty 0 4 command in the above example will enter the line configuration mode and you
will be able to configure all the available vty lines at one time.
The example below shows a password configured for all the vty lines:
myRouter(config)#line vty 0 4
myRouter(config-line)#password test
myRouter(config-line)#login
Once the password has been configured and login enabled, the device will allow Telnet sessions
to be initiated to the device. As you already know, Telnet is not a secure protocol because the
session is transmitted in plain text and is vulnerable to snooping. To overcome this problem, SSH
can be used. SSH encrypts the entire session but it requires encryption keys to start a session. By default IOS does not have these keys and hence an SSH session cannot be initiated. To generate those keys, you must first set the hostname and domain name of the device and then use the crypto key command as shown below:
myRouter(config)#hostname Gateway
Gateway(config)#ip domain-name test.edu
Gateway(config)#crypto key generate rsa general-keys modulus 1024
% The key modulus size is 1024 bits
% Generating 1024 bit RSA keys, keys will be non-exportable…
Jun 9 00:43:43.599: %SSH-5-ENABLED: SSH 1.99 has been enabled
Once the keys are generated, the vty line can be configured to accept SSH sessions using the
following command:
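Since transport input is a line configuration command, you first enter the vty line configuration mode (the same lines configured earlier):
Gateway(config)#line vty 0 4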
Gateway(config-line)#transport input ssh telnet
If you leave out the telnet option from the above command, only SSH will be allowed to the
device.
One final thing you need to know about passwords is that the line passwords and the enable password are stored in the configuration as plain text. This means that anyone who comes across a copy of the configuration stored outside the device can learn the passwords. To prevent this, the passwords can be encrypted using the service password-encryption command in the global configuration mode.
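For example:
myRouter(config)#service password-encryption
After this command is entered, the plain-text passwords in the configuration are stored in an encrypted form.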
Configuring Router Interfaces
While the configuration of switch interfaces is covered in Chapter 6, configuring the interfaces of a router is one of the basic things that you should know before going further. This is because unless the router is connected to the network, there isn't much it can do. Configuring an interface is easy and usually consists of only two steps. But before proceeding, you need to understand the interfaces and their numbering.
You will remember from earlier in the chapter that the number and type of interfaces are shown during boot up. While there are many different types of interfaces that can be present in a router, the three types that you will encounter in the CCNA exam are Ethernet, FastEthernet and Serial. Some of these interfaces are built into the device while others are added as modules in available slots. The built-in interfaces are said to be in slot zero, while modules go into slots numbered from 1. Depending on the router, the interfaces can be numbered simply as type number, as type slot/number or, in some high-end routers, as type slot/subslot/number. Slot, subslot and number are numerical and start from 0. Some examples are:
myRouter(config)#interface ?
Async Async interface
BVI Bridge-Group Virtual Interface
CDMA-Ix CDMA Ix interface
CTunnel CTunnel interface
Dialer Dialer interface
FastEthernet FastEthernet IEEE 802.3
Group-Async Async Group interface
Lex Lex interface
Loopback Loopback interface
MFR Multilink Frame Relay bundle interface
Multilink Multilink-group interface
Null Null interface
Tunnel Tunnel interface
Vif PGM Multicast Host interface
Virtual-PPP Virtual PPP interface
Virtual-Template Virtual Template interface
Virtual-TokenRing Virtual TokenRing
range interface range command
In the above output, the help output shows the different kinds of interfaces that can be configured.
myRouter(config)#interface FastEthernet ?
<0-0> FastEthernet interface number
In the above output, notice that only a single slot number, zero, is available.
myRouter(config)#interface FastEthernet 0?
/
myRouter(config)#interface FastEthernet 0/?
<0-1> FastEthernet interface number
The above output shows that there are two FastEthernet Interfaces that can be configured, zero
and one.
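Bringing up an interface usually takes just two commands: assign an IP address and mask with the ip address command, then enable the interface with no shutdown. A minimal sketch, using an interface and addressing consistent with the examples in this chapter:
myRouter(config)#interface FastEthernet 0/0
myRouter(config-if)#ip address 192.168.1.1 255.255.255.0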
myRouter(config-if)#no shutdown
After this command is given, the router will bring up the interface, assume the IP address you configured, and effectively be connected to the network on that interface. You can quickly verify connectivity at this stage using the ping and traceroute commands from the privileged exec mode as shown below:
myRouter#ping 192.168.1.10
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.1.10, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/4 ms
myRouter#traceroute 192.168.1.10
Type escape sequence to abort.
Tracing the route to 192.168.1.10
1 192.168.1.10 8 msec 4 msec 0 msec
While you already know about the ping command, the traceroute command might be new to you.
The traceroute command uses the TTL field in the IP header to discover layer 3 devices between
your device and a given host in the internetwork. Here the ping output shows that 192.168.1.10 is
reachable from the router and traceroute command shows that it is the next hop device. Both of
these outputs confirm that the router now has network connectivity and is able to function properly
at all layers.
While FastEthernet interfaces usually require just the IP address and subnet mask, serial interfaces might require some additional configuration. That is discussed in detail in Chapter 11.
Certain poor network designs may require you to have a second IP address on an interface. While
this is very inefficient, if you do run into a need to configure this, you will need to use
the secondary keyword at the end of the ip address command. If you do not use
the secondary keyword, the new one will replace the configured IP address on the interface. An
example of adding a secondary IP address is shown below:
myRouter(config-if)#ip address 192.168.10.40 255.255.255.0 secondary
The secondary IP address can belong to the same subnet as the primary address or a different one.
Another optional command that you can use in interface configuration mode is
the description command. This command will add a description text for the interface in the
configuration. While this is not necessary for operation of the interface, it can be useful to have a
short description containing the purpose of the interface. Some routers and most switches have a lot of interfaces, and it can be very difficult to decipher what each interface connects to. So it is recommended that you make a habit of adding a description to every interface, as shown below:
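A minimal sketch of the description command (the interface and the description text are illustrative):
myRouter(config)#interface FastEthernet 0/0
myRouter(config-if)#description Link to LAN switch
You can verify the layer 3 configuration and status of the interfaces with the show protocols command: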
myRouter#show protocols
Global values:
Internet Protocol routing is enabled
FastEthernet0/0 is up, line protocol is up
Internet address is 192.168.1.1/24
FastEthernet0/1 is administratively down, line protocol is down
The show protocols command is more useful if you have multiple layer 3 protocols running on the
device.
When working with show commands, one useful feature that you can use is piping. Using pipes,
you can search for specific lines in an entire output. Take a look at the example below:
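A minimal sketch of piping a show command through a filter (the include keyword is standard IOS; the matched line shown here is illustrative):
myRouter#show running-config | include hostname
hostname myRouter
Another convenience of the CLI is name resolution: if the device can resolve a hostname, for example through a static host entry or DNS, you can use the name in place of the IP address, as in the ping below.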
myRouter#ping Router1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.1.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/4 ms
In the above example notice that the router resolved the name in the ping command to the IP
address.
Using DHCP
From the previous chapter, you will remember that DHCP is used to dynamically provide IP addresses to hosts in a network. An IOS device can be configured to receive as well as give out IP addresses. What this means is that the device can be a DHCP client as well as a DHCP server. When you configure the device as a DHCP client, it takes an IP address from a DHCP server for its interface. To configure an interface to take its IP address from a DHCP server, use the ip address dhcp command in the interface configuration mode. After that, when the interface is brought up, it will start requesting an IP address.
To configure the device as a DHCP server, you need to define a pool that consists of the subnet
from which the device will give out addresses. Apart from the addresses, you can also configure
other parameters such as DNS server and default gateway that can be sent to the client. To create
the pool, use the ip dhcp pool pool-name command in the global configuration mode. This
command will create the pool and bring you to the DHCP configuration mode (the prompt will
change to dhcp-config). Here you can use the network, dns-server and default-router commands
to define the subnet, DNS server and the gateway address as shown below:
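A minimal sketch of such a pool (the pool name and addresses are illustrative):
myRouter(config)#ip dhcp pool LAN_POOL
myRouter(dhcp-config)#network 192.168.1.0 255.255.255.0
myRouter(dhcp-config)#dns-server 192.168.1.10
myRouter(dhcp-config)#default-router 192.168.1.1
If you want to wipe the saved configuration and start over, you can erase the startup config from NVRAM using the erase startup-config command in the privileged exec mode: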
myRouter#erase startup-config
Erasing the nvram filesystem will remove all configuration files! Continue? [confirm]
[OK]
Erase of nvram: complete
To reboot the device, use the reload command in the privileged exec mode as shown below:
myRouter#reload
System configuration has been modified. Save? [yes/no]: no
Proceed with reload? [confirm]y
If the running configuration has unsaved changes, the router will prompt you to save them when you enter the reload command. Type no and press enter to discard the changes and reload with an empty NVRAM. When the device comes up, there will be no configuration on it and you can start over.
Working with IOS files and IOS File System (IFS)
Similar to how the configuration can be copied around, the IOS image used by a device can be backed up, restored or replaced using the copy command. Remember that the IOS file is stored in flash memory. To copy the file currently used by the router to a TFTP server, use the copy flash: tftp: command as shown below:
myRouter#copy flash: tftp:
Source filename []? c1841-advipservicesk9-mz.124-25e.bin
Address or name of remote host []? 192.168.1.40
Destination filename [c1841-advipservicesk9-mz.124-25e.bin]?
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
23348556 bytes copied in 230.972 secs (101088 bytes/sec)
To copy an image from the TFTP server to the flash memory, reverse the above command as
shown below:
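A sketch of the reversed command, using the same file and TFTP server as above; if there is not enough free space in flash, the copy will fail and you will first have to free some space:
myRouter#copy tftp: flash:
Address or name of remote host []? 192.168.1.40
Source filename []? c1841-advipservicesk9-mz.124-25e.bin
Destination filename [c1841-advipservicesk9-mz.124-25e.bin]?
You can check what is currently stored in flash with the dir command: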
myRouter#dir
Directory of flash:/
1 -rw- 23348556 Feb 23 1907 16:27:44 +00:00 c1841-advipservicesk9-mz.124-25e.bin
To free up some space you can delete some of the existing files using the delete command. Since there is only a single file in the flash currently, it has to be deleted as shown below:
myRouter#delete flash:c1841-advipservicesk9-mz.124-25e.bin
Delete filename [c1841-advipservicesk9-mz.124-25e.bin]?
Delete flash:/c1841-advipservicesk9-mz.124-25e.bin? [confirm]
Now that you have free space, the copy tftp: flash: command can be run again and will complete successfully. Apart from holding the IOS image, the IOS File System (IFS) supports basic file management commands. For example, the mkdir command creates a directory:
myRouter#mkdir test
Create directory filename [test]?
Created dir flash:test
myRouter#dir
Directory of flash:/
1 -rw- 21177448 Sep 7 2011 00:12:50 +00:00 c1841-advsecurityk9-mz.124-23.bin
2 drw- 0 Jan 21 2012 14:09:34 +00:00 test
31936512 bytes total (10752000 bytes free)
cd – This command is used to move into a sub-directory or back to the parent directory on the IFS. For example, cd test will take you inside the sub-directory created above.
pwd – This command can be used to find the name of the directory you are currently in. An example is shown below:
myRouter#pwd
flash:
myRouter#cd test
myRouter#pwd
flash:/test/
myRouter#cd ..
myRouter#pwd
flash:/
rmdir – This command can be used to delete a directory.
copy – Similar to how you copied an IOS file to the flash (IFS), you can copy any file to
IFS. For example, you can copy the running-config to the IFS as shown below:
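A minimal sketch, using the filename shown in the output that follows:
myRouter#copy running-config flash:backup.conf
Destination filename [backup.conf]?
You can then view the contents of a file stored on the IFS with the more command: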
myRouter#more backup.conf
!
version 12.4
service timestamps debug datetime msec
service timestamps log datetime msec
no service password-encryption
!
hostname myRouter
!
–output truncated–
If you have enough free space in flash memory, you can keep multiple IOS files there. By default, the first file that the system encounters is used for booting. You can specify the IOS file that you want the system to load at boot up using the boot system command in the global configuration mode, as shown below:
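A minimal sketch, reusing the image filename from the earlier TFTP example:
myRouter(config)#boot system flash:c1841-advipservicesk9-mz.124-25e.bin
The commands that follow are part of the password recovery procedure: after interrupting the boot with the break sequence and setting the configuration register to 0x2142, the router boots without loading the startup config, so you can reach the privileged exec mode without a password and then restore the saved configuration manually.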
Exam Alert: In reality the break sequence differs from client to client and operating system to
operating system. For example, when using OSX, you might have to use Cmd+b. As far as the
CCNA exam goes, Ctrl+Break is the only option. I would suggest using Windows/Hyperterminal
for practice.
Router>en
Router#copy startup-config running-config
Destination filename [running-config]?
1244 bytes copied in 0.548 secs (2270 bytes/sec)
myRouter#config t
myRouter(config)#enable secret newpass
myRouter(config)#line con 0
myRouter(config-line)#password newpass
myRouter(config-line)#^Z
myRouter#copy running-config startup-config
Destination filename [startup-config]?
Building configuration…
[OK]
Now that the password has been changed in the startup config, you will be able to access the device once it boots back normally. But if you reboot the router now, it will again come up without loading the startup config because the configuration register is still set to 0x2142. To change the configuration register, use the config-register command in the global configuration mode as shown below, and then save the configuration again before rebooting:
myRouter#conf t
Enter configuration commands, one per line. End with CNTL/Z.
myRouter(config)#config-register 0x2102
myRouter(config)#exit
myRouter#copy running-config startup-config
Destination filename [startup-config]?
Building configuration…
[OK]
myRouter#reload
Proceed with reload? [confirm]
Cisco Discovery Protocol (CDP)
Cisco Discovery Protocol (CDP) is a proprietary protocol designed by Cisco to help in finding
information about neighboring devices. Devices connected to each other exchange CDP packets to
learn about each other. This can be useful in troubleshooting and documenting the network.
CDP is enabled by default on all interfaces of Cisco routers and switches. You can disable CDP globally using the no cdp run command in the global configuration mode. It can be enabled again using the cdp run command. CDP can be disabled on an individual interface using the no cdp enable command in the interface configuration mode.
Each device running CDP sends out a packet every 60 seconds to its neighbors. The timers associated with CDP on a device can be seen using the show cdp command in the privileged exec mode as shown below:
myRouter#show cdp
Global CDP information:
Sending CDP packets every 60 seconds
Sending a holdtime value of 180 seconds
Sending CDPv2 advertisements is enabled
In the above output you can see that CDP is sending packets every 60 seconds. Each neighbor will
keep the information contained in a packet for 180 seconds. The timers can be changed using
the cdp timer command and the cdp holdtime command in the global configuration mode as
shown below:
myRouter(config)#cdp ?
advertise-v2 CDP sends version-2 advertisements
holdtime Specify the holdtime (in sec) to be sent in packets
log Log messages generated by CDP
run Enable CDP
source-interface Insert the interface’s IP in all CDP packets
timer Specify rate (in sec) at which CDP packets are sent
myRouter(config)#cdp timer ?
<5-254> Rate at which CDP packets are sent (in sec)
myRouter(config)#cdp timer 120
myRouter(config)#cdp holdtime ?
<10-255> Length of time (in sec) that receiver must keep this packet
myRouter(config)#cdp holdtime 240
myRouter(config)#do show cdp
Global CDP information:
Sending CDP packets every 120 seconds
Sending a holdtime value of 240 seconds
Sending CDPv2 advertisements is enabled
myRouter(config)#
As mentioned earlier, CDP can be used to troubleshoot as well as document a network. When you need information regarding the devices directly connected to a device, you can check the neighbors learned through CDP using the show cdp neighbors command. An example is shown below:
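An illustrative output (the neighbor details shown here are hypothetical):
myRouter#show cdp neighbors
Capability Codes: R - Router, T - Trans Bridge, B - Source Route Bridge,
                  S - Switch, H - Host, I - IGMP, r - Repeater
Device ID    Local Intrfce    Holdtme    Capability    Platform    Port ID
Switch       Fas 0/0          165        S I           WS-C2950G   Fas 0/1
Once you have identified a neighboring device, you can manage it remotely over Telnet. To open a Telnet session from the IOS CLI, use the telnet command followed by the device's address: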
myRouter#telnet 192.168.1.7
Trying 192.168.1.7 … Open
Username: admin
Password:
Switch>
Remember that if the remote device is an IOS device, it should have a line password and an enable password or secret configured. Without these, you will not be able to telnet in or get into the privileged exec mode.
When you exit the telnet session using the logout or exit command on the remote device, you will be back at the prompt of the device from where the session was initiated, as shown below:
Switch#logout
[Connection to 192.168.1.7 closed by foreign host]
myRouter#
You can telnet to multiple devices simultaneously. For that, you will first need to toggle back to the local device prompt from the remote prompt using Ctrl+Shift+6 followed by the X key. For example, in the output below, I first telnet to the switch as before and enter its privileged exec mode. Then I press the Ctrl+Shift+6 sequence followed by X. The key sequence itself will not be seen in the output, but you will notice that I am back at the prompt of myRouter:
myRouter#telnet 192.168.1.7
Trying 192.168.1.7 … Open
Username: admin
Password:
Switch>enable
Password:
Switch# [I pressed Ctrl+Shift+6 X here]
myRouter# [Notice the change in prompt. I am back to myRouter]
Though you are back to the first device, the telnet session to the remote device is still active, but
in the background. Now you can initiate another telnet session to another device as shown below:
myRouter#telnet 192.168.1.137
Trying 192.168.1.137 … Open
User Access Verification
Username: admin
Password:
Switch2>
You can again leave this session active and go back to the prompt of the first device using the Ctrl+Shift+6 X sequence. To see all the open sessions, use the show sessions command as shown below:
myRouter#show sessions
Conn Host Address Byte Idle Conn Name
1 192.168.1.7 192.168.1.7 0 7 192.168.1.7
* 2 192.168.1.137 192.168.1.137 0 1 192.168.1.137
The asterisk (*) sign next to a session shows the most recent session. You can return to that
session by pressing Enter twice. You can return to any session by typing the number of the session
and pressing Enter. In the output below, I toggle between the two sessions. Notice how pressing
enter twice or just entering the session number as a command takes me back to the telnet sessions.
Also notice that I can toggle between the sessions and the first device using Ctrl+Shift+6 X
sequence.
myRouter#1
[Resuming connection 1 to 192.168.1.7 … ]
Switch#logout
[Connection to 192.168.1.7 closed by foreign host]
myRouter#show sessions
Conn Host Address Byte Idle Conn Name
* 2 192.168.1.137 192.168.1.137 0 4 192.168.1.137
myRouter#disconnect 2
Closing connection to 192.168.1.137 [confirm]
myRouter#show sessions
% No connections open
While managing a device, you can see who is connected to the device using the show
users command as shown below:
myRouter#show users
Line User Host(s) Idle Location
*194 vty 0 admin idle 00:00:00 10.1.10.228
195 vty 1 admin idle 00:00:00 10.1.10.18
The above output shows that two users are connected to the device using Telnet. The asterisk (*) denotes the connection from which the show users command was executed. The first connection is using line 194, which is vty line 0, and the second is using line 195, which is vty line 1. It is possible to disconnect someone's session from the device using the clear line <line number> command. In the output below, I disconnect the second connection from the first session:
myRouter#show users
Line User Host(s) Idle Location
*194 vty 0 admin idle 00:00:00 10.1.10.10
195 vty 1 admin idle 00:04:12 10.1.10.10
Interface User Mode Idle Peer Address
myRouter#clear line 195
[confirm]
[OK]
myRouter#show users
Line User Host(s) Idle Location
*194 vty 0 admin idle 00:00:00 10.1.10.10
Interface User Mode Idle Peer Address
Lab #1
Your employer has installed two new Cisco routers in the network. Figure 3-5 shows the network layout. Your task is to configure the routers so that HostA can telnet into them to configure them further. Ensure correct hostnames and IP addresses are assigned. All passwords should be set to mypass123. When configuration is complete, save the configuration and back it up to the TFTP server running on HostA.
Figure 3-5 Network Diagram for CCNA Lab #1
Solution
1. Connect to the console port of RouterA and when prompted enter no to exit out of setup
mode. Press Enter to go to the User exec mode.
2. Enter the privileged exec mode using the command enable and then configure the
hostname and enable secret as shown below:
Router#config terminal
Router(config)#hostname RouterA
RouterA(config)#enable secret mypass123
3. Configure the IP address on the interface as shown below:
RouterA(config)#interface fa0/0
RouterA(config-if)#ip address 192.168.1.1 255.255.255.0
RouterA(config-if)#exit
4. Configure a password and enable login on line vty as shown below:
RouterA(config)#line vty 0 4
RouterA(config-line)#password mypass123
RouterA(config-line)#login
RouterA(config-line)#exit
5. Save the config and then copy it to the TFTP server as shown below:
RouterA(config)#exit
RouterA#copy run start
Destination filename [startup-config]?
Building configuration…
[OK]
RouterA#copy run tftp:
Address or name of remote host []? 192.168.1.10
Destination filename [routera-confg]? [enter]
!!
763 bytes copied in 0.712 secs (1071 bytes/sec)
6. Repeat steps 1-5 on RouterB. The configuration steps are given below:
Router#config terminal
Router(config)#hostname RouterB
RouterB(config)#enable secret mypass123
RouterB(config)#interface fa0/0
RouterB(config-if)#ip address 192.168.1.2 255.255.255.0
RouterB(config-if)#exit
RouterB(config)#line vty 0 4
RouterB(config-line)#password mypass123
RouterB(config-line)#login
RouterB(config-line)#exit
RouterB(config)#exit
RouterB#copy run start
Destination filename [startup-config]?
Building configuration…
[OK]
RouterB#copy run tftp:
Address or name of remote host []? 192.168.1.10
Destination filename [routerb-confg]? [enter]
!!
763 bytes copied in 0.712 secs (1071 bytes/sec)
Chapter 4: Introduction To IP Routing
In the previous chapter, you configured a Cisco router so that it connects to the network and can be managed remotely, among other things. Now it is finally time to look at the single most important function of routers – IP routing. As you already know, routers look at the destination IP address of a packet and route it towards the destination. Though the routing process itself is easy, building the list of networks, called the routing table, is not as easy. This chapter looks at the IP routing process itself and the various ways to build the routing table.
4-1 Understanding IP Routing
4-2 Static, Default and Dynamic Routing
4-3 Administrative Distance and Routing Metrics
4-4 Classes of Routing Protocols
4-5 Routing Loops
4-6 Route Redistribution
4-7 Static and Default Route Lab
4-8 Summary
Understanding IP Routing
In the simplest terms, IP routing is the process of moving packets from their source to their destination across internetworks. To be able to route packets, a router must know, at a minimum, the following:
The destination address
Neighbor routers from which it can learn about remote networks
Possible routes to all remote networks
The best route to each remote network
How to maintain and verify routing information
Unfortunately the process is not as simple as it sounds because it involves multiple protocols at
multiple layers. To understand the complete process of how a packet moves from the source to the
destination, consider the network shown in Figure 4-1.
Figure 4-1 Understanding IP Routing
In the network shown above, when Host1 sends a TCP segment to Host3, the following happens:
1. The TCP segment is handed off to IP, which adds a header consisting of the source
address, 192.168.1.10 and destination address 192.168.5.20 and hands off that packet to
the next layer.
2. Using the subnet mask of the host, it is determined that the destination address lies in a
remote network and hence the packet must be sent to the default gateway, 192.168.1.1. So
Host1 sends out an ARP request to find the MAC address of Router1. When a response is
received, it frames the packet with the source MAC address of Host1 and destination
MAC address of Router1.
3. When Router1 receives the frame, it strips off the header and trailer and looks at the destination address in the IP header. Since the packet is not destined to Router1 itself, it must be routed onwards.
4. It tries to match the destination address to a list of known networks, called the routing
table. It finds that the destination network is reachable via Router2, so it frames the packet
with the source MAC address of its exit interface (interface with the IP address of
10.1.1.1) and the destination address of Router2’s interface.
5. When Router2 receives the frame, it repeats the strip and lookup process and frames the
packet again before sending it to Router3. This time the MAC address of Router2’s exit
interface is the source address while the MAC address of Router3 is the destination
address.
6. Finally Router3 looks at the destination IP address and realizes that the destination network is directly connected. It finds the MAC address of the destination host and frames the packet using its own MAC address as the source and the MAC address of Host3 as the destination. At last the frame is sent out and reaches the destination host.
7. At the destination, the frame is stripped and the destination IP address is verified. Then the
IP header is stripped and the TCP segment reaches Layer 4 of the destination.
8. Now when Host3 needs to reply back to Host1, TCP will hand off the reply segment to IP.
9. IP will add a header consisting of a source address of 192.168.5.20 and a destination
address of 192.168.1.10 and will send it to layer 2 for framing.
10. By the subnet mask of Host3, it is determined that the destination lies in a remote network.
Hence the frame will need the MAC address of the default gateway as destination. If
Host3 does not have the MAC address of Router3, it will send an ARP query to get it.
Once Host3 has the MAC address, it will frame the segment and send it out to Router3.
11. Router3 strips the frame header and looks at the destination IP address in the IP header. From its routing table, it knows that the packet needs to go to Router2. It frames the packet with the source MAC address of its fa0/0 interface and the destination MAC address of Router2's fa0/1 interface, and then sends it out on the wire.
12. Router2 receives the frame and repeats the process to send the packet to Router1.
13. Router1 receives the frame from Router2 and removes the frame. By the destination IP
address it knows that the packet belongs to a directly connected interface.
14. Since it received a frame from Host1 earlier, it has the MAC address of the host mapped
to its IP address in the ARP table. The router uses that to create a frame with its fa0/0
interface’s MAC address as source and Host1’s MAC address as destination and sends
the frame out the interface.
15. When Host1 receives the frame, it verifies the destination address, strips the frame and IP
header and sends the TCP segment to layer 4.
Exam Alert: Remember that the source and destination IP addresses do not change throughout the process, while the source and destination MAC addresses change at each hop. You will see multiple questions about this on the CCNA exam! The MAC address is only locally significant and changes at every hop.
The above steps show how a TCP segment moves from its source to its destination across an internetwork. The steps assume that each router in the path knows where the destination network lies. But as you saw in the previous chapter, a new router has no configuration, and a router is not going to discover remote networks by itself. You will need to tell the router about the remote networks manually or configure it to learn the routes dynamically by talking to other routers.
Note: The network shown in Figure 4-1 will be used throughout the chapter. I strongly suggest you set up this network and configure basic connectivity. It will also let you practice everything learned in the previous chapter once again.
Types Of Routing
To be able to route a packet, a router must know at least the following:
The destination address to which the packet is destined. Layer 3 protocols such as IP take care of this.
Neighboring routers from which remote networks can be learned and to which packets can be moved on the way to their destination.
Routes to remote networks and a way to determine the best route to each of them.
A way to learn, verify and manage routing information. Incomplete, incorrect or unstable routing information is worse than not having any routing information. If a router does not have routing information, it will drop the packets and let the source know. If a router has incorrect routing information, loops can form and bring down networks.
As you would have realized by now, the essence of routing is how the router learns about the
remote networks. Routing information is stored in the routing table also called the Routing
Information Base (RIB). The RIB consists of routes to destination networks. Each route is a
combination of the destination network address, subnet mask and the next hop towards the
destination. There are three ways for a router to learn routes:
1. Static Routing – This is the method by which an administrator manually adds routes to the routing table of a router. This is a practical method for small networks, but it is not scalable for larger networks.
2. Default Routing – This is the method where all routers are configured to send all packets towards a single router. This is a very useful method for small networks or for networks with a single entry and exit point. It is usually used in addition to static and/or dynamic routing.
3. Dynamic Routing – This is the method where protocols and algorithms are used to automatically propagate routing information. This is the most common and also the most complex method of routing. Each routing protocol can have chapters or even whole books written about it, and most have one or more RFCs dedicated to them. In fact, the whole of the next chapter is dedicated to dynamic routing.
The following sections look at each of these routing types while implementing the first two types
in our example network.
Static Routing
When you manually add routes to the routing table, it is called static routing. There are advantages
and disadvantages in using static routing. The advantages are:
1. There is no overhead in terms of CPU usage of the router as well as bandwidth between
routers. When dynamic routing is used, packets are exchanged between routers and that
uses bandwidth. That can be costly when they traverse across WAN links. The routers also
need to process these packets and that consumes some CPU cycles as well.
2. It adds a certain degree of security since the administrator controls which routes the
routers can know and learn.
The disadvantages of static routing are:
1. The administrator needs to know the internetwork so well that he/she knows where each
destination network lies and which is the next hop towards it.
2. Every change needs to be manually done on each router in the internetwork.
3. In large networks this can be unmanageable.
To add a static route, use the following command in the global configuration mode:
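The syntax is ip route destination-network subnet-mask next-hop-address. For the network in Figure 4-1, the routes consistent with the routing tables shown below would be:
Router1(config)#ip route 192.168.5.0 255.255.255.0 10.1.1.2
Router2(config)#ip route 192.168.5.0 255.255.255.0 10.1.2.2
Router2(config)#ip route 192.168.1.0 255.255.255.0 10.1.1.1
Router3(config)#ip route 192.168.1.0 255.255.255.0 10.1.2.1
After adding the routes, verify them with the show ip route command on each router: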
Router1#sh ip route
Codes: C – connected, S – static, R – RIP, M – mobile, B – BGP
D – EIGRP, EX – EIGRP external, O – OSPF, IA – OSPF inter area
N1 – OSPF NSSA external type 1, N2 – OSPF NSSA external type 2
E1 – OSPF external type 1, E2 – OSPF external type 2
i – IS-IS, su – IS-IS summary, L1 – IS-IS level-1, L2 – IS-IS level-2
ia – IS-IS inter area, * – candidate default, U – per-user static route
o – ODR, P – periodic downloaded static route
Gateway of last resort is not set
S 192.168.5.0/24 [1/0] via 10.1.1.2
10.0.0.0/24 is subnetted, 1 subnets
C 10.1.1.0 is directly connected, FastEthernet0/1
C 192.168.1.0/24 is directly connected, FastEthernet0/0
Router2#sh ip route
-output truncated–
S 192.168.5.0/24 [1/0] via 10.1.2.2
10.0.0.0/24 is subnetted, 2 subnets
C 10.1.2.0 is directly connected, FastEthernet0/1
C 10.1.1.0 is directly connected, FastEthernet0/0
S 192.168.1.0/24 [1/0] via 10.1.1.1
Router3#sh ip route
-output truncated–
Gateway of last resort is not set
C 192.168.5.0/24 is directly connected, FastEthernet0/1
10.0.0.0/24 is subnetted, 1 subnets
C 10.1.2.0 is directly connected, FastEthernet0/0
S 192.168.1.0/24 [1/0] via 10.1.2.1
Though the output of the show ip route command will be discussed in detail later in this chapter and in the next chapter, here are a few things you need to know now:
1. The letter at the start of each line shows how the route was learned. The meaning of each letter is given at the beginning of the output, as can be seen from the output from Router1. C stands for directly connected routes. These are the networks to which the router is directly connected. S stands for static routes. As you can see, the routes that you added are shown in lines that start with S.
2. You should verify the network and subnet mask in the output to confirm that you typed the correct information.
3. The IP address after "via" shows the next hop address for that destination.
The outputs show that all the routes added above have taken effect and traffic can now flow between the 192.168.1.0/24 and 192.168.5.0/24 networks in both directions. You may have noticed that Router1 still does not know about the network between Router2 and Router3 (10.1.2.0/24) and Router3 does not know about the network between Router1 and Router2 (10.1.1.0/24). Though it is not necessary for them to know about these networks, from a troubleshooting perspective it is better to add routes for these networks as well, as shown below:
Router1(config)#ip route 10.1.2.0 255.255.255.0 10.1.1.2
Router3(config)#ip route 10.1.1.0 255.255.255.0 10.1.2.1
After these routes are added, the example network has complete reachability using static routing.
Default Routing
Default routing can be considered a special type of static routing. The difference between a
normal static route and a default route is that a default route is used to send packets destined to
any unknown destination to a single next hop address. To understand how this works, consider
Router1 from our example (Figure 4-2), without any static routes in it. When it receives a packet
destined to 192.168.5.0/24 it will drop it since it does not know where the destination network is.
If a default route is added in Router1 with next hop address of Router2, all packets destined to
any unknown destination, such as 192.168.5.0/24 will be sent to Router2.
Default routes are useful when dealing with a network with a single exit point. It is also useful
when a bulk of destination networks have to be routed to a single next-hop device. When adding a
default route, you should ensure that the next-hop device can route the packet further, or else the
next hop device will drop the packet.
Another point to remember is that when a more specific route to a destination exists in the routing
table, the router will use that route and not the default route. The only time the router will use the
default route is when a specific route does not exist.
The command to add a default route is same as that of adding a static route, but with the network
address and mask set to 0.0.0.0 as shown below:
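Consistent with the routing tables shown below, the default routes would be:
Router1(config)#ip route 0.0.0.0 0.0.0.0 10.1.1.2
Router3(config)#ip route 0.0.0.0 0.0.0.0 10.1.2.1
After adding them, the routing tables look like this: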
Router1#sh ip route
–output truncated–
Gateway of last resort is 10.1.1.2 to network 0.0.0.0
10.0.0.0/24 is subnetted, 1 subnets
C 10.1.1.0 is directly connected, FastEthernet0/1
C 192.168.1.0/24 is directly connected, FastEthernet0/0
S* 0.0.0.0/0 [1/0] via 10.1.1.2
Router3#sh ip route
–output truncated–
Gateway of last resort is 10.1.2.1 to network 0.0.0.0
C 192.168.5.0/24 is directly connected, FastEthernet0/1
10.0.0.0/24 is subnetted, 1 subnets
C 10.1.2.0 is directly connected, FastEthernet0/0
S* 0.0.0.0/0 [1/0] via 10.1.2.1
In the above output notice that the static route to 0.0.0.0/0 is now seen in the routing table. Apart
from that, the gateway of last resort is now the next-hop as specified in the default route.
A second way of adding a default route would be to specify the exit interface instead of the next-
hop address. For example, on Router1, you can use the following command instead of the one
used above:
Router1(config)#ip route 0.0.0.0 0.0.0.0 fa0/0
This tells the router to forward all packets destined to unknown destinations out fa0/0. While this will accomplish the same thing, the big difference is that a static route with an exit interface specified will take preference over a static route with a next-hop specified. This is because the administrative distance of a route with an exit interface is lower than that of the other one. Administrative distance is covered later in the chapter.
A third way of defining a default route is using the ip default-network command. Using this
command you can tell the router to use the next-hop address of a known network as the gateway of
last resort. For example, on Router1, you can use the following two commands to set the gateway
of last resort:
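A sketch of the two commands (the network and next hop are consistent with the earlier examples; ip default-network expects a network that is already present in the routing table):
Router1(config)#ip route 192.168.5.0 255.255.255.0 10.1.1.2
Router1(config)#ip default-network 192.168.5.0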
Assume that you had one routing protocol between RouterA, RouterB and RouterD while another
protocol between RouterA, RouterC and RouterD. (Yes, a router can run multiple routing
protocols at the same time). When both the routing protocols tell RouterA about the
192.168.5.0/24 network, which protocol’s route should it choose?
The answer lies in the AD, which is the trustworthiness of routing information received by a
router and it depends on the method or protocol by which that route was learned. What this means
is, that each protocol has an AD value, which defines the trustworthiness of the routes it tells a
router about. This value can be from 0 to 255, with a lower value being better. Each protocol has
been assigned a default AD.
When RouterA receives information regarding the same network, 192.168.5.0/24, from two sources, it will compare the AD value of each source and the route from the source with the lowest value will be selected.
On the other hand, if a single routing protocol was running on all routers, the routing protocol
would see the multiple paths to the remote network and choose the best path depending on
the metric. A metric, or cost, of a route is calculated differently by each protocol.
Table 4-1 shows the default AD value of various route sources.
Table 4-1 Default AD Values
Route Source Default AD Value
Connected Interfaces 0
Static Route 1
EIGRP 90
OSPF 110
External EIGRP 170
Unknown 255
As you can see from the table above, a connected route will be preferred over a static route, while a static route will be selected over any dynamic route. Similarly, an EIGRP route will be preferred over an OSPF route. Note that any route with an AD value of 255 will never be used.
It is important to remember the following when it comes to choosing routes:
1. When a routing protocol has more than one path to a destination, it will use metrics to
present a route to the router.
2. When a router is presented with multiple routes to a destination, it will use AD to decide
which one to use and will install that route in the routing table.
3. Finally, when a router needs to route a packet, it will look at the routing table and use the route with the longest matching prefix (subnet mask). For example, if two routes are present in the routing table – 10.1.1.0/24 and 10.1.1.0/28 – and a packet destined to 10.1.1.1 is received, the router will select the 10.1.1.0/28 route since it is the route with the longest matching prefix.
Exam Alert: It is important to remember where AD is used and where metric is used. When it
comes to actually routing the packet, the router will only look at the information in the routing
table. AD and metrics are used to decide which route goes into the routing table only. You will
surely see questions on the CCNA exam where they try to confuse you on how the route is
selected.
You should note that the ADs given in Table 4-1 are defaults and can easily be changed on any router. While changing the AD of dynamic routing protocols is outside the scope of CCNA, you should know how to change the AD of static routes. You will recall that the command to add a static route is:
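ip route destination-network subnet-mask next-hop-address
To change the AD of a static route, append the desired distance value to the end of the command. For example, the following route (the addresses are illustrative) would have an AD of 150 instead of the default 1, so it would only be used if no route with a lower AD to the same destination exists:
Router1(config)#ip route 192.168.5.0 255.255.255.0 10.1.1.2 150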
When converged, all the routers in the network shown above will know about the 192.168.5.0/24
network. If RouterD looses connectivity to 192.168.5.0/24, it will remove the route to that
network from its routing table. When RouterC receives the next periodic update from RouterD, it
will know that the route to 192.168.5.0/24 is lost, and will remove it from its routing table. At
this stage, RouterA and RouterB still think that 192.168.5.0/24 is reachable via RouterC.
While RouterC waits to send out its next periodic update, if RouterB sends its own update, that update will contain 192.168.5.0/24 as a destination network. Since RouterC no longer has that network in its routing table, it will assume that it is a new destination that RouterB knows about and will install a route to that network pointing towards RouterB. After this, the periodic update from RouterC will contain the 192.168.5.0/24 network, and RouterB will assume that RouterC knows how to reach all the networks contained in that update!
Now when RouterB receives a packet destined to 192.168.5.0/24, it will forward it to RouterC. When RouterC receives that packet, it will see that 192.168.5.0/24 is reachable via RouterB and will send it back. This loop will continue until the TTL value in the IP header reaches zero and one of the routers drops the packet.
To prevent against such routing loops, distance vector protocols have some checks in place.
These checks are discussed in the following sections.
Maximum Hop Count
Without checks in place, the wrong routing information can spread throughout the network. To
prevent this, protocols such as RIP have a maximum hop count. For RIP this value is set to 15.
Any route with more than the maximum hop count is deemed unreachable and will not be used. In the above scenario, the original hop count of 192.168.5.0/24 on RouterB was 2. After RouterD lost connectivity and RouterC learned the wrong information from RouterB, RouterC would see 192.168.5.0/24 at a hop count of 3. When RouterB gets this update back from RouterC, it will add 1 to the hop count and make it 4. Without a check in place, this cycle would go on forever; this phenomenon is called counting to infinity. With a maximum hop count in place, the increasing hop count eventually causes the route to be deemed unreachable and removed from the routing table, resolving the loop.
Split Horizon
The split horizon rule states that routing information learned from one interface cannot be
advertised back out of that interface. With this rule in place in the above scenario, RouterB would never have advertised the 192.168.5.0/24 network back to RouterC, since that is where it learned the route from. Hence, a routing loop would never occur. By default, split horizon is enabled for RIP
and EIGRP.
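Although you will rarely need to change this behavior, split horizon is controlled per interface in Cisco IOS. The following is only an illustrative sketch (the interface name is arbitrary and these commands are not needed for the scenario above); disabling split horizon is normally only considered on hub-and-spoke WAN interfaces where a single interface reaches multiple neighbors:
Router(config)#interface Serial0/0
Router(config-if)#no ip split-horizon
Router(config-if)#ip split-horizon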
Route Poisoning
Route poisoning uses the maximum hop count to stop routing loops. When a router loses a route, it advertises that route with a hop count of more than the maximum hop count. The receiving
router now finds the destination network unreachable and advertises it ahead as such. It also
sends the update back towards the source router to ensure that the route is now poisoned in the
entire network. This process is called poison reverse.
In the above network, when RouterD loses 192.168.5.0/24, it would advertise the route to
RouterC with a hop count of more than the maximum hop count. RouterC in turn will update
RouterB. This is the route poisoning process. RouterC also sends the poisoned route back to
RouterD to ensure that the whole network is in sync. This is the poison reverse process.
Hold Downs
Routing protocols implement timers to allow lost routes to recover or to allow a switch to the next best route to the same destination. These timers are called hold down timers. They are typically useful in the case of links going down and coming back up rapidly (this is called flapping). A route going in and out of the routing table repeatedly can cause loops and stop the network from converging. Hold down timers also prevent changes that affect a route which was recently lost.
In the above example, a hold down timer would have prevented the update from RouterB from affecting RouterC immediately after the route to 192.168.5.0/24 was lost. In the meantime, RouterC would have updated RouterB about the lost route.
Exam Alert: All of the loop prevention methods are important topics on the CCNA exam.
Route Redistribution
While you are not really going to redistribute routes as a part of CCNA, it is important to know
what it is. Simply put, route redistribution is the process of distributing routes learned from one
source to another. This is useful when networks are expanding, or are merging, or in a phase of
transition.
For example, assume that RIP is being used in a growing network. Beyond a hop count of 15, it
will become impossible to use RIP. In this situation, you will need to switch to another routing
protocol. While switching, two protocols would need to co-exist in the network while
maintaining complete reachability. Redistribution of routes from RIP to the new protocol and vice
versa can achieve this.
Another example where you would need to use redistribution is when a company acquires another
and their networks need to merge. If both the networks were using different routing protocols,
redistribution between these protocols can provide full connectivity with the least amount of
effort.
Figure 4-5 Route Redistribution
A few important points that you should remember about route redistribution are:
1. The routing protocol receiving the redistributed routes will mark them as external. External routes are less preferred than internal routes.
2. Routes can only be redistributed at routers that run both routing protocols. For example, Figure 4-5 shows RouterB running both EIGRP and OSPF. In the network shown, routes can be redistributed on RouterB only.
3. It is possible to redistribute between two different processes or autonomous systems (ASes) of the same protocol. For example, if you have two EIGRP ASes running on a router, you can redistribute between them.
4. Static and connected routes can also be redistributed.
5. Only routes present in routing tables can be redistributed. For example, if a static route
points to an unknown next-hop, it will not be present in the routing table and cannot be
redistributed.
6. When redistributing routes, you have to ensure metric compatibility. For example, EIGRP
metrics can be large numbers while any metric above 15 is considered invalid in RIP. In
such cases, you have to tell the receiving routing protocol how to translate the metrics.
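As a rough illustration of the last point (you will not be asked to configure this for CCNA), redistributing RIP routes into EIGRP requires seeding the EIGRP composite metric, since EIGRP cannot interpret a RIP hop count. The AS number and the metric values below (bandwidth, delay, reliability, load and MTU) are arbitrary example values:
Router(config)#router eigrp 10
Router(config-router)#redistribute rip metric 10000 100 255 1 1500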
Static & Default Route Lab
Lab 4-1 Static and Default Route
Problem: In the network shown in Figure 4-6, configure each router using static and default routes such that there is complete connectivity throughout the network.
The initial configuration of each router is given below. Lab note: The DCE side of your
DCE/DTE back to back cable plugs into the interface with the clockrate configured. If you
neglect this, the lab will not work as the interface will not stay up.
RouterA
Router#config terminal
Router(config)#hostname RouterA
RouterA(config)#int fa0/0
RouterA(config-if)#ip address 192.168.1.1 255.255.255.0
RouterA(config-if)#no shut
RouterA(config-if)#exit
RouterA(config)#interface Serial0/0
RouterA(config-if)#ip address 192.168.3.1 255.255.255.252
RouterA(config-if)#no shut
RouterA(config-if)#exit
RouterA(config)#interface Serial0/1
RouterA(config-if)#ip address 192.168.3.5 255.255.255.252
RouterA(config-if)#clock rate 2000000
RouterA(config-if)#no shut
RouterA(config-if)#exit
RouterB
Router#config terminal
Router(config)#hostname RouterB
RouterB(config)#interface FastEthernet0/0
RouterB(config-if)# ip address 192.168.2.1 255.255.255.0
RouterB(config-if)#no shut
RouterB(config-if)#exit
RouterB(config)#interface Serial0/0
RouterB(config-if)# ip address 192.168.3.2 255.255.255.252
RouterB(config-if)#no shut
RouterB(config-if)#exit
RouterB(config)#interface Serial0/1
RouterB(config-if)# ip address 192.168.3.9 255.255.255.252
RouterB(config-if)#clock rate 2000000
RouterB(config-if)#no shut
RouterB(config-if)#exit
RouterC
Router#config t
Router(config)#hostname RouterC
RouterC(config)#interface FastEthernet0/0
RouterC(config-if)#ip address 200.1.20.1 255.255.255.0
RouterC(config-if)#no shut
RouterC(config-if)#exit
RouterC(config)#interface Serial0/0
RouterC(config-if)# ip address 192.168.3.10 255.255.255.252
RouterC(config-if)#no shut
RouterC(config-if)#exit
RouterC(config)#interface Serial0/1
RouterC(config-if)# ip address 192.168.3.6 255.255.255.252
RouterC(config-if)#no shut
RouterC(config-if)#exit
Figure 4-6 Lab 4-1
Solution:
To provide full connectivity across the network, each router will require static routes to the
different networks attached to the routers. To reach the Internet, all routers will require a default
route. The solution is shown below:
RouterA(config)#ip route 192.168.2.0 255.255.255.0 192.168.3.2
RouterA(config)#ip route 192.168.3.8 255.255.255.252 192.168.3.2
RouterA(config)#ip route 0.0.0.0 0.0.0.0 192.168.3.6
RouterB(config)#ip route 192.168.1.0 255.255.255.0 192.168.3.1
RouterB(config)#ip route 192.168.3.4 255.255.255.252 192.168.3.1
RouterB(config)#ip route 0.0.0.0 0.0.0.0 192.168.3.10
RouterC(config)#ip route 192.168.1.0 255.255.255.0 192.168.3.5
RouterC(config)#ip route 192.168.2.0 255.255.255.0 192.168.3.9
RouterC(config)#ip route 192.168.3.0 255.255.255.252 192.168.3.5
RouterC(config)#ip route 0.0.0.0 0.0.0.0 200.1.20.2
Verification:
To verify, first check the routing table of each router:
RouterA#sh ip route
–output truncated–
Gateway of last resort is 192.168.3.6 to network 0.0.0.0
C 192.168.1.0/24 is directly connected, FastEthernet0/0
S 192.168.2.0/24 [1/0] via 192.168.3.2
192.168.3.0/30 is subnetted, 3 subnets
S 192.168.3.8 [1/0] via 192.168.3.2
C 192.168.3.0 is directly connected, Serial0/0
C 192.168.3.4 is directly connected, Serial0/1
S* 0.0.0.0/0 [1/0] via 192.168.3.6
RouterB#sh ip route
–output truncated–
Gateway of last resort is 192.168.3.10 to network 0.0.0.0
S 192.168.1.0/24 [1/0] via 192.168.3.1
C 192.168.2.0/24 is directly connected, FastEthernet0/0
192.168.3.0/30 is subnetted, 3 subnets
C 192.168.3.8 is directly connected, Serial0/1
C 192.168.3.0 is directly connected, Serial0/0
S 192.168.3.4 [1/0] via 192.168.3.1
S* 0.0.0.0/0 [1/0] via 192.168.3.10
RouterC#sh ip route
–output truncated–
Gateway of last resort is 200.1.20.2 to network 0.0.0.0
C 200.1.20.0/24 is directly connected, FastEthernet0/0
S 192.168.1.0/24 [1/0] via 192.168.3.5
S 192.168.2.0/24 [1/0] via 192.168.3.9
192.168.3.0/30 is subnetted, 3 subnets
C 192.168.3.8 is directly connected, Serial0/0
S 192.168.3.0 [1/0] via 192.168.3.5
C 192.168.3.4 is directly connected, Serial0/1
S* 0.0.0.0/0 [1/0] via 200.1.20.2
You can also use the ping command to verify connectivity across the network, as shown below:
RouterA#ping 192.168.2.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.2.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/5/16 ms
RouterA#ping 200.1.20.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 200.1.20.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/4 ms
RouterA#ping 192.168.3.10
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.3.10, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/4 ms
RouterB#ping 200.1.20.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 200.1.20.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/4 ms
RouterB#ping 192.168.3.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.3.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/4 ms
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.1.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/4 ms
RouterC#ping 192.168.2.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.2.1, timeout is 2 seconds:
!!!!!
Summary
While this chapter was light reading compared to the previous chapters, it lays the foundation for
the next chapter where you will learn the traits of individual protocols and how to configure them.
It is essential that you are able to use static and default routing well before heading into routing
protocols.
It is also important to understand the difference between administrative distance and metrics and
where each is used.
Chapter 5: Routing Protocols
The previous chapter introduced you to IP routing and discussed the basics of dynamic routing. This chapter continues from where the last one ended and looks at three routing protocols – RIP, EIGRP and OSPF – in detail. In this chapter you will learn about the traits of each of these protocols, how to configure them, and how to troubleshoot them.
5-1 RIPv1 & RIPv2
5-2 Configuring RIPv1 & RIPv2
5-3 Verifying and Troubleshooting RIP
5-4 Enhanced Interior Gateway Routing Protocol (EIGRP)
5-5 Configuring EIGRP
5-6 Verifying and Troubleshooting EIGRP
5-7 Open Shortest Path First (OSPF)
5-8 Configuring OSPF
5-9 Verifying and Troubleshooting OSPF
5-10 EIGRP and OSPF Summary & Redistribution Routes
5-11 Lab 5-1: RIP
5-12 Lab 5-2: EIGRP
5-13 Lab 5-3: OSPF
5-14 Summary
RIPv1 & RIPv2
Routing Information Protocol (RIP)
Although the new version of the CCNA exam 200-120 does not cover RIP, we want to touch on it
for its historical value. This way you understand some of the basic characteristics of it and how a
hybrid protocol such as EIGRP took some distance vector based features from a true distance
vector protocol. So simply read through this to have a basic foundation of RIP and do not worry
about it from a test perspective.
As you already know, RIP is a distance-vector protocol. In fact, it is the only distance vector
protocol that is widely used today. There are two versions of RIP that can be used – RIP version
1 (RIPv1) and RIP version 2 (RIPv2). To make it easier to understand, this section first looks at
RIPv1.
RIPv1 was originally defined in RFC 1058 and is a classful protocol. Hence, it does not
advertise subnet mask information and assumes the default subnet mask based on the class of the
network.
When a router starts up, it recognizes the connected networks and adds them to its routing table as
connected routes (denoted by C in the routing table). When RIP is enabled, it will broadcast the
routing table using UDP port 520. All neighboring routers that have RIP enabled will get this
broadcast update and add the routes received in the update to their routing table. Each of these
neighbors will in turn broadcast out their routing tables. This will cause the routing tables across
the network to converge.
Being a distance-vector protocol, RIP has the following characteristics:
1. It sends out its entire routing table every 30 seconds.
2. It uses hop counts as metric and has a maximum hop count limit of 15.
3. It implements split horizon, route poisoning and holddown timers to prevent routing loops.
4. It has a high convergence time.
RIP Timers
Notice that there are two timers mentioned above. RIP actually uses four different timers. To understand these timers, consider the network shown in Figure 5-1. If RIP is enabled on all the routers, then after convergence all the routers will know about the 192.168.5.0/24 network.
Figure 5-1 Understanding RIP Timers
Now take a look at the four timers used by RIP:
Route update timer – RIP periodically broadcasts its entire routing table. This timer sets the interval between these updates, which is 30 seconds by default.
Route invalid timer – If a router does not hear any updates about a particular route for a certain duration, it will consider that route invalid. The invalid timer determines this duration, which is 180 seconds by default. When a route becomes invalid, the router will send out poisoned routes to its neighbors. In the network above, if RouterC loses connectivity to RouterD, it will not hear about the 192.168.5.0/24 network. It will wait 180 seconds before considering the route invalid and sending out poisoned routes.
Holddown timer – When a route becomes invalid, it enters into a holddown state. In this
state the route will remain in the routing table and packets will be forwarded towards the
destination but the router will not accept any updates regarding this route unless the update
contains a metric equal to or better than the existing metric. The holddown timer
determines the duration of the holddown state. By default this duration is 180 seconds.
This state is useful to ensure that flapping routes do not cause instability. In the network
above, when RouterB gets the poisoned route from RouterC, it will put the route to
192.168.5.0/24 in the holddown state for 180 seconds. If RouterC regains connectivity to
RouterD and updates RouterB, the route will be removed from the holddown state.
Route flush timer – Once a route becomes invalid, it is put in a holddown state. While in the holddown state, the route is still in the routing table and will remain so for the duration specified by the flush timer. Once this timer expires, the route is flushed out of the routing table. By default this timer is 240 seconds and starts at the same time as the invalid timer; hence the flush timer must be longer than the invalid timer. In the above example, RouterA, RouterB and RouterC will remove the route to 192.168.5.0/24 60 seconds after it was marked invalid.
The timers can be a little confusing. To make it easier to understand, remember that:
1. The invalid timer and the flush timer both start when the router receives an update for a route. Each time an update is received, both timers are reset.
2. If an update for a route is not heard for the duration of the invalid timer, the route is marked invalid and the holddown timer is started.
3. While the route is in the holddown state, the router will not accept an inferior route for that destination. An inferior route is an update with a metric worse than the existing one.
4. The route will be removed when the flush timer expires.
In the above network, the route to 192.168.5.0/24 becomes invalid 180 seconds after RouterC loses connectivity to RouterD. At this stage, 60 seconds are left on the flush timer. Hence, 60 seconds after the route became invalid, it will be removed from the routing table. As you can see, it takes a total of 240 seconds, or four minutes, for a lost route to be removed from the routing tables across the network.
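For reference, these timers can be seen in the output of show ip protocols and adjusted with the timers basic command under the RIP configuration. The example below simply re-applies the default values (update 30, invalid 180, holddown 180, flush 240) and is shown for illustration only; if you do change the timers, they should match on all RIP routers:
Router(config)#router rip
Router(config-router)#timers basic 30 180 180 240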
Configuring RIPv1 & RIPv2
Configuring RIPv1
Configuring RIP is pretty easy and consists of the following two steps:
1. Enable RIP globally using the router rip global configuration command. This command
will bring you to the routing configuration mode as shown below:
Router(config)#router rip
Router(config-router)#
2. Tell the router which networks to advertise using the network <network> command in the routing configuration mode as shown below:
Router(config-router)#network 192.168.0.0
Remember that the network command is used to tell the router which connected networks you want to advertise. Any routes learned from other routers will automatically be advertised out. Since RIPv1 is being used, the network command will accept classful networks only. As soon as
the network command is given, RIP will begin sending out updates as well as processing updates
received from neighbors.
The network shown in Figure 5-2 will be used for the rest of the RIP sections.
Now that you know how RIP works and how to configure it, let us configure the network shown in
Figure 5-2 to see the effect of RIP on the routing table. For this example, we will enable RIP on
RouterA, RouterB and RouterD only. RouterC will be configured in one of the sections ahead.
The configuration required on the three routers is shown below:
RouterA(config)#router rip
RouterA(config-router)#network 192.168.1.0
RouterA(config-router)#network 192.168.2.0
RouterB(config)#router rip
RouterB(config-router)#network 192.168.2.0
RouterB(config-router)#network 192.168.3.0
RouterD(config)#router rip
RouterD(config-router)#network 192.168.3.0
RouterD(config-router)#network 192.168.4.0
Figure 5-2 RIP example
Now take a look at the routing table on each of the three routers to see the effect:
RouterA#sh ip route
–output truncated–
Gateway of last resort is not set
R 192.168.4.0/24 [120/2] via 192.168.2.2, 00:00:25, FastEthernet0/1
C 192.168.1.0/24 is directly connected, FastEthernet0/0
C 192.168.2.0/24 is directly connected, FastEthernet0/1
R 192.168.3.0/24 [120/1] via 192.168.2.2, 00:00:25, FastEthernet0/1
RouterB#sh ip route
–output truncated–
Gateway of last resort is not set
R 192.168.4.0/24 [120/1] via 192.168.3.4, 00:00:19, FastEthernet0/1
R 192.168.1.0/24 [120/1] via 192.168.2.1, 00:00:15, FastEthernet0/0
C 192.168.2.0/24 is directly connected, FastEthernet0/0
C 192.168.3.0/24 is directly connected, FastEthernet0/1
RouterD#sh ip route
–output truncated–
Gateway of last resort is not set
C 192.168.4.0/24 is directly connected, FastEthernet0/1
R 192.168.1.0/24 [120/2] via 192.168.3.2, 00:00:23, FastEthernet0/0
R 192.168.2.0/24 [120/1] via 192.168.3.2, 00:00:23, FastEthernet0/0
C 192.168.3.0/24 is directly connected, FastEthernet0/0
In the above output, note the lines that start with R. The R signifies that these routes were learned via RIP. In the output from RouterA, notice that the route to the 192.168.4.0/24 network was learned from RIP. The 120/2 in that line shows that the administrative distance of the route is 120 (the default RIP AD) and that the destination network is two hops away. The next hop towards 192.168.4.0/24 is 192.168.2.2, which is RouterB. Similarly, you will notice that each router now knows about every subnet in the network. You may also have noticed that compared to static or default routing, configuring RIP was easier and faster. Now, when there is a change in the network, the routing tables will automatically get updated across the network.
RIP version 2 (RIPv2)
RIPv1 was one of the earliest routing protocols and was very popular back when it was created. As networking standards evolved, RIP was found lacking in many areas. Hence RIPv2 was developed in 1993 and standardized under RFC 2453. While RIPv2 is also a distance-vector routing protocol and fundamentally similar to RIPv1, there are some differences in the way it works. Table 5-1 shows the differences between RIPv1 and RIPv2.
Table 5-1 Differences between RIPv1 and RIPv2
RIPv1: A classful protocol; does not send subnet masks in routing updates. RIPv2: A classless protocol; sends subnet masks in routing updates.
RIPv1: Uses broadcast to communicate with neighbors. RIPv2: Uses multicast (address 224.0.0.9) to communicate with peers.
RIPv1: Does not support authentication. RIPv2: Supports authentication.
RIPv1: Does not support VLSM. RIPv2: Supports VLSM.
Remember that apart from the differences given in Table 5-1, RIPv2 is similar to RIPv1, with a maximum hop count of 15 and the same timers as RIPv1. It also implements the same loop prevention techniques as RIPv1. The configuration for RIPv2 is the same as RIPv1 but requires the addition of the version 2 command in the routing configuration mode. RouterA, RouterB and RouterD from our previous example can be configured to use RIPv2 as shown below:
RouterA(config)#router rip
RouterA(config-router)#version 2
RouterB(config)#router rip
RouterB(config-router)#version 2
RouterD(config)#router rip
RouterD(config-router)#version 2
Take a look at the routing tables of these routers after the change:
RouterA#sh ip route
–output truncated–
Gateway of last resort is not set
R 192.168.4.0/24 [120/2] via 192.168.2.2, 00:00:11, FastEthernet0/1
C 192.168.1.0/24 is directly connected, FastEthernet0/0
C 192.168.2.0/24 is directly connected, FastEthernet0/1
R 192.168.3.0/24 [120/1] via 192.168.2.2, 00:00:11, FastEthernet0/1
RouterB#sh ip route
–output truncated–
Gateway of last resort is not set
R 192.168.4.0/24 [120/1] via 192.168.3.4, 00:00:20, FastEthernet0/1
R 192.168.1.0/24 [120/1] via 192.168.2.1, 00:00:05, FastEthernet0/0
C 192.168.2.0/24 is directly connected, FastEthernet0/0
C 192.168.3.0/24 is directly connected, FastEthernet0/1
RouterD#sh ip route
–output truncated–
Gateway of last resort is not set
C 192.168.4.0/24 is directly connected, FastEthernet0/1
R 192.168.1.0/24 [120/2] via 192.168.3.2, 00:00:07, FastEthernet0/0
R 192.168.2.0/24 [120/1] via 192.168.3.2, 00:00:07, FastEthernet0/0
C 192.168.3.0/24 is directly connected, FastEthernet0/0
You will notice that the routing table output is the same irrespective of the RIP version used. The output will only differ between the two versions if the default mask is not used for the given class. In such a case, when RIPv2 is used the correct subnet mask is seen on the neighbor, while in the case of RIPv1 the neighbor assumes the default subnet mask. To show this difference in the routing table, I temporarily added a 192.168.20.0/25 network on RouterA and advertised it using RIPv1. The output of the routing table from RouterB is shown below:
RouterB#sh ip route
–output truncated–
Gateway of last resort is not set
R 192.168.4.0/24 [120/1] via 192.168.3.4, 00:00:00, FastEthernet0/1
R 192.168.20.0/24 [120/1] via 192.168.2.1, 00:00:11, FastEthernet0/0
R 192.168.1.0/24 [120/1] via 192.168.2.1, 00:00:11, FastEthernet0/0
C 192.168.2.0/24 is directly connected, FastEthernet0/0
C 192.168.3.0/24 is directly connected, FastEthernet0/1
In the above output, notice that the route to the 192.168.20.0 network has a mask of /24 instead of /25. When the version was changed to 2, notice the routing table output on RouterB:
RouterB#sh ip route
–output truncated–
Gateway of last resort is not set
R 192.168.4.0/24 [120/1] via 192.168.3.4, 00:00:03, FastEthernet0/1
192.168.20.0/25 is subnetted, 1 subnets
R 192.168.20.0 [120/1] via 192.168.2.1, 00:00:03, FastEthernet0/0
R 192.168.1.0/24 [120/1] via 192.168.2.1, 00:00:03, FastEthernet0/0
C 192.168.2.0/24 is directly connected, FastEthernet0/0
C 192.168.3.0/24 is directly connected, FastEthernet0/1
Notice that the mask for 192.168.20.0 is correctly displayed as /25 when RIPv2 is used.
Stopping RIP updates on an Interface
As soon as RIP is enabled, it will start sending and receiving updates on its interfaces. Many situations require you to stop RIP from sending updates out of an interface. An example of such a situation is when an interface connects to the Internet. You do not want your routing updates to go out to the Internet. In such situations, you can use the passive-interface <interface> command in the routing configuration mode to stop RIP from sending updates out of that interface. This command stops RIP from sending updates, but the router will continue to receive updates on that interface.
In our example network, we do not need to send RIP updates out of interface fa0/0 on RouterA and interface fa0/1 on RouterD. We can stop updates going out of these interfaces using the following commands:
RouterA(config)#router rip
RouterA(config-router)#passive-interface fa0/0
RouterD(config)#router rip
RouterD(config-router)#passive-interface fa0/1
RIP load balancing
Remember that we did not configure RouterC earlier? Let us configure RouterC to run RIP across
both its networks as shown below:
RouterC(config)#router rip
RouterC(config-router)#network 192.168.2.0
RouterC(config-router)#network 192.168.3.0
After the above configuration, the routing table on RouterC looks as shown below:
RouterC#sh ip route
–output truncated–
Gateway of last resort is not set
R 192.168.4.0/24 [120/1] via 192.168.3.4, 00:00:16, FastEthernet0/1
R 192.168.1.0/24 [120/1] via 192.168.2.1, 00:00:11, FastEthernet0/0
C 192.168.2.0/24 is directly connected, FastEthernet0/0
C 192.168.3.0/24 is directly connected, FastEthernet0/1
The output above is similar to what was seen on RouterB. So why did we not configure RouterC earlier? Take a look at the routing table of RouterA after we enabled RIP on RouterC:
RouterA#sh ip route
–output truncated–
Gateway of last resort is not set
R 192.168.4.0/24 [120/2] via 192.168.2.3, 00:00:18, FastEthernet0/1
[120/2] via 192.168.2.2, 00:00:10, FastEthernet0/1
C 192.168.1.0/24 is directly connected, FastEthernet0/0
C 192.168.2.0/24 is directly connected, FastEthernet0/1
R 192.168.3.0/24 [120/1] via 192.168.2.3, 00:00:18, FastEthernet0/1
[120/1] via 192.168.2.2, 00:00:10, FastEthernet0/1
In the above output notice that RouterA’s routing table has two paths listed to 192.168.4.0/24 and
192.168.3.0/24. Similarly, RouterD has two paths listed for 192.168.1.0/24 and 192.168.2.0/24
as shown below:
RouterD#sh ip route
–output truncated–
Gateway of last resort is not set
C 192.168.4.0/24 is directly connected, FastEthernet0/1
R 192.168.1.0/24 [120/2] via 192.168.3.3, 00:00:07, FastEthernet0/0
[120/2] via 192.168.3.2, 00:00:05, FastEthernet0/0
R 192.168.2.0/24 [120/1] via 192.168.3.3, 00:00:07, FastEthernet0/0
[120/1] via 192.168.3.2, 00:00:05, FastEthernet0/0
C 192.168.3.0/24 is directly connected, FastEthernet0/0
To explain this behavior, consider what happened when RouterC started advertising its routes to RouterA and RouterD. Until that point, RouterA had only one way to reach 192.168.3.0/24 and 192.168.4.0/24, with hop counts of 1 and 2 respectively. When RouterC advertised its routes to RouterA, it also advertised the networks 192.168.3.0/24 and 192.168.4.0/24 with hop counts of 1 and 2. At this stage, RouterA has two paths to the same destinations and both paths have the same metric. As you already know, when a routing protocol has multiple paths to a destination, it compares the metric to decide which path to use. In this case we have two equal cost paths. When a routing protocol has two or more equal cost paths, it will use all of them and the traffic will be load balanced across those paths. Hence in the above outputs you see two paths for the destination networks.
By default, RIP can load balance between 4 equal cost paths. Older Cisco IOS releases support load balancing across a maximum of 6 equal cost paths, while newer releases support load balancing across a maximum of 16 equal cost paths. You can change the default value of 4 using the maximum-paths <number> command under the routing configuration mode, as shown in the sketch below.
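For example, the following would limit RIP to two equal cost paths. The value of 2 is just an illustration; the command is entered under the routing configuration mode:
Router(config)#router rip
Router(config-router)#maximum-paths 2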
Verifying & Troubleshooting RIP
Knowing how to verify and troubleshoot a protocol or feature is as important as knowing how to configure it, because configurations do contain errors, and assuming that everything is working correctly can lead to major network problems. The following three commands are used to verify and troubleshoot RIP:
show ip route
show ip protocols
debug ip rip
The show ip route command has been covered in the previous chapter and earlier in this chapter. Ultimately, a complete and correct routing table across the network is the best verification of a routing protocol. The other two commands are covered below.
Using show ip protocols command to verify and troubleshoot RIP
As you already know, the show ip protocols command helps verify routing protocols running on
the router. An example of the output of this command from RouterA of our network is shown
below:
RouterA#sh ip protocols
Routing Protocol is “rip”
Outgoing update filter list for all interfaces is not set
Incoming update filter list for all interfaces is not set
Sending updates every 30 seconds, next due in 27 seconds
Invalid after 180 seconds, hold down 180, flushed after 240
Redistributing: rip
Default version control: send version 1, receive version 1
Interface Send Recv Triggered RIP Key-chain
FastEthernet0/1 1 1
Automatic network summarization is not in effect
Maximum path: 4
Routing for Networks:
192.168.1.0
192.168.2.0
Passive Interface(s):
FastEthernet0/0
Routing Information Sources:
Gateway Distance Last Update
192.168.2.2 120 00:00:00
192.168.2.3 120 00:00:22
Distance: (default is 120)
Notice in the above output that RIP is being used on the router and that it is routing for the 192.168.1.0 and 192.168.2.0 networks. RIPv1 is being used for both sending and receiving, and updates are being sent every 30 seconds. It also shows that fa0/0 is a passive interface.
While the output is very useful in verifying the configuration, it is also useful for troubleshooting.
Looking at the above output, it is fairly easy to figure out what has been configured. For example,
the output shows that it is routing for networks 192.168.1.0 and 192.168.2.0 so the following
network commands should be present under the routing configuration:
Router(config-router)#network 192.168.1.0
Router(config-router)#network 192.168.2.0
In addition to that, the fa0/0 interface is shown as passive, hence the passive-interface
fa0/0 command is also present in the configuration. A quick look at the show ip interface
brief command will show you the interfaces of the router and their IP address:
RouterA#sh ip int br
Interface IP-Address OK? Method Status Protocol
FastEthernet0/0 192.168.1.1 YES manual up up
FastEthernet0/1 192.168.2.1 YES manual up up
Comparing the above outputs, it is easy to see that RIP is running on the correct interfaces and
networks.
Another important thing the output shows you is that the router is sending and receiving RIPv1 updates. You can confirm the versions across the routers in the network to rule out a version mismatch if routing updates are not seen on some routers.
Using debug ip rip command to troubleshoot RIP
The debug ip rip command displays routing updates on the console as soon as they are sent or received. This output is useful to see whether updates are being sent to and received from the neighbors. The following example shows the output of the command on RouterA:
RouterA#
Jul 21 00:58:11: RIP: ignored v2 packet from 192.168.2.2 (illegal version)
On the other hand, the following output is seen on RouterB when it receives updates from
RouterA:
RouterB#
Jul 21 01:00:01: RIP: ignored v1 packet from 192.168.3.4 (illegal version)
The two outputs above clearly show that there is a version mismatch and you will need to use
the version command in the routing configuration mode to fix it.
Enhanced Interior Gateway Routing Protocol (EIGRP)
While RIP is a good protocol to use in a small and simple network, its disadvantages become
obvious in large and complex networks. A few of the problems associated with RIP in such networks are:
1. It has a maximum hop count of 15. This means that RIP cannot be used on a network
spanning more than 15 routers
2. It uses hop count as the sole metric even where multiple paths are available. Hop count is
not a suitable metric since links can have varied bandwidths. For example, in Figure 5-2 if
the link between RouterA and RouterB has a bandwidth of 1Mbps while the link between
RouterA and RouterC has a bandwidth of 128Kbps, RIP will still consider both links
equal since the hop count is same. It is usually desirable to use the better link before the
slower one.
3. It has a high convergence time.
Due to these disadvantages, other routing protocols such as EIGRP should be considered in place
of RIP.
EIGRP is a Cisco proprietary classless routing protocol that is essentially an enhanced distance
vector protocol or a hybrid protocol. It takes various features of distance vector protocols and
link state protocols to overcome the disadvantages associated with distance vector protocols
while retaining the simplicity associated with them.
EIGRP inherits the following features of a distance vector protocol:
1. It has a maximum hop count limit of 100 by default, which can be increased up to 255.
2. It uses a routing-by-rumor mechanism.
3. It implements loop avoidance techniques such as split horizon.
It inherits the following features of a link state protocol:
1. It discovers neighbors and periodically checks their status.
2. Instead of sending periodic updates, it sends updates only when a change occurs.
EIGRP has some features that make it stand out from other protocols such as RIP and OSPF. While discussing each of them is out of the scope of CCNA, the most important ones are listed below:
1. Supports multiple routed protocols such as IPv4, IPv6, Appletalk, IPX etc via protocol-
dependent modules (PDMs)
2. Is a classless protocol and supports VLSM/CIDR.
3. Supports summaries and discontiguous networks
4. Uses neighbor discovery.
5. Utilizes Reliable Transport Protocol (RTP) for communication between neighbors
6. Uses Diffusing Update Algorithm (DUAL) for best path selection. This algorithm
considers multiple metrics for the purpose.
The following sections look at the various features of EIGRP in detail.
Multiple Network Protocol Support
EIGRP provides support for multiple Network layer protocols such as IPv4, IPv6, IPX and
Appletalk. It supports these protocols through the use of Protocol Dependent Modules (PDMs).
Separate tables are maintained for each network layer protocol for which EIGRP is being run.
While you will be learning about EIGRP only with respect to IPv4, it is important to remember that EIGRP supports multiple protocols. The only other routing protocol that supports multiple network layer protocols is Intermediate System-to-Intermediate System (IS-IS).
Neighbor Discovery and Communication
One of the most important features that EIGRP adopts from link state protocols is neighbor
discovery and adjacency formation. Unlike distance vector protocols, link state protocols and
EIGRP will not exchange routes with just anyone. Routers running EIGRP will first discover
other routers running EIGRP by sending out Hello packets. These packets are multicast to the address 224.0.0.10. When two routers receive Hello packets from each other, they compare the following information found in the packets:
1. Autonomous System (AS) Number – A router can belong to one or more EIGRP autonomous systems. As you know, an AS is a group of devices under a single administrative domain. An EIGRP adjacency can be formed only between routers that belong to the same AS. The hello packets contain the AS number to which the sending router belongs.
2. Identical Metrics (K values) – EIGRP uses various metrics to calculate the best path.
These metrics are also called K-values. A router can be configured to use some or all of
these metrics. Two routers cannot form an adjacency if they have been configured to use
different sets of K-values. These metrics are discussed in detail later in the chapter.
EIGRP routers form adjacencies because, unlike typical distance vector protocols, EIGRP does not send out routing updates periodically. It therefore needs a way to know when a new neighbor has joined or when a previously known neighbor has gone down. The only time EIGRP sends out its entire routing table is when a new neighbor is discovered.
Another benefit of adjacencies is that they help divide the routers into different autonomous systems. Routers belonging to different autonomous systems will not form an adjacency and hence will not share their routing tables. This is very beneficial in a large network where routing tables can
become huge. Dividing the routers into different autonomous systems can help reduce the routing
table size.
EIGRP uses a proprietary protocol called Reliable Transport Protocol (RTP) to manage
communications between neighbors. This protocol is designed to provide very reliable
communication between neighbors. RTP uses both multicast and unicast to deliver updates
quickly and tracks acknowledgement of updates.
EIGRP will send out updates whenever there is a change in the network. This update is sent to
multicast address 224.0.0.10. Each update is assigned a sequence number and neighbors have to
acknowledge receipt of each update. Using sequence numbers an EIGRP router is able to track
which neighbors have acknowledged an update. If an acknowledgement is not received from a
neighbor, EIGRP will send the same update to this neighbor using unicast. If an acknowledgement
is not received after 16 unicast messages, the neighbor is declared dead. This process is often
referred to as reliable multicast.
When EIGRP sends out an update, loss of packets can cause routing tables in the network to get
corrupted. Thus, the reliability offered by RTP is very important to EIGRP.
Diffusing Update Algorithm (DUAL) and EIGRP metrics
EIGRP uses Diffusing Update Algorithm (DUAL) for selecting the best path to remote networks.
The main features of DUAL are:
1. Support of VLSMs
2. Recovering lost routes dynamically.
3. Determining the backup route and using it when the main route is lost.
4. Finding alternate routes if a route is lost and no backup route is found.
5. Using various metrics to determine the best routes.
DUAL is responsible for the fast convergence time in EIGRP. In fact, the convergence time of
EIGRP is possibly the fastest amongst all routing protocols. This fast convergence is achieved
because all EIGRP routers maintain a copy of the network topology. If the best route goes down, a
router simply scans the topology table and selects a backup route. If a backup route is not found in
the topology table, the router will reach out to its neighbors to find an alternate path.
Another feature that differentiates DUAL is the use of multiple metrics to calculate the best path
instead of using a single metric like most other routing protocols. EIGRP can use the following four
metrics to calculate the best path:
1. Bandwidth (also called path bandwidth value)
2. Delay (also called cumulative line delay)
3. Load
4. Reliability
By default it uses only bandwidth and delay to calculate the best path, but it can be configured to
also use the other two metrics. Remember that an adjacency will not form between two routers
that have been configured to use different metrics.
A fifth element, maximum transmission unit (MTU) size, is also required in some situations such
as redistribution but is never used in EIGRP calculations. This value represents the smallest MTU
value between the router and the remote destination network.
To find the best path to a network, DUAL uses the different metrics of each path in an algorithm to
compute the cost of the path. The path with the lowest cost is considered the best. The exact
formula used to calculate the path using the metrics is out of scope of the CCNA exam.
Route Discovery and Best Path Selection
So far, you have learned about RTP and DUAL and how routers form adjacencies. EIGRP is one
protocol that believes in finding and storing as much information about the network as possible.
As a router learns about neighbors and forms an adjacency, it stores the details of each neighbor
in a table called the Neighborship or Neighbor table.
After adjacencies have formed, routing tables are exchanged between neighbors. These tables
contain information regarding remote networks and path to them. This information is stored in a
table called the Topology table. The information received from the neighbor consists of the
following:
1. Remote network’s address
2. Remote network’s subnet mask
3. Next hop to the remote network
4. Cost to the remote network
Figure 5-3 Reported and Feasible distance
While the first three items are self-explanatory, the cost is something that needs further explanation. The cost reported by the neighbor is the cost from the neighbor to the destination network. It does not include the cost from the receiving router to the neighbor (the advertising router). This cost is known as the Reported or Advertised distance.
When the receiving router adds the cost between itself and the neighbor to the reported distance,
the resulting cost is known as the feasible distance.
To further understand this, consider the network shown in Figure 5-3. Assuming that EIGRP is
running on all routers in the network, RouterB learns about the 192.168.1.0/24 network from
RouterD. The cost (feasible distance) from RouterB to the destination network is x. When
RouterB advertises this network to RouterA, it will report the cost as x. Here x is the reported distance for RouterA. The cost between RouterA and RouterB is z. RouterA will add this cost to the reported distance to find the feasible distance, or the total cost, to the destination network. For the network 192.168.1.0/24, the feasible distance on RouterA is x+z.
It is important to understand here that the receiving router, RouterA in our case, has to add the cost
between itself and the advertising router (RouterB in our case) to get the total cost to the destination network. This total cost is known as the feasible distance.
For each destination network that a router learns about, it will select the path with the lowest cost. This path is then installed in the routing table and is known as the Successor.
To select a backup path, the router compares the feasible distance of the successor with the
reported distance of other available paths to the same destination network. If the reported
distance of the other path is less than the feasible distance of the successor, the other path is
marked as a backup path and is known as the feasible successor.
To further understand how a feasible successor is selected, assume that the successor route to the 192.168.1.0/24 network on RouterA in Figure 5-3 is the RouterA->RouterB->RouterD path. The feasible distance of this path is the sum of z and x (z+x). On the other hand, RouterA learns of another path to the 192.168.1.0/24 network from RouterC. The reported distance of this path is y. This route will be considered a backup route, or feasible successor, only if y is less than the sum of z and x (y < z+x).
EIGRP will store up to six feasible successors to a single destination in the topology table.
This section introduced a lot of new and important EIGRP concepts, and it is important to remember them. The list below summarizes the important terms discussed in this section:
1. Neighbor Table – Stores information about routers with whom an adjacency has been
formed.
2. Topology Table – Stores information about every route and destination network learned
from neighbors.
3. Reported Distance – The cost from the advertising router to the destination network.
4. Feasible Distance – The cost from the receiving router to the destination network. This is
the reported distance plus the cost of path between the receiving and the advertising
router.
5. Successor – The best route to a destination network.
6. Feasible Successor – A backup route to a destination network. The reported distance of a route has to be less than the feasible distance of the successor for it to be marked as a feasible successor.
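To put some purely illustrative numbers on these terms: suppose RouterA's successor to a network has a feasible distance of 2,172,416 via RouterB. A second path through another neighbor with a reported distance of 2,169,856 qualifies as a feasible successor because 2,169,856 is less than 2,172,416, while a third path with a reported distance of 2,300,000 does not qualify because it fails that check.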
Configuring EIGRP
EIGRP configuration is divided into two modes – the router configuration mode and the interface configuration mode. Global settings such as the AS number and the networks to advertise are configured in the router configuration mode, while interface-specific settings such as metrics and timers are configured in the interface configuration mode.
The steps to enable EIGRP and define the networks to be advertised are similar to those of RIP and can be done in the following two steps:
1. Enable EIGRP globally using the router eigrp <as-number> global configuration command. This command will bring you to the routing configuration mode as shown below:
Router(config)#router eigrp 10
Router(config-router)#
The AS number can be anything from 1 to 65535 but has to be the same on all routers that need to form an adjacency.
2. Tell the router which networks to advertise using the network <network> command in
the routing configuration mode as shown below:
Router(config-router)#network 192.168.0.0
EIGRP gives you the option to use wildcard masks when defining networks, but for CCNA we will use classful networks only.
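For completeness, the wildcard-mask form of the command looks like the following. The subnet shown is an arbitrary example; this form limits EIGRP to the 192.168.1.0/24 subnet rather than the entire classful network:
Router(config-router)#network 192.168.1.0 0.0.0.255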
Remember that the network command is used to tell the router which connected networks you want to advertise. Any routes learned from other routers will automatically be advertised out. As soon as the network command is given, EIGRP will begin sending out hello packets on the configured networks and waiting for hello packets from neighbors.
Figure 5-4 will be used for the rest of EIGRP sections in this chapter.
Figure 5-4 EIGRP example network
Now that you know how EIGRP functions and how to configure it, let us configure the network
shown in Figure 5-4 to see EIGRP in effect. We will not configure RouterE in this section.
The configuration required on the four routers to get EIGRP working is shown below. All routers
will be configured to be EIGRP AS 10.
RouterA(config)#router eigrp 10
RouterA(config-router)#network 192.168.1.0
RouterA(config-router)#network 192.168.4.0
RouterB(config)#router eigrp 10
RouterB(config-router)#network 192.168.1.0
RouterB(config-router)#network 192.168.2.0
RouterB(config-router)#network 10.0.0.0
RouterC(config)#router eigrp 10
RouterC(config-router)#network 192.168.2.0
RouterC(config-router)#network 192.168.3.0
RouterC(config-router)#network 192.168.6.0
RouterD(config)#router eigrp 10
RouterD(config-router)#network 192.168.4.0
RouterD(config-router)#network 192.168.5.0
Now look at the routing table of each router to see the effect. Remember that lines starting with D
denote a route learned from EIGRP.
RouterA#sh ip route
–output truncated–
Gateway of last resort is not set
C 192.168.4.0/24 is directly connected, FastEthernet0/1
D 192.168.5.0/24 [90/307200] via 192.168.4.4, 00:03:22, FastEthernet0/1
D 10.0.0.0/8 [90/307200] via 192.168.1.2, 00:04:17, FastEthernet0/0
D 192.168.6.0/24 [90/2707456] via 192.168.1.2, 00:03:42, FastEthernet0/0
C 192.168.1.0/24 is directly connected, FastEthernet0/0
D 192.168.2.0/24 [90/2195456] via 192.168.1.2, 00:04:19, FastEthernet0/0
D 192.168.3.0/24 [90/2221056] via 192.168.1.2, 00:03:47, FastEthernet0/0
RouterB#sh ip route
–output truncated–
Gateway of last resort is not set
D 192.168.4.0/24 [90/307200] via 192.168.1.1, 00:04:37, FastEthernet0/0
D 192.168.5.0/24 [90/332800] via 192.168.1.1, 00:03:33, FastEthernet0/0
10.0.0.0/8 is variably subnetted, 2 subnets, 2 masks
D 10.0.0.0/8 is a summary, 00:04:27, Null0
C 10.1.0.0/16 is directly connected, FastEthernet0/1
D 192.168.6.0/24 [90/2681856] via 192.168.2.3, 00:03:53, Serial0/0
C 192.168.1.0/24 is directly connected, FastEthernet0/0
C 192.168.2.0/24 is directly connected, Serial0/0
D 192.168.3.0/24 [90/2195456] via 192.168.2.3, 00:03:57, Serial0/0
RouterC#sh ip route
–output truncated–
Gateway of last resort is not set
D 192.168.4.0/24 [90/2221056] via 192.168.2.2, 00:04:39, Serial0/1
D 192.168.5.0/24 [90/2246656] via 192.168.2.2, 00:04:11, Serial0/1
D 10.0.0.0/8 [90/2195456] via 192.168.2.2, 00:04:39, Serial0/1
C 192.168.6.0/24 is directly connected, Serial0/0
D 192.168.1.0/24 [90/2195456] via 192.168.2.2, 00:04:39, Serial0/1
C 192.168.2.0/24 is directly connected, Serial0/1
C 192.168.3.0/24 is directly connected, FastEthernet0/0
RouterD#sh ip route
–output truncated–
Gateway of last resort is not set
C 192.168.4.0/24 is directly connected, FastEthernet0/1
C 192.168.5.0/24 is directly connected, FastEthernet0/0
D 10.0.0.0/8 [90/332800] via 192.168.4.1, 00:04:22, FastEthernet0/1
D 192.168.6.0/24 [90/2733056] via 192.168.4.1, 00:04:22, FastEthernet0/1
D 192.168.1.0/24 [90/307200] via 192.168.4.1, 00:04:22, FastEthernet0/1
D 192.168.2.0/24 [90/2221056] via 192.168.4.1, 00:04:22, FastEthernet0/1
D 192.168.3.0/24 [90/2246656] via 192.168.4.1, 00:04:22, FastEthernet0/1
Stopping EIGRP updates on an Interface
As soon as EIGRP is enabled, it will start sending and receiving hello packets on its interfaces. Many situations require you to stop EIGRP from sending hello packets out of an
interface or forming an adjacency via that interface. An example of such a situation is when an
interface connects to the Internet. You do not want your routing updates to go out to the Internet. In
such situations, you can use the passive-interface interface command in the routing configuration
mode to stop EIGRP from sending hello packets out that interface.
In our example network, we do not need to send EIGRP updates out of interface fa0/1 on RouterB. We can stop updates going out of this interface using the following commands:
RouterB(config)#router eigrp 10
RouterB(config-router)#passive-interface fa0/1
Notice that the behavior of the passive-interface command is different in EIGRP than in RIP. In
RIP, updates will not be sent out a passive interface but will continue to be received. EIGRP on
the other hand, will not send or receive updates on a passive interface.
Multiple Autonomous Systems
As you already know, an AS is used to group routers into a single administrative domain in EIGRP. Routers belonging to different ASes will not form an adjacency and thus will not exchange routes. Having a single AS across a large network can result in a complicated topology and large routing tables. In such networks, convergence can slow down during network changes. To mitigate this, large networks should be broken into multiple ASes.
Routing information is not shared between different ASes by default. Dividing a large network
into multiple ASes will cause incomplete routing tables. To mitigate that, routes
are redistributed between the ASes at points where they intersect. While redistribution is out of
scope of CCNA, you should remember the following when it comes to EIGRP redistribution:
1. Normal EIGRP routes are called internal routes and have an administrative distance (AD)
of 90. On the other hand, redistributed routes are called external routes and have an
administrative distance of 170. Even routes redistributed between two EIGRP ASes are treated as external routes.
2. When redistributing from one EIGRP AS to another, the metrics are not changed. This is because EIGRP understands its own metrics! On the other hand, when redistributing between different routing protocols, you need to tell the receiving routing protocol how to treat the metrics. This is because EIGRP will not understand metrics from OSPF and, similarly, OSPF will not understand metrics from EIGRP. A brief configuration sketch follows this list.
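As a brief sketch of these points (the AS numbers are arbitrary and this is not part of the lab in this chapter), mutual redistribution between two EIGRP autonomous systems on a router running both requires no metric translation:
Router(config)#router eigrp 10
Router(config-router)#redistribute eigrp 20
Router(config-router)#exit
Router(config)#router eigrp 20
Router(config-router)#redistribute eigrp 10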
VLSM Support and Summarization
EIGRP propagates subnet masks along with routes in its updates. This enables it to support Variable Length Subnet Masks (VLSM). As you already know, VLSM helps conserve address space through the flexible use of subnet masks. This also helps EIGRP support discontiguous networks. A discontiguous network is one that has two subnets of a classful network connected together by another classful network.
An example of discontiguous network can be seen in Figure 5-4 where 10.1.0.0/16 and
10.2.0.0/16 networks are separated by 192.168.x.0/24 networks. Remember that RIPv1 does not
support such networks.
EIGRP by default does not support discontiguous networks, but it can be configured to do so. To understand the problem that arises when a protocol does not support discontiguous networks, let us configure RouterE to use EIGRP:
RouterE(config)#router eigrp 10
RouterE(config-router)#network 192.168.5.0
RouterE(config-router)#network 192.168.6.0
RouterE(config-router)#network 10.0.0.0
Let us verify the routing table on RouterE first:
RouterE#sh ip route
–output truncated–
Gateway of last resort is not set
D 192.168.4.0/24 [90/307200] via 192.168.5.4, 00:00:39, FastEthernet0/0
C 192.168.5.0/24 is directly connected, FastEthernet0/0
10.0.0.0/8 is variably subnetted, 2 subnets, 2 masks
C 10.2.0.0/16 is directly connected, FastEthernet0/1
D 10.0.0.0/8 is a summary, 00:00:26, Null0
C 192.168.6.0/24 is directly connected, Serial0/0
D 192.168.1.0/24 [90/332800] via 192.168.5.4, 00:00:39, FastEthernet0/0
D 192.168.2.0/24 [90/2246656] via 192.168.5.4, 00:00:39, FastEthernet0/0
D 192.168.3.0/24 [90/2195456] via 192.168.6.3, 00:00:34, Serial0/0
Now to see the problem associated with a routing protocol not supporting discontiguous
networks, take a look at the routing table of RouterA:
RouterA#sh ip route
–output truncated–
Gateway of last resort is not set
C 192.168.4.0/24 is directly connected, FastEthernet0/1
D 192.168.5.0/24 [90/307200] via 192.168.4.4, 19:36:09, FastEthernet0/1
D 10.0.0.0/8 [90/307200] via 192.168.1.2, 00:01:02, FastEthernet0/0
D 192.168.6.0/24 [90/2221056] via 192.168.4.4, 00:01:10, FastEthernet0/1
C 192.168.1.0/24 is directly connected, FastEthernet0/0
D 192.168.2.0/24 [90/2195456] via 192.168.1.2, 19:37:06, FastEthernet0/0
D 192.168.3.0/24 [90/2221056] via 192.168.1.2, 00:01:10, FastEthernet0/0
Notice that there is only one route to the 10.0.0.0/8 network pointing towards RouterB whereas in
our network we have two 10.x.0.0/16 networks.
Similarly on RouterD, there is only a single 10.0.0.0/8 route pointing towards RouterE:
RouterD#sh ip route
–output truncated–
Gateway of last resort is not set
C 192.168.4.0/24 is directly connected, FastEthernet0/1
C 192.168.5.0/24 is directly connected, FastEthernet0/0
D 10.0.0.0/8 [90/307200] via 192.168.5.5, 00:02:41, FastEthernet0/0
D 192.168.6.0/24 [90/2195456] via 192.168.5.5, 00:02:49, FastEthernet0/0
D 192.168.1.0/24 [90/307200] via 192.168.4.1, 19:37:52, FastEthernet0/1
D 192.168.2.0/24 [90/2221056] via 192.168.4.1, 19:37:52, FastEthernet0/1
D 192.168.3.0/24 [90/2221056] via 192.168.5.5, 00:02:49, FastEthernet0/0
So traffic destined to 10.2.0.0/16 network will be routed to RouterE from RouterD but to RouterB
from RouterA.
This happens because, by default, EIGRP automatically summarizes networks at classful boundaries. This means that RouterB and RouterE by default advertise their 10.x.0.0/16 networks as the 10.0.0.0/8 network. This behavior of EIGRP can be changed to support discontiguous networks by using the no auto-summary command in the routing configuration mode. The following commands disable auto-summarization on all routers in our network:
RouterA(config)#router eigrp 10
RouterA(config-router)#no auto-summary
RouterB(config)#router eigrp 10
RouterB(config-router)#no auto-summary
RouterC(config)#router eigrp 10
RouterC(config-router)#no auto-summary
RouterD(config)#router eigrp 10
RouterD(config-router)#no auto-summary
RouterE(config)#router eigrp 10
RouterE(config-router)#no auto-summary
The above changes will cause EIGRP to reset all adjacencies and form them again, creating a
small window when the routing tables will not be updated. After the adjacencies come back up,
the routing table on RouterA will look like the following:
RouterA#sh ip route
–output truncated–
Gateway of last resort is not set
C 192.168.4.0/24 is directly connected, FastEthernet0/1
D 192.168.5.0/24 [90/307200] via 192.168.4.4, 19:57:14, FastEthernet0/1
10.0.0.0/16 is subnetted, 2 subnets
D 10.2.0.0 [90/332800] via 192.168.4.4, 00:16:05, FastEthernet0/1
D 10.1.0.0 [90/307200] via 192.168.1.2, 00:16:08, FastEthernet0/0
D 192.168.6.0/24 [90/2221056] via 192.168.4.4, 00:22:15, FastEthernet0/1
C 192.168.1.0/24 is directly connected, FastEthernet0/0
D 192.168.2.0/24 [90/2195456] via 192.168.1.2, 19:58:11, FastEthernet0/0
D 192.168.3.0/24 [90/2221056] via 192.168.1.2, 00:22:15, FastEthernet0/0
In the above output notice that there are now routing entries for both 10.1.0.0/16 and 10.2.0.0/16
networks, both pointing towards the correct next hop. Similarly, the routing table on RouterD has
entries for each of those networks:
RouterD#sh ip route
–output truncated–
Gateway of last resort is not set
C 192.168.4.0/24 is directly connected, FastEthernet0/1
C 192.168.5.0/24 is directly connected, FastEthernet0/0
10.0.0.0/16 is subnetted, 2 subnets
D 10.2.0.0 [90/307200] via 192.168.5.5, 00:17:27, FastEthernet0/0
D 10.1.0.0 [90/332800] via 192.168.4.1, 00:17:31, FastEthernet0/1
D 192.168.6.0/24 [90/2195456] via 192.168.5.5, 00:23:37, FastEthernet0/0
D 192.168.1.0/24 [90/307200] via 192.168.4.1, 19:58:39, FastEthernet0/1
D 192.168.2.0/24 [90/2221056] via 192.168.4.1, 19:58:39, FastEthernet0/1
D 192.168.3.0/24 [90/2221056] via 192.168.5.5, 00:23:37, FastEthernet0/0
As you saw in this section, discontiguous networks can cause routing problems but EIGRP can
support them with a little change.
EIGRP load balancing and maximum hops
Like RIP, EIGRP can load balance across a default of 4 equal cost paths. It can be configured to
load balance across a maximum of 6 paths (for older IOS versions) and 16 paths (for IOS
versions 12.2(33) and above). The difference between EIGRP and RIP load balancing is that
EIGRP can be configured to load balance across unequal cost paths also.
EIGRP load balancing can be seen in our setup on RouterC. You may have noticed that RouterC
has two paths to the 192.168.4.0/24 network and both paths use similar links. Hence RouterC will
load balance traffic destined to 192.168.4.0/24 network as can be seen in its routing table:
RouterC#sh ip route
–output truncated–
Gateway of last resort is not set
D 192.168.4.0/24 [90/2221056] via 192.168.6.5, 00:03:32, Serial0/0
[90/2221056] via 192.168.2.2, 00:03:32, Serial0/1
D 192.168.5.0/24 [90/2195456] via 192.168.6.5, 00:03:32, Serial0/0
10.0.0.0/16 is subnetted, 2 subnets
D 10.2.0.0 [90/2195456] via 192.168.6.5, 00:03:32, Serial0/0
D 10.1.0.0 [90/2195456] via 192.168.2.2, 00:03:32, Serial0/1
C 192.168.6.0/24 is directly connected, Serial0/0
D 192.168.1.0/24 [90/2195456] via 192.168.2.2, 00:03:38, Serial0/1
C 192.168.2.0/24 is directly connected, Serial0/1
C 192.168.3.0/24 is directly connected, FastEthernet0/0
To see the effect of metrics on EIGRP load balancing, I reduced the bandwidth on the s0/0 interface to 100 Kbit/sec. This caused the cost of the route to 192.168.4.0/24 via 192.168.6.5 (the path across s0/0) to increase. Since the two paths no longer have equal cost, EIGRP will no longer load balance across them. The change in bandwidth and the effect on the routing table can be seen below:
RouterC(config)#int s0/0
RouterC(config-if)#bandwidth 100
RouterC(config-if)#end
RouterC#sh ip route
–output truncated–
Gateway of last resort is not set
D 192.168.4.0/24 [90/2221056] via 192.168.2.2, 00:00:08, Serial0/1
D 192.168.5.0/24 [90/2246656] via 192.168.2.2, 00:00:08, Serial0/1
10.0.0.0/16 is subnetted, 2 subnets
D 10.2.0.0 [90/2272256] via 192.168.2.2, 00:00:08, Serial0/1
D 10.1.0.0 [90/2195456] via 192.168.2.2, 00:00:08, Serial0/1
C 192.168.6.0/24 is directly connected, Serial0/0
D 192.168.1.0/24 [90/2195456] via 192.168.2.2, 00:00:08, Serial0/1
C 192.168.2.0/24 is directly connected, Serial0/1
C 192.168.3.0/24 is directly connected, FastEthernet0/0
In the above output notice that the traffic to 192.168.4.0/24 is no longer load balanced.
The maximum paths across which EIGRP can load balance can be configured using
the maximum-paths paths command as shown below:
RouterA(config)#router eigrp 10
RouterA(config-router)#maximum-paths ?
<1-16> Number of paths
RouterA(config-router)#maximum-paths 6
Unequal cost load balancing in EIGRP can be achieved using the variance command but is out of
the scope of the CCNA exam.
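Although the details are beyond the scope of the CCNA exam, the idea behind the variance command is simple enough to sketch here: the variance value is a multiplier, and EIGRP will also install paths whose metric is within that multiple of the best metric (as long as EIGRP already considers those paths loop-free). As a rough illustration, with an arbitrarily chosen multiplier of 2 the configuration would look like this:
RouterC(config)#router eigrp 10
RouterC(config-router)#variance 2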
One of the limitations that EIGRP inherits from distance vector protocols is the maximum hop
count limitation. By default it has a maximum hop count of 100 but can be increased up to 255
using the metric maximum-hops hops command as shown below:
RouterA(config)#router eigrp 10
RouterA(config-router)#metric maximum-hops ?
<1-255> Hop count
RouterA(config-router)#metric maximum-hops 255
One important thing to remember when it comes to hop counts in EIGRP is that the count is not
used in the calculation of cost of a route, but is only used to limit the size of an autonomous
system.
Verifying & Troubleshooting EIGRP
The following commands are used to verify and troubleshoot EIGRP:
show ip route
show ip protocols
show ip eigrp neighbors
show ip eigrp topology
debug eigrp packets and debug ip eigrp notifications
The show ip route command has been covered in the previous section and earlier in this chapter. Ultimately, a complete and correct routing table on every router is the best verification of a routing protocol's operation. The remaining commands are covered below.
Using show ip protocols command to verify and troubleshoot EIGRP
The show ip protocols command helps verify routing protocols running on the router. An example
of the output of this command from RouterA of our network is shown below:
RouterA#show ip protocols
Routing Protocol is “eigrp 10”
Outgoing update filter list for all interfaces is not set
Incoming update filter list for all interfaces is not set
Default networks flagged in outgoing updates
Default networks accepted from incoming updates
EIGRP metric weight K1=1, K2=0, K3=1, K4=0, K5=0
EIGRP maximum hopcount 255
EIGRP maximum metric variance 1
Redistributing: eigrp 10
EIGRP NSF-aware route hold timer is 240s
Automatic network summarization is not in effect
Maximum path: 6
Routing for Networks:
192.168.1.0
192.168.4.0
Routing Information Sources:
Gateway Distance Last Update
192.168.4.4 90 05:17:45
192.168.1.2 90 05:17:45
Distance: internal 90 external 170
The show ip protocols command shows the operational information for EIGRP. From the above
output you can gather that the EIGRP AS is 10 and it is advertising the 192.168.1.0 and
192.168.4.0 networks. You can also learn that it is using default metrics (K1 and K3) and the
maximum hop count has been configured as 255. The output also shows that auto summary is
disabled and maximum path is set to 6. As before, this output helps confirm the configuration of
EIGRP.
Using show ip eigrp neighbors command to verify adjacencies
It is important to see which routers EIGRP has formed adjacencies with and how stable the
adjacencies are. The show ip eigrp neighbors command helps do this. The output from RouterE
in our network is shown below:
RouterC#
*Mar 3 01:31:54.803: IP-EIGRP(Default-IP-Routing-Table:10): Callback: route_adjust
Serial0/0
*Mar 3 01:31:54.811: IP-EIGRP(Default-IP-Routing-Table:10): Callback: callbackup_routes
192.168.6.0/24
When the s0/0 interface on RouterE is brought back up, the following debugs are seen on
RouterC:
RouterC#
*Mar 3 01:32:34.799: IP-EIGRP(Default-IP-Routing-Table:10): Callback: lostroute
192.168.6.0/24
*Mar 3 01:32:34.799: IP-EIGRP(Default-IP-Routing-Table:0): Callback: redist connected
(config change) Serial0/0
*Mar 3 01:32:34.799: IP-EIGRP(Default-IP-Routing-Table:10): Callback: route_adjust
Serial0/0
You can use this debug to verify that your network is stable and there are no constant changes in
the network. No output for this debug command is good news!
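Also remember that debugging consumes router resources, so it is good practice to turn a debug off once you are done looking at it. Either of the following commands, run from privileged EXEC mode, stops this particular debug (undebug all stops every debug running on the router):
RouterC#no debug ip eigrp notifications
RouterC#undebug all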
Open Shortest Path First (OSPF)
Open Shortest Path First (OSPF) is the first link-state protocol that you will learn about. Apart
from being a link-state protocol, it is also an open standard protocol. What this means is that you
can run OSPF in a network consisting of multivendor devices. You may have realized that you
cannot run EIGRP in a network that consists of non-Cisco devices. This makes OSPF a very
important protocol to learn.
Compared to EIGRP, OSPF is a more complex protocol and supports features such as VLSM/CIDR. A brief summary of OSPF features is given below:
1. Works on the concept of Areas and Autonomous systems
2. Highly Scalable
3. Supports VLSM/CIDR and discontiguous networks
4. Does not have a hop count limit
5. Works in multivendor environment
6. Minimizes updates between neighbors.
While the above list is a very basic overview of the features of OSPF and will be expanded on in
coming sections, it is a good time to take a step back and compare the four protocols detailed in
this chapter. Table 5-2 shows a comparison of the four protocols.
Table 5-2 Comparison of routing protocols.
Feature                         OSPF             EIGRP                    RIPv1             RIPv2
Protocol Type                   Link state       Hybrid                   Distance vector   Distance vector
Classful Protocol               No               No                       Yes               No
VLSM Support                    Yes              Yes                      No                Yes
Discontiguous Network Support   Yes              Yes                      No                Yes
Hop Count Limit                 None             255                      15                15
Routing Updates                 Event triggered  Event triggered          Periodic          Periodic
Complete Routing Table Shared   During new adjacencies  During new adjacencies  Periodic    Periodic
Mechanism for Sharing Updates   Multicast        Multicast and unicast    Broadcast         Multicast
Best Path Computation           Dijkstra         DUAL                     Bellman-Ford      Bellman-Ford
Metric Used                     Bandwidth        Bandwidth and delay (default)  Hop count   Hop count
Organization Type               Hierarchical     Flat                     Flat              Flat
Convergence                     Fast             Very fast                Slow              Slow
Auto Summarization              No               Yes                      Yes               Yes
Manual Summarization            Yes              Yes                      No                No
Peer Authentication             Yes              Yes                      No                Yes
It should be noted here that OSPF has many more features than the ones listed in Table 5-2 and covered in this book. One feature that really separates OSPF from other protocols is its
support of a hierarchical design. What this means is that you can divide a large internetwork into
smaller internetworks called areas. It should be noted that these areas, though separate, still lie
within a single OSPF autonomous system. This is distinctly different from the way EIGRP can be
divided into multiple autonomous systems. While in EIGRP each autonomous system functions
independent of others and a redistribution is required to share routes, in OSPF areas are
dependent on each other and routes are shared between them without redistribution.
You should also know that, like EIGRP, OSPF can be divided into multiple autonomous systems. Each autonomous system is independent of the rest, and redistribution of routes is required to share routing information between them.
The hierarchical design of OSPF provides the following benefits:
Decrease routing overhead and flow of updates
Limit network problems such as instability to an area
Speed up convergence.
One disadvantage of this is that planning and configuring OSPF is more difficult than other
protocols. Figure 5-5 shows a simple OSPF hierarchical setup. In the figure notice that Area 0 is
the central area and the other two areas connect to it.
Figure 5-5 OSPF hierarchical design
This is always true in an OSPF design. All areas need to connect to Area 0. Areas that cannot
connect to area 0 physically need a logical connection to it using something known as virtual
links. Virtual links are out of the scope of the CCNA exam.
Another important thing to notice in the figure is that for each area, there is a router that connects
to area 0 as well. These routers are called Area Border Routers (ABRs). In Figure 5-5,
RouterC and RouterD are ABRs because they connect to area 0 as well as another area. Just as ABRs connect different areas, routers that connect different autonomous systems are called Autonomous System Boundary Routers (ASBRs). In Figure 5-5, if RouterE connected to another OSPF AS or to an AS of another protocol such as EIGRP, it would be called an ASBR.
From Figure 5-5, you learned about three OSPF terms – Area, ABR and ASBR. Similarly there
are many other terms associated with OSPF that you need to be aware of before getting into how
OSPF actually works. The next section looks at some of these terms.
Building Blocks of OSPF
Each routing protocol has its own language and terminology, and OSPF has various terms that you should be aware of. This section looks at some of the important terminology associated with OSPF. To make it easier to understand and remember, the terminology is broken into three parts here – Router level, Area level and Internetwork level.
At the Router level, when OSPF is enabled, it becomes aware of the following first:
Router ID – Router ID is the IP address that will represent the router throughout the OSPF
AS. Since a router may have multiple IP addresses (for its multiple interfaces), Cisco
routers choose the highest loopback interface IP address. (Do not worry if you do not
know what loopback interfaces are. They are covered later in the chapter). If loopback
interfaces are not present, OSPF chooses the highest physical IP address configured within
the active interfaces. Here highest literally means higher in number (Class C will be
higher than Class A because 192 is greater than 10).
Links – Simply speaking a Link is a network to which a router interface belongs. When
you define the networks that OSPF will advertise, it will match interface addresses that
belong to those networks. Each interface that matches is called a link. Each link has a
status (up or down) and an IP address associated with it.
Let’s take a simple test here. Look at Figure 5-6 and try to find the Router ID and links on each of
the routers.
Figure 5-6 RouterID and links
For RouterA, the RouterID will be 192.168.1.1 because it is the highest physical IP address
present. The three links present on RouterA are the networks 192.168.1.0/24, 10.0.0.0/8 and
172.16.0.0/16. Similarly, the Router ID of RouterB is 172.30.1.1 since that is the highest physical
IP address on the router. The three links present on RouterB are 10.0.0.0/8, 172.20.0.0/16 and
172.30.0.0/16.
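Once OSPF is up and running (configuration is covered later in this chapter), you do not have to work the Router ID out by hand; the router reports the ID it actually chose in the output of the following commands, both of which are covered in the verification section of this chapter:
RouterA#show ip ospf
RouterA#show ip protocols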
Once a router is aware of the above two things, it will try to find more about its network by
seeking out other OSPF speaking routers. At that stage the following terms come into use:
Hello Packets – Similar to EIGRP hello packets, OSPF uses hello packets to discover neighbors and maintain relationships. Hello packets contain information such as the area number, which must match for a neighbor relationship to be established. Hello packets are sent to multicast address 224.0.0.5.
Neighbors – Neighbors is the term used to define two or more OSPF speaking routers
connected to the same network and configured to be in the same OSPF area. Routers use
hello packets to discover neighbors.
Neighbor Table – OSPF will maintain a list of all neighbors from which hello packets
have been received. For each neighbor various details such as RouterID and adjacency
state are stored.
Area – An OSPF area is a grouping of networks and routers. Every router in the area shares the same area id. Routers can belong to multiple areas; therefore, the area id is linked to every interface. Routers will not exchange routing updates with routers belonging to different areas. Area 0 is called the backbone area, and all other areas must connect to it by having at least one router that belongs to both areas.
Once OSPF has discovered neighbors it will look at the network type on which it is working.
OSPF classifies networks into the following types:
Broadcast (multi-access) – Broadcast (multi-access) networks are those that allow multiple devices to access (or connect to) the same network and also provide the ability to broadcast. You will remember that when a packet is destined to all devices in a network, it is termed a broadcast. Ethernet is an example of a broadcast multi-access network.
Non-Broadcast multi-access (NBMA) – Networks that allow multi-access but do not
have broadcast ability are called NBMA networks. Frame Relay networks are usually
NBMA.
Point-to-Point – Point-to-Point networks consist of direct connection between two routers
and provide a single path of communication. When routers are connected back-to-back
using serial interfaces, a point-to-point network is created. Point-to-point networks can
also exist logically across geographical locations using various WAN technologies such as
Frame Relay and PPP.
Point-to-Multipoint – Point-to-Multipoint networks consist of multiple connections
between a single interface of a router and multiple remote routers. All routers belong to
the same network but have to communicate via the central router, whose interface connects
the remote routers.
Depending on the network type that OSPF discovers on the router interfaces, it will need to form Adjacencies. An adjacency is the relation between neighbors that allows direct exchange of routes. Unlike EIGRP, OSPF does not always form an adjacency with every neighbor. A router will form adjacencies with a few or all neighbors depending on the network type that is discovered. Adjacency behavior in each network type is discussed below:
Broadcast (multi-access) – Since multiple routers can connect to such networks, OSPF
elects a Designated Router (DR) and a Backup Designated Router (BDR). All routers
in these networks form adjacencies only with the DR and BDR. This also means that route
updates are only shared between the routers and the DR and BDR. It is the duty of the DR
to share routing updates with the rest of the routers in the network. If a DR loses
connectivity to the network, the BDR will take its place. The election process is discussed
later in the chapter.
NBMA – Since NBMA is also a multi-access network, a DR and a BDR are elected and routers form adjacencies only with them. The problem with NBMA networks is that since broadcast capability, and in turn multicast capability, is not present, routers cannot discover neighbors. So NBMA networks require you to manually tell OSPF about the neighbors
present in the network. Apart from this, OSPF functions as it does in a broadcast multi-
access network.
Point-to-Point – Since there are only two routers present in a point-to-point network,
there is no need to elect a DR and BDR. Both routers form adjacency with each other and
exchange routing updates. Neighbors are discovered automatically in these networks.
Point-to-multipoint – Point-to-multipoint interfaces are treat as special point-to-point
interfaces by OSPF and it does a little extra work on here that is out of scope of CCNA.
There is no DR/BDR election in such networks and neighbors are automatically
discovered.
Exam Alert: It can get confusing to remember the network types, election and adjacency
requirements. A simple way to remember it is to associate “multi-access” with DR/BDR and
“Point-to” with no election. Also associate NBMA with manually specifying neighbors.
Once OSPF has formed adjacencies, it will start exchanging routing updates. The following two terms come into use here:
Link State Advertisements – Link State Advertisements (LSAs) are OSPF packets
containing link-state and routing information. These are exchanged between routers that
have formed adjacencies. The packets essentially tell routers in the networks about
different networks (links) that are present and how to reach them. Different types of LSAs
are discussed later in the chapter.
Topology Table – The topology table contains information on every link the router learns about (via LSAs). The information in the topology table is used to compute the best path to remote networks.
At the area level, the only term that gets introduced is:
Area Border Routers (ABRs) – Routers that connect an area to area 0 are called ABRs.
They have one interface belonging to area 0 and other interfaces belonging to one or more
areas. They are responsible for propagating routing updates between area 0 and other
areas.
At the internetwork level another term that gets introduced is:
Autonomous System Boundary Router (ASBR) – A router that connects an OSPF AS to
another OSPF AS or AS belonging to other routing protocols is called an Autonomous
System Boundary Router or ASBR. Route redistribution is setup between the two AS on
these routers and hence they become the gateway between the two AS.
Now that you are familiar with OSPF terminology, the rest of the sections will discuss the
working of OSPF in detail and help you better understand the terms discussed here.
Loopback Interfaces
Loopback interfaces are virtual, logical interfaces that exist in the software only. They are used
for administrative purposes such as providing a stable OSPF interface or diagnostics. Using
loopback interfaces with OSPF has the following benefits:
Provides an interface that is always active.
Provides an OSPF Router ID that is predictable and always the same, making it easier to troubleshoot OSPF.
The Router ID is a differentiator in the DR/BDR election; having a loopback interface with a higher IP address can influence the election.
Configuring a loopback interface is easy – You need to select an interface number and enter the
interface configuration mode using the interface command in global configuration mode as shown
below:
RouterA(config)#interface loopback 0
RouterA(config-if)#
The interface number can be any number starting from 0. Once in the interface configuration
mode, use the ip address command to configure an IP address as you would on a physical
interface. An example is shown below:
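The address below is only an illustrative choice – any address and mask not already used elsewhere in your network will do:
RouterA(config-if)#ip address 192.168.250.1 255.255.255.0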
Figure 5-8 SPF tree Example 2
It is important to understand that each router creates this tree only for the area it belongs to. If a
router belongs to multiple areas, it will create a separate tree for each area.
A big part of the tree is also the cost associated with each path. Cost is the metric used by OSPF; the cost of a route is the sum of the costs of the entire path from the router to the remote network. The OSPF RFC defines cost as an arbitrary value, and Cisco calculates the cost of an interface as 10^8 (100,000,000) divided by the bandwidth. Bandwidth in this equation is the bandwidth configured on the interface, in bits per second. Using this equation, an Ethernet interface with a bandwidth of 10Mbps has a cost of 10 and a 100Mbps interface has a cost of 1. You may have noticed that interfaces with a bandwidth of more than 100Mbps would have a fractional cost, but Cisco does not use fractions and rounds the value to 1 for such interfaces.
In Figure 5-8, if all interfaces are FastEthernet interfaces with a bandwidth of 100Mbps, each link
has a cost of 1. So for the path from RouterG to the 192.168.7.0/24, the total cost will be 5 and to
the network 192.168.3.0/24, the total cost will be 2.
The cost of each interface can be changed using the ip ospf cost command in the interface
configuration mode. It should be noted that since the OSPF RFC does not exactly define the metric
that makes up the cost, each vendor uses a different metric. When using OSPF in a multivendor
environment, you will need to adjust cost to ensure parity.
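As a quick sketch (the interface and cost value here are arbitrary examples, not part of the lab network), the command simply overrides the calculated cost on that one interface:
RouterA(config)#interface fa0/0
RouterA(config-if)#ip ospf cost 10
A working example of using this command to influence path selection appears later in this chapter.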
Link State Advertisements
The fundamental building blocks of OSPF are the link state advertisements that are sent from
every router to advertise links and their states. Given the complexity and scalability of OSPF,
different LSA types are used to keep the OSPF database updated. Out of the various LSAs, the
first five are most relevant to the limited OSPF discussion covered in this chapter and are
discussed below:
Type 1 – Router LSA – Each router in the area sends this LSA to announce its presence
and list the links to other routers and networks along with metrics to them. These LSAs do
not cross the boundary of an area.
Type 2 – Network LSA – The DR in a multi-access network sends out this LSA. It
contains a list of routers that are present in the network segment. These LSAs also do not
cross the boundary of an area.
Type 3 – Summary LSA – The ABR takes the information learned in one area (and
optionally summarizes this information) and sends it out to another area it is attached to.
This information is contained in LSA type 3 and is responsible for propagation of Inter-
area routes.
Type 4 – ASBR Summary LSA – ASBRs originate external routes (redistributed routes)
and send them throughout the network. While the external routes are listed in type 5 LSA,
the details of the ASBR itself are listed in type 4 LSAs. This LSA is originated by the
ABR of the area where the ASBR resides.
Type 5 – External LSA – This LSA lists routes redistributed into OSPF from another
OSPF process or another routing protocol. This LSA is originated by the ASBR and
propagates across the OSPF AS.
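If you want to see which of these LSA types are actually present on a router, the show ip ospf database command, covered along with the other verification commands later in this chapter, lists the link-state database grouped by LSA type:
RouterC#show ip ospf database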
Configuring OSPF
In the previous section you learned about OSPF and how it works. While a lot of theory was covered in that section, this one looks at configuring OSPF. The network shown in Figure 5-9 will be used for this section.
be used for this section.
Figure 5-9 OSPF Network
Just like EIGRP, OSPF configuration is divided into two parts – the global configuration and the
interface level configuration. Globally, configuring OSPF includes enabling the process and
adding networks to be advertised. To enable the OSPF process use the router
ospf process_id global configuration command. In this command, process_id is a locally
significant number and does not represent the AS. Since multiple OSPF processes can run on a
router, the process_id is used to keep the processes separate. The process id can be different on
every router. On entering the command, you will arrive at the router configuration mode, where the network command can be used to specify the networks that will be advertised. With respect to OSPF, the network command actually identifies the interfaces on which OSPF will be enabled, and the network to which each interface belongs will be advertised. The syntax of the command is network ip-address wildcard-mask area area-id.
Area 0
The backbone area is configured first. RouterB and RouterC each have interfaces in area 0:
RouterB(config)#router ospf 1
RouterB(config-router)#network 192.168.3.2 0.0.0.0 area 0
RouterC(config)#router ospf 1
RouterC(config-router)#network 192.168.3.3 0.0.0.0 area 0
RouterC(config-router)#network 192.168.4.3 0.0.0.0 area 0
Note that 192.168.3.0 and 192.168.4.0 cannot be configured with a single network statement, since a wildcard block of 2 or 4 networks will not cover them both and a block of 8 would also cover 192.168.5.0, which is in area 2. So we had to use two statements with a wildcard mask of 0.0.0.0.
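To see why, consider what a single statement would have to look like. The smallest wildcard block that contains both the 192.168.3.0 and 192.168.4.0 networks is a block of eight networks starting at 192.168.0.0, so the statement (shown here only to illustrate the wildcard math – it is not part of the configuration) would be:
RouterC(config-router)#network 192.168.0.0 0.0.7.255 area 0
This would match every interface from 192.168.0.0 through 192.168.7.255, including the 192.168.5.3 interface that must belong to area 2.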
Area 1
Now that area 0 has been configured, other areas can be configured. RouterA has two interfaces
in area 1 and RouterB has one interface in that area.
RouterA(config)#router ospf 1
RouterA(config-router)#network 0.0.0.0 255.255.255.255 area 1
RouterB(config)#router ospf 1
RouterB(config-router)#network 192.168.2.2 0.0.0.0 area 1
Notice that on RouterA a network number of 0.0.0.0 and a wildcard mask of 255.255.255.255 are
used. This mask essentially means all networks and can be used on RouterA since both the
interfaces belong to area 1.
Area 2
The final area spans across four routers. All interfaces of RouterD, RouterE and RouterF belong
to area 2.
RouterC(config)#router ospf 1
RouterC(config-router)#network 192.168.5.3 0.0.0.0 area 2
RouterD(config)#router ospf 1
RouterD(config-router)#network 0.0.0.0 255.255.255.255 area 2
RouterE(config)#router ospf 1
RouterE(config-router)#network 192.168.5.5 0.0.0.0 area 2
RouterE(config-router)#network 192.168.6.5 0.0.0.0 area 2
RouterF(config)#router ospf 1
RouterF(config-router)#network 192.168.6.0 0.0.1.255 area 2
In the above configuration, notice the three different ways the wildcard mask has been used: RouterE uses a host-specific wildcard of 0.0.0.0 for each interface address, RouterD uses 0.0.0.0 255.255.255.255 to match all of its interfaces, and RouterF uses a wildcard of 0.0.1.255 to cover both the 192.168.6.0 and 192.168.7.0 networks with a single statement.
Now that OSPF configuration is complete, let us take a look at the routing table on each router to
verify the configuration.
RouterA#sh ip route
–output truncated–
Gateway of last resort is not set
O IA 192.168.4.0/24 [110/138] via 192.168.2.2, 00:04:35, Serial0/0
O IA 192.168.5.0/24 [110/138] via 192.168.2.2, 00:04:35, Serial0/0
O IA 192.168.6.0/24 [110/148] via 192.168.2.2, 00:03:50, Serial0/0
O IA 192.168.7.0/24 [110/158] via 192.168.2.2, 00:03:50, Serial0/0
C 192.168.1.0/24 is directly connected, FastEthernet0/0
C 192.168.2.0/24 is directly connected, Serial0/0
O IA 192.168.3.0/24 [110/128] via 192.168.2.2, 00:04:35, Serial0/0
RouterB#sh ip route
–output truncated–
Gateway of last resort is not set
O 192.168.4.0/24 [110/74] via 192.168.3.3, 00:04:38, Serial0/1
O IA 192.168.5.0/24 [110/74] via 192.168.3.3, 00:04:38, Serial0/1
O IA 192.168.6.0/24 [110/84] via 192.168.3.3, 00:03:53, Serial0/1
O IA 192.168.7.0/24 [110/94] via 192.168.3.3, 00:03:53, Serial0/1
O 192.168.1.0/24 [110/74] via 192.168.2.1, 00:04:38, Serial0/0
C 192.168.2.0/24 is directly connected, Serial0/0
C 192.168.3.0/24 is directly connected, Serial0/1
RouterC#sh ip route
–output truncated–
Gateway of last resort is not set
C 192.168.4.0/24 is directly connected, FastEthernet0/0
C 192.168.5.0/24 is directly connected, FastEthernet0/1
O 192.168.6.0/24 [110/20] via 192.168.5.5, 00:03:55, FastEthernet0/1
[110/20] via 192.168.5.4, 00:03:55, FastEthernet0/1
O 192.168.7.0/24 [110/30] via 192.168.5.5, 00:03:55, FastEthernet0/1
[110/30] via 192.168.5.4, 00:03:55, FastEthernet0/1
O IA 192.168.1.0/24 [110/138] via 192.168.3.2, 00:04:40, Serial0/0
O IA 192.168.2.0/24 [110/128] via 192.168.3.2, 00:04:40, Serial0/0
C 192.168.3.0/24 is directly connected, Serial0/0
RouterD#sh ip route
–output truncated–
Gateway of last resort is not set
O IA 192.168.4.0/24 [110/20] via 192.168.5.3, 00:03:57, FastEthernet0/0
C 192.168.5.0/24 is directly connected, FastEthernet0/0
C 192.168.6.0/24 is directly connected, FastEthernet0/1
O 192.168.7.0/24 [110/20] via 192.168.6.6, 00:03:57, FastEthernet0/1
O IA 192.168.1.0/24 [110/148] via 192.168.5.3, 00:03:57, FastEthernet0/0
O IA 192.168.2.0/24 [110/138] via 192.168.5.3, 00:03:57, FastEthernet0/0
O IA 192.168.3.0/24 [110/74] via 192.168.5.3, 00:03:57, FastEthernet0/0
RouterE#sh ip route
–output truncated–
Gateway of last resort is not set
O IA 192.168.4.0/24 [110/20] via 192.168.5.3, 00:03:59, FastEthernet0/0
C 192.168.5.0/24 is directly connected, FastEthernet0/0
C 192.168.6.0/24 is directly connected, FastEthernet0/1
O 192.168.7.0/24 [110/20] via 192.168.6.6, 00:03:59, FastEthernet0/1
O IA 192.168.1.0/24 [110/148] via 192.168.5.3, 00:03:59, FastEthernet0/0
O IA 192.168.2.0/24 [110/138] via 192.168.5.3, 00:03:59, FastEthernet0/0
O IA 192.168.3.0/24 [110/74] via 192.168.5.3, 00:03:59, FastEthernet0/0
RouterF#sh ip route
–output truncated–
Gateway of last resort is not set
O IA 192.168.4.0/24 [110/30] via 192.168.6.5, 00:04:01, FastEthernet0/0
[110/30] via 192.168.6.4, 00:03:51, FastEthernet0/0
O 192.168.5.0/24 [110/20] via 192.168.6.5, 00:04:01, FastEthernet0/0
[110/20] via 192.168.6.4, 00:03:51, FastEthernet0/0
C 192.168.6.0/24 is directly connected, FastEthernet0/0
C 192.168.7.0/24 is directly connected, FastEthernet0/1
O IA 192.168.1.0/24 [110/158] via 192.168.6.5, 00:04:01, FastEthernet0/0
[110/158] via 192.168.6.4, 00:03:51, FastEthernet0/0
O IA 192.168.2.0/24 [110/148] via 192.168.6.5, 00:04:01, FastEthernet0/0
[110/148] via 192.168.6.4, 00:03:51, FastEthernet0/0
O IA 192.168.3.0/24 [110/84] via 192.168.6.5, 00:04:01, FastEthernet0/0
[110/84] via 192.168.6.4, 00:03:51, FastEthernet0/0
The above outputs show that all networks are known across the internetwork. You should also
notice the following:
While intra-area OSPF routes are preceded by an O, inter-area routes are marked O IA.
In the outputs from RouterC and RouterF notice that OSPF is load balancing across equal
cost paths.
Influencing path selection
As discussed in the earlier section, Cisco uses interface bandwidth as a metric for cost and the
sum of cost of the entire path is used to select the best route to a destination. The cost of an
interface can be manually changed using the ip ospf cost command in the interface configuration
mode.
For example, in the network shown in Figure 5-9, traffic going from RouterF to 192.168.4.0/24 is
being load balanced between the paths going through RouterD and RouterE. RouterF can be made
to route traffic only through RouterD and use RouterE as a backup path by increasing the cost associated with interface fa0/0 on RouterE. This will cause the cost of the entire path through RouterE to increase, so RouterF will stop load balancing and use only the path through RouterD. The following commands increase the cost on RouterE:
RouterE(config)#int fa0/0
RouterE(config-if)#ip ospf cost 20
The effect of this change will almost immediately be seen on the routing table of RouterF:
RouterF#sh ip route
–output truncated–
Gateway of last resort is not set
O IA 192.168.4.0/24 [110/30] via 192.168.6.4, 00:04:15, FastEthernet0/0
O 192.168.5.0/24 [110/20] via 192.168.6.4, 00:03:04, FastEthernet0/0
C 192.168.6.0/24 is directly connected, FastEthernet0/0
C 192.168.7.0/24 is directly connected, FastEthernet0/1
O IA 192.168.1.0/24 [110/158] via 192.168.6.4, 00:04:15, FastEthernet0/0
O IA 192.168.2.0/24 [110/148] via 192.168.6.4, 00:04:15, FastEthernet0/0
O IA 192.168.3.0/24 [110/84] via 192.168.6.4, 00:04:15, FastEthernet0/0
In the output above, notice that RouterF is no longer load balancing the traffic across the two
paths.
Influencing DR/BDR election
In the previous section you learned that OSPF routers do not form adjacencies with all neighbors
in a multi-access network. A DR and a BDR are elected and all other routers form adjacencies
with them. This election takes into consideration the OSPF priority and in case of a tie, the Router
ID.
For example, in the network shown in Figure 5-9, RouterE will be the DR and RouterD will be
the BDR in the Ethernet network 192.168.5.0/24 because RouterE has the highest router ID
(192.168.6.5) and RouterD has the second highest router ID (192.168.6.4). RouterC has a router
ID of 192.168.6.3. If you wanted RouterC to always be the DR, either the priority or the Router
ID would have to be increased. The easiest way to do this is to increase the priority on interface
fa0/1 of RouterC as shown below:
RouterC(config)#int fa0/1
RouterC(config-if)#ip ospf priority 10
Alternatively, the Router ID of RouterC can be increased by configuring a loopback interface with a higher IP address, as shown below:
RouterC(config)#interface loopback 0
RouterC(config-if)#ip address 192.168.100.1 255.255.255.0
This would cause the router ID of RouterC to be higher than the rest. Keep in mind that OSPF selects its Router ID when the process starts, so the OSPF process must be restarted before a new Router ID takes effect.
Verifying & Troubleshooting OSPF
There are various ways to verify and troubleshoot OSPF configuration and operation. The
following are the most useful:
1. show ip protocols
2. show ip ospf
3. show ip ospf interface
4. show ip ospf neighbor
5. show ip ospf database
6. debug ip ospf packet
7. debug ip ospf hello
8. debug ip ospf adj
Using show ip protocols command to verify and troubleshoot OSPF
As with other routing protocols, show ip protocols helps verify the global configuration of OSPF.
The output of this command from RouterA in our network is shown below:
RouterA#sh ip protocols
Routing Protocol is “ospf 1”
Outgoing update filter list for all interfaces is not set
Incoming update filter list for all interfaces is not set
Router ID 192.168.2.1
Number of areas in this router is 1. 1 normal 0 stub 0 nssa
Maximum path: 4
Routing for Networks:
0.0.0.0 255.255.255.255 area 1
Reference bandwidth unit is 100 mbps
Routing Information Sources:
Gateway Distance Last Update
192.168.3.2 110 08:25:40
Distance: (default is 110)
The first thing to notice in the above output is the router ID and the areas configured on this router.
It also shows the networks that are added to the process. The final thing to notice is that the
adjacent router is shown as the Routing Information Source. As you can see, the output from this
command can be used to quickly verify the basic configuration and it is easy to catch any
configuration mistake in this small output.
Using show ip ospf command to verify and troubleshoot OSPF
The show ip ospf command is also useful to verify configuration. While most of the output is out of the scope of the CCNA exam, a few things such as the Router ID, area related information, and SPF related information are useful. The output from RouterA is shown below:
RouterA#show ip ospf
Routing Process “ospf 1” with ID 192.168.2.1
Start time: 00:00:07.616, Time elapsed: 08:36:11.732
Supports only single TOS(TOS0) routes
Supports opaque LSA
Supports Link-local Signaling (LLS)
Supports area transit capability
Router is not originating router-LSAs with maximum metric
Initial SPF schedule delay 5000 msecs
Minimum hold time between two consecutive SPFs 10000 msecs
Maximum wait time between two consecutive SPFs 10000 msecs
Incremental-SPF disabled
Minimum LSA interval 5 secs
Minimum LSA arrival 1000 msecs
LSA group pacing timer 240 secs
Interface flood pacing timer 33 msecs
Retransmission pacing timer 66 msecs
Number of external LSA 0. Checksum Sum 0x000000
Number of opaque AS LSA 0. Checksum Sum 0x000000
Number of DCbitless external and opaque AS LSA 0
Number of DoNotAge external and opaque AS LSA 0
Number of areas in this router is 1. 1 normal 0 stub 0 nssa
Number of areas transit capable is 0
External flood list length 0
IETF NSF helper support enabled
Cisco NSF helper support enabled
Area 1
Number of interfaces in this area is 2
Area has no authentication
SPF algorithm last executed 08:35:56.788 ago
SPF algorithm executed 2 times
Area ranges are
Number of LSA 7. Checksum Sum 0x046CFF
Number of opaque link LSA 0. Checksum Sum 0x000000 Number of DCbitless LSA 0
Number of indication LSA 0
Number of DoNotAge LSA 0
Flood list length 0
In the above output, notice that SPF algorithm was run twice. This means there was a change in
the network once after OSPF started.
Using show ip ospf interface command to verify and troubleshoot OSPF
One of the most important commands used to verify and troubleshoot OSPF is the show ip ospf
interface command. It can be used to see information of all interfaces participating in OSPF or
any specific interface. A sample output from RouterD is shown below:
RouterC#
OSPF: Send hello to 224.0.0.5 area 0 on Serial0/0 from 192.168.3.3
OSPF: Send hello to 224.0.0.5 area 0 on FastEthernet0/0 from 192.168.4.3
OSPF: Send hello to 224.0.0.5 area 2 on FastEthernet0/1 from 192.168.5.3
OSPF: Rcv hello from 192.168.6.5 area 2 from FastEthernet0/1 192.168.5.5
OSPF: End of hello processing
OSPF: Rcv hello from 192.168.3.2 area 0 from Serial0/0 192.168.3.2
OSPF: End of hello processing
RouterC#
OSPF: Rcv hello from 192.168.6.4 area 2 from FastEthernet0/1 192.168.5.4
OSPF: End of hello processing
RouterC#
OSPF: Send hello to 224.0.0.5 area 0 on Serial0/0 from 192.168.3.3
OSPF: Send hello to 224.0.0.5 area 0 on FastEthernet0/0 from 192.168.4.3
RouterC#
OSPF: Rcv hello from 192.168.3.2 area 0 from Serial0/0 192.168.3.2
OSPF: End of hello processing
OSPF: Send hello to 224.0.0.5 area 2 on FastEthernet0/1 from 192.168.5.3
OSPF: Rcv hello from 192.168.6.5 area 2 from FastEthernet0/1 192.168.5.5
OSPF: End of hello processing
RouterC#
OSPF: Rcv hello from 192.168.6.4 area 2 from FastEthernet0/1 192.168.5.4
OSPF: End of hello processing
In the above output you can see that RouterC is sending hello packets out all its interfaces and is
receiving hello packets back from all neighbors. If there is a problem with hello packets such as
an interval mismatch, the debug will show that error.
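If you suspect such a mismatch, the hello and dead intervals configured on each side of the link can be compared with the show ip ospf interface command, for example:
RouterC#show ip ospf interface fa0/1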
Using debug ip ospf adj to verify and troubleshoot OSPF
As mentioned earlier, adjacency formation is the most important part of OSPF operation and most
problems occur at that stage. The output from debug ip ospf adj helps identify problems related
to an adjacency. Since there are no adjacency related events in a stable network, I cleared the
ospf process on RouterB to generate the following output on RouterC:
RouterC#
OSPF: Rcv LS UPD from 192.168.3.2 on Serial0/0 length 76 LSA count 1
OSPF: Rcv LS UPD from 192.168.3.2 on Serial0/0 length 84 LSA count 2
OSPF: Cannot see ourself in hello from 192.168.3.2 on Serial0/0, state INIT
OSPF: 2 Way Communication to 192.168.3.2 on Serial0/0, state 2WAY
OSPF: Send DBD to 192.168.3.2 on Serial0/0 seq 0x685 opt 0x52 flag 0x7 len 32
OSPF: Rcv DBD from 192.168.3.2 on Serial0/0 seq 0x23C7 opt 0x52 flag 0x7 len 32 mtu 1500
state EXSTART
OSPF: First DBD and we are not SLAVE
OSPF: Rcv DBD from 192.168.3.2 on Serial0/0 seq 0x685 opt 0x52 flag 0x0 len 32 mtu 1500
state EXSTART
OSPF: NBR Negotiation Done. We are the MASTER
OSPF: Send DBD to 192.168.3.2 on Serial0/0 seq 0x686 opt 0x52 flag 0x3 len 112
OSPF: Rcv DBD from 192.168.3.2 on Serial0/0 seq 0x686 opt 0x52 flag 0x0 len 32 mtu 1500
state EXCHANGE
OSPF: Send DBD to 192.168.3.2 on Serial0/0 seq 0x687 opt 0x52 flag 0x1 len 32
OSPF: Rcv LS REQ from 192.168.3.2 on Serial0/0 length 72 LSA count 4
OSPF:
RouterC#Send UPD to 192.168.3.2 on Serial0/0 length 148 LSA count 4
OSPF: Rcv DBD from 192.168.3.2 on Serial0/0 seq 0x687 opt 0x52 flag 0x0 len 32 mtu 1500
state EXCHANGE
OSPF: Exchange Done with 192.168.3.2 on Serial0/0
OSPF: Synchronized with 192.168.3.2 on Serial0/0, state FULL
OSPF: Rcv LS UPD from 192.168.3.2 on Serial0/0 length 76 LSA count 1
RouterC#
RouterC#
OSPF: Rcv LS UPD from 192.168.3.2 on Serial0/0 length 56 LSA count 1
OSPF: Send UPD to 192.168.3.2 on Serial0/0 length 32 LSA count 1
OSPF: Rcv LS UPD from 192.168.3.2 on Serial0/0 length 56 LSA count 1
OSPF: Send UPD to 192.168.3.2 on Serial0/0 length 32 LSA count 1
OSPF: Rcv LS UPD from 192.168.3.2 on Serial0/0 length 76 LSA count 1
OSPF: Send UPD to 192.168.3.2 on Serial0/0 length 52 LSA count 1
RouterC#
OSPF: Rcv LS UPD from 192.168.3.2 on Serial0/0 length 84 LSA count 2
OSPF: Send UPD to 192.168.3.2 on Serial0/0 length 60 LSA count 2
OSPF: Rcv LS UPD from 192.168.3.2 on Serial0/0 length 76 LSA count 1
OSPF: Rcv LS UPD from 192.168.3.2 on Serial0/0 length 56 LSA count 1
OSPF: Rcv LS UPD from 192.168.3.2 on Serial0/0 length 56 LSA count 1
The above output shows the transition of the adjacency through various stages – 2WAY,
EXSTART, EXCHANGE and finally FULL. While these states are out of the scope of the CCNA exam, the output gives you an idea of how an adjacency is formed. In the EXCHANGE state, the
database is synchronized and then the FULL state is reached. Any problems during any of these
states will be seen in this debug.
EIGRP & OSPF Summary and Redistribution Routes
As you know from Chapter 2, summary routes can be used to group together various contiguous
networks into a single route. This is useful for reducing the size of the routing tables in the network.
For example, in the network shown in Figure 5-10, if summarization is disabled and all routers
are running EIGRP in the same AS, RouterA will have 8 EIGRP routes in its routing table.
Figure 5-10 EIGRP Summarization
The routing table of RouterA, with summarization disabled is shown below:
RouterA#sho ip route
–output truncated–
Gateway of last resort is not set
192.168.10.0/27 is subnetted, 8 subnets
D 192.168.10.96 [90/435200] via 10.0.0.2, 00:00:09, FastEthernet0/0
D 192.168.10.64 [90/307200] via 10.0.0.2, 00:00:43, FastEthernet0/0
D 192.168.10.32 [90/409600] via 10.0.0.2, 00:00:43, FastEthernet0/0
D 192.168.10.0 [90/409600] via 10.0.0.2, 00:00:43, FastEthernet0/0
D 192.168.10.224 [90/460800] via 10.0.0.2, 00:00:04, FastEthernet0/0
D 192.168.10.192 [90/460800] via 10.0.0.2, 00:00:04, FastEthernet0/0
D 192.168.10.160 [90/332800] via 10.0.0.2, 00:00:09, FastEthernet0/0
D 192.168.10.128 [90/435200] via 10.0.0.2, 00:00:09, FastEthernet0/0
C 10.0.0.0/8 is directly connected, FastEthernet0/0
You may have noticed that all these 8 networks are contiguous networks and can be summarized
into a single 192.168.10.0/24 route. In this section you will learn to configure summarization on
EIGRP and OSPF.
When configuring summarization on EIGRP, remember that by default EIGRP summarizes on
classful network boundaries. In the network shown above, EIGRP would have summarized the 192.168.10.x networks when advertising the routes from RouterB to RouterA because the 10.0.0.0/8 network falls between them. Before configuring manual summarization, you should disable automatic summarization using the no auto-summary command under the routing protocol configuration as shown below:
RouterB(config)#router eigrp 10
RouterB(config-router)#no auto-summary
Summarization is configured on a per-interface basis. EIGRP will summarize the routes when
advertising out the interface. The following command is used to configure summarization on the
interface:
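A minimal sketch of that command is shown below. It assumes RouterB reaches RouterA through its fa0/0 interface, so adjust the interface name to match your topology; the ip summary-address eigrp command takes the AS number followed by the summary address and mask:
RouterB(config)#interface fa0/0
RouterB(config-if)#ip summary-address eigrp 10 192.168.10.0 255.255.255.0
With this in place, RouterA receives only the single summary route, as its routing table below shows: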
RouterA#show ip route
–output truncated–
Gateway of last resort is not set
D 192.168.10.0/24 [90/409600] via 10.0.0.2, 00:00:10, FastEthernet0/0
C 10.0.0.0/8 is directly connected, FastEthernet0/0
We can also summarize the networks into two /25 networks using the following commands:
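These commands are sketched below under the same assumption about RouterB's outgoing interface; the earlier /24 summary is removed first:
RouterB(config)#interface fa0/0
RouterB(config-if)#no ip summary-address eigrp 10 192.168.10.0 255.255.255.0
RouterB(config-if)#ip summary-address eigrp 10 192.168.10.0 255.255.255.128
RouterB(config-if)#ip summary-address eigrp 10 192.168.10.128 255.255.255.128
The routing table on RouterA then shows the two /25 routes: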
RouterA#show ip route
–output truncated–
Gateway of last resort is not set
192.168.10.0/25 is subnetted, 2 subnets
D 192.168.10.0 [90/409600] via 10.0.0.2, 00:00:11, FastEthernet0/0
D 192.168.10.128 [90/332800] via 10.0.0.2, 00:00:04, FastEthernet0/0
C 10.0.0.0/8 is directly connected, FastEthernet0/0
If the network shown in Figure 5-10 was running OSPF and was divided into areas as shown in
Figure 5-11, you can configure area 1 to send a summary route to area 0.
Figure 5-11 OSPF Summarization
Remember that only an ABR can summarize a route, so you will need to configure summarization
on RouterB using the following command in the OSPF configuration:
RouterB(config)#router ospf 1
RouterB(config-router)#area 1 range 192.168.10.0 255.255.255.0
Since OSPF does not summarize automatically, you do not need the no auto-summary command
here.
Redistributing Routes
In Chapter 4, you were introduced to redistribution. The CCNA exam requires you to have a keen
knowledge of redistribution. In particular you are required to know how to redistribute routes in
RIP and this section looks at that. For this section, the network shown in Figure 5-12 will be
used.
In the network shown in Figure 5-12, RIPv2 is running on RouterA and RouterB while EIGRP is
running on RouterB and RouterC. RouterA has no route towards the 172.16.0.0/16 network that is
being advertised by EIGRP.
Figure 5-12 Redistributing Routes
As you know from Chapter 4, while redistributing routes into a protocol, metric compatibility must be ensured. In this case, routes to 172.16.0.0/16 will have EIGRP metrics and
those tend to be large numbers. On the other hand, anything above 15 is an invalid metric for RIP.
To overcome this, RIP must be told what metric to assign to the routes redistributed from EIGRP.
To redistribute the routes, the redistribute protocol [process-id] metric metric command is used
in the routing protocol configuration mode. To redistribute EIGRP routes into RIP in the given
network, the following commands are required on RouterB:
RouterB(config)#router rip
RouterB(config-router)#redistribute eigrp 10 metric 2
The above command will cause RouterB to redistribute routes to the 192.168.2.0/24 and
172.16.0.0/16 networks into RIP. RIP in turn will advertise these routes to RouterA with a metric
of 2. The routing table of RouterA, after redistribution will look as shown below:
RouterA#sh ip route
–output truncated–
Gateway of last resort is not set
R 172.16.0.0/16 [120/2] via 192.168.1.2, 00:00:21, Serial0/0
C 10.0.0.0/8 is directly connected, FastEthernet0/0
C 192.168.1.0/24 is directly connected, Serial0/0
R 192.168.2.0/24 [120/2] via 192.168.1.2, 00:00:21, Serial0/0
In the above output notice that both the routes are learned from RIP and have a metric of 2.
Lab 5-1: RIP
You have been tasked with configuring the network shown in Figure 5-13 using RIPv2 such that:
1. All interfaces on each router are advertised in RIP
2. RouterC should not learn routes from the rest of the network. It should use a default route to
reach remote networks. All routers should learn the 172.20.0.0/16 network using RIP
3. All interfaces that do not connect to another router should not advertise RIP routes.
4. Remember that the DCE side of your DTE/DCE back to back cable should be connected to the
interface configured with clock rate.
Figure 5-13 Network Setup for Lab 5-1
The initial configuration for all routers is shown below:
RouterA
Router(config)#hostname RouterA
RouterA(config)#int fa0/0
RouterA(config-if)#ip address 172.16.0.1 255.255.0.0
RouterA(config-if)#no shut
RouterA(config-if)#exit
RouterA(config)#int s0/0
RouterA(config-if)#ip address 192.168.1.1 255.255.255.252
RouterA(config-if)#no shut
RouterA(config-if)#exit
RouterA(config)#int s0/1
RouterA(config-if)#ip address 192.168.1.9 255.255.255.252
RouterA(config-if)#no shut
RouterA(config-if)#exit
RouterB
Router(config)#hostname RouterB
RouterB(config)#int fa0/0
RouterB(config-if)#ip address 192.168.1.5 255.255.255.252
RouterB(config-if)#no shut
RouterB(config-if)#exit
RouterB(config)#int s0/0
RouterB(config-if)#ip add 192.168.1.2 255.255.255.252
RouterB(config-if)#clock rate 2000000
RouterB(config-if)#no shut
RouterB(config-if)#exit
RouterB(config)#int s0/1
RouterB(config-if)#ip address 192.168.1.13 255.255.255.252
RouterB(config-if)#clock rate 2000000
RouterB(config-if)#no shut
RouterB(config-if)#exit
RouterB(config)#int fa0/1
RouterB(config-if)#ip address 172.18.0.2 255.255.0.0
RouterB(config-if)#no shut
RouterB(config-if)#exit
RouterC
Router(config)#hostname RouterC
RouterC(config)#int fa0/0
RouterC(config-if)#ip address 192.168.1.6 255.255.255.252
RouterC(config-if)#no shut
RouterC(config-if)#exit
RouterC(config)#int s0/0
RouterC(config-if)#ip address 172.20.0.3 255.255.0.0
RouterC(config-if)#no shut
RouterC(config-if)#exit
RouterD
Router(config)#hostname RouterD
RouterD(config)#int fa0/0
RouterD(config-if)#ip address 172.17.0.4 255.255.0.0
RouterD(config-if)#no shut
RouterD(config-if)#exit
RouterD(config)#int s0/0
RouterD(config-if)#clock rate 2000000
RouterD(config-if)#ip address 192.168.1.17 255.255.255.252
RouterD(config-if)#no shut
RouterD(config-if)#exit
RouterD(config)#int s0/1
RouterD(config-if)#ip address 192.168.1.10 255.255.255.252
RouterD(config-if)#clock rate 2000000
RouterD(config-if)#no shut
RouterD(config-if)#exit
RouterE
Router(config)#hostname RouterE
RouterE(config)#int fa0/0
RouterE(config-if)#ip address 172.19.0.5 255.255.0.0
RouterE(config-if)#no shut
RouterE(config-if)#exit
RouterE(config)#int s0/0
RouterE(config-if)#ip address 192.168.1.18 255.255.255.252
RouterE(config-if)#no shut
RouterE(config-if)#exit
RouterE(config)#int s0/1
RouterE(config-if)#ip address 192.168.1.14 255.255.255.252
RouterE(config-if)#no shut
RouterE(config-if)#exit
Solution
First, each interface on each router needs to be advertised in RIP and version 2 has to be enabled:
RouterA(config)#router rip
RouterA(config-router)#version 2
RouterA(config-router)#network 192.168.1.0
RouterA(config-router)#network 172.16.0.0
RouterA(config-router)#end
RouterB(config)#router rip
RouterB(config-router)#version 2
RouterB(config-router)#network 192.168.1.0
RouterB(config-router)#network 172.18.0.0
RouterB(config-router)#end
RouterC(config)#router rip
RouterC(config-router)#version 2
RouterC(config-router)#network 192.168.1.0
RouterC(config-router)#network 172.20.0.0
RouterC(config-router)#end
RouterD(config)#router rip
RouterD(config-router)#version 2
RouterD(config-router)#network 192.168.1.0
RouterD(config-router)#network 172.17.0.0
RouterD(config-router)#end
RouterE(config)#router rip
RouterE(config-router)#version 2
RouterE(config-router)#network 192.168.1.0
RouterE(config-router)#network 172.19.0.0
RouterE(config-router)#end
The second item in the list states that RouterC should not learn any routes from the rest of the
network, while the rest of the network should learn routes originated by it. RouterC also needs to
have a default route to the rest of the network. To achieve this, RouterB’s f0/0 interface must be
made passive so that it does not advertise the routes out this interface to RouterC while it still
learns the routes advertised by RouterC. The configuration required is shown below:
RouterB(config)#router rip
RouterB(config-router)#passive-interface fa0/0
RouterC(config)#ip route 0.0.0.0 0.0.0.0 192.168.1.5
The final item in the list states that routes should not be advertised out interfaces that do not
connect to another router. This requires some interfaces on all routers to be passive:
RouterA(config)#router rip
RouterA(config-router)#passive-interface fa0/0
RouterB(config)#router rip
RouterB(config-router)#passive-interface fa0/1
RouterC(config)#router rip
RouterC(config-router)#passive-interface s0/0
RouterD(config)#router rip
RouterD(config-router)#passive-interface fa0/0
RouterE(config)#router rip
RouterE(config-router)#passive-interface fa0/0
Verification
To verify the solution, first check the routing table on each router. The routing table should
resemble the output shown below:
RouterA#sh ip route
–output truncated—
Gateway of last resort is not set
R 172.17.0.0/16 [120/1] via 192.168.1.10, 00:00:10, Serial0/1
C 172.16.0.0/16 is directly connected, FastEthernet0/0
R 172.19.0.0/16 [120/2] via 192.168.1.10, 00:00:10, Serial0/1
[120/2] via 192.168.1.2, 00:00:01, Serial0/0
R 172.18.0.0/16 [120/1] via 192.168.1.2, 00:00:01, Serial0/0
R 172.20.0.0/16 [120/2] via 192.168.1.2, 00:00:01, Serial0/0
192.168.1.0/30 is subnetted, 5 subnets
C 192.168.1.8 is directly connected, Serial0/1
R 192.168.1.12 [120/1] via 192.168.1.2, 00:00:01, Serial0/0
C 192.168.1.0 is directly connected, Serial0/0
R 192.168.1.4 [120/1] via 192.168.1.2, 00:00:01, Serial0/0
R 192.168.1.16 [120/1] via 192.168.1.10, 00:00:10, Serial0/1
RouterB#sh ip route
–output truncated–
Gateway of last resort is not set
R 172.17.0.0/16 [120/2] via 192.168.1.14, 00:00:04, Serial0/1
[120/2] via 192.168.1.1, 00:00:02, Serial0/0
R 172.16.0.0/16 [120/1] via 192.168.1.1, 00:00:02, Serial0/0
R 172.19.0.0/16 [120/1] via 192.168.1.14, 00:00:04, Serial0/1
C 172.18.0.0/16 is directly connected, FastEthernet0/1
R 172.20.0.0/16 [120/1] via 192.168.1.6, 00:00:25, FastEthernet0/0
192.168.1.0/30 is subnetted, 5 subnets
R 192.168.1.8 [120/1] via 192.168.1.1, 00:00:02, Serial0/0
C 192.168.1.12 is directly connected, Serial0/1
C 192.168.1.0 is directly connected, Serial0/0
C 192.168.1.4 is directly connected, FastEthernet0/0
R 192.168.1.16 [120/1] via 192.168.1.14, 00:00:04, Serial0/1
RouterC#sh ip route
–output truncated–
Gateway of last resort is 192.168.1.5 to network 0.0.0.0
C 172.20.0.0/16 is directly connected, Serial0/0
192.168.1.0/30 is subnetted, 1 subnets
C 192.168.1.4 is directly connected, FastEthernet0/0
S* 0.0.0.0/0 [1/0] via 192.168.1.5
RouterD#sh ip route
–output truncated—
Gateway of last resort is not set
C 172.17.0.0/16 is directly connected, FastEthernet0/0
R 172.16.0.0/16 [120/1] via 192.168.1.9, 00:00:24, Serial0/1
R 172.19.0.0/16 [120/1] via 192.168.1.18, 00:00:01, Serial0/0
R 172.18.0.0/16 [120/2] via 192.168.1.18, 00:00:01, Serial0/0
[120/2] via 192.168.1.9, 00:00:24, Serial0/1
R 172.20.0.0/16 [120/3] via 192.168.1.18, 00:00:01, Serial0/0
[120/3] via 192.168.1.9, 00:00:24, Serial0/1
192.168.1.0/30 is subnetted, 5 subnets
C 192.168.1.8 is directly connected, Serial0/1
R 192.168.1.12 [120/1] via 192.168.1.18, 00:00:01, Serial0/0
R 192.168.1.0 [120/1] via 192.168.1.9, 00:00:24, Serial0/1
R 192.168.1.4 [120/2] via 192.168.1.18, 00:00:01, Serial0/0
[120/2] via 192.168.1.9, 00:00:24, Serial0/1
C 192.168.1.16 is directly connected, Serial0/0
RouterE#sh ip route
–output truncated–
Gateway of last resort is not set
R 172.17.0.0/16 [120/1] via 192.168.1.17, 00:00:09, Serial0/0
R 172.16.0.0/16 [120/2] via 192.168.1.17, 00:00:09, Serial0/0
[120/2] via 192.168.1.13, 00:00:19, Serial0/1
C 172.19.0.0/16 is directly connected, FastEthernet0/0
R 172.18.0.0/16 [120/1] via 192.168.1.13, 00:00:19, Serial0/1
R 172.20.0.0/16 [120/2] via 192.168.1.13, 00:00:19, Serial0/1
192.168.1.0/30 is subnetted, 5 subnets
R 192.168.1.8 [120/1] via 192.168.1.17, 00:00:09, Serial0/0
C 192.168.1.12 is directly connected, Serial0/1
R 192.168.1.0 [120/1] via 192.168.1.13, 00:00:19, Serial0/1
R 192.168.1.4 [120/1] via 192.168.1.13, 00:00:19, Serial0/1
C 192.168.1.16 is directly connected, Serial0/0
In the above outputs notice that RouterC does not have any RIP routes but all other routers know
network 172.20.0.0/16.
A final verification can be done by sending a ping to 172.20.0.3 (interface s0/0 of RouterC) from
RouterD as shown below:
RouterD#ping 172.20.0.3
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 172.20.0.3, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 8/8/8 ms
A successful ping shows that routing is working perfectly in the network.
Lab 5-2: EIGRP
You are tasked with configuring the network shown in Figure 5-14 using EIGRP such that:
1. Each interface on every router is advertised in EIGRP AS 1
2. Traffic from 172.17.0.0/16 network destined to 172.20.0.0/16 should take the RouterD-
>RouterA->RouterB->RouterC path. If that path is not accessible, the traffic should be routed via
RouterE.
3. Ensure that EIGRP can support discontiguous networks.
4. RouterD should have only a summary route for all 192.168.1.x networks not directly connected
to it.
5. Remember that the DCE side of your DTE/DCE back to back cable should be connected to the
interface configured with clock rate.
Figure 5-14 Network for Lab 5-2
The initial configuration for all routers is shown below:
RouterA
Router(config)#hostname RouterA
RouterA(config)#int fa0/0
RouterA(config-if)#ip address 172.16.0.1 255.255.0.0
RouterA(config-if)#no shut
RouterA(config-if)#exit
RouterA(config)#int s0/0
RouterA(config-if)#ip address 192.168.1.1 255.255.255.252
RouterA(config-if)#no shut
RouterA(config-if)#exit
RouterA(config)#int s0/1
RouterA(config-if)#ip address 192.168.1.9 255.255.255.252
RouterA(config-if)#no shut
RouterA(config-if)#exit
RouterB
Router(config)#hostname RouterB
RouterB(config)#int fa0/0
RouterB(config-if)#ip address 192.168.1.5 255.255.255.252
RouterB(config-if)#no shut
RouterB(config-if)#exit
RouterB(config)#int s0/0
RouterB(config-if)#ip add 192.168.1.2 255.255.255.252
RouterB(config-if)#clock rate 2000000
RouterB(config-if)#no shut
RouterB(config-if)#exit
RouterB(config)#int s0/1
RouterB(config-if)#ip address 192.168.1.13 255.255.255.252
RouterB(config-if)#clock rate 2000000
RouterB(config-if)#no shut
RouterB(config-if)#exit
RouterB(config)#int fa0/1
RouterB(config-if)#ip address 172.18.0.2 255.255.0.0
RouterB(config-if)#no shut
RouterB(config-if)#exit
RouterC
Router(config)#hostname RouterC
RouterC(config)#int fa0/0
RouterC(config-if)#ip address 192.168.1.6 255.255.255.252
RouterC(config-if)#no shut
RouterC(config-if)#exit
RouterC(config)#int s0/0
RouterC(config-if)#ip address 172.20.0.3 255.255.0.0
RouterC(config-if)#no shut
RouterC(config-if)#exit
RouterD
Router(config)#hostname RouterD
RouterD(config)#int fa0/0
RouterD(config-if)#ip address 172.17.0.4 255.255.0.0
RouterD(config-if)#no shut
RouterD(config-if)#exit
RouterD(config)#int s0/0
RouterD(config-if)#clock rate 2000000
RouterD(config-if)#ip address 192.168.1.17 255.255.255.252
RouterD(config-if)#no shut
RouterD(config-if)#exit
RouterD(config)#int s0/1
RouterD(config-if)#clock rate 2000000
RouterD(config-if)#ip address 192.168.1.10 255.255.255.252
RouterD(config-if)#no shut
RouterD(config-if)#exit
RouterE
Router(config)#hostname RouterE
RouterE(config)#int fa0/0
RouterE(config-if)#ip address 172.19.0.5 255.255.0.0
RouterE(config-if)#no shut
RouterE(config-if)#exit
RouterE(config)#int s0/0
RouterE(config-if)#ip address 192.168.1.18 255.255.255.252
RouterE(config-if)#no shut
RouterE(config-if)#exit
RouterE(config)#int s0/1
RouterE(config-if)#ip address 192.168.1.14 255.255.255.252
RouterE(config-if)#no shut
RouterE(config-if)#exit
Solution
The first item requires configuring EIGRP on all routers and advertising all interfaces as shown
below:
RouterA(config)#router eigrp 1
RouterA(config-router)#network 192.168.1.0
RouterA(config-router)#network 172.16.0.0
RouterA(config-router)#end
RouterB(config)#router eigrp 1
RouterB(config-router)#network 192.168.1.0
RouterB(config-router)#network 172.18.0.0
RouterB(config-router)#end
RouterC(config)#router eigrp 1
RouterC(config-router)#network 192.168.1.0
RouterC(config-router)#network 172.20.0.0
RouterC(config-router)#end
RouterD(config)#router eigrp 1
RouterD(config-router)#network 192.168.1.0
RouterD(config-router)#network 172.17.0.0
RouterD(config-router)#end
RouterE(config)#router eigrp 1
RouterE(config-router)#network 192.168.1.0
RouterE(config-router)#network 172.19.0.0
RouterE(config-router)#end
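Before moving to the next requirement, it is worth confirming that the EIGRP adjacencies have
actually formed. A quick check on RouterA is sketched below (the other routers can be verified
the same way):
RouterA#show ip eigrp neighbors
RouterA#show ip protocols
RouterA should list RouterB (192.168.1.2) and RouterD (192.168.1.10) as neighbors, and show ip
protocols confirms the networks being advertised in AS 1.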
The second item in the list requires that traffic from RouterD is not load balanced across
RouterA and RouterE. By default both paths have an equal metric, so the metric must be adjusted
to stop EIGRP from load balancing. Here the bandwidth on RouterB's serial link toward RouterE is
lowered, making the path through RouterE less attractive:
RouterB(config)#int s0/1
RouterB(config-if)#bandwidth 1000
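To confirm that RouterD now prefers the path through RouterA, you can inspect the EIGRP topology
entry for the destination network; a quick check is sketched below:
RouterD#show ip eigrp topology 172.20.0.0 255.255.0.0
After the metric change, the path via 192.168.1.9 (RouterA) should be the only successor; the
path through RouterE still appears in the topology table, but with a higher metric.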
The next item in the list requires auto-summarization to be disabled on all routers so that EIGRP
can support the discontiguous networks, as shown below:
RouterA(config)#router eigrp 1
RouterA(config-router)#no auto-summary
RouterA(config-router)#end
RouterB(config)#router eigrp 1
RouterB(config-router)#no auto-summary
RouterB(config-router)#end
RouterC(config)#router eigrp 1
RouterC(config-router)#no auto-summary
RouterC(config-router)#end
RouterD(config)#router eigrp 1
RouterD(config-router)#no auto-summary
RouterD(config-router)#end
RouterE(config)#router eigrp 1
RouterE(config-router)#no auto-summary
RouterE(config-router)#end
The final item requires you to configure summarization on RouterA and RouterE. Summary routes
must fall on binary block boundaries, so a block of 32 addresses (a /27 mask, 255.255.255.224) is
needed to cover all of the 192.168.1.x/30 subnets (.0, .4, .8, .12 and .16), as shown below:
RouterA(config)#int s0/1
RouterA(config-if)#ip summary-address eigrp 1 192.168.1.0 255.255.255.224
RouterE(config)#int s0/0
RouterE(config-if)#ip summary-address eigrp 1 192.168.1.0 255.255.255.224
Verification
To verify the solution, first check the routing table on all routers. They should look similar to the
ones shown below:
RouterA#sh ip route
–output truncated–
Gateway of last resort is not set
D 172.17.0.0/16 [90/2195456] via 192.168.1.10, 00:07:51, Serial0/1
C 172.16.0.0/16 is directly connected, FastEthernet0/0
D 172.19.0.0/16 [90/2707456] via 192.168.1.10, 00:05:34, Serial0/1
[90/2707456] via 192.168.1.2, 00:05:34, Serial0/0
D 172.18.0.0/16 [90/2195456] via 192.168.1.2, 00:08:34, Serial0/0
D 172.20.0.0/16 [90/2707456] via 192.168.1.2, 00:08:16, Serial0/0
192.168.1.0/24 is variably subnetted, 6 subnets, 2 masks
C 192.168.1.8/30 is directly connected, Serial0/1
D 192.168.1.12/30 [90/2681856] via 192.168.1.2, 00:02:31, Serial0/0
C 192.168.1.0/30 is directly connected, Serial0/0
D 192.168.1.0/27 is a summary, 00:02:31, Null0
D 192.168.1.4/30 [90/2195456] via 192.168.1.2, 00:02:31, Serial0/0
D 192.168.1.16/30 [90/2681856] via 192.168.1.10, 00:05:34, Serial0/1
RouterB#sh ip route
–output truncated–
Gateway of last resort is not set
D 172.17.0.0/16 [90/2707456] via 192.168.1.14, 00:05:56, Serial0/1
[90/2707456] via 192.168.1.1, 00:05:56, Serial0/0
D 172.16.0.0/16 [90/2195456] via 192.168.1.1, 00:05:56, Serial0/0
D 172.19.0.0/16 [90/2195456] via 192.168.1.14, 00:05:56, Serial0/1
C 172.18.0.0/16 is directly connected, FastEthernet0/1
D 172.20.0.0/16 [90/2195456] via 192.168.1.6, 00:05:56, FastEthernet0/0
192.168.1.0/30 is subnetted, 5 subnets
D 192.168.1.8 [90/2681856] via 192.168.1.1, 00:05:56, Serial0/0
C 192.168.1.12 is directly connected, Serial0/1
C 192.168.1.0 is directly connected, Serial0/0
C 192.168.1.4 is directly connected, FastEthernet0/0
D 192.168.1.16 [90/2681856] via 192.168.1.14, 00:05:56, Serial0/1
RouterC#sh ip route
–output truncated–
Gateway of last resort is not set
D 172.17.0.0/16 [90/2733056] via 192.168.1.5, 00:08:38, FastEthernet0/0
D 172.16.0.0/16 [90/2221056] via 192.168.1.5, 00:09:07, FastEthernet0/0
D 172.19.0.0/16 [90/2221056] via 192.168.1.5, 00:06:21, FastEthernet0/0
D 172.18.0.0/16 [90/307200] via 192.168.1.5, 00:09:07, FastEthernet0/0
C 172.20.0.0/16 is directly connected, Serial0/0
192.168.1.0/30 is subnetted, 5 subnets
D 192.168.1.8 [90/2707456] via 192.168.1.5, 00:09:07, FastEthernet0/0
D 192.168.1.12 [90/2195456] via 192.168.1.5, 00:09:07, FastEthernet0/0
D 192.168.1.0 [90/2195456] via 192.168.1.5, 00:09:07, FastEthernet0/0
C 192.168.1.4 is directly connected, FastEthernet0/0
D 192.168.1.16 [90/2707456] via 192.168.1.5, 00:06:21, FastEthernet0/0
RouterD#sh ip route
–output truncated–
Gateway of last resort is not set
C 172.17.0.0/16 is directly connected, FastEthernet0/0
D 172.16.0.0/16 [90/2195456] via 192.168.1.9, 00:09:00, Serial0/1
D 172.19.0.0/16 [90/2195456] via 192.168.1.18, 00:06:41, Serial0/0
D 172.18.0.0/16 [90/2707456] via 192.168.1.9, 00:06:39, Serial0/1
D 172.20.0.0/16 [90/3219456] via 192.168.1.9, 00:06:39, Serial0/1
192.168.1.0/24 is variably subnetted, 3 subnets, 2 masks
C 192.168.1.8/30 is directly connected, Serial0/1
D 192.168.1.0/27 [90/2681856] via 192.168.1.18, 00:03:20, Serial0/0
[90/2681856] via 192.168.1.9, 00:03:20, Serial0/1
C 192.168.1.16/30 is directly connected, Serial0/0
RouterE#sh ip route
–output truncated–
Gateway of last resort is not set
D 172.17.0.0/16 [90/2195456] via 192.168.1.17, 00:07:06, Serial0/0
D 172.16.0.0/16 [90/2707456] via 192.168.1.17, 00:07:04, Serial0/0
C 172.19.0.0/16 is directly connected, FastEthernet0/0
D 172.18.0.0/16 [90/3097600] via 192.168.1.13, 00:07:04, Serial0/1
D 172.20.0.0/16 [90/3609600] via 192.168.1.13, 00:07:04, Serial0/1
192.168.1.0/24 is variably subnetted, 6 subnets, 2 masks
D 192.168.1.8/30 [90/2681856] via 192.168.1.17, 00:07:04, Serial0/0
C 192.168.1.12/30 is directly connected, Serial0/1
D 192.168.1.0/30 [90/3584000] via 192.168.1.13, 00:04:01, Serial0/1
D 192.168.1.0/27 is a summary, 00:03:45, Null0
D 192.168.1.4/30 [90/3097600] via 192.168.1.13, 00:07:04, Serial0/1
C 192.168.1.16/30 is directly connected, Serial0/0
In the routing table of RouterD, notice that it has only a single route to the 172.20.0.0/16
network, whose next hop is RouterA (192.168.1.9). Also notice the single 192.168.1.0/27 summary
route covering all the 192.168.1.x networks that are not directly connected to it.
Finally, ping 172.20.0.3 (interface s0/0 of RouterC) from RouterD to verify that routing is working correctly:
RouterD#ping 172.20.0.3
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 172.20.0.3, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 4/8/12 ms
Summary
Phew! This was a big chapter, and with good reason too! As I mentioned earlier in the book, the
CCNA certification is mostly about the network and data-link layers. This chapter covered the
most important aspect of the network layer – routing protocols. Here you got to know what binds
a network together.
In this chapter you were introduced to all three routing protocols – RIP, EIGRP and OSPF. You
learned how each one operates and how to configure it, as well as the differences in how these
protocols work. I cannot stress enough the importance of this chapter. I strongly suggest
re-reading the chapter and practicing configuring, verifying and troubleshooting these protocols
before moving on, because the next two chapters look at the data-link layer, which is very
different from the network layer.
Chapter 6: Switching & Spanning Tree Protocol (STP)
In the first chapter, you were introduced to a bit of switching and switches. You already know
that layer 2 of the OSI model deals with switching frames in the local network and that switches
work at this layer. You also know that switches break up collision domains to provide a faster,
collision-free network. In this chapter, we take a deeper look at how switches work.
From Chapter 4, you will remember that routing protocols are prone to loops. Similarly,
redundant links in layer 2 can cause loops. By now you are aware that loops of any kind are bad.
Hence, the Spanning Tree Protocol (STP) was developed to keep layer 2 networks loop free. STP
is discussed in depth in this chapter.
6-1 Understanding Switching and Switches
6-2 Initial Configuration of a Catalyst Switch
6-3 Spanning Tree Protocol (STP)
6-4 Cisco’s additions to STP (Portfast, BPDUGuard, BPDUFilter, UplinkFast,
BackboneFast)
6-5 Rapid Spanning Tree Protocol (RSTP) – 802.1w
6-6 Per-VLAN Spanning Tree Plus (PVST+) and Per-VLAN RSTP (Rapid-PVST)
6-7 EtherChannel
6-8 Lab 6-1 – Port Security
6-9 Lab 6-2 – STP
6-10 Summary
Understanding Switches & Switching
History of Switching
To understand the importance of present-day switches, you need to understand how networks used
to work before switches were invented. During the mid-to-late 1980s, 10Base2 Ethernet was the
dominant 10 Mbps Ethernet standard. This standard used thin coaxial cable with a maximum segment
length of 185 meters and a maximum of 30 hosts connected to the cable. Hosts were connected to
the cable using a T-connector. Most of the hosts at that time were either dumb terminals or early
PCs that connected to a mainframe for accessing services.
When Novell became very popular in the late 80s and early 90s, NetWare servers replaced the
then popular OS/2 and LAN Manager servers. This made Ethernet more popular, because Novell
servers used it to communicate between clients and the server. The increasing dependence on
Ethernet, and the fact that 10Base2 technology was costly and slow, led to rapid development of
Ethernet. Hubs were added to networks so that the 10BaseT standard could be used, with one host
and one hub port connected on each cable. This led to collapsed backbone networks such as the one
shown in Figure 6-1.
As you already know, networks made up only of hubs form one large collision domain and suffer
from excessive collisions and broadcast traffic, becoming slow and sluggish. The networks of the
late 80s and early 90s suffered from exactly this problem. Meanwhile, the dependence on networks
and the services available on them grew rapidly. The corporate network became huge and very slow,
since most of the services were available on it and remote offices depended on these services.
Segmenting the networks and increasing their bandwidth became a priority. With the introduction
of devices called bridges, some segmentation was introduced. Bridges broke up collision domains
but were limited by the number of ports available and by the fact that they could not do much
apart from breaking up the collision domains.
Figure 6-1 Collapsed Backbone Network
To overcome the limitations of bridges, switches were invented. Switches were multiport bridges
that broke the collision domains on each port and could provide many more services than bridges.
The problem with early switches was that they were very costly, which made it impractical to
connect each individual host to a switch port. So after the introduction of switches, networks
came to look like the one shown in Figure 6-2.
Figure 6-2 Early Switched Networks
In these networks, each hub was connected to a switch port. This change increased network
performance greatly, since each hub now had its own collision domain instead of the entire local
network being a single collision domain. Such networks, though vastly better than what existed
earlier, still forced hosts connected to hubs to share a collision domain. This prevented the
networks from reaching their full potential. With the drop in the price of switches, this final
barrier was also brought down. Cheaper switches meant that each host could finally be connected
to its own switch port, providing a separate collision domain for each host. Networks came to
look like the one shown in Figure 6-3. Such networks had practically no collisions.
Figure 6-3 Switched networks
Overall, using switches to segment networks and provide connectivity to hosts results in a very
fast and efficient network, with each host getting the full bandwidth.
While switches increase the efficiency of the network, they still have the limitations discussed
below:
1. While switches break collision domains, they do not break broadcast domains. The entire
layer 2 network still remains a single broadcast domain. This makes the network
susceptible to broadcast storms and related problems. Routers have to be used to break
the broadcast domains.
2. When redundancy is introduced in the switched network, the possibility of loops becomes
very high. Dedicated protocols need to be run to ensure that the network remains loop
free. This increases burden on the switches. The convergence time of these protocols is
also a concern since the network will not be useable during convergence.
Due to the above limitations, routers cannot be eliminated from the network. To design a good
switched or bridged network, the following two important points must be considered:
1. Collision domains should be broken up as much as possible.
2. Users should spend 80 percent of their time on the local segment.
Bridging vs switching
While switches are just multiport bridges, there are many differences between them:
1. Bridges are software based while switches are hardware based since they use ASICs for
building and maintaining their tables.
2. Switches have a higher number of ports than bridges.
3. Bridges have a single spanning tree instance while switches can have multiple instances.
(Spanning tree will be covered later in the chapter.)
While different in some aspects, switches and bridges share the following characteristics:
1. Both look at the hardware (MAC) address of the frame to make a forwarding decision.
2. Both learn MAC address from frames received.
3. Both forward layer two broadcasts.
Three functions of a switch
A switch at layer 2 has the following three distinct functions:
1. Learning MAC addresses
2. Filtering and forwarding frames
3. Preventing loops on the network
It is important to understand and remember each of these three functions. The following sections
explain these three functions in depth.
Learning MAC Addresses
When a switch is first powered up it is not aware of the location of any host on the network. In a
very short time, as hosts transmit data to other hosts, it learns the MAC address from the received
frame and remembers which hosts are connected to which port.
If the switch receives a frame destined to an unknown address, it will flood the frame out of
every port except the port on which it was received. When the destination host replies, the
switch adds that host's address and source port to its table. When another frame destined to this
address is received, the switch no longer needs to flood it, since it already knows which port
the destination address is on.
You can see how switches differ from hubs. A hub will never remember which hosts are
connected to which ports and will always flood traffic out of each and every port.
The table in which the addresses are stored is known as CAM (Content-addressable memory)
table. To further understand how the switch populates the CAM table, consider the following
example:
1. A switch boots up and has an empty CAM table.
2. HostA with a MAC address of a3bc.4a59.4109 sends a frame to HostB whose address is
a3bc.4a59.4001.
3. The switch receives the frame on interface fa0/1 and saves the MAC address of HostA
(a3bc.4a59.4109) in its CAM table and associates it with interface fa0/1.
4. Since the destination address is not known, the switch will flood the frame out all
interfaces except fa0/1.
5. HostB receives the frame and replies back.
6. The switch receives the reply on interface fa0/2 and saves the MAC address of HostB
(a3bc.4a59.4001) in its CAM table and associates it with interface fa0/2.
7. The switch forwards the frame out interface fa0/1 since the destination MAC address
(a3bc.4a59.4109) is present in the CAM table and associated with interface fa0/1.
8. HostA replies back to HostB.
9. The switch receives the frame and forwards it out interface fa0/2 because it has the
destination MAC address (a3bc.4a59.4001) associated with interface fa0/2 in the CAM
table.
The above exchange is illustrated in Figure 6-5.
Figure 6-5 Switch learning MAC addresses
The switch will store a MAC address in the CAM table for a limited amount of time. If no
traffic is heard from that address for a predefined period of time, the entry is purged from
memory. This frees up memory space on the switch and also prevents entries from becoming
out of date and inaccurate. This time is known as the MAC address aging time. On the Cisco 2950
this time is 300 seconds by default and can be configured to be between 10 and 1,000,000 seconds.
The switch can also be configured to never purge the addresses.
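If the default does not suit your network, the aging time can be changed globally. The exact
keyword varies slightly with the IOS version (older switches use mac-address-table, newer ones
mac address-table), but the idea is the same; a minimal sketch is shown below:
Switch(config)#mac address-table aging-time 600
A value of 0 disables aging so that learned entries are never purged.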
The command to see the CAM table of a switch is “show mac address-table”.
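Using the MAC addresses from the example above, the output would look roughly like the
following (the exact header and columns vary slightly between platforms and IOS versions):
Switch#show mac address-table
          Mac Address Table
-------------------------------------------
Vlan    Mac Address       Type        Ports
----    -----------       --------    -----
   1    a3bc.4a59.4001    DYNAMIC     Fa0/2
   1    a3bc.4a59.4109    DYNAMIC     Fa0/1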
Filtering and Forwarding frames
When a frame arrives at a switch port, the switch examines its database of MAC addresses. If the
destination address is in the database the frame will only be sent out of the interface the
destination host is attached to. This process is known as frame filtering. Frame filtering helps
preserve the bandwidth since the frame is only sent out the interface on which the destination
MAC address is connected. This also adds a layer of security since no other host will ever
receive the frame.
On the other hand, if the switch does not know the destination MAC address, it will flood the
frame out all active interfaces except the interface the frame was received on. Another
situation where the switch will flood a frame is when a host sends a broadcast message.
Remember that a switched network is a single broadcast domain.
Let’s take two examples to understand frame filtering.
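Assume, purely for illustration, that a switch has learned the following two entries (the
addresses and ports here are hypothetical):
Vlan    Mac Address       Type        Ports
   1    0001.14ac.3298    DYNAMIC     Fa0/1
   1    000b.5d2a.7c01    DYNAMIC     Fa0/3
Example 1: a frame arrives on Fa0/1 destined to 000b.5d2a.7c01. The address is in the CAM table,
so the switch forwards the frame only out Fa0/3 and filters it from every other port.
Example 2: a frame arrives on Fa0/1 destined to 000c.8812.45aa, which is not in the CAM table, so
the switch floods the frame out every active interface except Fa0/1.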
Preventing loops in the network
Having redundant links between switches can be very useful. If one path breaks, the traffic can
take an alternative path. Though redundant paths are extremely useful, they often cause a lot of
problems. Some of the problems associated with such loops are broadcast storms, endless
looping, duplicate frames and faulty CAM tables. Let’s take a look at each of these problems in
detail:
Broadcast Storms – Without loop avoidance techniques in place, switches can endlessly
flood a broadcast in the network. To understand how this can happen, consider the
network shown in Figure 6-7.
Figure 6-7 Broadcast Storms
In the network shown in Figure 6-7, consider a situation where HostA sends out a broadcast. The
following sequence of events will then happen:
SwitchA will forward the frame out all interfaces except the one connected to HostA.
HostB will receive a copy of this broadcast. Notice that the frame also went out of
interfaces fa0/1 and fa0/2. For ease of understanding, let’s call the frame going out of
fa0/1 frame1 and the frame going out of fa0/2 frame2.
When SwitchB receives frame1, it will flood it out all interfaces including fa0/2. When it
receives frame2, it will flood it out all interfaces including fa0/1. HostC and HostD will
receive both frames, which means they receive two copies of the same frame.
Meanwhile, one frame each was sent out fa0/2 and fa0/1 toward SwitchA! Let’s call these
frames frame3 and frame4.
When SwitchA receives frame3, it will flood it out all interfaces including fa0/1, and when
it receives frame4 it will flood it out all interfaces including fa0/2. This means HostA and
HostB both receive two broadcasts. Remember that HostA was the original source of the
broadcast, while HostB has already received one copy! The worst part is that two more
frames went out toward SwitchB. The previous and the current step will now repeat endlessly,
and the four hosts will continuously receive the broadcast.
If multiple broadcasts are sent out to this network, each of them will endlessly be sent to every
host in the network thereby causing what is known as a broadcast storm.
Endless looping – Similar to what happens in a broadcast storm, consider a situation
where HostA in Figure 6-7 sends a unicast destined to a host that does not exist in the
network. SwitchA will receive the frame and see that it does not know the destination
address. It will forward the frame out all interfaces except the one where HostA is connected.
SwitchB will receive two copies of this frame and will flood them out all its interfaces
since it does not know the destination address. Because SwitchB floods the frames out
fa0/1 and fa0/2, SwitchA will receive them again and the endless loop will continue.
Duplicate frames – In the network shown in Figure 6-7, consider a situation where HostA
sends a frame destined to HostD. When SwitchA receives this frame, it will not know
where HostD is and will flood it out all interfaces. SwitchB will receive one copy
each on its fa0/1 and fa0/2 interfaces. It will check the destination address and send
both frames to HostD. In effect, HostD receives a duplicate frame. This can cause
problems for protocols that use UDP, and especially for voice packets.
Faulty CAM table – Consider a situation where HostA sends a frame destined to
HostC. When SwitchA receives the frame, it does not know where HostC resides, so it
floods the frame. SwitchB receives the frame on both fa0/1 and fa0/2. It reads the
source address and stores it in its CAM table. Now it has two interfaces associated
with a single address! A switch cannot have two entries for a single address,
so it keeps overwriting the entry with new information as frames are received on
multiple interfaces. This can overwhelm the switch, and it might stop
forwarding traffic.
All of these problems can cause a switched network to come crashing down. They should be
entirely avoided or at least fixed. Hence, the Spanning Tree Protocol was created to keep the
network loop free. We will be discussing STP shortly.
Initial Configuration Of A Catalyst Switch
The process to connect to the CLI of a catalyst switch and the initial configuration was covered in
detail in Chapter 3. I would recommend reading that chapter again to get familiar with the CLI of
a switch. The list below briefly covers some initial configuration steps to get you started.
Hostname – You can set the name of the device with the hostname command in global
configuration mode. Setting the name of the device does not have any impact on the
functions of the switch; it will continue to perform normally irrespective of the name, but it
is easier to manage and troubleshoot your network when you give the devices meaningful
names. The example below shows how you can change the hostname. Notice the immediate
change in the prompt after the command is executed.
Switch(config)#hostname SwitchA
SwitchA(config)#hostname SwitchB
SwitchB(config)#
Clock – You can set the date and time on the switch with the clock command in
privileged exec mode. Setting the correct date and time is a requirement for some
advanced configurations, and it also helps when troubleshooting the device. The syntax of the
command is clock set hh:mm:ss day month year. An example is shown below (any valid date and
time can be used):
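SwitchB#clock set 10:30:00 23 June 2016
Passwords – You can secure access to the switch by setting passwords on the console line and on
the vty (Telnet) lines in line configuration mode, as shown below: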
SwitchB(config)#line con 0
SwitchB(config-line)#password mypass
SwitchB(config-line)#exit
SwitchB(config)#line vty 0 4
SwitchB(config-line)#password mypass
One thing you must remember is that the interface configuration on a switch differs greatly from
the interface configuration of a router, because switch interfaces are layer 2 interfaces (called
switchports), unlike router interfaces, which are layer 3 interfaces. Chapter 6 and Chapter 7
cover various interface-level configurations for the switch. The command to enter interface
configuration mode is the same as on a router, as shown below:
SwitchB(config)#interface fa0/1
SwitchB(config-if)#
Port Security
Typically, the Switch will learn the MAC address of the device directly connected to a particular
port and allow traffic through. This behavior can be a huge security risk if an intruder manages to
connect a host to your switchport. At some stage (and in CCNA!) you will need to restrict who
can connect to the switched network. This is where port security can assist us. Cisco switches
allow us to control which devices can connect to a switch port or how many of them can connect
to it (such as when a hub or another switch is connected to the port).
Port security is disabled by default, so before configuring it we have to enable it. It can be
enabled using the switchport port-security interface command. Here’s how to do it:
Switch#config terminal
Switch(config)#interface fa0/1
Switch(config-if)#switchport port-security
As soon as port security is enabled, it applies the default values: one host is permitted
to connect at a time. If this rule is violated, the port will shut down.
Using the port security feature we can specify:
1. Who can connect to the Switchport
2. How many can connect to the Switchport
3. Violation Action
Let’s take a look at all the three options:
Who can connect – If you know that only a particular host should connect to a switchport,
then you can restrict access on that port to the MAC address of that host. This ensures that no
one can unplug the authorized host and connect another one, which makes it a good option for
secure locations. It is done using the following commands:
Switch#config terminal
Switch(config)#interface fa0/1
Switch(config-if)#switchport port-security
Switch(config-if)#switchport port-security mac-address 0001.14ac.3298
You have to remember that this command will not add the MAC address to the CAM table. When
a host connects to this port and sends the first frame, the source address of the frame is
checked against the configured MAC address. If a match is found, the address is added to the CAM
table.
So do we have to provide each host’s MAC address manually? That would be a huge task considering
the thousands of hosts a network can have! Well, not really. Port security provides something
called a sticky address: the switch will use the MAC address of the first host connected to the
port as a static secure MAC address, and only that host will be able to connect to the port
subsequently. The command required is:
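Switch#config terminal
Switch(config)#interface fa0/1
Switch(config-if)#switchport port-security
Switch(config-if)#switchport port-security mac-address sticky
How many can connect – If a hub or another switch is plugged into the port, more than one MAC
address will be seen on it. The maximum number of secure addresses allowed on the port is set
with the switchport port-security maximum <number> command; the default is 1.
Violation Action – Finally, you can choose what happens when the rule is broken, using the
switchport port-security violation {protect | restrict | shutdown} command. Protect silently
drops traffic from unknown MAC addresses, restrict drops it and logs the violation, and shutdown
(the default) puts the port into the err-disabled state. The log messages below show a port using
the default shutdown action reacting to an unauthorized host: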
Switch#
00:55:59: %PM-4-ERR_DISABLE: psecure-violation error detected on Fa0/2, putting Fa0/2 in
err-disable state
Switch#
00:55:59: %PORT_SECURITY-2-PSECURE_VIOLATION: Security violation occurred, caused
by MAC address 1234.5678.489d on port FastEthernet0/2.
Switch#
00:56:00: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/2, changed
state to down
00:56:01: %LINK-3-UPDOWN: Interface FastEthernet0/2, changed state to down
Another important command is the “show port-security” command, which provides an
overview of all the ports that have port security configured:
Switch#show port-security
Secure Port MaxSecureAddr CurrentAddr SecurityViolation Security Action
(Count) (Count) (Count)
——————————————————————————————————–
Fa0/1 8 7 0 Shutdown
Fa0/2 15 5 0 Restrict
Fa0/3 5 4 0 Protect
——————————————————————————————————–
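The show port-security interface command gives more detail for a single port, including the
configured violation mode, the sticky setting and the last source MAC address seen. If a port
has been err-disabled by a violation, it stays down until an administrator intervenes; after
removing the offending device, the usual way to recover the port is to bounce it. A quick sketch
is shown below (fa0/2 is used only because it matches the violation example earlier):
Switch#show port-security interface fa0/2
Switch#configure terminal
Switch(config)#interface fa0/2
Switch(config-if)#shutdown
Switch(config-if)#no shutdown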
Spanning Tree Protocol (STP)
Figure 6-8 shows a full mesh network. It is a good redundant setup: if one link fails, there are
still two more links for traffic to go through. However, could this lead to any problems?
Let’s say a host is connected to port fa0/1 on SwitchA (not shown) and this host sends a
broadcast out to the network. SwitchA has to forward this frame out every port except fa0/1. A
part of what happens next is shown below:
1. SwitchB receives the packet on fa0/10 and sends it out on every port except that one.
2. SwitchD receives the packet on fa0/10 and sends it out on every port except that one but
including fa0/11.
3. SwitchA receives the packet on fa0/11 and sends it out on every port except fa0/11 but
including fa0/1 and fa0/10!
What we see here is that not only has the original source received the frame back, but SwitchA
now has to send the packet back out fa0/10 as well. Steps one to three then repeat, and this
goes on forever.
Figure 6-8 Full mesh switched network
As you already know, what we have just seen is a loop, and such loops can bring a network to a
grinding halt. Layer 2 LAN protocols have no method to stop traffic endlessly travelling around,
possibly carrying inaccurate information. At layer 3, packets can be made to expire after they
have traveled a certain distance (the IP TTL field, for example, limits the number of hops a
packet can take – see the routing chapters for related mechanisms such as maximum hop count and
route poisoning).
As layer 2 networks grew, it quickly became evident that a system to prevent loops was needed if
LANs were to continue to function. Digital Equipment Corporation created a protocol
called Spanning Tree Protocol (STP) to prevent broadcast storms and network loops at layer 2.
STP is now standardized by the IEEE as 802.1D.
STP allows bridges and switches to communicate with each other so they can create a loop free
topology. Each bridge runs the Spanning Tree Algorithm that calculates how a loop can be
prevented. When STP is applied to a looped LAN topology, all segments will be reachable but
any open ports that would create a traffic loop are blocked. When it sees a loop in the network it
blocks one or more redundant paths preventing a loop from forming. STP continually monitors the
network always looking for failures on switch ports or changes in the network topology. If a
change on the LAN is detected, STP can quickly make redundant ports available and close other
ports to ensure the network continues to function normally.
Before we learn further about STP, we need to understand some of the common terms associated
with it.
Bridge ID: This is a unique identification number for each switch in the network. It consists of
the bridge priority and the base MAC address of the switch. The default bridge priority of a
Cisco switch is 32768. The priority is configurable between 0 and 61440, in increments of 4096
(0, 4096, 8192, 12288, and so on). Priority plays a very big role in STP and in how well the
network will function.
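For reference, the priority is set per VLAN in global configuration mode; a minimal sketch is
shown below (VLAN 1 and the value 4096 are just example choices):
Switch(config)#spanning-tree vlan 1 priority 4096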
Root Bridge: All switches in the network elect the root of the tree. From then on, all decisions,
such as which redundant path to block and which to leave open, are taken from the perspective of
the root switch (commonly called the Root Bridge). The switch with the lowest Bridge ID wins the
election. Switches that do not become the Root Bridge are called NonRoot Bridges.
BPDU: Bridge Protocol Data Unit (BPDU) is the information exchanged between switches to
select the Root Bridge as well as configure the network after that. A decision on which port to
block is taken after examining BPDUs from the neighbors. Cisco Switches send BPDUs every 2
seconds by default. This value can be configured from 1 second to 10 seconds.
Root Port: Every switch needs a path to the Root Bridge, even if it is not directly connected to
it. The root port is the directly connected link or the lowest-cost path to the Root Bridge from
a NonRoot Bridge.
Port Cost: Each port has a cost that is determined by the bandwidth of the link. Port cost
determines which of the redundant links will not be blocked. The lower the cost, the better it is.
Port Cost also determines which port will become the root port if multiple paths to the root
bridge exist. Default port costs are shown below.
Table 6-1 Default STP cost
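Link Speed      Default STP Cost (IEEE 802.1D)
10 Mbps         100
100 Mbps        19
1 Gbps          4
10 Gbps         2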
Designated Port: The bridges on a network segment collectively determine which bridge has the
least-cost path from the network segment to the root. The port connecting this bridge to the
network segment is then the designated port for that segment. Ports that are not selected as
Designated Ports are called Non-Designated Ports.
Port States in Spanning Tree
Switch ports running STP can be in one of five states.
Blocked
Listening
Learning
Forwarding
Disabled
STP port states are very important. You should remember these states and what they mean. Each
of them is discussed below.
Blocked
A blocked port will not transmit or receive any data, but it will listen to BPDUs. The BPDU
carries various pieces of information that are used by STP to determine what state the ports
should be in and what the STP topology should be.
Listening
The switch listens for frames but doesn’t learn or act on them. The switch does receive the frames
but discards them before any action is taken. MAC addresses are not placed into the CAM table
while the port is listening.
Learning
The switch will start to learn MAC addresses it can see and will populate its CAM table with the
addresses and the ports on which they were found. In this state, the switch will start to transmit its
own BPDUs.
Forwarding
The switch has learned MAC addresses and corresponding ports and populates its CAM table
with this. The switch can now forward traffic.
Disabled
A disabled port does not participate in STP or in frame forwarding. It discards frames received
on the port as well as frames switched from other ports for forwarding, and it does not process
BPDUs.
The port states are transitional and allow other BPDUs to arrive in good time from other
switches. Port transition times are typically:
Initialization to blocking
Blocking to listening (20 secs)
Listening to learning (15 secs)
Learning to forwarding (15 secs)
Forwarding to disabled (if there is a failure)
All ports start in the blocking state (there are a few exceptions, discussed later). After STP
convergence, some ports transition through listening and learning to forwarding, while the
rest remain in the blocking state. Adding up the time needed to move from one state to the next
(20 + 15 + 15 seconds), we find that a layer 2 network running STP can take 50 seconds to start
switching data! This is known as the convergence time.
STP Convergence
Remember that Spanning Tree works by selecting a root bridge on the LAN, which is chosen by
comparing the Bridge ID of each switch.
STP is considered to be converged after three steps have taken place:
Elect root bridge
Elect root ports
Elect designated ports
Each of the above three steps is discussed in detail below. The network shown in Figure 6-9 will
be used to explain the STP convergence process.
Elect Root Bridge
The bridge with the lowest Bridge ID (BID) becomes the root bridge. The BID consists of two
values in an 8-byte field: the bridge priority (32,768 by default) makes up two bytes, and the
MAC address of the backplane or supervisor module (depending on the model of switch)
makes up the remaining six bytes.
The root bridge on a LAN is selected by an election. Each switch running STP passes information
in a format known as bridge protocol data units (BPDUs). BPDUs are multicast frames that can be
thought of as hello messages between STP-enabled switches, and they are sent out every two
seconds from every port. This is necessary to maintain a loop-free topology. Once the bridge
priorities combined with the MAC addresses have been exchanged, the bridge with the lowest
BID is selected as the root bridge.
Figure 6-9 STP Convergence
All ports on the root bridge are set as designated and thus are always set to a forwarding state.
In our network, the priority of all the switches has been left at the default value. So the switch
with the lowest MAC address will be selected the root bridge. In this case it will be SwitchA.
To verify this we issue the “show spanning tree vlan (vlan#)” command on SwitchA :
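On a Catalyst switch the relevant parts of the output look roughly like the sketch below (most of
the output is omitted, and the costs and port numbers depend on your hardware):
SwitchA#show spanning-tree vlan 1
–output truncated–
             This bridge is the root
–output truncated–
Interface           Role Sts Cost      Prio.Nbr Type
------------------- ---- --- --------- -------- ----------------------------
Fa0/20              Desg FWD 19        128.20   P2p
The line “This bridge is the root” confirms that SwitchA won the election.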
Notice that fa0/20 has a role of designated port with a state of forwarding. The election of the
designated port can be influenced by changing the cost of the port. This concludes a basic
overview of STP. STP can be difficult to understand and the following sections look deeper into
various aspects of it. Hence, I strongly suggest you take a break and re-read this section to get
a firm grasp of STP before continuing.
Cisco’s Additions To STP (Portfast, BPDUGuard, BPDUFilter,
UplinkFast, BackboneFast)
STP as we know it keeps the network loop free, but at what cost? The exact cost to you and me is
50 seconds! That is a long, long time in networking terms. For almost a minute, data cannot flow
across the network. In most cases this is a critical issue, especially for important network
services.
To deal with this issue (before the industry standard was ratified), Cisco added the following
features to the STP implementation on its switches:
PortFast, BPDUGuard and BPDUFilter
UplinkFast
BackboneFast
Portfast
If you have a laptop or a server connected to a switchport then you know that:
It will not need to listen to BPDUs because it is not a layer 2 device
It will not create loops because it has a single link to the layer 2 network
Therefore, you can safely let such ports bypass the normal Spanning Tree states. It is very
important to ensure that such ports never have an STP-enabled layer 2 device connected to them
(think port security!) or else a loop or a breakdown of the network is quite possible. You will
even get a warning message on certain switches stating this when you enable Portfast on a
switchport!
When you configure a switchport as Portfast, the port skips the listening and learning states
and transitions to the forwarding state as soon as it comes up, and it will never be blocked.
The command to configure portfast is spanning-tree portfast:
SwitchA(config)#int fastEthernet0/44
SwitchA(config-if)#spanning-tree portfast
%Warning: portfast should only be enabled on ports connected to a single
host. Connecting hubs, concentrators, switches, bridges, etc… to this
interface when portfast is enabled, can cause temporary bridging loops.
Use with CAUTION
%Portfast has been configured on FastEthernet0/44 but will only
have effect when the interface is in a non-trunking mode.
As we learned, Portfast bypasses the normal STP states on a switchport, but an important fact is
that a Portfast switchport will keep listening for BPDUs. If someone adds a switch to a port that
has been configured with Portfast, the consequences can be unpredictable and in some cases
disastrous.
To guard against this situation, Cisco provides the BPDUGuard and BPDUFilter features.
BPDUGuard
If a switch is plugged into a switchport configured as Portfast, it could change the STP topology
without the administrator knowing and could even bring down the network. To prevent this,
BPDUGuard can be configured on the switchport. With BPDUGuard configured, if a BPDU is received
on the switchport, the port is put into the error disabled (err-disabled) state and an
administrator has to bring the port back up. This is configured on the port using the
“spanning-tree bpduguard enable” command.
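A typical combination on an access port is Portfast plus BPDUGuard; a minimal sketch is shown
below (interface fa0/2 is just an example):
Switch(config)#interface fa0/2
Switch(config-if)#spanning-tree portfast
Switch(config-if)#spanning-tree bpduguard enable
If a BPDU later arrives on fa0/2, the port is placed in the err-disabled state and stays down
until it is manually re-enabled.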
BPDUFilter
BPDUFilter stops a Portfast port from participating in the BPDU exchange. When it is enabled
globally for all Portfast ports (with the spanning-tree portfast bpdufilter default command), a
Portfast port that receives a BPDU loses its Portfast status and falls back to normal STP
convergence. When it is enabled directly on an interface with the “spanning-tree bpdufilter
enable” command, the port simply stops sending BPDUs and ignores any that it receives. Unlike
the behavior seen with BPDUGuard, the port is not put into an error disabled mode.
UplinkFast
To understand how UplinkFast helps speed up the convergence, consider the network shown in
Figure 6-10. SwitchA is the Root Bridge in the network.
Figure 6-10 UplinkFast
Now consider the following output from SwitchB when its current root port, fa0/14, is shut down:
SwitchB(config-if)#shutdown
*Mar 2 22:14:30.504: STP: VLAN0005 new root port Fa0/15, cost 19
*Mar 2 22:14:30.504: STP: VLAN0005 Fa0/15 -> listening
*Mar 2 22:14:30.504: STP: UFAST: removing prev root port Fa0/14 VLAN0005 port-id 800E
*Mar 2 22:14:32.420: %LINK-5-CHANGED: Interface FastEthernet0/14, changed state to
administratively down
*Mar 2 22:14:32.504: STP: VLAN0005 sent Topology Change Notice on Fa0/15
*Mar 2 22:14:33.420: %LINEPROTO-5-UPDOWN: Line protocol on Interface
FastEthernet0/14, changed state to down
*Mar 2 22:14:45.504: STP: VLAN0005 Fa0/15 -> learning
*Mar 2 22:15:00.504: STP: VLAN0005 Fa0/15 -> forwarding
Note that the time taken for fa0/15 to transition to the forwarding state is 30 seconds. This is
faster than the expected 50 seconds because the failure was on a directly connected link, so
SwitchB moved fa0/15 to the listening state immediately instead of waiting for the 20-second max
age timer; it still spent 15 seconds each in the listening and learning states.
Let’s enable UplinkFast on SwitchB and repeat the process:
SwitchB(config)#spanning-tree uplinkfast
SwitchB#show spanning-tree vlan 5
–output truncated–
Uplinkfast enabled
Interface Role Sts Cost Prio.Nbr Type
——————- —- — ——— ——– ——————————–
Fa0/14 Root FWD 3019 128.14 P2p
Fa0/15 Altn BLK 3019 128.15 P2p
SwitchB(config)#int fa0/14
SwitchB(config-if)#shutdown
*Mar 2 22:28:23.300: STP: VLAN0005 new root port Fa0/15, cost 3019
*Mar 2 22:28:23.300: STP FAST: UPLINKFAST: make_forwarding on VLAN0005
FastEthernet0/15 root port id new: 128.15 prev: 128.14
*Mar 2 22:28:23.300: %SPANTREE_FAST-7-PORT_FWD_UPLINK: VLAN0005
FastEthernet0/15 moved to Forwarding (UplinkFast).
*Mar 2 22:28:23.300: STP: UFAST: removing prev root port Fa0/14 VLAN0005 port-id 800E
*Mar 2 22:28:25.216: %LINK-5-CHANGED: Interface FastEthernet0/14, changed state to
administratively down
*Mar 2 22:28:25.300: STP: VLAN0005 sent Topology Change Notice on Fa0/15
*Mar 2 22:28:26.216: %LINEPROTO-5-UPDOWN: Line protocol on Interface
FastEthernet0/14, changed state to down
SwitchB(config-if)#do show spanning-tree vlan 5
— output truncated–
Uplinkfast enabled
Interface Role Sts Cost Prio.Nbr Type
——————- —- — ——— ——– ——————————–
Fa0/15 Root FWD 3019 128.15 P2p
Note that the time taken for fa0/15 to transition to forwarding is now less than a second! From
30 seconds of downtime to less than a second with UplinkFast enabled. Now that you have seen the
difference it makes, let us define what exactly it does.
If a switch has multiple links toward the root bridge, UplinkFast marks the redundant link as an
Alternate Port and brings it up quickly if the Root Port fails. This is possible because
blocked ports keep listening for BPDUs.
Cisco recommends caution when using UplinkFast. You should enable it only on switches that
have blocked ports.
BackboneFast
UplinkFast works by finding alternate ports for directly connected links. Similarly, BackboneFast
works by finding an alternate path when an indirect link toward the root bridge goes down. To
understand how BackboneFast works, consider the network shown in Figure 6-11. SwitchA is the Root
Bridge here and fa0/20 on SwitchD is the root port.
If SwitchC loses its connection to SwitchA, it will advertise itself as the root bridge to
SwitchD. SwitchD will compare the previously known information with the new information and will
learn that SwitchC has lost its connection to SwitchA. Since the new BPDU states that a
designated switch (SwitchC) is now the root bridge, this BPDU is known as an inferior BPDU.
Eventually SwitchD will receive a BPDU from SwitchB stating that SwitchA is still the Root
Bridge, and SwitchD will now mark fa0/17 as the root port instead of fa0/20. This is because the
information from SwitchB matches the existing information on SwitchD. BackboneFast ensures a
quick failover as soon as the inferior BPDU is received. It saves roughly 20 seconds out of the
50 seconds of convergence time.
The spanning-tree backbonefast command can be used in global configuration mode to enable BackboneFast as shown below:
Switch#configure terminal
Switch(config)#spanning-tree backbonefast
Figure 6-11 BackboneFast
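You can verify that BackboneFast is enabled by checking the output of the show spanning-tree summary command. A hedged sketch of the relevant lines is shown below; the exact wording varies by IOS version:
Switch#show spanning-tree summary
UplinkFast is disabled
BackboneFast is enabled
–output truncated–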
Rapid Spanning Tree Protocol (RSTP) – 802.1w
The features discussed in the previous section – PortFast, UplinkFast and BackboneFast were
added by Cisco and because of this they worked only on Cisco switches. IEEE added these
features in a new STP protocol called Rapid Spanning Tree Protocol (RSTP) under the 802.1w
standard.
One big difference between 802.1D STP and 802.1w RSTP is that there are fewer port states. As you know, there are five states in 802.1D; RSTP has only three. The disabled, blocking, and listening states have been combined into a new discarding state in RSTP. Table 6-2 shows a comparison of the port states.
Table 6-2 STP and RSTP port states comparison
802.1D Port State    802.1w Port State    Is port active?    MAC addresses learned?
Disabled             Discarding           No                 No
Blocking             Discarding           No                 No
Listening            Discarding           Yes                No
Learning             Learning             Yes                Yes
Forwarding           Forwarding           Yes                Yes
Similar to “traditional” spanning tree, RSTP will also elect a root bridge using the same
parameters as STP and ports will be elected as root and designated ports. In addition to the
standard root and designated ports, RSTP ports can have one of the following roles:
Alternate Port – This is a port that provides an alternative path to the root bridge. This path is less desirable than the path provided by the root port but will be used if the path from the root port goes down.
Backup Port – This is a port that provides a redundant path to a network segment but this
path is less desirable than the one provided by the designated port. This path will be used
if the path provided by the designated port goes down.
Figure 6-12 shows an example of a network with all port roles in RSTP.
Figure 6-12 RSTP Port Roles
RSTP is backward compatible with 802.1D STP. If a neighboring switch running 802.1D STP is detected on a port, RSTP falls back to 802.1D behavior on that port and the faster convergence mechanisms are not used.
Changing from 802.1D to 802.1w RSTP requires a single command on the switch – spanning-tree mode rapid-pvst. This is a global configuration mode command and will cause the switch to change to RSTP. Remember that this can cause the network to be temporarily unavailable. An example is shown below:
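A minimal sketch of enabling Rapid PVST+ and confirming the mode is shown here; the show spanning-tree summary output line is typical but may vary by platform:
Switch#configure terminal
Switch(config)#spanning-tree mode rapid-pvst
Switch(config)#end
Switch#show spanning-tree summary
Switch is in rapid-pvst mode
–output truncated–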
PVST+ & Rapid-PVST
Consider again the network in Figure 6-11: SwitchD has two paths to reach SwitchA. In any implementation of STP (Per-VLAN or a single STP), one of the interfaces will be blocked. Let us assume that fa0/17 is blocked in this network. This works well when the whole network is one single big network.
Now consider a situation where the network is divided into two smaller networks using VLANs.
If both the VLANs spanned all the four switches, would it not be useful to have fa0/17 blocked
for one VLAN and fa0/20 blocked for the other VLAN? This way traffic in both VLANs can be
load balanced across both paths!
To achieve this, Cisco added the Per-VLAN Spanning Tree Plus (PVST+) feature on its switches.
With this feature, Cisco switches ran one STP instance for every VLAN.
When IEEE introduced 802.1w it still did not accommodate multiple Spanning Tree instances on
a switch. Cisco introduced the Per-VLAN Rapid Spanning Tree (PVRST) to support Rapid
Spanning Tree instances on each VLAN on the switch. PVST+ and PVRST both provide the same
functionality across both 802.1D and 802.1w standards.
Remember that PVST+ and PVRST both add the VLAN number to the bridge ID of every switch.
That is the reason you earlier saw the priority as 8197 in VLAN 5 even though you had configured
the priority as 8192.
To enable RSTP for each VLAN in our switched network, we use the spanning-tree mode rapid-pvst command shown earlier. In the output below, a physical member interface of an EtherChannel (Po1) is shut down to demonstrate the effect on STP and trunking:
SW2#conf t
SW2(config)#int fast 0/11
SW2(config-if)#shutdown
3w0d: %LINK-5-CHANGED: Interface FastEthernet0/11, changed state to administratively down
SW2#show spanning vlan 10
VLAN0010
Spanning tree enabled protocol ieee
Interface Role Sts Cost Prio.Nbr Type
Po1 Desg FWD 19 128.65 P2p
SW2#show interface trunk
Port Mode Encapsulation Status Native vlan
Po1 desirable 802.1q trunking 1
In the above outputs, notice that Po1 is still in forwarding mode and the trunk is still active. The status of a physical interface in an EtherChannel does not affect STP or trunking as long as at least one physical interface remains active in the EtherChannel. If all physical interfaces go down, the channel will go down as well and affect STP and trunking.
Lab 6-1: Port Security
In the network shown in Figure 6-15, a hub has been connected to interface fa0/1 of SwitchA. It is
an 8-port hub but only 5 hosts are allowed to connect to it. The administrator wants to ensure that
only 5 hosts can connect. HostA and HostB along with any 3 other hosts can connect to the hub.
Configure port security on switchport fa0/1 to fulfill this requirement. In case of a violation, the
port should not be put in an error disabled mode but the administrator should be informed.
Figure 6-15 Lab 6-1
Solution
The lab requires configuring port security such that a maximum of 5 hosts can connect at a time.
The MAC addresses of the two hosts also need to be added to port security and the violation
mode must be changed to restrict. The configuration required is shown below:
SwitchA#configure terminal
SwitchA(config)#interface fa0/1
SwitchA(config-if)#switchport port-security
SwitchA(config-if)#switchport port-security maximum 5
SwitchA(config-if)#switchport port-security mac-address 0014.bc1e.76ab
SwitchA(config-if)#switchport port-security mac-address 0014.911e.742f
SwitchA(config-if)#switchport port-security violation restrict
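One practical note: port security can only be enabled on a port that has been statically set to access (or trunk) mode; on a port left in the default dynamic mode, the switchport port-security command is rejected. A minimal sketch of setting the mode first:
SwitchA(config)#interface fa0/1
SwitchA(config-if)#switchport mode access
SwitchA(config-if)#switchport port-security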
You can verify the configuration using the show port-security command as shown below:
NAC-Main-L3#show port-security
Secure Port MaxSecureAddr CurrentAddr SecurityViolation Security Action
(Count) (Count) (Count)
——————————————————————————————————-
fa0/1 5 2 0 Restrict
——————————————————————————————————–
Lab 6-2: STP
In the network shown in Figure 6-16, 802.1d STP is being used on VLAN 5. The administrator
wants to ensure that SwitchA is always the root bridge. They also want to ensure that interface
fa0/16 is always the root port on SwitchB and SwitchC. In addition to that, interface fa0/1 on all
switches should transition to forwarding as soon as a host connects to it.
Figure 6-16 Lab 6-2
Solution
To ensure that SwitchA is always the root bridge, lower its bridge priority for VLAN 5 as shown below.
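A minimal sketch of the priority change (the value 4096 is an illustrative assumption; any priority lower than that of the other switches works):
SwitchA(config)#spanning-tree vlan 5 priority 4096
Next, to ensure that interface fa0/16 is always the root port on SwitchB and SwitchC, lower the spanning-tree path cost on that interface as shown below: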
SwitchB(config)#int fa0/16
SwitchB(config-if)#spanning-tree cost 1
SwitchC(config)#int fa0/16
SwitchC(config-if)#spanning-tree cost 1
Finally, to ensure that fa0/1 transitions to forwarding as soon as a host connects, enable PortFast on these ports as shown below:
SwitchA(config)#int fastEthernet0/1
SwitchA(config-if)#spanning-tree portfast
SwitchB(config)#int fastEthernet0/1
SwitchB(config-if)#spanning-tree portfast
SwitchC(config)#int fastEthernet0/1
SwitchC(config-if)#spanning-tree portfast
To verify that SwitchA is the root bridge, use the show spanning-tree vlan 5 command on
SwitchA as shown below:
Broadcast
The switch sends a copy of the frame out all interfaces except the one on which the frame was received, just as it does for unknown unicasts. This switch behavior is also called frame flooding.
Figure 7-3 describes how broadcasts are propagated in a switched network. Host A sends a
broadcast frame with the broadcast destination MAC address of FFFF.FFFF.FFFF and the frame
is propagated to all hosts in the network even those connected to other switches.
Figure 7-3 Broadcast Propagation
Multicast
The switch floods the frame just as it does for unknown unicasts and broadcasts, unless certain multicast
optimizations are configured.
There are some problems with the way switches forward different types of frames by default,
especially in larger switched networks. First, there is no isolation between hosts and any host can
communicate with any other host totally unchecked. This is not a very desirable situation for you
as a network administrator as there is no security from malicious software or users. Second, a
broadcast sent by any host would reach every other host on the network which is neither
bandwidth efficient nor secure. A malfunctioning Network Interface Card (NIC) or a piece of
malicious software on a host can generate excessive broadcasts consuming all the bandwidth
available and starving legitimate applications. These problems can be greatly alleviated by using
virtual LANs or VLANs.
Virtual LANs (VLANs)
In order to appreciate the need for virtual LANs, let’s consider how LANs would be built
without switches using hubs only. As you are aware hubs are layer 1 devices without any
intelligence and they typically relay the frame received on one port to all other ports regardless of
the type of frame. As a matter of fact they don’t care what the content of the frame is and the same
treatment is given to unicast, multicast, and broadcast frames. As a result, the set of devices
connected to a hub are in the same collision domain which means two devices connected to a hub
cannot transmit at the same time without causing a collision. The Carrier Sense Multiple Access /
Collision Detection (CSMA/CD) mechanism of Ethernet is at work in networks built with hubs.
As all devices are in the same collision domain there is performance degradation as more and
more devices are connected to the same hub and more collisions start to take place.
Let’s assume we have five physical LANs in our organization: Engineering, Finance,
Management, Marketing, and Sales each belonging to one department that need to be connected to
the same router which provides Wide Area Network (WAN) connectivity. Here is how this
network can be built using hubs alone.
Figure 7-4 Physical LANs
Please note that enterprise networks do not use hubs any more and are built exclusively with
switches. But analyzing the above network built with hubs would enable us to appreciate the
benefits switches bring. First there is one hub for each physical LAN and all devices in that
physical LAN are cabled to the same hub while each hub itself is connected to a separate
interface on the router. Also, there is one IP subnet for each physical LAN and any device that is
connected to that LAN has to have an IP address in that IP subnet. The router interface on a certain
physical LAN also has an IP address in the IP subnet for that LAN. The hosts in the physical LAN
have the router’s IP address set as their default gateway. In this design, if you need to add another device to a physical LAN, say Engineering, you simply connect it to the Engineering hub and assign it an IP address from the IP subnet for the Engineering LAN.
There are some shortcomings in this design. First there are limitations on where you can
physically place devices on a certain LAN due to the limited maximum cable length supported by
Ethernet cabling standards. Let’s assume there is a new employee in the Engineering department
that needs to be connected to the Engineering LAN but there is no physical space in the
Engineering department to make room for the new employee. There is plenty of space in the Sales
department and the new Engineering employee is made to sit in the Sales department instead.
Now, the Sales department is located in another corner of the building and it is not possible to
connect the new Engineering employee to the Engineering hub due to the simple fact that the
distance exceeds the maximum cable length permissible. The new Engineering employee is
instead connected to the Sales hub but this has some undesirable side effects. The Engineering
employee is now on the Sales LAN and he can access all resources on the Sales LAN like servers
which are meant to be visible only to the Sales people. It is a security issue, as organization policies may prevent employees from other departments from having access to Sales documents and data. Also, the new Engineering employee would be cut off from resources on the Engineering LAN, which may prevent him from effectively doing his job. In the coming sections we will see how virtual LANs, in networks built with switches instead of hubs, provide the means to prevent problems like this while still enabling physical mobility. The design I just described, though obsolete today, worked well for several years despite its limitations.
In an Ethernet LAN, the set of devices that receive a broadcast sent by any other device is called a broadcast domain. As we just learned in the last section, a switch simply forwards all broadcasts out all interfaces, except the interface on which it received the frame. As a result, all the
interfaces on an individual switch are in the same broadcast domain. Also, if a switch connects to
other switches too, the interfaces on those switches are also in the same broadcast domain. On
switches that have no concept of virtual LANs (VLANs), the whole switched network is one large
flat network comprising a single broadcast domain.
A VLAN is simply a subset of switch ports that are configured to be in the same broadcast
domain. Switch ports can be grouped into different VLANs on a single switch, and on multiple
interconnected switches as well. By creating multiple VLANs, the switches create multiple
broadcast domains. By doing so, a broadcast sent by a device in one VLAN is forwarded to all
other devices in that same VLAN; however the broadcast is not forwarded to devices in the other
VLANs. VLANs provide bandwidth efficiency because broadcasts, multicasts and unknown
unicasts are restricted to individual VLANs and also provide security as a host on one VLAN
cannot directly communicate with a host on another VLAN.
Exam Concept – A VLAN simply is a set of administratively defined switch ports that are in the
same broadcast domain. The new CCNA exam has a ton of questions on VLAN concepts so
understand them inside and out.
Because a trunk link can transport many VLANs, a switch must identify frames with their
associated VLANs as they are sent and received over a trunk link. Frame identification (tagging) assigns a user-defined VLAN number to each frame transported over a trunk link. This VLAN number is also called the VLAN ID and, as each frame is transported over a trunk link, this identifier is placed in the frame header. As each switch along the way receives these frames, the identifier is examined to determine which VLAN the frames belong to and then is removed. The VLAN ID field contains a 12-bit value, so the range of possible VLAN IDs is 0 – 4095. The VLAN IDs 0 and 4095 are reserved, so the usable range is 1 – 4094. By default, all
ports on a Cisco switch are assigned to VLAN 1. VLAN 1 is also called the management VLAN
and control plane traffic belongs to VLAN 1.
Best practices recommended by Cisco dictate using a separate IP subnet for each VLAN. Simply
put, devices in a single VLAN are typically also in the same IP subnet. Layer 2 switches forward
frames between devices on the same VLAN, but they do not forward frames between devices in
different VLANs. In order to be able to forward frames between two different VLANs, you need a
multilayer switch or a router. We will cover this in more detail in a later section of the chapter.
Now, let’s see how we can build the same network using switches instead of hubs and what
benefits switches bring to us. Table 7-1 lists VLAN IDs corresponding to organizational
departments and IP subnets associated with each VLAN. There is a different IP subnet assigned to
each VLAN and if you look carefully the third octet of the IP subnet number is the same as the
VLAN ID. It is just an arbitrary number to make the design easily understandable and let us focus
more on concepts rather than specific numbers used. The router has a Wide Area Network (WAN)
connection and it provides two important functions: WAN connectivity and inter-VLAN routing to
enable communication between two different departments or VLANs. As you can see in Figure 7-
6, devices belonging to VLANs 20 and 40 are connected to more than one switch. VLANs remove
the restriction of having to connect devices belonging to the same LAN to the same device while
still providing traffic isolation at layer 2. In other words, VLANs can span multiple switches, with switch ports belonging to the same VLAN existing on different switches. This is one of the
several benefits switches bring to local area networks. Also if we want to add another device to
say the Engineering VLAN and we want to locate the new user at a location different than the
Engineering department, it can simply be accomplished by connecting the new device to the
nearest switch and assigning the switch port to the Engineering VLAN. Compare it with the
similar situation in the network built with hubs and you will appreciate that the network becomes more flexible without sacrificing security or traffic isolation. The router provides inter-
VLAN routing in this scenario but the same functionality can also be achieved by using a layer 3
switch like the Cisco 3560.
Table 7-1 VLANs Corresponding to Organizational Departments
Exam Concept – Control plane traffic such as VTP, CDP, DTP, and PAgP protocols is always
sent in VLAN 1 across a trunk link between two Cisco switches.
Types Of Switch Ports
A switch port can be in one of two modes: access and trunk. There are two ways a switch port
can settle down into one of these two modes: static and dynamic. You can manually configure a
switch port to be in the access or trunk mode in the static method. You can also let Dynamic
Trunking Protocol (DTP) run on an interface to negotiate trunking in the dynamic method. Cisco
switches exchange DTP messages to dynamically learn whether the device at the other end of the
link wants to perform trunking and, if so, which trunking protocol (ISL or 802.1Q) to use.
Access Ports
A switch port in access mode belongs to one specific VLAN and sends and receives regular
Ethernet frames in untagged form. The switch interfaces connected to devices such as desktops,
laptops, printers etc. are typically configured as access ports. By default, a Cisco switch port is
assigned to the default VLAN 1 in access mode. You can explicitly set the switch port to access
mode using the command switchport mode access in interface configuration mode. The VLAN that a certain switch port is assigned to can be changed using the command switchport access vlan vlan-id, also in interface configuration mode.
SW1>en
SW1#conf term
Enter configuration commands, one per line. End with CNTL/Z.
SW1(config)#int fa0/1
SW1(config-if)#switchport mode ?
access Set trunking mode to ACCESS unconditionally
dot1q-tunnel Set trunking mode to TUNNEL unconditionally
dynamic Set trunking mode to dynamically negotiate access or trunk mode
private-vlan Set private-vlan mode
trunk Set trunking mode to TRUNK unconditionally
SW1(config-if)#switchport mode access
SW1(config-if)#switchport access vlan 10
We just configured interface FastEthernet 0/1 of switch SW1 in access mode assigning it to
VLAN 10.
Trunk Ports
The distinguishing feature of trunk ports is that they carry traffic from multiple VLANs at the same
time. Such interfaces are most commonly configured between two switches but they can also be
configured between a switch and a router, and even between a server and a switch. The range of
VLAN IDs that can be configured on a Cisco switch is 1 to 4094 which is divided into normal-
range VLAN IDs of 1 to 1005 and extended-range VLAN IDs of 1006 to 4094.
In fact trunking is a great feature because a single physical link is shared by multiple VLANs
while still allowing traffic isolation between different VLANs. In the absence of such a feature we
would have required one inter-switch link per VLAN which would simply not scale to a large
number of VLANs. By default the full range of VLAN IDs 1 to 4094 is allowed on a trunk port
which means traffic belonging to all VLANs can be carried across the trunk port. It is also
possible to allow only a subset of the full range of VLAN IDs on the trunk while blocking the
others. Trunking allows a VLAN to span multiple switches with access ports belonging to the
VLAN spread across multiple switches in different parts of the switched network. This provides
great flexibility when creating VLANs and a host can be assigned to a VLAN regardless of its
physical location on the switched network.
Exam Concept – A trunk link must operate at 100 Mbps or greater speeds. This is a common
CCNA question.
A switch port can be configured as trunk using command switchport mode trunk in interface
configuration mode.
SW1>enable
SW1#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
SW1(config)#interface FastEthernet 0/1
SW1(config-if)#switchport mode trunk
We will learn more about trunking protocols ISL and 802.1Q in a later section.
Voice Access Ports
Voice access ports are a special case of access ports with modified behavior suited for
connecting IP phones. Most corporate users these days use two network devices: a desktop or
laptop computer and an IP phone. Typically just one LAN cable runs from the desk or cubicle to
the switch that carries data traffic from the computer and voice traffic from the IP phone. Voice
access ports allow you to add a second VLAN to an access port on a switch for your voice traffic
which is called the voice VLAN. In fact a voice access port is like a hybrid of an access port and
a trunk port carrying some characteristics of each type, but it is still considered an access port
that can be configured for both data and voice VLANs. So what we get is the ability to use the
same physical interface and the same physical cable run for both data and voice traffic yet
compartmentalizing each type of traffic in its own VLAN.
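A minimal sketch of a voice access port configuration (the interface and VLAN numbers are arbitrary examples):
SwitchA(config)#interface FastEthernet 0/4
SwitchA(config-if)#switchport mode access
SwitchA(config-if)#switchport access vlan 10
SwitchA(config-if)#switchport voice vlan 150
Here VLAN 10 carries the data traffic from the PC and VLAN 150 carries the voice traffic from the IP phone.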
VLAN Trunking: ISL & 802.1Q
As you learned, VLAN trunking allows switches to send Ethernet frames for multiple VLANs
across a single link. A trunk interface needs a way to distinguish between Ethernet frames that
belong to different VLANs. If frames from different VLANs are sent unaltered across the trunk
interface, the switch at the other end would have no way of knowing which VLAN certain frame
belongs to. This leads us naturally to the idea of frame tagging. Frame tagging is simply adding
some additional information to regular frames before sending them out a trunk interface so that the
device at the other end of the trunk interface would identify the VLAN the frame belongs to.
VLAN IDs are associated with only those frames that traverse a trunk link. When a frame enters
or exits the switch on an access switch port, no VLAN ID is present. The Application Specific
Integrated Circuits (ASICs) on the switch port physically assign the VLAN ID to a frame as it is placed on a trunk link and also strip off the VLAN ID when the frame exits an access switch port. When we speak of ASICs we are in the realm of the switch's hardware architecture; performing frame tagging in hardware allows the switch to operate at wire speed.
There are two different ways to tag frames: ISL and 802.1Q. Although the basic concept of frame
tagging is the same with both methods, there are differences in how it is accomplished. If two
devices are to perform trunking, they must agree to use either ISL or 802.1Q as there are several
differences between the two.
Table 7-2 Comparison of ISL and 802.1Q
Feature                                      ISL                         802.1Q
Supported VLANs                              Normal and extended range   Normal and extended range
Protocol defined by                          Cisco                       IEEE
Encapsulates original frame or inserts tag   Encapsulates                Inserts tag
Native VLAN support                          No                          Yes
ISL and 802.1Q Concepts
Inter-switch Link (ISL) is a Cisco proprietary protocol that maintains VLAN information in
Ethernet frames by encapsulating the whole Ethernet frame. In the case of ISL, the tag is external
to the Ethernet frame, which is the same as encapsulating the Ethernet frame. ISL adds a 26-byte
header (containing a 15-bit VLAN identifier) and a 4-byte CRC trailer to the frame. ISL is
supported only on Cisco switches and even some newer Cisco switches don’t support it any
more. ISL cannot be used to connect a Cisco switch to a switch from another vendor such as HP, and its use has been deprecated even by Cisco in favor of IEEE 802.1Q, which happens to be the more popular choice among trunking protocols.
IEEE 802.1q is a standard developed by the Institute of Electrical and Electronics Engineers
(IEEE) to carry traffic belonging to multiple VLANs across a trunk. In contrast to ISL, 802.1Q
does not actually encapsulate the original frame. Instead, it adds a 32-bit field between the source
MAC address and the Ether Type/Length fields of the original frame. This 32-bit field carries the
information used to deterministically identify the VLAN the Ethernet frame belongs to.
The extra VLAN header used by both ISL and 802.1Q uses the VLAN identifier or VLAN ID field
to identify the VLAN the frame belongs to. VLAN ID is a 12-bit field specifying the VLAN to
which the frame belongs. The range of hexadecimal values is from 0x000 to 0xFFF for a 12-bit
number. The hexadecimal values of 0x000 and 0xFFF are reserved while all other values in the
range can be used as VLAN identifiers, allowing up to 4,094 VLANs. Please see the graphic to
understand how IEEE 802.1Q tag is inserted in a regular Ethernet frame.
Figure 7-7 IEEE 802.1Q Tag Insertion
The IEEE 802.1Q standard can create a very interesting scenario with Ethernet frames of
maximum size. Please recall that the maximum size of an Ethernet frame is 1518 bytes as
specified by IEEE 802.3 standard. Now, if such frame gets tagged the resulting frame size will be
1522 bytes, a number that exceeds the maximum size specified in IEEE 802.3 standard. In order
to resolve this issue the maximum Ethernet frame size was extended to 1522 bytes by
the 802.3ac subgroup of the IEEE 802.3 committee. Still, some network devices that do not support the larger frame size will process the frame successfully but may report these larger frames as baby giants.
IEEE 802.1Q and ISL are used to multiplex VLANs over single link by adding VLAN tags for
identification. However, it is possible to send Ethernet frames either tagged or untagged across an
IEEE 802.1Q trunk. Cisco uses the concept of native VLAN to help explain which frames will be
sent with or without tags. An IEEE 802.1Q trunk port sends and receives tagged frames for all
VLANs, except the native VLAN if one is configured. Frames belonging to the native VLAN do
NOT carry VLAN tags when sent over the trunk. Similarly, if an untagged frame is received on a
trunk port, the frame is associated with the native VLAN configured on that port. The concept of
native VLAN is not important for ISL as all frames including the ones for native VLAN are
tagged. The default native VLAN on Cisco switches is 1. Also please note that the native VLAN
is specific to a single trunk port and not to the whole switch. In fact different trunk ports on a
Cisco switch can have different native VLANs. Both the trunk ports at the two ends of a trunk
should have the same native VLAN configured.
On a side note, many Network Interface Cards (NICs) for PCs and printers are not 802.1Q
compliant. If they receive a tagged frame, they will not understand the VLAN tag and will drop
the frame. From a practical standpoint, a PC should get one and only one VLAN so it does not
matter if your PC NIC supports dot1Q or not. However NICs on server machines may support
802.1Q and there are situations where this capability is useful. You may provide access to applications on the server to different VLANs while still providing traffic isolation. As the server NIC is
802.1Q capable it can receive traffic from different VLANs on the same physical interface by
establishing an 802.1Q trunk link with the switch it is directly connected to.
VLAN Trunking Protocol (VTP)
VLAN Trunking Protocol (VTP) was developed by Cisco to reduce VLAN administration effort
in a switched network, making it a Cisco proprietary protocol. The comparable IEEE standard in
use by other manufacturers is GARP VLAN Registration Protocol (GVRP), and more
recently Multiple VLAN Registration Protocol (MVRP). As you know by now, there are two
important tasks to be performed when creating VLANs in a switched network: creating VLANs
and assigning switch ports to VLANs. The first task requires the network administrator to define
all the VLANs on each switch in a switched network. If performed manually by logging into each
switch, this can be a tedious task on a large network involving a large number of switches and is
also prone to error. With VTP, VLANs need to be created on only a single switch, and this VLAN information is propagated through VTP messages to all switches in the network. This not only greatly reduces
the effort involved but also minimizes the chance of an error. VTP allows you to add, delete, and
rename VLANs on a single switch and this information is then propagated to all other switches in
the VTP domain.
On a side note, the name VLAN Trunking Protocol (VTP) may be a bit misleading as the protocol
does not have much to do with trunking. VTP just makes it easier to define VLANs by doing it on
one central switch and propagating that information to the whole switched network through VTP
messages. In this manner, VTP allows for more consistent VLAN configuration, and accurate tracking and monitoring of VLANs through central administration. Note that a switch can only share VLAN information with other switches over VTP if they are configured into the same VTP domain. VTP information is sent only over trunk ports; no VTP information is sent over
access ports. Switches not only advertise all known VLANs with any specific parameters but also
VTP management domain information and configuration revision number.
VTP Modes of Operation
A switch can operate in one of three different modes of operation within a VTP domain:
Server This is the default mode on all Cisco Catalyst switches. The switch in VTP server mode
is needed to propagate VLAN information throughout the VTP domain. Also, a switch must be in
VTP server mode to be able to create, modify, and delete VLANs. VTP information should be
changed on the switch operating in server mode and any change made to a switch in server mode
will be propagated throughout the VTP domain via VTP advertisements forwarded on trunks.
Also, VLAN configurations are saved in NVRAM for a switch in VTP server mode.
Client A switch in VTP client mode receives information from VTP servers, but it also sends
and receives VTP updates just like VTP servers. But, in contrast to VTP server, a VTP client
cannot create, modify, or delete VLANs. Also, you cannot assign a port on a VTP client to a
VLAN before the VTP server notifies the client of the new VLAN. Also, a VTP client
does not store the VLAN information it receives from a VTP server in NVRAM. This means that
if the switch loses power or is reloaded, the VLAN information it has learnt would be gone and it
would have to re-learn the information from a VTP server. So basically, switches that are in VTP
client mode will just learn and pass along VTP information.
Transparent Switches in VTP transparent mode receive VTP advertisements and forward them over any configured trunk links, but that’s all. They do not update their own VLAN database with the VTP information they receive and pass along. Also, they can create, modify, and delete VLANs in their own VLAN database, but this database is kept isolated from the rest of the VTP domain and is not advertised at all. Practically, switches in VTP transparent mode do not participate in the VTP domain and act just as relay agents, receiving VTP advertisements and passing them along. The utility of VTP transparent mode is to enable VTP servers and clients to synchronize their VLAN databases even if they are connected via switches that are not supposed to have the same VLANs. In short, a switch in VTP transparent mode receives and forwards VTP information through its trunk ports but does not update its own VLAN database; it only relays VTP information.
Exam Concept Typically you will see questions on the CCNA exam about VTP modes. Know
that a switch has to be in VTP server or transparent mode in order to make any VLAN changes
locally.
VTP Domains Cisco switches participating in VLAN Trunking Protocol (VTP) are
organized into management domains, or areas with similar VLAN requirements. A switch can be
part of one and only one VTP domain and can share VLAN information with other switches in the
same domain. Switches in different VTP domains do not share VTP information. If a switch
receives a VTP advertisement from a switch in a different VTP domain, it will ignore such
advertisement. Mismatched VTP domain names are a common reason why switches in your network do not share VLAN information, and they are one of the first things you should check when troubleshooting VTP issues.
The concept of a VTP management domain is somewhat analogous to the concept of an autonomous system (AS) in Border Gateway Protocol (BGP). A switch can belong to only one VTP domain
just like a BGP router can belong to a single AS.
Exam Concept – You will see a CCNA exam question asking what happens if a switch receives
a VTP advertisement with a different management domain name. Know it simply ignores such an
advertisement.
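As an illustration, here is a minimal sketch of placing two switches in the same VTP domain; the domain name and password are arbitrary examples:
SW1(config)#vtp domain CCNA-LAB
SW1(config)#vtp mode server
SW1(config)#vtp password cisco123
SW2(config)#vtp domain CCNA-LAB
SW2(config)#vtp mode client
SW2(config)#vtp password cisco123
The show vtp status command can then be used on each switch to confirm the domain name, operating mode, and configuration revision number.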
Switches in a VTP domain advertise several attributes to their VTP domain neighbors. Each
advertisement contains several parameters including VTP management domain, VTP revision
number, known VLANs, and specific VLAN parameters. When a new VLAN is added to a switch
in a VTP domain, other switches are notified of the new VLAN through VTP advertisements. In
this way, all switches in a domain can prepare to receive traffic on their trunk ports using the new
VLAN.
VTP Advertisements The VLAN Trunking Protocol (VTP) uses Layer 2 frames to
communicate VLAN information among a group of switches. These special frames are sent only
out trunk links leading to neighboring switches. Each VTP advertisement contains a VTP header
and a VTP message. The format of the VTP header can vary, based on the type of VTP message,
but all VTP packets contain these fields in the header:
VTP protocol version: 1, 2, or 3
VTP message type
VTP management domain name length
VTP management domain name
VTP configuration revision number
In addition to these parameters, each Cisco switch participating in VTP also advertises VLANs
and VLAN parameters on its trunk ports to notify other switches in the domain. VTP
advertisements are sent as multicast frames out trunk links. The receiving switch intercepts frames
sent to the VTP multicast address and processes them.
VTP switches use an index called the VTP configuration revision number to keep track of the most
recent VLAN information. Each switch participating in a VTP domain stores the configuration
revision number that is last heard from a VTP advertisement. The VTP advertisement process
always starts with configuration revision number zero (0). When subsequent changes are made on
a VTP server, like addition or deletion of VLANs, the revision number is incremented before the
advertisements are sent. When listening switches in the same domain receive an advertisement
with a greater revision number than is stored locally, the advertisement overwrites any stored
VLAN information. Because of this, it is very important to always force any newly added network
switches to have revision number 0 before being attached to the network. Otherwise, a switch
might have stored a revision number that is greater than the value currently in use in the domain,
and all existing VLAN information in the domain might inadvertently be overwritten.
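A common way to reset a switch’s configuration revision number to 0 before connecting it is shown in the sketch below; either changing the VTP domain name or toggling the switch through transparent mode should work:
SW3(config)#vtp mode transparent
SW3(config)#vtp mode server
Alternatively, temporarily assign the switch to a different (dummy) VTP domain name and then set it back to the real domain name.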
VTP Message Types There are three types of VTP messages:
Summary Advertisements By default, Cisco switches issue summary
advertisements at five minute intervals. Summary advertisements inform adjacent switches
of the current VTP domain name and the configuration revision number. When a switch
receives a summary advertisement frame, it compares the VTP domain name to its own
VTP domain name. If the name is different, the switch simply ignores the packet. If the
name is the same, the switch then compares the configuration revision to its own revision.
If its own configuration revision is higher or equal, the packet is ignored. If it is lower, an
advertisement request is sent.
Subset Advertisement When we add, delete, or modify a VLAN in a Cisco switch, the
VTP server where changes are made increments the configuration revision number and
issues a summary advertisement. One or several subset advertisements follow the
summary advertisement. A subset advertisement contains a list of VLAN information. If
there are several VLANs, more than one subset advertisement can be required to advertise
all VLANs.
Advertisement Requests A switch needs a VTP advertisement request when the
switch has been reset, the VTP domain name has been changed, or the switch has received
a VTP summary advertisement with a higher configuration revision number than its own.
Upon receiving an advertisement request, a VTP switch sends a summary advertisement.
One or more subset advertisements also follow the summary advertisements.
Exam Concept – You are likely to see a question on the CCNA exam regarding VTP revision numbers. Know this concept!
VTP Password If a password is configured for VTP, it must be configured on all
switches in the VTP domain. The password is case sensitive and must be the same on all switches
in the VTP management domain. The VTP password gets converted into a 16-byte value by the
MD5 hashing algorithm and is carried in all summary-advertisement VTP packets.
Please keep in mind that VTP domain name and password are both case sensitive,
so CertificationKits and certificationkits are different VTP domain names. A switch accepts
VLAN information only from switches in its own domain. In large switched networks, you should
consider dividing the network into multiple VTP domains. Dividing the network into multiple
domains reduces the amount of VLAN information each switch must maintain. VTP domains are
loosely analogous to autonomous systems in a routed network where a group of routers share
common administrative policies. Multiple VTP domains are recommended only on large
networks. On small and medium-sized networks, a single VTP domain is sufficient and in fact more desirable as it minimizes problems.
As you already know the full range of VLANs is 1 to 4094, where normal-range VLANs have
VLAN IDs 1 to 1005, and extended-range VLANs have VLAN IDs 1006 to 4094. VTP only
propagates normal-range VLANs and a switch must be in VTP transparent mode when you create
extended-range VLANs. Also, VLAN IDs 1 and 1002 to 1005 are automatically created on all
Cisco Catalyst switches and cannot be removed.
VTP Pruning
Switches are intelligent devices as they try to learn which MAC addresses are connected to
which switch ports by passively gleaning source MAC addresses from user frames. A host
connected to a switch port must send at least one frame before the switch can learn its MAC
address and associate it with its switch port in the MAC address table. The MAC address table is a local database maintained by switches to map MAC addresses of connected hosts to their switch ports. But unknown unicasts, or unicasts to destination MAC addresses that the switch has not yet
learned are treated just as broadcasts. Thus unknown unicasts are forwarded out all switch ports
other than the one on which they are received. As such these unknown unicasts reach all corners
of a large switched network even to those switches which do not have any ports assigned to the
VLAN. The same applies to broadcasts which are propagated to all switches in the network even
if they don’t have any ports assigned to the VLAN. This is not an efficient use of available
bandwidth, but fortunately VTP provides a way to preserve bandwidth by configuring it to reduce
the volume of broadcasts, multicasts, and unknown unicast frames. This is called pruning which
literally means cutting away dead or overgrown branches or stems from a tree, shrub, or bush.
VTP pruning enables a switched network to send unknown unicasts, multicasts, and broadcasts to
only those trunk links that actually have some ports downstream that may need that information.
For example, if Switch 1 does not have any ports assigned to VLAN 100 and a broadcast is sent
throughout VLAN 100, that broadcast would not traverse the trunk links connected to Switch 1.
By default, VTP pruning is disabled on all switches.
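Enabling VTP pruning takes a single global configuration command, shown in the sketch below. Enabling it on a VTP server typically enables pruning for the entire management domain:
SW1(config)#vtp pruning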
Inter-VLAN Routing
VLANs provide traffic separation at layer 2 of the OSI model. Hosts in a VLAN can communicate
freely and directly with other hosts on the same VLAN and it includes unicasts, multicasts, and
broadcasts. All three types of frames can flow freely and directly between any two hosts that are
on the same VLAN regardless of their physical location on a switched network. But what if hosts
on two different VLANs need to communicate? In such a situation you need a layer 3 device, either a router or a layer 3 switch. Such communication is simply not possible within the bounds of a layer 2 only network.
Have a look at Figure 7-8, where our switched network has two VLANs: VLAN 1 and VLAN 2.
Hosts in VLAN 1 need to communicate with hosts in VLAN 2. We know that this kind of
communication is not possible in our layer 2 only switched network and we need a layer 3
device. One possible solution to achieve communication between the two VLANs can be to
introduce a router into the picture such that the router has two LAN interfaces Fa0/0 and Fa0/1
one for each VLAN. These two interfaces are connected to two access switch ports Fa0/1 and
Fa0/2 in VLANs 1 and 2 respectively. The router interfaces connected to these switch ports each
have an IP address configured in the subnet corresponding to the associated VLAN. From the
standpoint of the router, the two VLANs are merely two different subnets connected to two
different router interfaces and the router essentially performs routing to move traffic between the
two VLANs. Please remember that best practices dictate using a separate IP subnet for each
VLAN. As you can see, this scheme requires one dedicated interface on the router for each VLAN
in your switched network. You can imagine the solution does not scale well when you have
several VLANs in your switched network.
Figure 7-8 Router with Separate Interface for Each VLAN
Now, have a look at Figure 7-9 below, which is an alternate and more efficient way of achieving
routing between different VLANs using a router. Here we have only one router interface Fa0/0
connected to the switch port Fa0/1. The link is configured as an 802.1Q carrying traffic for both
VLAN 1 and VLAN 2. There is one sub-interface per VLAN configured on the router with IP
addresses configured on the subinterfaces rather than the physical interface. The key difference is that we have only one physical connection from the router to the switch, regardless of the number of VLANs. This solution, also called router-on-a-stick, is more efficient and scales to a large
number of VLANs. Most switches today are not just layer 2 devices but are multilayer switches
and inter-VLAN routing can be achieved using switches alone without involving a router at all.
You will see this concept on the CCNA exam and you must remember that the link from the switch
to the router must be a trunked link and the router’s interface must be at least a Fast Ethernet
interface. These two things are very important and will be on the exam!
Figure 7-9 Router on a Stick
VLAN Configuration
VLAN concepts may be a bit overwhelming at first, but surprisingly the actual configuration of
VLANs in a network of Cisco switches requires just a few simple steps:
Step 1 Create the VLAN.
Step 2 Assign switch ports to that VLAN.
In Example 7-2, we will create several new VLANs on Switch1 and also assign names to them
according to Table 7-1.
Table 7-2 VLANs to be created
Now we create new VLAN IDs 10, 20, 30, 40, and 50 with names Engineering, Finance,
Management, Marketing, and Sales respectively. Please note that we have assigned names to
VLANs but this step is optional. If names are not explicitly assigned to VLANs, a Cisco switch automatically creates a name for each VLAN, derived from the VLAN ID itself (for example, VLAN0010 for VLAN ID 10).
Switch1#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
Switch1(config)#vlan 10
Switch1(config-vlan)#name Engineering
Switch1(config-vlan)#vlan 20
Switch1(config-vlan)#name Finance
Switch1(config-vlan)#vlan 30
Switch1(config-vlan)#name Management
Switch1(config-vlan)#vlan 40
Switch1(config-vlan)#name Marketing
Switch1(config-vlan)#vlan 50
Switch1(config-vlan)#name Sales
Switch1(config-vlan)#exit
Switch1(config)#exit
Switch1#
Now we display VLANs again just like we did at the start to verify that new VLANs have been
created using the show vlan brief command.
You can see that five new VLANs have been successfully created, but also note carefully in the Ports column that no switch ports are yet assigned to these newly created VLANs.
Now we will proceed to assign switch ports to VLANs.
Table 7-3 Ports to be Assigned to VLANs
Switch1#configure terminal
Switch1(config)#interface FastEthernet 0/1
Switch1(config-if)#switchport access vlan 10
Switch1(config-if)#switchport mode access
Switch1(config)#interface FastEthernet 0/2
Switch1(config-if)#switchport access vlan 10
Switch1(config-if)#switchport mode access
Switch1(config)#interface FastEthernet 0/3
Switch1(config-if)#switchport access vlan 10
Switch1(config-if)#switchport mode access
Now assign switch ports FastEthernet 0/4 and FastEthernet 0/5 to VLAN 20.
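A minimal sketch of that assignment, following the same pattern as above:
Switch1(config)#interface FastEthernet 0/4
Switch1(config-if)#switchport access vlan 20
Switch1(config-if)#switchport mode access
Switch1(config)#interface FastEthernet 0/5
Switch1(config-if)#switchport access vlan 20
Switch1(config-if)#switchport mode access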
This completes our configuration and verification for this section but you may have noticed that
there is quite a bit of repetition when it comes to assigning several switch ports to the same
VLAN. There is a shortcut Cisco IOS provides to accomplish this with fewer commands by
applying those commands to a range of switch ports. Let’s again assign switch ports FastEthernet 0/1 to FastEthernet 0/3 to VLAN 10 using the new method.
Switch1#configure terminal
Switch1(config)#interface range FastEthernet 0/1 – 3
Switch1(config-if)#switchport access vlan 10
Switch1(config-if)#switchport mode access
As you can see it greatly reduces the effort needed to apply the same configuration to multiple
switch ports. This method can be used to easily apply any commands from the interface
configuration mode to a range of interfaces.
Please keep in mind that Cisco IOS switches keep VTP and VLAN information in a file
named vlan.dat which is stored in the flash memory. Even if you erase the startup configuration
and reload the device, VLAN information persists because it is saved in vlan.dat file. You must
manually delete the vlan.dat file in addition to erasing the startup configuration if you want to get
rid of all VLAN information on the switch.
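A minimal sketch of wiping both the startup configuration and the VLAN database (the exact file name and confirmation prompts may vary slightly by platform):
Switch#delete flash:vlan.dat
Switch#erase startup-config
Switch#reload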
Real World Concept – Before deleting a VLAN, re-assign the ports belonging to that VLAN to
another VLAN in order to avoid making the ports inoperable.
VLAN Management Policy Server (VMPS)
Cisco switches also support a dynamic method of assigning devices to VLANs, based on the
device’s MAC addresses, using a tool called VLAN Management Policy Server (VMPS). A
VLAN Management Policy Server or VMPS is simply a Cisco switch that maintains device
information to VLAN mapping. With VMPS, a switch administrator can dynamically assign a
network device to a particular VLAN. This technology ties VLAN membership to the end device
rather than the switch port and is useful in sites that contain a large number of mobile users.
You can use the VLAN Management Policy Server (VMPS) service to set up a database of MAC
addresses to be used for the dynamic addressing of your VLANs. The VMPS database
automatically maps MAC addresses to VLANs. A dynamic access port can belong to one VLAN
anywhere in the range 1-4094 and is dynamically assigned by the VMPS. Lower end switches
like the Catalyst 2960 can be a VMPS client only.
Trunk Port Configuration
You can manually configure trunk links on Cisco switches but Cisco has also implemented a
proprietary, point-to-point protocol called Dynamic Trunking Protocol (DTP) that negotiates a
common trunking mode between two neighboring switches. The negotiation covers the
encapsulation (ISL or 802.1Q) and whether the link becomes a trunk at all. This allows trunk links
to be used without much manual configuration or administration.
Now that you understand the two types of trunk interfaces, let’s see how to configure each type.
The following list describes the different options available to you when configuring a switch
interface:
switchport mode access This command entered in interface configuration mode puts the interface
into permanent non trunking mode and also negotiates to convert the link into a non trunk link. The
interface becomes a non-trunk interface regardless of whether the neighboring interface is also a
non-trunk interface. Such interface would be a dedicated layer 2 interface.
switchport mode dynamic auto This interface configuration mode command makes the interface
able to convert the link to a trunk link dynamically only if the neighboring switch initiates DTP
negotiation. The interface becomes a trunk interface if the neighboring interface is set
to trunk or dynamic desirable mode. This is also the factory default mode on Cisco switch
interfaces.
switchport mode dynamic desirable This interface configuration mode command makes the
interface able to convert the link to a trunk link dynamically by actively initiating DTP
negotiation. The interface becomes a trunk interface if the neighboring interface is set
to trunk, dynamic desirable, or dynamic auto mode.
switchport mode trunk This command puts the interface into permanent trunking mode and also negotiates to convert the neighboring interface into trunking mode. The interface becomes a trunk interface even if the neighboring interface is not a trunk interface.
switchport nonegotiate This interface configuration mode command prevents the interface from
generating DTP frames to negotiate trunking. You can use this command only when the interface is
configured with switchport mode trunk or switchport mode access. This command is not
compatible with dynamic auto and dynamic desirable modes.
Dynamic Trunking Protocol (DTP) is not only used to negotiate trunking on a link between two
devices but also to negotiate the encapsulation type of either 802.1Q or ISL. When we statically configure a link as access or trunk using the relevant configuration commands, it is a good practice to disable DTP on the link with the switchport nonegotiate command to prevent unnecessary DTP traffic, as shown in the sketch below.
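A minimal sketch of a statically configured trunk with DTP disabled; the interface number is arbitrary, and the encapsulation command is needed only on switches that support both ISL and 802.1Q:
SW1(config)#interface FastEthernet 0/16
SW1(config-if)#switchport trunk encapsulation dot1q
SW1(config-if)#switchport mode trunk
SW1(config-if)#switchport nonegotiate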
Table 7-4 Trunk Configuration Options
Configuration Command      Short Name      Meaning                          Config on Other Side to Trunk
switchport mode trunk      Trunk           Always trunks; sends DTP         On, desirable, auto
                                           messages to help the other
                                           side choose to trunk
switchport mode trunk;     Trunk (with     Always trunks; does not send     On
switchport nonegotiate     nonegotiate)    DTP messages
switchport mode dynamic    Desirable       Sends DTP messages and trunks    On, desirable, auto
desirable                                  if negotiation succeeds
switchport mode dynamic    Auto            Replies to DTP messages and      On, desirable
auto                                       trunks if negotiation succeeds
switchport mode access     Access          Never trunks; sends DTP          Never trunks
                                           messages to help the other
                                           side reach the same conclusion
switchport mode access;    Access (with    Never trunks; does not send      Never trunks
switchport nonegotiate     nonegotiate)    DTP messages
Please see the diagram below, where SW1 is connected to SW2, SW3, and SW4 via interfaces Fa0/13, Fa0/16, and Fa0/19 respectively. Also note that SW2, SW3, and SW4 each has its
interface Fa0/13 connected to an interface on SW1. This is how we are going to configure our
switched network here:
SW1 – SW2 configured as ISL trunk
SW1 – SW3 configured as 802.1Q trunk
SW1 – SW4 configured dynamically by DTP
Figure 7-10 Trunking Configuration Reference for Example
We start out by configuring interface FastEthernet 0/13 of SW1 as trunk and setting the
encapsulation to ISL.
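The configuration output is not reproduced here, but a minimal sketch of this step on SW1 would look like the following; the interface numbers follow the figure, and the commands assume a switch that supports ISL:
SW1(config)#interface FastEthernet 0/13
SW1(config-if)#switchport trunk encapsulation isl
SW1(config-if)#switchport mode trunk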
Finally, let’s move to SW2 and see how trunks have formed.
That was one doozy of an example, but we learnt how to create and verify trunking using different
methods. There is some additional fine tuning that can be done to the trunk links as shown below:
SW1(config-if)#switchport trunk ?
allowed Set allowed VLAN characteristics when interface is in trunking mode
encapsulation Set trunking encapsulation when interface is in trunking mode
native Set trunking native characteristics when interface is in trunking mode
pruning Set pruning VLAN characteristics when interface is in trunking mode
Defining the Allowed VLANs on a Trunk
By default the full range of VLANs 1 to 4094 are allowed on a trunk link. But you can selectively
allow VLANs on a trunk while disallowing others using command switchport trunk allowed
vlan:
SW1(config)#interface Fa0/16
SW1(config-if)#switchport trunk allowed vlan ?
WORD VLAN IDs of the allowed VLANs when this port is in trunking mode
add add VLANs to the current list
all all VLANs
except all VLANs except the following
none no VLANs
remove remove VLANs from the current list
SW1(config-if)#switchport trunk allowed vlan 1,10,20,30,40,50
The above command will only allow VLANs 1,10, 20, 30, 40 and 50 on the trunk while
disallowing all others. The configuration can be verified using command show interface trunk:
Modifying the Trunk Native VLAN
The native VLAN is the one VLAN whose frames are not tagged with 802.1Q encapsulation
before sending out an 802.1Q trunk. The native VLAN should match on both ends of a trunk link
because the receiving end would interpret any frame received untagged on an 802.1Q trunk as
belonging to the native VLAN. You can change the native VLAN using command switchport
trunk native vlan.
We change the native vlan first on Fa0/16 of SW1 and then on Fa0/13 of SW2 to complete the
configuration.
SW1(config)#interface Fa0/16
SW1(config-if)#switchport trunk native vlan 10
SW2(config)#interface Fa0/13
SW2(config-if)#switchport trunk native vlan 10
You can verify the configuration using the good old show interface trunk command on SW1 and
SW2. The output on SW1 looks something like:
Inter-VLAN Routing Configuration
By default, a host can communicate with only those hosts that are members of the same VLAN. In
order to change this default behavior and allow communication between different VLANs, you
need a router or a layer 3 switch. We will learn both approaches starting with the router
approach.
The router has to support ISL or 802.1Q trunking on a FastEthernet or GigabitEthernet interface in
order to perform routing between different VLANs. The router’s interface is divided into logical
interfaces called subinterfaces, one for each VLAN. From a FastEthernet or GigabitEthernet
interface on the router, you can set the interface to perform trunking with
the encapsulation command:
R1>enable
R1#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
R1(config)#interface FastEthernet0/0.10
R1(config-subif)#encapsulation ?
dot1Q IEEE 802.1Q Virtual LAN
R1(config-subif)#encapsulation dot1Q ?
<1-4094> IEEE 802.1Q VLAN ID
R1(config-subif)#encapsulation dot1Q 10
Please note that the Cisco 2811 router named R1 supports only 802.1Q trunking. As we learned
earlier in the chapter, Cisco is moving away from ISL, and newer hardware like the Cisco 2800 series Integrated Services Router (ISR) does not even support ISL.
We have used subinterface number 10 which happens to be the same as the VLAN ID associated
with the subinterface. It is common practice to make the subinterface number match the VLAN ID
which makes the configuration more predictable and helps in configuration and troubleshooting.
But this is just a convention; the subinterface number and the VLAN ID don’t necessarily have to match. Remember that the subinterface number is only locally significant, and it does not matter which subinterface numbers are configured on the router.
Another important fact about VLANs is that each VLAN is also a separate IP subnet. Although it is not an absolute requirement to have a one-to-one mapping between VLANs and IP subnets, it really is a good idea to configure your VLANs as separate subnets, so better stick to this best practice.
In order to make sure you are fully prepared to configure inter-VLAN routing, we will go through
two different configuration examples in detail.
Let’s start by looking at the figure that follows and reading the router and switch configuration
given for the figure.
Figure 7-11 Inter-VLAN Routing Example1
R1#conf term
Enter configuration commands, one per line. End with CNTL/Z.
R1(config)#interface Fa0/0.10
R1(config-subif)#encapsulation dot1q 10
R1(config-subif)#ip address 192.168.10.1 255.255.255.0
R1(config-subif)#interface Fa0/0.20
R1(config-subif)#encapsulation dot1q 20
R1(config-subif)#ip address 192.168.20.1 255.255.255.0
Having come this far in your CCNA studies, you should be able to figure out which IP subnets are
being used by looking at the router configuration. You can see that we are using 192.168.10.0/24
with VLAN 10 and 192.168.20.0/24 with VLAN 20. And by looking at the switch configuration,
you can see that interfaces FastEthernet0/2 and FastEthernet0/3 are in VLAN 10 and interface
FastEthernet0/4 is in VLAN 20. This means that Host A and Host B are in VLAN 10 and Host C
is in VLAN 20.
SW1>enable
SW1#conf term
Enter configuration commands, one per line. End with CNTL/Z.
SW1(config)#interface Fa0/2
SW1(config-if)#switchport access vlan 10
SW1(config-if)#switchport mode access
SW1(config-if)#interface Fa0/3
SW1(config-if)#switchport access vlan 10
SW1(config-if)#switchport mode access
SW1(config-if)#interface Fa0/4
SW1(config-if)#switchport access vlan 20
SW1(config-if)#switchport mode access
We are configuring the IP addresses on hosts manually or statically as below:
Host A
IP Address 192.168.10.2
Subnet Mask 255.255.255.0
Default Gateway 192.168.10.1
Host B
IP Address 192.168.10.3
Subnet Mask 255.255.255.0
Default Gateway 192.168.10.1
Host C
IP Address 192.168.20.2
Subnet Mask 255.255.255.0
Default Gateway 192.168.20.1
The hosts can have any IP address in the subnet range, but I just chose the first available IP addresses after the default gateway address to keep the configuration simple and predictable. Always keep in mind that configurations that are easy to read and predict are also easier to maintain and troubleshoot from a practical standpoint.
Now again using the figure as reference, let’s go through the commands necessary to configure
switch interface Fa0/1 to establish a link with the router and provide inter-VLAN communication
using IEEE 802.1q encapsulation. Please note that I have used a Cisco 3560 switch here and the
commands can vary slightly depending on what switch model you are working with.
SW1#conf term
Enter configuration commands, one per line. End with CNTL/Z.
SW1(config)#interface fa0/1
SW1(config-if)#switchport trunk encapsulation ?
dot1q Interface uses only 802.1q trunking encapsulation when trunking
isl Interface uses only ISL trunking encapsulation when trunking
negotiate Device will negotiate trunking encapsulation with peer on
interface
SW1(config-if)#switchport trunk encapsulation dot1q
SW1(config-if)#switchport mode trunk
As you can see, our Cisco 3560 switch supports both IEEE 802.1Q and ISL encapsulation, in addition to a negotiate mode that allows the encapsulation to be negotiated through Dynamic Trunking Protocol (DTP). We specified 802.1Q as the trunking protocol in order to successfully perform trunking with the router. Also keep in mind that when we create a trunk link like this one, all VLANs 1 to 4094 are allowed to pass data by default. However, it is possible to allow only a subset of the VLAN range while blocking others.
Let’s move on to our second and final configuration example for inter-VLAN routing involving a
somewhat more complex scenario as shown in the figure below:
Figure 7-12 Inter-VLAN Routing Example 2
This figure shows three VLANs 1, 2, and 3 with two hosts in each of them. The router is connected to switch port Fa0/1 and routes between the VLANs using subinterfaces. The switch port connecting to the router is a trunk port, while the switch ports connecting to the clients are all access ports, not trunk ports. The configuration of the switch would look something like this:
SW1>en
SW1#conf term
Enter configuration commands, one per line. End with CNTL/Z.
SW1(config)#interface fa0/1
SW1(config-if)#switchport mode trunk
SW1(config-if)#int fa0/2
SW1(config-if)#switchport access vlan 1
SW1(config-if)#switchport mode access
SW1(config-if)#int fa0/3
SW1(config-if)#switchport access vlan 1
SW1(config-if)#switchport mode access
SW1(config-if)#int fa0/4
SW1(config-if)#switchport access vlan 2
SW1(config-if)#switchport mode access
SW1(config-if)#int fa0/5
SW1(config-if)#switchport access vlan 2
SW1(config-if)#switchport mode access
SW1(config-if)#int fa0/6
SW1(config-if)#switchport access vlan 3
SW1(config-if)#switchport mode access
SW1(config-if)#int fa0/7
SW1(config-if)#switchport access vlan 3
SW1(config-if)#switchport mode access
Before we configure the router, we need to know the IP subnets assigned to VLANs:
VLAN 1: 172.16.1.16/28
VLAN 2: 172.16.1.32/28
VLAN 3: 172.16.1.48/28
The configuration of the router would then look something like this:
R1>en
R1#conf term
Enter configuration commands, one per line. End with CNTL/Z.
R1(config)#int fa0/0
R1(config-if)#no ip address
R1(config-if)#no shutdown
R1(config-if)#int fa0/0.1
R1(config-subif)#encapsulation dot1q 1
R1(config-subif)#ip address 172.16.1.17 255.255.255.240
R1(config-subif)#int fa0/0.2
R1(config-subif)#encapsulation dot1q 2
R1(config-subif)#ip address 172.16.1.33 255.255.255.240
R1(config-subif)#int fa0/0.3
R1(config-subif)#encapsulation dot1q 3
R1(config-subif)#ip address 172.16.1.49 255.255.255.240
The hosts in each VLAN would be assigned an IP address from the IP subnet associated with that VLAN, and the default gateway would be the IP address assigned to the router's subinterface in that VLAN.
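As an illustration only (the host addresses are arbitrary picks from each subnet), a host in VLAN 2 could be configured like this:
Host in VLAN 2
IP Address 172.16.1.34
Subnet Mask 255.255.255.240
Default Gateway 172.16.1.33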
VTP Configuration
In this section, we will configure VLAN Trunking Protocol (VTP) for the switched network
shown in the diagram:
Figure 7-13 VTP Configuration Example
Cisco switches are configured to be in VTP server mode by default. The first step in configuring VTP is to set the VTP domain name you want to use. The VTP domain name can be any string of characters, and it must be configured on all switches that are to exchange VLAN information with each other over VTP.
When you create the VTP domain, there are quite a few options you can set including the domain
name, password, mode, and pruning. You can set all of these options using the vtp command in
global configuration mode. In the following example, we will set switch SW1 to
VTP server mode, the VTP domain to CertificationKits, and the VTP password to cisco:
SW1#conf term
Enter configuration commands, one per line. End with CNTL/Z.
SW1(config)#vtp mode server
Device mode already VTP SERVER.
SW1(config)#vtp domain CertificationKits
Changing VTP domain name from null to CertificationKits
SW1(config)#vtp password cisco
Setting device VLAN database password to cisco
SW1(config)#exit
SW1#
We are done configuring the various VTP options on SW1, but we also have to verify that configuration. There are two very useful commands to verify VTP configuration: show vtp status and show vtp password. Let's now configure SW2 as a VTP client in the same domain with the same password, and then run these commands on SW2 to verify the result:
SW2#conf term
Enter configuration commands, one per line. End with CNTL/Z.
SW2(config)#vtp mode client
Setting device to VTP CLIENT mode.
SW2(config)#vtp domain CertificationKits
Changing VTP domain name from null to CertificationKits
SW2(config)#vtp password cisco
Setting device VLAN database password to cisco
SW2(config)#exit
SW2#show vtp status
VTP Version : running VTP1 (VTP2 capable)
Configuration Revision : 0
Maximum VLANs supported locally : 1005
Number of existing VLANs : 9
VTP Operating Mode : Client
VTP Domain Name : CertificationKits
VTP Pruning Mode : Disabled
VTP V2 Mode : Disabled
VTP Traps Generation : Disabled
MD5 digest : 0x60 0xDE 0xD6 0xAC 0x3F 0x23 0xF6 0xC6
Configuration last modified by 10.0.1.1 at 3-1-93 02:07:25
Local updater ID is 0.0.0.0 (no valid interface found)
SW2#show vtp password
VTP Password: cisco
SW2#
You can repeat the same configuration on SW3 to complete the configuration on all three
switches. Now that all our switches are set to the same VTP domain and password, it’s time to
test if our VTP configuration achieves what it is supposed to achieve. You may recall that the
primary goal of VTP is to be able to create VLANs only on the VTP server and let that VLAN
information propagate to VTP clients through VTP advertisements. We created a few VLANs on
SW1 earlier and they should be advertised to the VTP client switches SW2 and SW3 if VTP is
working as expected. This can easily be verified by using the good old show vlan brief command on switches SW2 and SW3. Running the command on SW2, you can see that five new VLANs are present even though we never did any VLAN configuration on SW2. These VLANs have been learned from the VTP server SW1 through VTP advertisements. You may also have noticed that though SW2 has learned the new VLANs through VTP, no switch ports are yet assigned to them. Keep it very clear in your mind that VTP only advertises VLAN information; it cannot advertise VLAN port assignments. Individual switch ports must be manually assigned to the desired VLANs on all switches.
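For example, to place a port on SW2 into one of the VLANs it has learned through VTP, you would still configure that port locally; the interface and VLAN number here are only illustrative:
SW2(config)#interface Fa0/5
SW2(config-if)#switchport mode access
SW2(config-if)#switchport access vlan 10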
VTP Pruning
VLANs are an efficient way to preserve bandwidth by localizing broadcasts, multicasts, and unicast frames. VLAN Trunking Protocol (VTP) serves the basic purpose of making VLAN management centralized and more efficient. But VTP also has a small, nifty feature that gives us a way to preserve bandwidth even further within a VLAN. This feature is called pruning. VTP pruning-enabled switches send broadcasts only over those trunk links that actually need the information. Let's explain it a bit: if SW1 does not have any ports assigned to VLAN 2 and a broadcast is generated in VLAN 2, that broadcast does not traverse the trunk link from a connected switch to SW1. In other words, with VTP pruning enabled, other switches connected to SW1 do not send broadcasts generated in a specific VLAN to SW1 if SW1 has no port assigned to that VLAN.
When you enable pruning on a VTP server, you effectively enable it for the entire VTP domain. By default, only VLANs 2 through 1001 are pruning eligible; VLAN 1 can never be pruned because it is the default administrative VLAN. VTP pruning is disabled by default, but it is a good idea to enable it to save some bandwidth. And you know what, the configuration is surprisingly simple:
SW1#conf term
Enter configuration commands, one per line. End with CNTL/Z.
SW1(config)#vtp pruning
Pruning switched on
SW1(config)#exit
SW1#
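If you ever want to restrict which VLANs are pruning eligible on a particular trunk, the switchport trunk pruning vlan interface command seen in the help output earlier can be used; the interface and VLAN list below are purely illustrative:
SW1(config)#interface Fa0/16
SW1(config-if)#switchport trunk pruning vlan 10,20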
The command show vtp status can again be used to check the VTP pruning state currently configured.
VTP Troubleshooting
Now suppose that SW2 has not learned any of the VLANs we created on the VTP server, which is disturbing to see. There is something wrong with our configuration which we need to troubleshoot. We are not
going to do a show running-config here but rather use the show and debug commands. In fact, you
should try the show commands first and debug commands are to be used as a last resort. Most
problems can be isolated using show commands alone.
A good starting point is to run show vtp status and show vtp password on SW2 and check for the
VTP domain name, password and mode.
You may notice that the VTP password is set to Cisco rather than cisco, which seems to be the source of the problem. We can fix this:
SW2#conf term
Enter configuration commands, one per line. End with CNTL/Z.
SW2(config)#vtp password cisco
Setting device VLAN database password to cisco
SW2(config)#exit
SW2#show vtp password
VTP Password: cisco
After correcting the VTP password, SW2 learns the VLAN information from the VTP server, which can be verified by running the show vlan brief command.
In brief, most VTP synchronization problems are caused by a misconfiguration of domain name,
password, mode, or version and can be diagnosed by show vtp status and show vtp
password commands on all switches in the VTP domain.
Also, keep in mind that a mismatched domain name has another unwanted side effect: Dynamic Trunking Protocol (DTP) is not able to negotiate trunking. If you ever find yourself in a situation where trunking is not successfully negotiated while the configuration seems correct, do check that the VTP domain name matches on the two switches.
Voice VLAN Configuration
We briefly covered voice access ports earlier in the chapter, also mentioning voice VLANs. It is now time to dig a bit deeper into voice VLANs and do a little configuration as well.
The voice VLAN is an ingenious feature that enables access ports to carry voice traffic from an IP
phone. Cisco IP phones connect to the IP network using Ethernet to send Voice over IP (VoIP)
packets. The Voice over IP framework is made up of several components including IP phones, call
managers, and voice gateways. A detailed coverage of these components is beyond the scope of
this book and your Cisco Certified Network Associate (CCNA) exam. The VoIP communication
takes place over the same shared network infrastructure made up of switches and routers which is
used for data communication.
Each desk or cubicle in a modern enterprise is likely to have both an IP phone and a PC on it. One way of connecting the IP phone to the switch would be to use a separate Ethernet cable and a separate switch port. But Cisco came up with the idea of including a small LAN switch built inside each Cisco IP phone. This small switch allows one cable to run from the LAN switch to the desk and connect to the switch built into the IP phone. The PC can then connect to the switch inside the IP phone over a short straight-through Ethernet cable from the PC to the bottom of the IP phone. If you have access to a Cisco IP phone, turn it upside down and you will find two Ethernet ports at its bottom. One port connects to the LAN switch and the other connects to the PC, while a third, internal port connects to the IP phone circuitry inside. This is the simple three-port switch built into all Cisco IP phones. In this way, a Cisco IP phone provides a data connection for a user's PC in addition to carrying its own voice traffic. Please see the figure below for a graphical representation of the concept just described.
Figure 7-14 Built-in Switch of the Cisco IP Phone
As you can see in the diagram, the link between the phone and switch should use 802.1Q trunking,
and the phone and PC should be in different VLANs and hence in different IP subnets. This design
is per Cisco recommended guidelines and has several advantages. First, by placing IP phones in
one VLAN, and the PCs connected to phones in a different VLAN, you can more easily manage
the IP address space, apply Quality of Service (QoS), and provide better security by isolating the
data and voice traffic.
Figure 7-15 How to Connect an IP Phone and PC to LAN Switch
On a relatively quiet, underutilized network, a switch can generally forward frames as soon as
they are received. However, if a network is congested, packets cannot always be delivered in a
timely manner. Different types of applications have different requirements for how their data
should be sent end to end. For example, it might be acceptable to wait a short time for a Web page to be displayed after a user has requested it. Also, an FTP download may continue at a variable rate without issues, as the user can only use the file once it is fully downloaded anyway. But it is probably not tolerable to face the same delays in receiving packets that belong to a streaming video presentation or a telephone call. Video streaming is very popular these days, and it typically uses multicast traffic over UDP as the transport protocol to transmit the video stream from a server to several clients. Any loss or delay in packet delivery would defeat the purpose of these applications due to their real-time or interactive nature.
Traditionally network congestion has been handled by increasing link bandwidths and enhancing
switching hardware performance. This approach is not cost effective or efficient and it does
nothing to address how one type of traffic can be preferred over another. Quality of Service
(QoS) can be used to protect and prioritize time-critical traffic like voice and video. Keep in
mind that the most important aspect of transporting voice traffic across a switched network is
maintaining the proper Quality of Service level. Voice packets must be delivered in the most
timely manner possible, with minimum jitter, loss, and delay.
As a matter of fact, plain layer 2 frames have no means to indicate the priority or importance of their contents for the purpose of prioritization or QoS. One frame looks just as important as any other frame. However, when frames are carried from switch to switch over a trunk, an opportunity for classification occurs. We understand that a trunk is used to carry frames from multiple VLANs between switches. The trunk does this by encapsulating the frames and adding a tag indicating the source VLAN number. The encapsulation also includes a field that can mark the class of service (CoS) of each frame. This marking can be used at switch boundaries to make QoS decisions and prioritize traffic according to importance. Cisco switches typically perform QoS implementation or traffic prioritization in hardware, and the actual mechanisms may vary from platform to platform.
The VLAN used for voice traffic from the IP phone is called the voice VLAN, and the VLAN used for data is called the data or access VLAN. For the LAN switch to forward traffic correctly, it needs to know the VLAN ID of the voice VLAN as well as the data VLAN. The data or access VLAN is configured just as a regular access VLAN using the switchport access vlan vlan-id command. The voice VLAN is configured using the switchport voice vlan vlan-id interface configuration mode command. Referring to the diagram, the switch would need both the switchport access vlan 5 and switchport voice vlan 15 commands in interface configuration mode.
SW1#conf term
SW1(config)#interface Fa0/1
SW1(config-if)#switchport access vlan 5
SW1(config-if)#switchport voice vlan 15
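A quick way to confirm the assignment is to check the switchport details of the interface, which list both the access VLAN and the voice VLAN; the interface number is again just an example:
SW1#show interfaces Fa0/1 switchport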
Summary
This chapter introduced you to a number of enhanced switching technologies and described how you can configure them on Cisco switches. We started by talking about virtual LANs (VLANs) and how they break up broadcast domains in a switched network and provide traffic isolation at layer 2. This fact is very important because layer 2 switches without VLANs only break up collision domains, leaving your switched network as one large broadcast domain. We learned what access links are and also went over how trunked VLANs work across a Fast Ethernet or Gigabit Ethernet link.
Trunking is an important and critical technology to understand as most of the enhanced switching technologies described in this chapter involve trunking one way or the other. We went into great detail describing VLAN Trunking Protocol (VTP) and learned how it sends VLAN information to all switches in the network over trunked links. We also learned how to configure and troubleshoot VTP in case things don't work as you expect.
Finally, we covered voice VLANs, which allow IP phones to run alongside regular desktop or laptop computers over your access switch ports. We finished off the chapter with detailed configuration and troubleshooting examples for almost all technologies covered in the chapter.
Chapter 8: Network Security
Vint Cerf who is recognized as one of the fathers of the Internet, once said, “The wonderful thing
about the Internet is that you’re connected to everyone else. The terrible thing about the Internet is
that you’re connected to everyone else.” There has been an explosion in the size and scope of the
Internet and today anyone with a computer can connect to almost anyone else. Most companies too
are now permanently connected to the Internet, a network through which others could also attempt
to illegally access their networks. Network security has become one of the hottest topics in networking, the trend is likely to continue in the near future, and that is why Cisco has developed the CCNA Security certification specialty.
8-1 Network Security
8-2 Cisco Firewalls
8-3 Layer 2 Security
8-4 AAA Security Services
8-5 Secure Device Management
8-6 Secure Communications
8-7 Summary
Network Security
How to Approach Network Security
While the Internet and networks are growing rapidly, they are also becoming more complex and
mission critical. This brings new challenges to the folks who run and manage today’s networks.
Network infrastructure has been integrated to support voice, video, and data on the same network, but at the same time new security concerns have also been introduced.
As a matter of fact, no computer system in the world can be completely secure no matter how good the security measures are. Probably the only way to fully secure a computer is to isolate it completely, restricting all physical and virtual access to it. Such a system would not be connected to any network and would probably be stored in a secured vault somewhere with no physical access. Though this computer system would be completely secure, it would also be completely useless. The usefulness of computers stems from the ability to connect to them and use the resources they offer. So, the goal of network security is to provide continued access to those resources and, at the same time, to prevent any unauthorized or malicious activity from taking place.
Cisco IOS Software running on Cisco routers has several built-in security tools that can be used as part of a good overall security strategy. Probably the most important security tool in Cisco IOS Software is the access control list (ACL). ACLs can be used to define rules that prevent some packets from flowing through the network. In this chapter, you will learn how you can protect the perimeter of your network by deterring the most common threats with features available in Cisco IOS itself.
Cisco also produces an array of specialized security appliances such as the Adaptive Security
Appliance (ASA) that companies can use for securing their networks.
The CIA Model
A security model is a framework that provides guiding principles to make systems secure while also meeting industry best practices and regulations. A widely applicable model of network
security is the confidentiality, integrity, and availability (CIA) triad. CIA is more like a set of
three guiding principles that can be used to secure systems. A breach of any of these three
principles can have security consequences.
Figure 8-5 CIA Security Model
Confidentiality
Confidentiality means preventing sensitive information from being seen by anyone who is not
authorized to see it. It is the capability to ensure that the required level of secrecy is enforced and
information is concealed from unauthorized users. Information is a very valuable asset, and
keeping sensitive information secure is critical for enterprises. That’s why confidentiality is the
aspect of security that comes under attack most often by those who want to steal information for
their own interests. Encryption is a common technique used to ensure confidentiality of data
transferred from one computer to another. For example, when a user is performing an online
banking transaction, sensitive information such as account statements, credit card numbers, and
passwords must remain protected. Encryption techniques ensure that information is not seen as it
is being sent back and forth between the user’s computer and the online bank.
Integrity
Integrity prevents any unauthorized modification of data to make sure information stays accurate.
If your data has integrity, you can be sure that it is the actual unchanged representation of the
original information and hence can be trusted. A common type of security attack that
compromises the integrity of data is the man-in-the-middle attack. In this kind of attack, the
attacker intercepts data as it is in transit and makes changes to it without letting the two
communicating entities realize that.
Availability
Availability prevents the loss of access to information and resources and ensures that they are ready for use when they are needed. Information must be readily available at all times so that requests by authorized users can be fulfilled whenever they arrive. Denial of service (DoS) is one of several types of security attacks that attempt to prevent legitimate access to information and resources, thereby compromising the availability of affected systems.
Table 8-1 CIA Model
Goal             Defined                                        Example                        Methodology
Availability     Keeping your network services up and running   DoS attacks                    Auto patch updates, rate limiting
Integrity        Preventing data modification                   Man-in-the-middle attack       Hashing
Confidentiality  Securing data from eavesdropping               Packet capture and replaying   Encryption
The Secured Enterprise Network
In a medium to large enterprise, the typical secured network is built around a recipe of a
perimeter router, a firewall device, and an internal router.
Perimeter Router The perimeter router is the border crossing or the demarcation point
between enterprise network resources and the public network, such as the Internet. Therefore,
traffic originating from the outside destined for the trusted network or the DMZ must transit
through the perimeter router. This router should provide basic security and traffic filtering for
both the DMZ and the trusted network.
Firewall The firewall can be a router running the Cisco IOS firewall feature set or a specialized device like the Cisco Adaptive Security Appliance (ASA). The firewall can be configured to provide sophisticated controls over traffic flowing between the trusted network, the DMZ, and the untrusted network.
Internal Router The internal router provides additional security by providing a
point where you can apply further controls to traffic going to or coming from various parts of the
trusted network.
Figure 8-1 Secured Enterprise Network
You should do a detailed examination of Figure 8-1 and identify clearly the three distinguishable
parts of the network: trusted network, untrusted network, and the demilitarized zone (DMZ).
Trusted Network The trusted network is the internal enterprise network or the corporate local
area network (LAN).
Untrusted Network The untrusted network refers to the universe beyond the perimeter router.
Typically, the Internet is the untrusted network and is considered highly hostile.
Demilitarized Zone (DMZ) The term DMZ, like many other network security terms, was borrowed from military terminology. In military terms, a demilitarized zone (DMZ) is an area, usually the frontier or boundary between two or more military powers, where military activity is not permitted, usually by peace treaty or other similar agreement. In computer networking, the DMZ likewise provides a buffer zone that separates an internal trusted network from the untrusted, hostile territory of the Internet. The DMZ is not as secure as the internal network, but because it is behind a firewall, it is not as insecure as the Internet either. Typically, the DMZ hosts services to which access is required from the untrusted network. This includes Web, DNS, email, and other corporate servers that have to be reachable from the Internet.
Classes of Attackers
In the context of this chapter, an attacker refers to someone who attempts to gain unauthorized
access to a network or computer system. It is useful to identify different types of attackers and
understand their motives in order to be able to characterize attacks and track down such
individuals. There are a variety of groups into which attackers are classified and sometimes
conflicting views are held by members of the networking community about the definitions of these
classifications. Here, I would mention three broad categories:
Hackers Hackers are those individuals who break into computer systems and networks to learn about them, or just to prove their prowess. Hackers in this sense usually mean no harm and do not seek financial gain.
Crackers Crackers are criminal hackers who intend to harm information systems. Crackers
usually work for financial gain and are also known as black hat hackers.
Script Kiddies Script kiddies think of themselves as hackers but do not have the needed
knowledge and skills. They cannot write their own code; instead, they run scripts written by
others to attack systems and networks. As a matter of fact, very sophisticated software tools have
become freely available on the Internet which allow novices to execute attacks with point-and-
click ease. Today, a very large percentage of wannabe hackers fall in this category.
Vulnerabilities, Threats, and Exploits
Security attacks vary considerably in their sophistication and ability to do damage. As you learn more about the protocols that run today's networks, you will realize that most security threats are a result of some weakness or inadequacy in the design of the underlying protocol itself. When
the Internet was formed, it linked various government entities and universities to one another with
the sole purpose of facilitating learning and research. The original architects of the Internet had
never anticipated the kind of widespread adoption the Internet has achieved today. As a result, in
the early days of networking, security was not designed into network protocol specifications. For
this reason most implementations of TCP/IP are inherently insecure. That is a big reason why
security is such a burning issue today and in the absence of built-in security mechanisms, we have
to rely on additional security measures to make communications secure.
Vulnerability A vulnerability is a weakness in a system or its design that can be exploited
by a threat. Vulnerabilities are found in operating systems, applications, and even in network
protocols themselves.
Threat A threat is an external danger to the system having a vulnerability.
Exploit An exploit is said to exist when computer code is actually developed to take
advantage of a vulnerability. Suppose that a vulnerability exists in a piece of software but nobody
has yet developed computer code to abuse it. Because there is no exploit, there is no real problem
yet though the vulnerability exists theoretically.
Classes of Attacks
The three major types of network attacks, each having its own specific goal, are as follows:
Reconnaissance Attacks
Reconnaissance literally means the military observation of a region to locate an enemy or to establish strategic features of the region. A reconnaissance attack is not meant to inflict immediate damage to a system or network but only to gather information about the network to prepare for a later attack. It is used to map out the network and discover which IP address ranges are used, which systems are running, and which services or applications reside on those systems. The attacker has to be able to reach a system or network to some extent to perform reconnaissance, but normally no damage is caused at that time. The more common reconnaissance attacks include ping sweeps, port scans, and DNS queries. Here are a few examples of reconnaissance attacks:
Information Lookup A network intruder can use tools such as nslookup and whois to determine the IP address space assigned to an organization. Finding a target IP address is one of the first steps in reconnaissance. Once an IP address range is known, an intruder can look for hosts that are alive using ping sweeps. Finally, port scanning can be used to find out which services or applications are running on those live hosts.
Ping Sweeps A ping sweep is a scanning technique used in the reconnaissance phase of an attack to determine which hosts or computers in a network are live. A ping sweep sends ICMP echo requests to multiple hosts one after the other. If a certain address is live, it will return an ICMP echo reply confirming its existence.
Port Scans Port scanning is a method used to enumerate what services and applications are
running on a system. An intruder sends random requests on different ports and if the host responds
to the request, the intruder gets confirmation that the port is active and the associated service or
application is listening. The attacker can then proceed to exploit any vulnerabilities by targeting
active services. A port scanner is a piece of software designed to search a network host for open
ports. Ping sweeps and port scans are two primary reconnaissance techniques used to discover hosts and services that can be exploited.
Packet Sniffers A packet sniffer is a software program that uses a wired or wireless network
interface card (NIC) in promiscuous mode to capture all network packets that are sent across a
particular collision domain. Promiscuous mode is a mode in which the network interface card
sends all packets received on the network to an application for processing. You may recall that a network interface card normally passes to an application only frames addressed to the MAC address of the card, or broadcast and multicast frames, while all other frames are simply ignored. There are legitimate applications of network sniffers in troubleshooting and network traffic analysis. However, several network applications like Telnet, FTP, SMTP, and HTTP send data in clear text. A packet sniffer can capture all data these applications send, including sensitive information such as user names and passwords. Packet sniffing is essentially eavesdropping, and the information gathered can be used to execute other attacks.
Access Attacks
An access attack is meant to exploit a vulnerability and gain unauthorized access to a system on
the network. The information gathered by reconnaissance attacks is used to execute an access
attack. When unauthorized access is gained, the attacker can retrieve, modify, or destroy data as
well as network resources including user access. Even worse, the attacker can plant other
exploits on the compromised system that can be used later to gain access to the system or network
with relative ease. Some examples of access attacks are detailed below.
Password Cracking Password cracking is very attractive for attackers as passwords are
used to protect all kinds of information including online bank accounts. Password attacks can be
accomplished using several methods, including brute-force attacks, Trojans, IP spoofing, and packet
sniffers.
Man-in-the-middle Attacks The man-in-the-middle (MITM) attack, also known as TCP
hijacking, occurs when an intruder intercepts communication between two points and can even
modify or control the TCP session without the knowledge of either party. TCP hijacking affects
TCP based applications such as Telnet, FTP, SMTP (email), or HTTP (Web) sessions.
Trojans A Trojan or Trojan horse is a malicious program that is hidden inside another useful
application. Trojans are seemingly harmless programs that hide malicious programs such as a key
logger that could capture all keystrokes including passwords, without the knowledge of the user.
The term Trojan horse originates from the hollow wooden statue of a horse in which a
number of Greeks are said to have concealed themselves in order to enter and conquer the ancient
city of Troy.
Key Logger A key logger is a tool designed to log or record every single keystroke on the target computer, in a covert manner, so that the person using the keyboard is unaware that their actions are being monitored. All kinds of information, including sensitive information like passwords, eventually have to be typed on a computer. Key loggers can log and store all such information on the same computer, from where it can either be retrieved manually or sent as an automated email by the key logger itself. Key loggers can be both software and hardware based. Several financial institutions use on-screen keyboards for online access to customer accounts as a precaution against key loggers.
Trust Exploitation The goal of a trust exploitation attack is to compromise a trusted host so that it can be used to stage attacks on other hosts in a network. Typically, hosts inside the network of an enterprise are protected by a firewall placed at the network boundary, so it is difficult to attack these internal hosts from the outside. But these hosts are sometimes made accessible to a trusted host outside the firewall for legitimate purposes. If this trusted outside host is compromised, it can be used to attack the inside hosts with relative ease.
Port Redirection A port redirection attack is a kind of trust exploitation attack that uses a
compromised but trusted host to pass traffic through a firewall that would otherwise be blocked.
Outside hosts can legitimately reach the DMZ and hosts in the DMZ can legitimately reach both
inside and outside hosts. If an attacker is able to compromise a host in the DMZ, he could install
software to redirect traffic from the outside host directly to the inside host. This would result in the outside host gaining illegitimate access to inside hosts without violating the rules implemented in the firewall. An example of a utility that can provide this type of access is netcat.
Rootkits The term rootkit is made up of the word root which is the traditional name of the
privileged account on Unix / Linux operating systems, and the word kit which refers to the
software components that implement the tool. When malicious software providing unauthorized access is installed on a system, it is also important to hide the existence of such software to enable continued privileged access. A rootkit is designed to do just that: hide the existence of
certain processes or programs from normal methods of detection. An attacker can install a rootkit
when they have obtained root or administrator access to the target system as a result of a direct
attack.
Viruses A virus is a malicious software program or code that can cause damage to data or
other programs on the target system.
Worms A worm is similar to a virus but it is capable of self-replication increasing the
scope of its damage. Worms actually are viruses that can reside in the active memory of a system
and can self-replicate and self-propagate from one computer to another over the network.
Buffer Overflows Buffers are locations in computer memory that are used to
temporarily hold data and code. A buffer overflow occurs when a program attempts to store data in a buffer, but the data is larger than the allocated buffer, overwriting adjacent memory.
IP Spoofing IP spoofing happens when an intruder attempts to disguise itself by pretending to have the source IP address of a trusted host in order to gain access to resources on a trusted network. Using an IP address of another known host or known network, the attacker attempts to send and receive traffic on the network. The attacker is then able to use network resources that are associated with that specific IP address. Once the attacker has gained access with IP spoofing, he can use this access for many purposes.
Address Resolution Protocol (ARP) Spoofing ARP spoofing occurs when an attacker tries to disguise its source MAC address to impersonate a trusted host. Address Resolution Protocol (ARP) is used to map IP addresses to MAC addresses residing on one LAN segment. When a host sends out a broadcast ARP request to find the MAC address of a particular host with a known IP address, an ARP response comes from the host whose IP address matches the request. The ARP response is stored by the requesting host. An attacker can abuse this mechanism by responding as though they are the requested host.
Denial of Service (DoS) Attacks
A Denial of Service (DoS) attack is designed just to cause an interruption to a system or network
temporarily, denying access to legitimate users. This interruption in turn can cause loss of money
and reputation by preventing customer access to online services. These attacks usually target
specific services and attempt to overwhelm them by making numerous requests concurrently. If a
system is not protected to react to a DoS attack, it can easily be brought down by running scripts
that generate a very large number of requests. Some examples of Denial of Service (DoS) attacks
are detailed below.
Distributed Denial of Service (DDoS) It is possible to greatly increase the impact of a
DoS attack by launching the attack from multiple systems (botnets) against a single target. This
scaled up version of DoS is referred to as a distributed DoS (DDoS) attack. Web servers are a
popular target of DDoS attacks and DDoS attacks against companies like online retailers and Web
portals keep making news headlines from time to time.
TCP SYN Attack Transmission Control Protocol (TCP) is a popular transport
protocol used by several applications including Web based services. TCP is a connection
oriented protocol that uses a three-way handshake to establish a TCP connection before
application data exchange starts to take place. TCP SYN attack occurs when a host sends a large
number of TCP/SYN packets to the target system. Each TCP SYN packet is handled like a
connection request, causing the server to send back a TCP/SYN-ACK to acknowledge the
connection request also maintaining the state of this connection. The server now waits for the
third packet in TCP handshake from the host initiating the connection. However, because it is not
a legitimate host that initiated the connection, the third packet needed to complete the TCP
handshake never arrives. These half-open connections exhaust the resources of the server,
keeping it from responding to connection requests from legitimate users.
Smurf Attack A smurf attack occurs when the broadcast address of a network is used to send packets to all hosts on that network. If network devices are not configured properly, they allow such packets to be forwarded until they reach the target network. In such an attack, the attacker sends a large number of ICMP echo requests to the broadcast address, with the spoofed source IP address of one of the legitimate hosts on the target network. As a result, all hosts on the network receiving the broadcast respond with a unicast reply sent to the spoofed IP address. This causes a large number of packets to be sent to that host, essentially resulting in a DoS attack on that host.
Security Threat Mitigation
Several vendors such as Check Point, Juniper, Palo Alto Networks, McAfee, Fortinet and last but
not least, Cisco provide hardware and software solutions to mitigate security threats. Cisco offers
a specialized yet versatile security product called the Adaptive Security Appliance or ASA,
which I believe is one of the best products in its class. A Cisco ASA device is a standalone
hardware security appliance. Depending on model, they are quite expensive too. Currently they
are beyond the scope of the CCNA Routing and Switching exam.
Fortunately, Cisco Integrated Services Routers (ISRs) like 800, 1800, 2800, and 3800 series and
the second generation (G2) of ISRs like 1900, 2900, and 3900 series also have many of the same
features that are available on the Cisco ASA devices. These features are bundled as feature sets
in the Cisco IOS Software that runs on these routers and include IOS Firewall, IPSec VPN,
Intrusion Prevention System (IPS), and Content Filtering to mention a few. For small businesses
and enterprise branch offices, where customers are not willing to invest in a dedicated security
appliance, Cisco IOS can provide much of the same functionality without additional cost.
Another basic but very powerful security tool available in Cisco IOS is the access control list (ACL). We will take the time to cover ACLs in great depth in the relevant chapter of this book, learning how to create and use them to mitigate security threats.
Physical and Administrative Security Measures
The facility or physical location where devices are housed is in most cases the first and last
barrier encountered by an intruder. Physical security prevents intruders from gaining physical
access to the devices, and this means hands-on contact. Physical security is even more important than network security but is often overlooked by network administrators. Despite all the high level security measures, a compromise in physical access will almost always result in a complete compromise. Having a secured physical facility that is accessible only to authorized personnel is extremely important.
While trying to secure a network environment with technical measures, it is equally important to
put physical and administrative security measures in place. Some examples of physical security
measures are:
Locks
Biometric access systems
Security guards
Intruder detection systems
Safes
Racks
Uninterruptible power supplies (UPS)
Fire suppression systems
Positive air-flow systems
Many security incidents emerge from the inside of the enterprise, caused by employees either deliberately or unknowingly. Policy- and procedure-driven administrative security measures can
be effective against these threats. These administrative controls that help with information
security are usually documented in the human resources (HR) department. Some of these measures
are:
Security awareness training
Security policies and standards
Change control mechanisms
Security audits and tests
Good hiring practices
Background checks for employees and contractors
For example, if an organization has strict hiring practices that require drug testing and criminal
background checks for all employees, the organization will likely hire fewer individuals of
dubious character. With fewer people of dubious character working for the company, it is likely
that there will be fewer internal security incidents. These administrative measures do not ensure that no security incidents will take place, but they are an important part of an information security program and are often utilized, especially by large organizations.
Cisco Firewalls
Firewalls are a very important component of any network security framework, and it is no
surprise that Cisco offers firewall solutions in different shapes and forms:
Cisco IOS Firewalls
Cisco PIX 500 Series of Firewalls
Cisco ASA 5500 series Adaptive Security Appliances
Cisco Firewall Services Module
The following sections describe these platforms in more detail.
Cisco IOS Firewalls
A Cisco IOS firewall is a specialized feature of Cisco IOS Software that runs on Cisco routers. It
is a firewall product that is meant for small and medium-sized businesses as well as enterprise
branch offices.
The earlier Cisco IOS firewall feature was called Context-Based Access Control (CBAC), which
applied policies through inspect statements and configured access control lists (ACL) between
interfaces. The Zone-Based Policy Firewall (ZBPFW) is the newer Cisco implementation of a
router-based firewall that runs in Cisco IOS Software. It was introduced in IOS Release 12.4(6)T
and takes advantage of many new features that make the configuration and implementation of a
firewall easier than was previously available. The following are some of the important features
of a Cisco IOS Firewall:
Zone-based policy framework for easy to understand policy management
Controlling traffic for Web, email, and other applications
Instant messenger and peer-to-peer application filtering
Controlling traffic for Voice over IP (VoIP) protocols
Wireless integration
Support for local URL whitelist and blacklist
A firewall is basically used to enforce an access policy between different security domains. With
the ZBPFW feature, these different security domains are called security zones. With the earlier
Context-Based Access Control (CBAC) feature, these security domains were simply router
interfaces. So, one of the main differences between a firewall using CBAC and ZBPFW is the use
of security zones. These zones separate the specific security areas within a network. A typical
example would be a firewall that divides its universe into three main security zones:
Internal: Internal or private enterprise network
DMZ: Where publicly accessible servers are located
External: Includes all outside destinations
Figure 8-2 describes the three primary security zones.
Figure 8-2 Basic Zones in a Zone-Based Firewall
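As a rough sketch of how zones are defined and attached to interfaces in IOS (the zone names, interface numbers, and zone-pair shown here are arbitrary, and the inspect class maps and policy maps that would normally be applied are omitted for brevity):
R1(config)#zone security INSIDE
R1(config-sec-zone)#exit
R1(config)#zone security OUTSIDE
R1(config-sec-zone)#exit
R1(config)#interface FastEthernet0/0
R1(config-if)#zone-member security INSIDE
R1(config-if)#interface FastEthernet0/1
R1(config-if)#zone-member security OUTSIDE
R1(config-if)#exit
R1(config)#zone-pair security IN-TO-OUT source INSIDE destination OUTSIDE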
Cisco PIX 500 Series Security Appliances
The Cisco PIX 500 series family of security appliances is an older series which consists of five
models: the PIX 501, 506E, 515E, 525, and 535. These different models are designed to meet a
range of requirements and network sizes. The Cisco PIX 500 series security appliance provides
robust policy enforcement for users and applications, secure connectivity, and multivector attack
protection. These appliances provide the following integrated security and networking services:
Firewall services with advanced application awareness
Voice over IP (VoIP) and multimedia security
Site-to-site and remote-access IPsec VPN connectivity
Intelligent networking services and flexible management model
In January 2008, Cisco announced the End-of-Life for the PIX products. However, there is a large
install base and Cisco will still be supporting this product until July 2013.
Cisco ASA 5500 Series Security Appliances
Cisco ASA 5500 series Adaptive Security Appliances integrate firewall, Cisco Unified
Communications (voice and video) security, Secure Sockets Layer (SSL) and IPsec VPN,
Intrusion Prevention System (IPS), and content security services in a flexible, modular product
family. The ASA 5500 series appliances provide intelligent threat defense and secure communications services that stop attacks before they affect business continuity.
The Cisco ASA 5500 series appliances are available in five models: the Cisco ASA 5505, 5510,
5520, 5540, and 5550 in order to provide a scalable security solution to meet a range of
requirements and network sizes.
Cisco Firewall Services Module
The Cisco Firewall Services Module (FWSM) is an integrated firewall module for high-end
Cisco Catalyst 6500 switches and Cisco 7600 series routers used by large enterprises and
service providers. You can install up to four FWSMs in a single switch chassis. Cisco FWSM is
based on Cisco PIX firewall technology, and offers unmatched security, reliability, and
performance.
Firewall Best Practices
Best practice documents are a useful resource as they put together the composite effort and
experiences of practitioners. Here is a generic list of best practices for your firewall security
policy, which you can use as a starting point:
Firewalls are a core security device, but you should not rely only on a firewall for
security.
Firewalls should be placed at key security boundaries.
Your firewall policy should deny all traffic by default and services that are needed should
be explicitly permitted.
All physical access to the firewall device should be tightly controlled.
Firewall logs should be regularly monitored according to a schedule to make sure anomalies are detected.
Proper change management procedures should be followed for firewall configuration
changes, to ensure all changes are documented and no unauthorized changes take place to
firewall configuration.
A firewall is primarily a perimeter device that protects against attacks originating from the outside. It cannot protect against attacks emanating from the inside.
Cisco Security Appliances & Applications
In addition to various flavors of firewalls we covered in the last few sections, Cisco also
produces some other security appliances and applications to meet specific enterprise security
needs.
Cisco IronPort Security Appliance
Cisco NAC Security Appliance
Cisco Security Agent
Cisco IronPort Security Appliances
Cisco IronPort security appliances protect enterprises against internet threats, with a focus on
email and web security, which happen to be two of the main sources of endpoint threats.
The three major IronPort security appliances are:
IronPort C-series: Email security appliances
IronPort S-series: Web security appliance
IronPort M-series: Security management appliance
Cisco NAC Security Appliances
The purpose of Cisco Network Access Control (NAC) is to allow only authorized and compliant
systems to access the network and to enforce network security policy. In this way, Cisco NAC
helps maintain network stability. NAC provides four key features:
Authentication and authorization
Evaluation of an incoming device against network policies
Isolating or quarantining non-compliant systems
Remediation of non-compliant systems
The Cisco NAC appliance condenses the four key NAC functions just described into a single
appliance form and provides a turnkey solution to control network access. This solution is a
natural fit for medium-scale networks that require a self-contained, ready-to-use solution. Cisco
NAC appliance is especially ideal for organizations that need simplified and integrated tracking
of operating system and antivirus patches and vulnerability updates. Cisco NAC appliance does
not require a Cisco network to operate.
The goal of Cisco NAC appliance is to admit to the network only those hosts that are
authenticated and have had their security posture examined and approved. The net result of such a
thorough examination before allowing connectivity is a tremendous reduction in total cost of
ownership (TCO) because only known, secure machines are allowed to connect. Therefore,
laptops that have been on the road for weeks and have possibly been infected or were unable to
receive current security updates cannot connect into the network and unleash a Denial of Service
(DoS) attack.
Cisco NAC Appliance extends NAC to all network access methods, including access through
LANs, remote-access gateways, and wireless access points. The Cisco NAC Appliance also
supports posture assessment for guest users.
Cisco NAC Appliance provides the following benefits:
It recognizes users, their devices, and their roles in the network. This occurs at the point of
authentication, before malicious code can cause damage.
It evaluates whether machines are compliant with security policies. Security policies can
include specific antivirus or antispyware software, operating system updates, or patches.
The Cisco NAC Appliance supports policies that vary by user type, device type, or
operating system.
It enforces security policies by blocking and isolating non-compliant machines. A network
administrator will be advised of the non-compliance and will proceed to repair the host.
Non-compliant machines are redirected into a quarantine area, where remediation occurs at the
discretion of the administrator.
Cisco Security Agent
Cisco Security Agent is a host intrusion prevention system (HIPS) product. It is software that is installed on server, desktop, or point-of-service computing systems and provides endpoint security through its threat protection capabilities. A single management console for Cisco Security Agent can support up to 100,000 agents, so it is a highly scalable solution.
The Cisco Security Agent architecture consists of two components:
Management Center for Cisco Security Agents: Management Center for Cisco Security
Agent enables you to divide network hosts into groups by function and security
requirements, and then configure security policies for those groups. Management Center
for Cisco Security Agent can maintain a log of security violations and send alerts by
email.
Cisco Security Agent: The Cisco Security Agent component is installed on the host
system and continuously monitors local system activity and analyzes the operations of that
system. The Cisco Security Agent takes proactive action to block attempted malicious activity
and polls the Management Center for Cisco Security Agent at configurable intervals for
policy updates. Obviously, the Management Center should also run CSA.
When an application needs access to system resources, it makes an operating system call to the
kernel. Cisco Security Agent intercepts these operating system calls and compares them with the
cached security policy. If the request does not violate the policy, it is passed to the kernel for
execution.
However, if the request does violate the security policy, Cisco Security Agent blocks the request
and takes the following actions:
An appropriate error message is passed back to the application.
An alert is generated and sent to the Management Center for Cisco Security Agent.
Cisco Security Agent correlates this particular operating system call with the other calls made by
that application or process, and correlates these events to detect malicious activity.
Layer 2 Security
Network security is only as strong as the weakest link, because a single weak point if exploited
successfully would be enough for an intruder. That weak link can be the data link layer or layer 2
of the OSI reference model. We can secure the perimeter of our network, protecting it from external threats, but it is equally important to secure the interior of the network as several threats actually originate from the inside. Like routers, Cisco switches too have their own set of network security requirements. As a matter of fact, switches may turn out to be that weak area if not properly secured. Access to switches can be a convenient entry point for attackers who want to gain access to a corporate network. With access to a switch, an attacker can launch all types of attacks from within the network. The security mechanisms that are meant to protect the network perimeter would not be enough to stop these attacks simply because they originate from inside the network. For example, attackers can spoof the MAC and IP addresses of critical servers to do a great deal of damage. They can even set up rogue wireless access points to provide continued access.
Port Security
You can use the port security feature on Cisco switches to restrict who can access the network by
connecting to a switch port. This feature is used to limit and identify the MAC addresses of the
systems that are allowed to access the port. You can configure a switch port to be secure and can
also specify which MAC addresses are allowed to access the port. The secure switch port does
not forward frames with source MAC addresses outside the group of defined MAC addresses for
that port.
Port security allows you to manually specify MAC addresses for a port or permit the switch to
dynamically learn a limited number of MAC addresses from incoming frames. By limiting the
number of permitted MAC addresses on a port to just one, you can make sure that only one
system can connect to the port, preventing any unauthorized expansion of the network by
attaching a hub or switch.
When a secure port receives a frame, the source MAC address of the frame is compared to the list
of secure MAC addresses associated with the port. These secure MAC addresses are either
manually configured or dynamically learned on the port. If the source MAC address of a
frame differs from the list of secure addresses, the port either shuts down or drops
incoming frames from the unauthorized host. By default a secure port shuts down on a violation
and stays disabled until it is administratively re-enabled. The exact behavior of the port depends on how
you configure it to respond to a security violation.
In Figure 8-3, switch port Fa0/1 will only allow those incoming frames that have source MAC
address of MAC A. This port will block traffic with source MAC address of MAC C or any other
frame having a source MAC address other than MAC A. Similarly, port Fa0/2 will allow traffic
with source MAC address of MAC B only. This port will block all other source MAC addresses
including MAC A. Despite the fact that MAC A is allowed on port Fa0/1, it is blocked on port
Fa0/2 because secure (allowed) MAC addresses are specific to individual switch ports.
Figure 8-3 Port Security
I would strongly recommend configuring the port security feature to shut down a port instead of
just dropping packets from hosts with unauthorized addresses. If port security does not shut down
a port, it is still possible that the port will be disabled due to too much traffic load from an attack.
Port security is a useful feature as it protects against too many MAC addresses per port and can
dictate which MAC address is allowed to connect to which port. However, if a hacker
knows which MAC address is permitted on a port, he can gain access to the network by
spoofing that MAC address. Port security also prevents unauthorized extension of the LAN in case
a user decides to attach a hub to connect additional hosts; you have to allow only a single MAC
address on the secure port to prevent this sort of extension. Also, if you are concerned about
spoofed MAC addresses being used to bypass port security, consider implementing the IEEE 802.1X
authentication mechanism.
Let’s see how we can configure a switch port with one specific secure MAC address. If a device
with any other MAC address plugs into this interface, the port will go into an err-disabled state
that must be cleared by an administrator.
Switch#configure terminal
Switch(config)#interface Fa0/1
Switch(config-if)#switchport mode access
Switch(config-if)#switchport port-security
Switch(config-if)#switchport port-security mac-address 1234.5678.9ABC
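The port security feature has a few more useful options, and the defaults can vary slightly by platform. As a hedged sketch of common refinements to the example above, the following limits the port to a single MAC address, makes the violation action explicit, and optionally lets the switch learn the address dynamically and keep it (sticky learning); the final show command displays the port security status:
Switch(config-if)#switchport port-security maximum 1
Switch(config-if)#switchport port-security violation shutdown
Switch(config-if)#switchport port-security mac-address sticky
Switch(config-if)#end
Switch#show port-security interface Fa0/1
Note that sticky learning and a manually configured secure MAC address are alternatives; you would typically use one or the other on a given port.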
VLAN Hopping
VLANs simplify network maintenance, improve performance, and provide security by isolating
traffic from different VLANs. You may recall that inter-VLAN communication is not possible
without going through a router. But, a technique known as VLAN hopping allows traffic from one
VLAN to be seen by another VLAN without first crossing a router. In some situations, attackers
can even sniff data and obtain passwords and other sensitive information. The attack works by
taking wrongful advantage of an incorrectly configured trunk port. As you learnt, trunk ports pass
traffic from all VLANs (1 – 4094) across the same physical link, generally between switches.
The data frames moving across these trunk links are encapsulated with IEEE 802.1Q or ISL to
identify which VLAN a frame belongs to.
We will discuss a basic VLAN hopping attack that uses a rogue trunk link, as shown in Figure 8-
4. In this attack, the attacker takes advantage of the default automatic trunking configuration found
on most switches. The attacker first gets access to a vacant port on a switch and then configures a
system, most likely a laptop computer, to present itself as a switch. This is possible if the
system is fitted with an 802.1Q- or ISL-capable NIC and appropriate software, which usually
comes with the NIC itself.
Figure 8-4 VLAN Hopping Attack
The attacker communicates with the switch with Dynamic Trunking Protocol (DTP) messages,
trying to trick the switch into thinking it is another switch that needs to trunk. If a trunk is
successfully established between the attacker’s system and the switch, the attacker can gain
access to all the VLANs allowed on the trunk port. In order to succeed, this attack requires a
switch port whose trunking mode is set to dynamic desirable or dynamic auto. The end result is that the attacker becomes a
member of all the VLANs that are trunked on the switch and can hop on all those VLANs, sending
and receiving traffic.
This sort of simple but effective VLAN hopping attack can be launched in one of two ways:
Generating DTP messages from the attacking host to cause the switch to establish a trunk
with the host. Once a trunk is established, the attacker can send and receive traffic tagged
with the target VLAN to reach any other host like a server in that VLAN, because the
switch then delivers packets to the destination.
Introducing an actual rogue switch and turning trunking on can also establish a trunk with
the victim switch. The attacker can then access all the VLANs on the target switch from the
rogue switch.
The best way to prevent a basic VLAN hopping attack is to turn off trunking on all switch ports
except the ones that specifically require trunking. All user ports should be configured with the
following commands:
switchport mode access: This command permanently sets the switch port to non-trunking
mode and is appropriate for all switch ports that are supposed to be connected to user PCs.
switchport nonegotiate: This command can additionally be used to disable generation of
DTP messages. Although a port configured with switchport mode access can never
become a trunk, disabling DTP also removes unwanted DTP frames from the link.
On switch ports that do require trunking, DTP should be disabled using the command switchport
nonegotiate and trunking should be manually configured using the command switchport mode
trunk in interface configuration mode. A combined example is sketched below.
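Here is a minimal sketch of that hardening, assuming user PCs sit on ports FastEthernet0/2 through 0/24 and the uplink to another switch is GigabitEthernet0/1 (adjust the interface numbers to your own topology):
Switch(config)#interface range FastEthernet0/2 - 24
Switch(config-if-range)#switchport mode access
Switch(config-if-range)#switchport nonegotiate
Switch(config-if-range)#exit
Switch(config)#interface GigabitEthernet0/1
Switch(config-if)#switchport mode trunk
Switch(config-if)#switchport nonegotiate
On platforms that support both ISL and 802.1Q, you may also need to set switchport trunk encapsulation dot1q on the trunk interface before switchport mode trunk is accepted.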
AAA Security Services
AAA is an acronym that stands for authentication, authorization, and accounting:
Authentication: Who is the user? Authentication is used to verify the identity of a user.
Authorization: What can the user do? Authorization is used to determine what services
the user can use.
Accounting: What did the user do? Accounting performs an audit of what a user is
actually doing.
AAA is a security framework that can be used to set up access control on Cisco routers, switches,
firewalls, and other network appliances. AAA provides the ability to control who is allowed
to access network devices and what services an authenticated user is allowed to use. AAA services
are commonly used to control Telnet or console access to network devices.
AAA uses RADIUS, TACACS+, and Kerberos as authentication protocols to administer its
security functions. A network device such as a router requiring AAA services establishes a
connection to the security server using one of these three protocols. The security server is a
Windows or Linux host external to the network device, and contains a database containing user
names and passwords among other parameters. AAA on a Cisco network device can also be
configured to use a local database of user names and passwords. AAA is enabled using the global
configuration command aaa new-model; a minimal example using the local database is sketched after the list below.
In addition to AAA, several other simpler and less elaborate measures are available to achieve
network access control, including the following:
Local username authentication
Enable password authentication
Line password authentication
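As a minimal sketch of AAA with local authentication, the following creates a local user (the username and password are placeholders), turns on AAA, and authenticates logins on the vty lines against the local database; your own method lists would reflect your security policy:
R1(config)#username admin secret Str0ngPass123
R1(config)#aaa new-model
R1(config)#aaa authentication login default local
R1(config)#line vty 0 4
R1(config-line)#login authentication default
With aaa new-model enabled, the default login method list applies to the vty lines automatically, so the last command simply makes that choice explicit.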
Secure Device Management
It is important to ensure the security of management traffic between a network device and the remote
host used to manage the device.
SSH
Telnet is commonly used to remotely manage Cisco devices. Telnet is inherently insecure because
all Telnet messages are sent in plain text, including configuration commands and even the
usernames and passwords in those configuration commands. All an attacker has to do is sniff this
communication and he owns your network. Once your network devices are compromised, they can
be used as a launching pad for attacking more interesting systems such as servers.
One effective alternative to this inherent lack of security in Telnet is the Secure Shell (SSH)
protocol. SSH uses secure tunnels established over an insecure network to exchange information.
SSH is a client-server application and your Cisco device can be configured to act as both SSH
server and SSH client. However, a Cisco device is usually configured as an SSH server to accept
incoming SSH connections from a remote management station. SSH has two major versions,
referred to as SSH-1 and SSH-2. The standard TCP port 22 has been assigned to SSH and
SSH servers listen on this port for incoming connections.
Just like Telnet, you can use SSH to remotely connect to a Cisco device and enter IOS commands
or copy files over the network. SSH uses encrypted messages so network communications are
secure. PuTTY is a popular and free Telnet/SSH client that is available for both Windows and
Linux platforms.
A Cisco router has to be configured with a hostname and a domain name before the initial SSH
configuration. The configuration goes something like this:
Router>enable
Router#configure terminal
Router(config)#hostname R1
R1(config)#ip domain-name certificationkits.com
R1(config)#crypto key generate rsa modulus 512
SSH also has to be enabled on the vty lines before the router starts accepting SSH connections for
remote management:
R1(config)#line vty 0 4
R1(config-line)#transport input ssh
R1(config-line)#end
R1#
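A few additional commands are commonly added before relying on SSH for day-to-day management. The sketch below is one such combination (the username and password are placeholders); also note that ip ssh version 2 requires an RSA key of at least 768 bits, so a modulus of 1024 or 2048 is normally generated instead of 512:
R1(config)#username admin secret Str0ngPass123
R1(config)#ip ssh version 2
R1(config)#line vty 0 4
R1(config-line)#login local
R1(config-line)#transport input ssh
The login local command tells the vty lines to check incoming SSH sessions against the local username database, and restricting transport input to ssh disables plain-text Telnet access altogether.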
SNMP
Simple Network Management Protocol (SNMP) is used by enterprises to manage and monitor a
large number of network devices. SNMP has several uses, from monitoring and generating alerts
to device configuration.
There are three main versions of SNMP:
Version 1: This version is defined in RFC 1157 and uses simple security based on SNMP
communities.
Version 2c: This version is defined in RFCs 1901, 1905, and 1906 and it also uses
community-based security.
Version 3: This version is defined in RFCs 3413 through 3415 and introduces a new security
model supporting message integrity, authentication, and encryption.
The community-based security model used by SNMP versions 1 and 2c is a known security
vulnerability because of its lack of encryption and authentication. It just uses a simple community
name for security. Configuration of SNMP version 3 is more complex, and it should be preferred
for enhanced security especially when traffic has to be moved across untrusted networks.
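For reference, a minimal community-based (version 2c style) configuration is sketched below; the community strings, the management subnet, and the contact details are placeholders, and community strings should be treated like passwords:
R1(config)#access-list 10 permit 192.168.100.0 0.0.0.255
R1(config)#snmp-server community MyR0string RO 10
R1(config)#snmp-server location Head Office Server Room
R1(config)#snmp-server contact noc@example.com
The optional access list number at the end of the snmp-server community command restricts which management stations are allowed to use that community.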
Syslog
Syslog is a method that can be used to collect system messages from Cisco devices on a system
running a syslog server. All system messages are sent to the central syslog server, which helps in
the aggregation of logs and alerts. Cisco devices can send their logging messages to a Unix-like
syslog service. A syslog service simply accepts log messages and stores them in files or
prints them according to a configuration file. Syslog uses UDP as its transport protocol and listens
on port 514. This form of logging is the best available for Cisco devices because it provides
external long-term storage of logs. Such external storage of logs is particularly useful in incident
handling when a device is compromised or crashes.
These logs are also useful in routine maintenance activities and the timestamps with each log
message provide an accurate chronological record of important events happening in your Cisco
device. But in order to make these timestamps meaningful, the time on your network devices must
be accurate and synchronized to the same source. Network Time Protocol (NTP) is typically used
to make sure timing information in Syslog messages is accurate. Network devices can use NTP to
synchronize their clocks to a central accurate source of timing information.
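A basic syslog configuration on a Cisco router might look like the following sketch; the server address 192.168.100.50 is assumed, and the logging trap level controls the lowest severity exported to the server:
R1(config)#service timestamps log datetime msec localtime show-timezone
R1(config)#logging host 192.168.100.50
R1(config)#logging trap informational
The service timestamps command ensures each exported message carries the date and time, which ties in with the NTP discussion that follows.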
Network Time Protocol (NTP)
Network Time Protocol (NTP) is used to synchronize the time on the Cisco device clock. NTP
usually gets its time from an accurate and trusted time source, such as a radio clock or an atomic
clock attached to a time server. NTP is a client-server protocol and uses UDP port 123 as both the
source and destination port. NTP communications can be secured using an authentication mechanism
based on the MD5 algorithm.
NTP is absolutely essential for syslog messages as it is used to keep accurate timing information.
Timestamps with syslog messages have to be accurate to make the logging information useful for
troubleshooting or incident handling. The Cisco IOS ntp command is used in global configuration
mode for al NTP related configurations.
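A minimal NTP client configuration, assuming a time server reachable at 192.168.100.1, is sketched below along with two commands to check the result:
R1(config)#ntp server 192.168.100.1
R1(config)#end
R1#show ntp status
R1#show ntp associations
The related commands ntp authentication-key, ntp authenticate, and ntp trusted-key (plus the key option on the ntp server command) implement the MD5 authentication mentioned above.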
Secure Communications
Encryption techniques are commonly used at all layers of the OSI reference model to ensure
security of network communications.
IPsec
IPsec (Internet Protocol Security) VPN is a standard defined by the IETF (Internet Engineering
Task Force). IPsec is a popular framework used to secure communications over an insecure
medium like the Internet at the network layer of the OSI reference model. IPsec uses a
combination of various techniques to provide the following security services:
Peer authentication
Data confidentiality
Data integrity
IPsec has two methods of propagating the data across a network:
Tunnel Mode: This IPsec mode is used in network-to-network or site-to-site scenarios.
Tunnel mode encapsulates and protects the whole IP packet, including the original IP
header and payload, and then adds an IPsec header along with a new IP header.
Transport Mode: This IPsec mode is used in host-to-host scenarios only. In transport
mode, IPsec protects only the payload of the original IP packet by excluding the IP header
and inserts the IPsec header between the original IP header and the payload. Transport
mode is available only when the IPsec endpoints are themselves the source and destination
of IP packets.
Both IPsec tunnel mode and transport mode can be deployed with Encapsulating Security Payload
(ESP) or Authentication Header (AH) protocols.
SSL
SSL (Secure Sockets Layer) is a remote access VPN technology that provides secure connectivity
from any computer through a standard web browser and its native SSL encryption.
SSL is an application layer (layer 7) cryptographic protocol that provides secure communications
for web browsing, email, instant messaging, and other traffic over the Internet. By default, SSL
makes use of TCP port 443.
The major advantage of SSL VPN is that it does not require a special client software to be
installed on the system. SSL uses the native SSL encryption of a web browser enabling a user to
connect from any computer, whether it is an official desktop or a personal laptop, tablet or
smartphone.
The Cisco Remote Access VPN solutions offer both IPsec VPN and SSL VPN technologies on a
single platform such as Cisco Integrated Services Routers (ISRs).
Summary
This was an introductory chapter that attempted to give you a glimpse into the exciting world of
network security. We started the chapter by talking about the CIA triad and how it can be used as a
model to secure data and systems.
We also considered the information security threats faced by enterprises today and what a
typical secured enterprise looks like at a high level. We then looked at some Layer 2 security
techniques before moving on to discuss a few examples of securing the management and data planes.
IPsec and SSL were briefly touched upon, though these topics are covered in greater detail in a
later chapter.
Chapter 9: Access Lists
Cisco IOS Software has several built-in security tools that can be used as part of a good overall
security strategy and that are covered on the CCNA exam. Probably the most basic of those
security tools are access control lists (ACLs), or access lists. Access lists enable us to
identify interesting traffic by providing the basic capability to match packets based on a number
of criteria. The interesting traffic can then be subjected to various special operations depending
upon the specific application. This chapter reviews the different types of ACLs that are available
and shows examples of how each of them is configured and used in operation. We also introduce
Cisco Configuration Professional and show how to use it to apply ACLs toward the end of the chapter.
9-1 Introduction to Access Lists
9-2 Standard Access Lists
9-3 Extended Access Lists
9-4 Access Lists -Remote Access, Switch Port, Modifying & Helpful Hints
9-5 Cisco Configuration Professional Initial Setup and Access List Lab
9-6 Summary
Introduction To Access Lists
The technical name for an access list is access control list (ACL) and individual entries in an
access control list are called access control entries (ACEs). ACEs are also known as access list
statements. The term access control lists isn’t often used in practice and these lists are typically
referred to simply as access lists or ACLs. An access list is simply a list of conditions or
statements that categorize and match packets in a number of interesting ways.
Access lists are primarily used as simple filters to permit or deny packets through interfaces in
order to exercise control on traffic flowing through the network. But this is not the only use of
access lists and they can also be used in situations that don’t necessarily involve filtering packets.
I have listed a few of those other uses of access lists here:
Management Access You can use access lists to control which hosts can remotely manage your
router using Telnet or SSH by applying access lists to VTY lines using the access-class statement
in line configuration mode.
Route Advertisement You can use access lists to control which routes or networks will or
will not be advertised by dynamic routing protocols like RIP, EIGRP, or OSPF. In such situations,
access lists are defined in the same manner but the difference is where you apply those access
lists. When access lists are used to control route advertisements they are called distribute lists.
Debug Output Cisco IOS debug commands are very useful for deep network
troubleshooting but these commands often produce a lot of output which can be difficult to read
and interpret. You can define access lists to identify interesting packets and use the access lists
with debug command to display only the output that relates to interesting traffic.
Encryption When encrypting traffic between two routers or a router and a firewall, you must tell
the router what traffic to encrypt, what traffic to send unencrypted, and what traffic to drop.
Access lists are a natural choice to match or identify interesting traffic for these operations.
You should be able to appreciate the variety of ways in which access lists are used. We will not
cover all of these other uses of ACLs in this chapter. However, you will see these uses of ACLs
in more advanced Cisco certification exams as you move on in your career. Our coverage of
access lists will focus on their use as traffic filters.
When you’re creating access lists (or any configuration, for that matter), it’s a good idea to create
them first in a text editor like Notepad, and then once you’ve worked out all the details, try them
in a lab environment. Keep in mind that access lists are traffic filters applied to interfaces and
anytime you’re working on filters, you risk causing an outage to a production network.
As you have learned, access lists are the means whereby Cisco devices categorize and match
packets and have several applications. The good news here is that regardless of the specific
application of access lists, they are defined the same way. Access lists are a very important topic
for your CCNA exam so we will go into great depth while covering access lists in the next
several sections.
Access Lists Statements
An access list is basically a sequential listing of statements also known as access control entries
(ACEs). Each entry in an access list defines a specific condition that packets are compared
against before taking the specified action. Each access list statement specifies
a permit or deny action to be taken if a packet matches the associated condition. Please see Table
9-1 for a few simple examples of access list statements:
Table 9-1 Simple Access List Statements
Access List Statement | Description
permit host 172.16.34.2 | Match packets with a source IP address of 172.16.34.2 only and permit those packets.
deny host 172.16.34.24 | Match packets with a source IP address of 172.16.34.24 only and deny those packets.
permit any | Match and permit any and all packets.
There can be several permit and deny statements in an access list. A packet is compared with
each statement one by one in a sequential order. That is, it’ll start with the first line of the access
list, then go to line 2, then line 3, and so on. If the packet matches the condition on a line of the
access list, the packet is acted upon and no further comparisons take place. There is also an
implicit “deny” at the end of each access list. Which means that if a packet doesn’t match the
condition on any of the lines in the access list, the packet will be discarded.
If you have some exposure to computer programming or scripting, you would be able to
appreciate that access lists are much like a series of if-then statements found in many
programming languages. When a given condition in the if-then statement is met, then a given
action is taken. If the specific condition isn’t met, no action is taken and the next statement is
evaluated.
Named versus Numbered
Access lists on Cisco devices can be either named or numbered. Named access lists are
referenced with a name such as UET or CertificationKits. Numbered access lists are the older
method, where each ACL is defined by a number such as 1 or 104. In practice, both numbered and
named access lists are widely used but I personally believe named access lists make your
configuration more readable and less cryptic. Some devices such as certain Cisco Nexus switches
don’t support numbered access lists at all. I would advise using named access lists in the real
world where possible, but for the sake of your CCNA exam you should be thoroughly familiar
with both formats.
What are Wildcard Masks?
Wildcard masks, also known as inverse masks, are used in many devices for creating access lists.
Wildcards are used with IP addresses in IP access lists to specify a single host, a network, a
subnet, or a supernet in order to control what should be permitted or denied. These masks are
typically written in dotted decimal notation just like regular IP addresses and are quite confusing
at first simply because they’re the opposite, in binary, of subnet masks.
Subnet masks used while configuring IP addresses on interfaces start with 255 and have the large
values on the left side, for example, IP address 192.168.2.29 with subnet mask 255.255.255.0.
Wildcard masks for IP access lists are the reverse, for example, 0.0.0.255. In other words, the
wildcard mask you would use to match a range that is described with a subnet mask of
255.255.255.0 would be 0.0.0.255.
When the value of a wildcard mask is broken down into binary (0s and 1s), the result determines
which address bits (binary digits) are to be considered in processing the traffic, and which
address bits are to be ignored. A 0 in a wildcard mask indicates that the corresponding address
bit must be considered (matched); a 1 in the wildcard mask is a “don’t care” meaning that the
value of the corresponding address bit doesn’t matter. The following table further explains the
concept.
Table 9-2 Wildcard Mask Example
Value | Explanation
192.168.2.0 | Network address
255.255.255.0 | Subnet mask
0.0.0.255 | Wildcard mask
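As a quick worked example, suppose you want to match the subnet 172.16.32.0/26, whose subnet mask is 255.255.255.192. Subtracting each octet of the subnet mask from 255 gives 255-255=0, 255-255=0, 255-255=0 and 255-192=63, so the wildcard mask is 0.0.0.63. An access list entry using these illustrative values would then be written as:
R1(config)#access-list 10 permit 172.16.32.0 0.0.0.63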
An access list can be applied to an interface in either the inbound or outbound
direction, and the direction is from the device’s viewpoint. In practice, however, access lists
should almost always be applied inbound on an interface and there are good reasons for that
which we will cover later.
Exam Concept – An access list is created using access-list command but it is not effective
unless it is actually applied to an interface using ip access-group command. On the CCNA exam,
you will generally be asked to apply an access-list to an interface. Make sure you select the ip
access-group command.
When you are trying to filter traffic you usually want to prevent it from getting into the network or
to a device. Applying access lists to the inbound side of an interface keeps the packets from
entering the device, thus saving processing time. When a packet is allowed into a device and then
switched to another interface only to be dropped by an outbound access list, the resources used to
switch the packet have been wasted.
You can configure standard or extended access lists on any router in your network. But it makes
more sense to place standard access lists as close to the destination as possible. It is so because a
standard access list can only filter on the basis of source IP address. If it is placed near the
source, you may block the source IP address for your entire network rather than for a smaller
portion of your network. Extended access lists provide more granular control and you can specify
exactly what you want to filter. By placing extended access lists near the source you will
conserve bandwidth and router resources.
Exam Concept – A standard access list should be placed close to the destination while an
extended access list should be placed close to the source of traffic being filtered. Where each
type of access list is placed is a common CCNA question.
Access List Logging
Access list logging is accomplished using the optional log keyword used with the access-
list command when the access list is created. The log keyword causes an informational logging
message about the packet that matches the entry to be sent to the console. The log message
includes the access list number, whether the packet was permitted or denied, the source address,
and the number of packets. As a large number of packets would typically match an access list, the
message is generated for the first packet that matches, and then at 5-minute intervals, including the
number of packets permitted or denied in the last 5-minute interval.
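As a small illustration of the log keyword, the following numbered access list (the addresses are assumed) logs any packets sourced from host 10.1.1.1 while permitting everything else:
R1(config)#access-list 20 deny host 10.1.1.1 log
R1(config)#access-list 20 permit any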
Access List Remarks
You can include comments or remarks about individual entries in a named IP access list. An
access list remark is simply an optional comment before or after an access list entry that
describes the entry in plain language, so you don’t have to interpret the purpose of the entry by its
command syntax.
The remark can go before or after a permit or deny statement, but you should be consistent about
where you put your remarks so that it is clear which remark describes which statement. It could
be confusing to have some remarks before the associated permit or deny statements and some
remarks after the associated statement.
Types of Access Lists
Cisco IOS Software supports the following types of access lists for Internet Protocol (IP):
Standard Access Lists Standard access lists use source IP addresses for matching packets.
Extended Access Lists Extended access lists use source and destination IP addresses for
matching packets and optional protocol type information for finer granularity of control.
Reflexive Access Lists Reflexive access lists allow IP packets to be filtered based on session
information. Reflexive access lists contain temporary entries, something not found in standard
and extended access lists, and are nested within extended named access lists.
Time-based Access Lists Time-based access lists, as the name indicates, are not
active all the time but rather are triggered by a time function (a brief example follows this list).
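As a brief illustration of a time-based access list, the sketch below permits web traffic only during weekday working hours; the time-range name, the hours, and the access list number are assumptions made purely for the example:
R1(config)#time-range WORKHOURS
R1(config-time-range)#periodic weekdays 8:00 to 17:00
R1(config-time-range)#exit
R1(config)#access-list 102 permit tcp any any eq www time-range WORKHOURS
Outside the defined time range the entry is inactive, so web traffic then falls through to the implicit deny at the end of the list.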
Table 9-5 Packet Matching Criteria for Access Lists
Packet Matching Criteria | Standard ACL | Extended ACL
Source IP address | Yes | Yes
Destination IP address | No | Yes
Protocol (TCP, UDP, ICMP, EIGRP, OSPF, etc.) | No | Yes
TCP/UDP Source/Destination Port | No | Yes
QoS parameters (ToS, IP Precedence, DSCP) | No | Yes
Standard Access Lists
Standard access lists are the oldest type of access lists, dating back as early as Cisco IOS
Software Release 8.3. Standard access lists control traffic by comparing the source address of
packets to the addresses configured in the access list.
In all software releases, the access list number for the standard IP access lists can be anything
from 1 to 99. In Cisco IOS Software Release 12.0.1, standard IP access lists began using
additional numbers from 1300 to 1999. These additional numbers are sometimes referred to as
the expanded range. In addition to using numbers to identify access lists, Cisco IOS Software
Release 11.2 and later added the ability to use names to define standard IP access lists. We will
learn how to configure both numbered and named access lists as we proceed in this chapter.
Table 9-6 Access List Types and Corresponding Numbers
Access List Type | Number Range
IP Standard Access Lists | 1-99
IP Standard Access Lists (expanded range) | 1300-1999
IP Extended Access Lists | 100-199
IP Extended Access Lists (expanded range) | 2000-2699
You can differentiate between standard and extended access lists in the numbered format simply
by looking at the access list number. Based on the number used when an access list is created, the
router also knows which type of syntax to expect as the list is entered. By using numbers 1 – 99 or
1300 – 1999, you are essentially telling the router that you want to create a standard IP access
list. Thus the router will expect the standard IP access list syntax specifying only the source IP
address in access list entries.
Creating a Numbered Standard Access List
If you want to filter traffic using source IP address only, a standard access list is a simple and
sufficient option. Probably the best way to show you how to configure a numbered standard
access list is by showing you how to do it one step at a time:
R1(config)#access-list ?
<1-99> IP standard access list
<100-199> IP extended access list
<1100-1199> Extended 48-bit MAC address access list
<1300-1999> IP standard access list (expanded range)
<200-299> Protocol type-code access list
<2000-2699> IP extended access list (expanded range)
<700-799> 48-bit MAC address access list
dynamic-extended Extend the dynamic ACL absolute timer
rate-limit Simple rate-limit specific access list
The command to create an access list, not surprisingly, is access-list entered in global configuration
mode. As we just discussed, the number we use to identify an access list cannot be arbitrary. It
must belong to the range of numbers available for the type of access list you want to create. At the
moment, we are interested in creating a standard numbered access list, so we can choose a number
from the ranges 1-99 or 1300-1999.
R1(config)#access-list 1 ?
deny Specify packets to reject
permit Specify packets to forward
remark Access list entry comment
We chose to use 1 as our standard access list number and, as you can see from the above output,
there are three keywords available now. Let’s first add a user-friendly remark in order to make our
access list more readable when we return to it at a later point in time. A remark of up to 100
characters can precede or follow an access control entry. We will add a remark before the entry,
though you can choose to add remarks following access list entries. Just be consistent
throughout your configuration about whether you add remarks before or after access control
entries.
R1(config)#access-list 1 remark Don’t give access to Max and log any attempts
Now that’s interesting as we intend to not only deny access to Max but also to log any access
attempts made by him using the log keyword at the end of the statement.
R1#show access-lists 1
Standard IP access list 1
10 permit 172.16.23.0, wildcard bits 0.0.0.255
Though the access list has been created, it is sitting idle, doing nothing, because it has not actually
been applied to any interface. Let’s go ahead and apply it to interface Fa0/0 in
the inbound direction as depicted in Figure 9-2.
Figure 9-2 Standard Numbered Access List Example
The command to apply an access list to an interface is ip access-group entered in interface
configuration mode:
R1(config)#int Fa0/0
R1(config-if)#ip access-group 1 ?
in inbound packets
out outbound packets
R1(config-if)#ip access-group 1 in
R1(config-if)#
As you can see, there are two options available when applying an access list to an interface:
in and out. We have applied the access list in the inbound direction, filtering packets
coming into the interface from outside. The access list is now active, comparing all packets
received on interface Fa0/0 against the entries in access list 1 and taking the appropriate action.
Let’s run a final check to verify that the access list has been successfully applied to the router
interface.
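The show ip interface command lists any access lists applied to an interface. Filtered down to the relevant lines, its output for our example would look something like this:
R1#show ip interface FastEthernet0/0 | include access list
Outgoing access list is not set
Inbound access list is 1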
Creating a Named Standard Access List
Let’s now create a standard named access list. First enter privileged exec mode using the enable
command and then global configuration mode using the configure terminal command:
R1>enable
R1#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
The command used to define a named access list is ip access-list which has several options:
R1(config)#ip access-list ?
extended Extended Access List
log-update Control access list log updates
logging Control access list logging
resequence Resequence Access List
standard Standard Access List
We are specifically interested here in two of these options, standard and extended, used
respectively to define standard and extended access lists. We will proceed to define a
standard access list named Corp, as sketched below.
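The exact entries depend on your policy; a definition consistent with the verification output that follows would be entered like this (note the log keyword on the deny entry):
R1(config)#ip access-list standard Corp
R1(config-std-nacl)#deny 172.18.0.0 0.0.255.255 log
R1(config-std-nacl)#permit host 172.18.3.24
R1(config-std-nacl)#end
Don’t be surprised that the host entry is displayed above the deny entry in the output below even though it was entered second; IOS displays host entries in standard access lists out of sequence-number order for lookup efficiency, while the sequence numbers still reflect the order of entry.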
R1#show ip access-list
Standard IP access list Corp
20 permit 172.18.3.24
10 deny 172.18.0.0, wildcard bits 0.0.255.255 log
Exam Concept – Use the show ip access-list command to verify that an access list has been created,
and use the show ip interface command to verify that the access list is applied to an interface.
Cisco now wants you to thoroughly understand how to use the show commands for
troubleshooting on the CCNA exam.
And let’s have a look at the figure before applying the access list to an interface.
Figure 9-3 Standard, Named Access List Example
Let’s now finalize our configuration by applying the access list to router interface Fa0/1 in the
inbound direction.
R1(config)#int fa0/1
R1(config-if)#ip access-group Corp in
You can now verify that the access list has actually been applied to interface Fa0/1 using the
show ip interface command.
Extended Access Lists
Extended access lists can match traffic on much more than just the source IP address. The table
below lists ICMP message-type keywords commonly used in extended access list entries.
Keyword | Description
echo | Echo request (used to ping)
echo-reply | Echo reply (used to ping)
host-unreachable | The packet was delivered to the destination network but could not be sent to the specific host with the destination IP address.
net-unreachable | The packet could not be delivered to the destination network. It usually indicates a routing issue.
port-unreachable | The destination port specified in the TCP or UDP header was invalid for the host to which the packet was sent.
protocol-unreachable | The protocol specified in the IP header was invalid for the host to which the packet was sent.
ttl-exceeded | Time-to-live (TTL) expired while the packet was in transit.
Extended access lists can be used to match specific ICMP message types using either the
message-type number or one of several keywords available in Cisco IOS. You can find out which
keywords are available from the context-sensitive help, as shown below in abridged output from the Cisco CLI:
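The exact list varies with the IOS version, but a representative, abridged version of the context-sensitive help output looks something like this:
R1(config)#access-list 101 permit icmp any any ?
administratively-prohibited  Administratively prohibited
echo                         Echo (ping)
echo-reply                   Echo reply
host-unreachable             Host unreachable
net-unreachable              Net unreachable
port-unreachable             Port unreachable
protocol-unreachable         Protocol unreachable
ttl-exceeded                 TTL exceeded
unreachable                  All unreachables
<cr>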
Creating a Numbered Extended Access List
If you want to filter traffic using any criteria other than just the source IP address, an extended access list
is needed. We will now show you how to configure a numbered extended access list one step at a
time:
R1(config)#access-list ?
<1-99> IP standard access list
<100-199> IP extended access list
<1100-1199> Extended 48-bit MAC address access list
<1300-1999> IP standard access list (expanded range)
<200-299> Protocol type-code access list
<2000-2699> IP extended access list (expanded range)
<700-799> 48-bit MAC address access list
dynamic-extended Extend the dynamic ACL absolute timer
rate-limit Simple rate-limit specific access list
At the moment, we are interested in creating an extended numbered access list. So we can choose
a number from the ranges 100-199 or 2000-2699.
R1(config)#access-list 101 ?
deny Specify packets to reject
permit Specify packets to forward
remark Access list entry comment
We have chosen 101 as our extended access list number and, as you can see from the above output,
there are three keywords available now. Let’s first add a user-friendly remark as usual in order to
make our access list more readable when we return to it at a later point in time.
R1(config)#access-list 101 remark allow Telnet packets from any source to network 172.16.0.0
As you may have guessed from the remark, we are going to create an access control entry that
would allow Telnet traffic sourced from anywhere destined for the network 172.16.0.0. We use
the wildcard mask 0.0.255.255 for our class B network 172.16.0.0.
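Consistent with the verification output that follows, the two entries are added like this:
R1(config)#access-list 101 permit tcp any 172.16.0.0 0.0.255.255 eq telnet
R1(config)#access-list 101 deny tcp any any log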
R1#show access-lists
Extended IP access list 101
10 permit tcp any 172.16.0.0 0.0.255.255 eq telnet
20 deny tcp any any log
R1#
Though the access list has been created, it is sitting idle doing nothing because it has not yet
been applied to any interface. Let’s go ahead and apply it to interface Fa0/0
in the inbound direction as depicted in Figure 9-4.
The command to apply an access list to an interface is ip access-group entered in interface
configuration mode:
R1(config)#int Fa0/0
R1(config-if)#ip access-group 101 ?
in inbound packets
out outbound packets
R1(config-if)#ip access-group 101 in
R1(config-if)#
We have applied the access list in the inbound direction filtering packets coming into the interface
from outside. The access list is now applied comparing all packets received on interface Fa0/0
against entries in access list 101 and taking a permit or deny action as appropriate.
Creating a Named Extended Access List
A named extended access list can be used if you need to filter on source and destination IP
addresses or a combination of addresses and other fields. There is no difference between numbered
and named access lists in terms of functionality; however, each has its own syntax.
We will define an extended named access list including one permit statement and
one deny statement. The actual statements you use and their order would depend on your filtering
requirements. You should define your permit and deny statements depending on what you want to
allow or block.
Let’s first enter the privileged exec mode using enable command, and move to the global
configuration mode using configure terminal command.
R1>enable
R1#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
The command used to define a named access list is ip access-list for both standard and extended
access lists:
R1(config)#ip access-list ?
extended Extended Access List
log-update Control access list log updates
logging Control access list logging
resequence Resequence Access List
standard Standard Access List
We are interested here in the extended option, which is used to define extended access lists. We will
proceed to define an extended access list named NoSales, as sketched below.
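The actual entries are not important for our purposes. As a purely illustrative sketch (all addresses are assumed), a list with one deny and one permit statement that blocks a hypothetical Sales subnet from reaching a particular server could look like this:
R1(config)#ip access-list extended NoSales
R1(config-ext-nacl)#deny ip 192.168.20.0 0.0.0.255 host 172.16.50.10
R1(config-ext-nacl)#permit ip any any
R1(config-ext-nacl)#end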
Finally, let’s apply the access list to interface Fa0/0 in the inbound direction as shown in the
figure.
R1(config)#interface FastEthernet0/0
R1(config-if)#ip access-group NoSales in
R1(config-if)#
Finally, we can display the access list using the good old show access-lists command.
Access Lists: Remote Access, Switch Port, Modifying & Helpful Hints
As mentioned earlier in the chapter, access lists can also be used to control which hosts are
allowed to Telnet or SSH into the router. A standard access list is created as usual and then applied
to the vty lines with the access-class command in line configuration mode:
R1#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
R1(config)#access-list 10 permit 172.16.0.0 0.0.255.255
R1(config)#line vty 0 4
R1(config-line)#access-class 10 in
R1(config-line)#end
The show line command can be used to view at a glance all active virtual terminal lines and
access lists applied to them.
R1#show line
Tty Line Typ Tx/Rx A Modem Roty AccO AccI Uses Noise Overruns Int
* 0 0 CTY – – – – – 0 1 0/0 –
1 1 AUX 9600/9600 – – – – – 0 0 0/0 –
194 194 VTY – – – – 10 0 0 0/0 –
195 195 VTY – – – – 10 0 0 0/0 –
196 196 VTY – – – – 10 0 0 0/0 –
197 197 VTY – – – – 10 0 0 0/0 –
198 198 VTY – – – – 10 0 0 0/0 –
Line(s) not in async mode -or- with no hardware support:
2-193
Modifying Access Lists
While you are creating an access list or after it is created, you might want to delete an entry. You
cannot delete an individual entry from a numbered access list. If you need to delete even a single
entry from a numbered access list, you have to delete the whole access list using no access-
list command and start over.
R1(config)#no access-list 1
R1(config)#end
R1#show access-list 1
R1#
It is a good strategy to copy the access list to Notepad before deleting it from router configuration.
You can then modify the access list in Notepad before applying it again to router configuration.
However, you can delete an individual entry from a named access list using the no
permit or no deny command. Let’s demonstrate this using the NoSales extended access list we
created earlier, by deleting the second access list statement as sketched below.
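Assuming, as in the illustrative sketch above, that the second statement in NoSales is permit ip any any, it is removed simply by repeating the statement with the no keyword in front:
R1#configure terminal
R1(config)#ip access-list extended NoSales
R1(config-ext-nacl)#no permit ip any any
R1(config-ext-nacl)#end
The remaining entries keep their sequence numbers, and you can confirm the result with show ip access-list NoSales.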
Access lists on Cisco Catalyst switches can also filter frames based on source and destination
MAC addresses rather than IP addresses. Let’s build an extended MAC access list step by step:
SW1#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
SW1(config)#mac access-list ?
extended Extended Access List
SW1(config)#mac access-list extended ?
WORD access-list name
SW1(config)#mac access-list extended MY_MAC_LIST
SW1(config-ext-macl)#permit ?
H.H.H 48-bit source MAC address
any any source MAC address
host A single source host
SW1(config-ext-macl)#permit host ?
H.H.H 48-bit source MAC address
SW1(config-ext-macl)#permit host 00cd.38ab.4d35 ?
H.H.H 48-bit destination MAC address
any any destination MAC address
host A single destination host
SW1(config-ext-macl)#permit host 00cd.38ab.4d35 any
SW1(config-ext-macl)#deny any any
SW1(config-ext-macl)#end
SW1#show access-list
Extended MAC access list MY_MAC_LIST
permit host 00cd.38ab.4d35 any
deny any any
SW1#
It’s now time to apply the MAC ACL to a switch interface using mac access-group command:
SW1#configure terminal
SW1(config)#interface FastEthernet0/1
SW1(config-if)#mac access-group MY_MAC_LIST ?
in Apply to Ingress
SW1(config-if)#mac access-group MY_MAC_LIST in
SW1(config-if)#end
SW1#
Let’s try to understand what we just did. We created an extended MAC access list that we called
MY_MAC_LIST, allowing incoming frames sourced only from a specific MAC address
00cd.38ab.4d35. This scenario makes sense if you have a desktop cabled to your switch port and
you don’t want the user to connect any other device to the same port.
In the last example, we defined an access list that made its filtering decision based on MAC
addresses. Sometimes it is desirable to make permit or deny decisions based on the protocol
carried inside Ethernet frames rather than source and/or destination MAC addresses.
Cisco Configuration Professional Initial Setup and Access List Lab
Exam Concept – Cisco Configuration Professional (Cisco CP) has replaced Cisco’s Security
Device Manager (SDM) as the GUI configuration solution. Cisco CP is not on the CCNA at this
time. However, it is on the CCNA Security exam.
Cisco Configuration Professional is replacing Cisco Security Device Manager (SDM) over
time. Cisco CP communication is reasonably secure as it uses protocols such as Secure Shell
(SSH) and HTTPS to communicate with the devices.
Newly shipped Cisco routers do not have any configuration pre-loaded, which means you have to
connect a console cable to the console port and use terminal emulation software like HyperTerminal
to do the initial configuration of the router. But devices shipped with Cisco CP do have a
default configuration that allows you to connect a PC to an Ethernet port on the device and start
configuring it right away.
Let’s start by installing Cisco CP 2.5 on a Windows based computer. You should have the
installation package in the form of a file such as cisco-config-pro-k9-pkg-2_5-en.exe which you
launch to start the installation process.
Figure 9-6 Cisco Configuration Professional Installation
The installation is pretty straightforward and takes less than a minute to complete.
You can launch the application after finalizing the installation and you may be prompted to select /
manage a community of devices as the application loads. You can safely cancel this dialogue box
initially and reach the main application window which for version 2.5 looks like the figure
below.
Figure 9-7 Cisco Configuration Professional Main Window
We will set up a single device, a newly shipped Cisco 881 router, to be managed using Cisco
Configuration Professional. The computer on which you just installed Cisco CP should be
connected to the console port of the router through its serial port. If your computer does not have a
serial port, you can use a USB-to-RS-232 adapter to connect to the router console. After ensuring
physical connectivity, go to the Application menu and click on Setup New Device…. You will see a
screen similar to the figure below.
Figure 9-8 New Device Setup Wizard – Step 1
Simply press Next to move to Step 2 – Configuring Device, where you can enter IP addresses for
the available interfaces. In our case, we configure IP address 192.168.23.1 on interface FastEthernet4
of our Cisco 881 and press Next.
Figure 9-9 New Device Setup Wizard – Step 2
If everything goes well, you reach Step 3 – Configuration Summary as shown in the figure below.
Figure 9-10 New Device Setup Wizard – Step 3
What we have done so far is to configure IP address 192.168.23.1 on interface FastEthernet4 of
the router. Now you should connect the Ethernet port of your computer to interface FastEthernet4
of the router using a crossover Ethernet cable.
At this stage the main application window would look something like this:
Figure 9-11 New Device – Not Discovered
We highlight the IP address 192.168.23.1 we configured and press Discover. Cisco CP will now
try to connect to the router over the Ethernet interface and, if all goes well, the Discovery
Status should change to Discovered as shown in the figure below.
Figure 9-12 New Device – Discovered
At this stage the router is fully set up with Cisco Configuration Professional and we can configure
it using easy-to-use wizards by pressing the Configure button in the top left area of the display. Some
new entries appear in the left pane of the display as shown in Figure 9-13.
Figure 9-13 New Device – Configuration
We will re-create the named extended access list NoSales this time using CCP GUI wizard. We
created the same access list earlier in the chapter using command-line interface (CLI). Go
to Router > ACL > ACL Editor in the left pane and press Add… to get the dialogue box shown in
Figure 9-14, which can be used to enter and apply access lists as required. In this dialogue box
you supply a name and specify that it is an extended ACL, and then press Add to create the first
access list statement.
Figure 9-14 Dialogue – Add a Rule
We now create the first access list statement as shown in Figure 9-15 and press OK to proceed.
Figure 9-15 Dialogue – Add an Extended Rule Entry 1
In the same fashion we create the second access list statement as shown in Figure 9-16.
Figure 9-16 Dialogue – Add an Extended Rule Entry 2
The access list has been created by now as shown in Figure 9-17, and we need to apply it to an
interface. Press Associate to proceed.
Figure 9-17 Dialogue – Add a Rule
We apply the access list to interface FastEthernet0/0 in the inbound direction, as shown in Figure
9-18.
Figure 9-18 Dialogue – Associate with an Interface
The configuration is complete and you return to the Add a Rule dialogue box, as shown in Figure
9-19. Simply press OK to proceed.
Figure 9-19 Dialogue – Add a Rule
Another box appears that displays configuration that would actually be applied to the router, as
shown in Figure 9-20. You can see that the configuration that actually gets applied to the router is
just the same we created in an earlier section of the chapter. Press Deliver to apply the
configuration to the running configuration. You may choose to select the Save running config to
device’s startup config checkbox to save the running configuration to startup configuration as
well.
Figure 9-20 Dialogue – Deliver Configuration to Device
Command delivery status looks good, as shown in Figure 9-21. We’re done with creating an
access list using Cisco Configuration Professional. You may press OK to return to the main
application window.
Figure 9-21 Command Delivery Status
Once you have come this far setting up Cisco Configuration Professional, it is a good idea to
explore the configuration options available. It is fun and a great way to learn Cisco CP. You are
sure to be amazed at what you can do with Cisco CP with minimal knowledge of the Cisco CLI.
Summary
We dedicated this chapter almost exclusively to access control lists (ACLs). Access lists are
generally used for traffic filtering, but they are quite versatile and have several other uses as well,
which were briefly mentioned in the beginning of the chapter.
We covered both standard and extended access lists in detail and learned how to configure them
in both named and numbered formats. Several nuances of access lists were also covered from a
practical standpoint.
The chapter concluded with coverage of Cisco Configuration Professional (Cisco CP), a GUI
based tool that can be used to configure and manage Cisco devices.
Chapter 10: Network Address Translation (NAT)
In this chapter, we are going to learn about Network Address Translation (NAT) as configured on Cisco
routers in common network scenarios. First-time NAT users often find NAT concepts difficult to grasp
and NAT configurations difficult to create and understand. But the fact of the matter is that the
basic concepts of NAT are pretty simple, as we will see below. After developing an intimate
understanding of basic NAT concepts early in the chapter, we will learn how to configure and
troubleshoot NAT using the Cisco command-line interface (CLI). Toward the end of the chapter,
we will also see how NAT can be configured the easy way using Cisco Configuration
Professional (Cisco CP).
10-1 Introduction to NAT
10-2 Static NAT Configuration & Verification
10-3 Dynamic NAT Configuration
10-4 NAT Overloading aka Port Address Translation (PAT)
10-5 NAT Troubleshooting
10-6 NAT Configuration with Cisco Configuration Professional
10-7 Summary
Introduction To NAT
What is NAT?
Network Address Translation (NAT) allows a host that does not have a registered IP address to
communicate with other hosts on the Internet. NAT has gained such wide-spread acceptance that
the majority of enterprise networks today use private IP addresses for most hosts on their network
and use a small block of public IP addresses, with NAT translating between the two.
Having come this far in your CCNA studies, you should be well aware of the IP header format.
The IP packet header has several fields in it, the most well-known of which probably are
the Source IP Address and Destination IP Address. NAT simply translates, or changes, one or
both of these addresses inside a packet header as the packet passes through the router performing
the NAT operation. This is what basic NAT operation is, nothing more, nothing less.
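As a tiny preview of what that looks like on a router (the addresses and interface names here are assumptions, and full configuration and verification follow in the coming sections), a single static translation can be as short as this:
R1(config)#interface FastEthernet0/0
R1(config-if)#ip nat inside
R1(config-if)#interface Serial0/0
R1(config-if)#ip nat outside
R1(config-if)#exit
R1(config)#ip nat inside source static 10.1.1.10 203.0.113.10
The ip nat inside and ip nat outside commands mark which interfaces face the internal and external networks, and the final command translates the inside host 10.1.1.10 to the public address 203.0.113.10.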
Purpose of NAT
NAT is a feature that allows the internal network of an organization to appear to be using a
different IP address space from the outside than what it is actually using. Thus, NAT allows an
organization to use private IP addresses that are not globally routable and yet connect to the
Internet by translating those private addresses into globally routable addresses. The beauty of
NAT is that the hosts on the internal network using NAT to communicate to the outside world
don’t have to be aware of the very existence of NAT. NAT configuration exists only on the router
or another device typically at the boundary of the internal network. Due to this aspect of NAT, an
organization can also change service providers without any changes to the IP addresses
configured on individual hosts. Changing service providers also changes public IP addresses
available to an enterprise. The device performing NAT can have its configuration modified and
that’s all you need to do while changing your ISP.
Benefits of NAT
Internet Protocol (IP), or IPv4 as it is more precisely known, uses addresses that are 32 bits long.
As such, the total address space of IP runs from 0 to 2^32 – 1 = 4,294,967,295. In other words, over
four billion unique IP addresses are available for assignment to hosts. These IP addresses are the
registered IP addresses that are centrally administered. Though four billion may seem like a very
large number, due to the explosive growth of the Internet over the years we have already depleted
most of those IP addresses.
RFC 1918 specifies three blocks of IP address space reserved by Internet Assigned Numbers
Authority (IANA) for private networks.
Table 10-1 Private IP Addresses
Address Class | Number of Networks | Private Address Space
A | 1 | 10.0.0.0 – 10.255.255.255
B | 16 | 172.16.0.0 – 172.31.255.255
C | 256 | 192.168.0.0 – 192.168.255.255
An enterprise can assign these private IP addresses to internal hosts without the need for registered
IP addresses. A router or other device can be used to perform Network Address Translation
(NAT) to convert these private IP addresses to public IP addresses routable on the Internet.
Network Address Translation (NAT) is defined in RFC 1631. The original intention of NAT was
to slow the depletion of available IP addresses by allowing many private IP addresses to be
represented by some smaller number of public IP addresses. NAT was envisioned to be a
temporary solution to the problem of IP version 4 address depletion. The permanent solution was
a migration from IP version 4 to IP version 6 (IPv6). IPv6 has 128-bit addresses and a much
larger address space expected to solve the issue of address scarcity forever. But NAT has been so
successful that it has delayed the full IPv4 address exhaustion by several years.
NAT has also come to find other applications that do not directly relate to IP address
conservation. One such NAT application is the merger of two companies and hence their
internetworks. The two companies would previously have two separate internetworks. After the
merger their two internetworks would have to be connected together. Unfortunately, when the two
separate internetworks were first constructed several years ago, nobody had anticipated a future
merger. So the designers of both internetworks chose to use the 10.0.0.0 address space. As a
result, many of the same IP addresses would be assigned to devices in both internetworks. NAT can be used
as a temporary solution to connect the internetworks. Keep in mind that the best solution in such a
situation is to re-address the new internetwork. But re-addressing can be a major project if all the
devices have manually configured IP addresses. NAT can serve as an interim solution.
NAT allows organizations to solve the problem of IP address depletion when they want to
connect new networks to the Internet. NAT allows organizations to connect their networks to the
Internet without needing to have Network Information Center (NIC) registered IP addresses
assigned to their internal systems.
There are organizations that already have registered IP addresses for hosts on an internal network
but they want to hide those addresses from the Internet so that hackers cannot easily attack those
hosts. If the host address is hidden, a degree of security is achieved. NAT can be useful in this
situation where the motive is not primarily IP address preservation, but applying corporate
security policies to your network traffic.
A major advantage of NAT is that it needs to be configured only on those few routers that would
actually perform the NAT operation. The hosts or other routers not performing NAT operation
don’t need any configuration changes.
Disadvantages of NAT
Network Address Translation (NAT) is all about changing IP addresses and port numbers inside
an IP packet header, which also creates some issues. Changing an IP address or TCP
port can change the meaning of some of the other fields, especially the checksum. For example,
the checksum of an IP packet is calculated over the entire IP header. Therefore if the source or
destination IP address (or both) change, the checksum has to be calculated again. The same is also
true for the checksum in the TCP header. This number is calculated over the TCP header and data,
and also over a pseudo-header that includes the source and destination IP addresses. Therefore, if
an IP address or a port number changes, the TCP checksum must also change. NAT as
implemented on Cisco routers performs these recalculations. This is extra work for the router
performing NAT.
NAT should be transparent to the end systems that send packets through it. However, many
applications use the IP addresses at the application layer. Information within the data field may be
based on an IP address, or an IP address itself may be carried in the data field. If NAT translates
an address in the IP header without being aware of the effects on the data, the application breaks.
As a matter of fact, Cisco’s NAT implementation goes beyond translating addresses in the IP
header for the applications it supports. For the supported applications carrying IP address
information in the application data, NAT makes the appropriate corrections to the data as well.
This prevents the application from breaking due to NAT.
However, if the data fields are encrypted, NAT has no way of reading the data. Therefore, for
NAT to function properly, neither the IP addresses nor any information derived from them (such as
the TCP header checksum) can be encrypted. But this is not the case with virtual private networks
(VPNs), for example, IPsec. With certain modes of IPsec, if an IP address is changed in an IPsec
packet, the IPsec becomes meaningless and the VPN is broken. When any sort of encryption is
used, you must perform NAT on the secure side before encryption, rather than in the encrypted
path.
NAT is also viewed sometimes as part of a security plan, because it hides the details of the inside
network from the outside world. A host with a translated address may appear on the Internet with
one address one day and with a different address on another day. But keep in mind that this is very
weak security at best. It might slow down an attacker who wants to hit a particular host but it will
not stop him if he is determined.
NAT Inside and Outside Addresses
Let’s define a few basic but important terms in the context of NAT. Before we jump into the
definitions, have a look at Figure 10-1 in order to understand the context in which NAT typically
operates.
Figure 10-1 NAT Context
A device performing Network Address Translation (NAT) divides its universe into the inside and
the outside. Typically the inside is a private enterprise with its internal network and hosts
connected to that network. The outside on the other hand is the public Internet and the servers
reachable over it. In addition to the notion of inside and outside, a Cisco NAT router classifies
addresses as either local or global. A local address is an address that is seen by devices on the
inside, and a global address is an address that is seen by devices on the outside.
Given these four terms, an address may be one of four types:
1. Inside local addresses are assigned to inside devices. These addresses are not advertised to the outside.
2. Inside global are addresses by which inside devices are known to the outside.
3. Outside local are addresses by which outside devices are known to the inside.
4. Outside global addresses are assigned to outside devices. These addresses are not advertised to the inside.
Types of NAT
In general, NAT is configured on a Cisco router that connects only two networks, and translates
the inside local (private) addresses from the internal network into inside global (public)
addresses. In most common scenarios the outside addresses are not translated so outside global
and outside local addresses are the same. You can configure NAT in a way that it will advertise
only a single address for your entire network to the outside world. Doing this effectively hides the
addresses in your internal network from the hostile environment of the Internet, giving you
some additional security and peace of mind as a network administrator.
NAT has the following types:
Static NAT: Static NAT performs static address translation allowing one-to-one
mapping between local and global addresses. But you should keep in mind that static NAT
requires you to have one registered public IP address for every host on your network. As
such static NAT has no benefit in terms of IP address conservation. Nevertheless, static
NAT is important for the sake of understanding NAT.
Dynamic NAT: Dynamic NAT performs dynamic address translation mapping
unregistered private IP addresses to registered public IP addresses from a pool of
available registered IP addresses. You don’t have to statically configure your router to
map an inside address to an outside address as you would using static NAT. Yet you still have to
have enough registered public IP addresses for everyone who’s going to communicate with
the Internet, so even dynamic NAT does not help much with the issue of IP address conservation.
NAT Overload: NAT overload performs overloading, mapping multiple unregistered
private IP addresses to a single registered public IP address. It is a many-to-one mapping
between private and public addresses and is accomplished using different port numbers.
This method is also known as Port Address Translation (PAT). By using PAT or NAT
overload, hundreds or even thousands of users can be connected to the Internet using only
one real global IP address. This is the most popular NAT type, which basically is a form of
dynamic NAT but with multiple unregistered IP addresses mapped to a single registered IP
address. Dynamic NAT is one-to-one while NAT Overload or PAT is many-to-one, though
both forms do the mapping dynamically. NAT Overload is the type of NAT that has
enabled us not to run out of IP addresses on the Internet.
Exam Concept – The three methods of Network Address Translation (NAT) are static, dynamic,
and overloading which is also called Port Address Translation (PAT). You are sure to see
questions on the CCNA exam contrasting each.
We will learn how to configure each of these three types of NAT in this chapter.
How NAT Operates
Let’s have a look at Figure 10-2, which presents a basic NAT scenario. We will explain the basics of
NAT operation with the help of this scenario and the understanding you develop here should also
help you understand the rest of the chapter.
Figure 10-2 Basic NAT Operation
We have defined several terms related to NAT so far. It’s the right time to test that understanding
in the context of Figure 10-2. R1 is the router performing NAT and has two interfaces: Fa0/0 as
the inside interface while Fa0/1 is the outside interface. A PC having IP address 192.168.1.2 on
the inside needs to communicate with a server having IP address 173.194.67.102 on the outside.
Table 10-2 NAT Address Types
NAT Address Type IP Address
Inside local 192.168.1.2
Inside global 67.210.97.212
Outside local 173.194.67.102
Outside global 173.194.67.102
Let’s try to understand what happens to IP packets travelling back and forth between the PC and
the server as they move across R1, the router performing the NAT operation. In plain words, the
IP addresses in the headers of those packets get re-written, with R1 also keeping a record of
these re-writes, or translations, in a table known, not surprisingly, as the translation table.
Let’s first consider a packet that moves from inside to outside. This packet has source and
destination IP addresses of 192.168.1.2 and 173.194.67.102 respectively. The packet enters
router R1 at its inside interface Fa0/0 and exits the outside interface Fa0/1. Before the packet
actually exits R1, the source address gets re-written. The inside local IP address 192.168.1.2 is
replaced with the inside global IP address 67.210.97.212. The destination IP address is left
untouched here so the outside local and outside global address are both 173.194.67.102. And
some of you may have identified that the inside global address is in fact the IP address configured
on the outside interface Fa0/1 of R1. To the outside world, the packet appears to have originated
from the IP address 67.210.97.212 and the inside local IP address 192.168.1.2 is never known to
the outside world! For packets moving in the opposite direction, from outside to inside, the
destination IP address 67.210.97.212 gets re-written with 192.168.1.2 while the source IP
address remains unchanged. The PC and server can communicate successfully yet the server or
any entity on the outside does not know the real IP address of the PC.
In the coming sections we will learn in detail how to actually configure Network Address
Translation (NAT) on a Cisco router for all three types of NAT.
Static NAT Configuration & Verification
With static NAT, a particular inside local address always maps to a particular inside global
(public) address. Due to one-to-one mapping between addresses static NAT does not conserve
public IP addresses. Although static NAT does not help with IP address conservation, it provides
a degree of security by hiding the inside IP addresses from the outside world. Static NAT also
allows an administrator to make an inside server available to clients on the Internet, because the
inside server will always use the same public IP address.
Exam Concept – NAT mapping commands are used in global configuration mode. It is common
for Cisco to provide you the correct command at different modes on the CCNA exam. Thus know
the mode in which the command is executed.
Let’s configure router R1 performing NAT as depicted in Figure 10-3.
Figure 10-3 NAT Scenario
Here is how the configuration goes.
R1>enable
R1#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
R1(config)#ip nat inside source static 192.168.1.2 67.210.97.2
R1(config)#ip nat inside source static 192.168.1.3 67.210.97.3
R1(config)#ip nat inside source static 192.168.1.4 67.210.97.4
R1(config)#interface FastEthernet0/0
R1(config-if)#ip address 192.168.1.1 255.255.255.0
R1(config-if)#ip nat inside
R1(config-if)#interface FastEthernet0/1
R1(config-if)#ip address 67.210.97.212 255.255.255.0
R1(config-if)#ip nat outside
R1(config-if)#end
R1#
The ip nat inside source command identifies which IP addresses will be translated. In the
preceding configuration example, the ip nat inside source command configures a static translation
between inside local and inside global IP addresses as shown in Table 10-3 below.
Table 10-3 Static NAT Address Mapping
Inside Local Addresses Inside Global Addresses
192.168.1.2 67.210.97.2
192.168.1.3 67.210.97.3
192.168.1.4 67.210.97.4
You may also notice an ip nat command under each interface in the above configuration. The ip
nat inside command identifies an interface as the inside interface. The ip nat outside command
identifies an interface as the outside interface. The ip nat inside source command is actually
referencing the inside interface with the inside keyword and the source address with the source
keyword. The static keyword indicates a static one-to-one mapping between inside local and
inside global addresses.
Exam Concept – Static NAT is designed to allow one-to-one mapping between local and global
addresses.
The ip nat inside source command simply instructs the router to translate the source address
of every packet entering the router at the inside interface. In order to ensure two-way
communication, return packets coming in at the outside interface are also translated accordingly.
Once you finish your NAT configuration, you would usually want to verify if the configuration is
working as expected or not. Also, you may need to monitor NAT translations in a production
environment. It is quite tempting to use the show running-config command to verify that the NAT
configuration lines you entered are actually there in the running configuration of the router. But
this does not tell you anything about whether actual translation of addresses is taking place or not.
The starting point for NAT verification and troubleshooting should be the show ip nat
translations command:
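With the three static mappings configured, representative output might look similar to the following (static entries appear even before any traffic is sent; the exact formatting varies by IOS version):
R1#show ip nat translations
Pro Inside global      Inside local       Outside local      Outside global
--- 67.210.97.2        192.168.1.2        ---                ---
--- 67.210.97.3        192.168.1.3        ---                ---
--- 67.210.97.4        192.168.1.4        ---                ---
Dynamic NAT Configuration
Static NAT does not conserve addresses, so the next step is dynamic NAT, which assigns inside global addresses from a pool. Here is a dynamic NAT configuration for the same topology.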
R1>
R1>enable
R1#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
R1(config)#ip nat pool MyPool 67.210.97.2 67.210.97.4 ?
netmask Specify the network mask
prefix-length Specify the prefix length
R1(config)#ip nat pool MyPool 67.210.97.2 67.210.97.4 netmask 255.255.255.0
R1(config)#access-list 1 permit host 192.168.1.2
R1(config)#access-list 1 permit host 192.168.1.3
R1(config)#access-list 1 permit host 192.168.1.4
R1(config)#ip nat inside source list 1 pool MyPool
R1(config)#interface FastEthernet0/0
R1(config-if)#ip address 192.168.1.1 255.255.255.0
R1(config-if)#ip nat inside
R1(config-if)#interface FastEthernet0/1
R1(config-if)#ip address 67.210.97.1 255.255.255.0
R1(config-if)#ip nat outside
R1(config-if)#end
R1#
There are three parts to the above configuration.
First, the command ip nat pool MyPool 67.210.97.2 67.210.97.4 netmask 255.255.255.0 is used
to create a pool of inside global addresses from 67.210.97.2 to 67.210.97.4. That is a total of only
3 addresses, with a subnet mask of 255.255.255.0. Please note that we chose MyPool as the NAT
pool name, but this choice is arbitrary and the pool name can be anything you like, even your
first name. Also note that a network mask has to be specified using the netmask keyword when
defining a NAT pool.
Second, the access-list 1 commands create a standard access list matching interesting traffic
for address translation. The access list matches the IP addresses of the three inside hosts.
Third and last, the ip nat inside source list 1 pool MyPool command instructs the router to
dynamically translate source IP addresses of packets coming in at the inside interface that
match access-list 1 to an address found in the ip nat pool named MyPool.
Exam Concept – Dynamic NAT allows one-to-one mapping of local addresses to global
addresses from a pool of global addresses.
Let’s verify it now:
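Assuming some traffic has been generated from the inside hosts, representative output might look similar to this (the inside global addresses are drawn from MyPool; exact entries depend on the traffic and time out after a period of inactivity):
R1#show ip nat translations
Pro Inside global      Inside local       Outside local      Outside global
--- 67.210.97.2        192.168.1.2        ---                ---
--- 67.210.97.3        192.168.1.3        ---                ---
--- 67.210.97.4        192.168.1.4        ---                ---
NAT Overload (PAT) Configuration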
PAT allows overloading, or the mapping of more than one inside local address to the same inside
global address. But this also means that return packets would all have the same destination
address as they reach the NAT router. How would the router know which inside local address
each return packet belongs to? In order to deal with this scenario, the NAT entries in the
translation table are extended entries; the entries not only track the relevant IP addresses, but also
the protocol types and ports. By translating both the IP address and the port number of a packet,
up to 65,535 inside local addresses could theoretically be mapped to a single inside global
address (based on the 16-bit port number).
But keep in mind that a single NAT entry uses approximately 160 bytes of router memory, so
65,535 entries would take more than 10 MB of memory (65,535 × 160 bytes ≈ 10.5 MB) as well as
a large amount of CPU power. In practical PAT configurations, nowhere near this number of
addresses is mapped, but it is definitely the theoretical limit.
Here is a sample configuration for NAT overloading or PAT according to Figure 10-4.
R1>
R1>enable
R1#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
R1(config)#ip nat pool MyPool 67.210.97.1 67.210.97.1 ?
netmask Specify the network mask
prefix-length Specify the prefix length
R1(config)#ip nat pool MyPool 67.210.97.1 67.210.97.1 netmask 255.255.255.0
R1(config)#access-list 1 permit 192.168.1.0 0.0.0.255
R1(config)#ip nat inside source list 1 pool MyPool overload
R1(config)#interface FastEthernet0/0
R1(config-if)#ip address 192.168.1.1 255.255.255.0
R1(config-if)#ip nat inside
R1(config-if)#interface FastEthernet0/1
R1(config-if)#ip address 67.210.97.1 255.255.255.0
R1(config-if)#ip nat outside
R1(config-if)#end
R1#
The above configuration may appear very similar to the configuration for dynamic NAT; however,
there are important differences. First, the pool of IP addresses has been shrunk to a single IP
address, the one assigned to the outside interface of router R1.
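As a side note, many real-world PAT configurations skip the address pool entirely and overload the IP address of the outside interface directly. A minimal equivalent sketch for this topology would be:
R1(config)#access-list 1 permit 192.168.1.0 0.0.0.255
R1(config)#ip nat inside source list 1 interface FastEthernet0/1 overload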
Second, access list 1 matches the entire class C network 192.168.1.0/24, which means any inside
local address from this network will be translated. If you want a specific host from this network
not to be translated, you have to exclude it explicitly by adding a deny statement to the access list.
Let’s assume we want to deny translation to a single host 192.168.1.2 while allowing all other
hosts:
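A sketch of such an access list, with the deny statement placed before the permit so that it is matched first, might look like this:
R1(config)#access-list 1 deny host 192.168.1.2
R1(config)#access-list 1 permit 192.168.1.0 0.0.0.255
Hosts denied by the list are simply not translated; their packets leave the router with their original source addresses.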
Key Concept – NAT Overload is a special form of dynamic NAT that allows many-to-one
mapping of local addresses to a smaller number of global addresses from a pool of global
addresses. The pool of global addresses may even consist of a single address. NAT Overload is
also called Port Address Translation (PAT). These are a favorite type of scenario question on the
CCNA exam.
Let’s start our usual verification by issuing the show ip nat translations command:
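With several inside hosts pinging the server, representative output might look similar to this; note that all entries share the single inside global address and are told apart by port (here ICMP ID) numbers, which are illustrative:
R1#show ip nat translations
Pro Inside global      Inside local       Outside local      Outside global
icmp 67.210.97.1:1     192.168.1.2:1      173.194.67.102:1   173.194.67.102:1
icmp 67.210.97.1:2     192.168.1.3:2      173.194.67.102:2   173.194.67.102:2
icmp 67.210.97.1:3     192.168.1.4:3      173.194.67.102:3   173.194.67.102:3
NAT Troubleshooting
When show commands are not enough, IOS also offers the debug ip nat command: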
R1#debug ip nat
IP NAT debugging is on
R1#
*Mar 1 00:20:24.007: NAT*: s=192.168.1.2->67.210.97.2, d=173.194.67.102 [30]
*Mar 1 00:20:24.035: NAT*: s=173.194.67.102, d=67.210.97.2->192.168.1.2 [30]
R1#
*Mar 1 00:20:29.355: NAT*: s=192.168.1.3->67.210.97.3, d=173.194.67.102 [31]
*Mar 1 00:20:29.395: NAT*: s=173.194.67.102, d=67.210.97.3->192.168.1.3 [31]
R1#
*Mar 1 00:20:33.583: NAT*: s=192.168.1.4->67.210.97.4, d=173.194.67.102 [32]
*Mar 1 00:20:33.599: NAT*: s=173.194.67.102, d=67.210.97.4->192.168.1.4 [32]
R1#
The above output is from router R1 configured with Static NAT as presented earlier in the
chapter. Nothing is broken here and the configuration is good. We generate a single ping from
each of the inside hosts 192.168.1.2, 192.168.1.3, 192.168.1.4 to the server 173.194.67.102. You
can see six debug entries in the above output, for outgoing as well as return packets. In the
outgoing packets the source IP address is translated, while in the return packets the destination
IP address is translated.
The debug ip nat command can be used to verify the operation of NAT displaying information
about each packet the router translates. This command also displays information about certain
errors, such as the failure to allocate a global address.
As a general rule, you should always use show commands first for verification and
troubleshooting. All debug commands should be used only when you have exhausted your options
with show commands. Debug commands consume resources, namely CPU cycles and
memory, and should be used with caution on production networks, especially if you love your
current job.
NAT Configuration With Cisco Configuration Professional
Cisco Configuration Professional can be used for configuration of NAT on a Cisco router. If you
have read Chapter 9, you should be familiar with the process of setting up Cisco CP. If not, it is
probably the right time to go back to Chapter 9 and review that material before proceeding. We
will configure NAT overloading or PAT using Cisco CP. PAT is the most prevalent form of NAT,
so it makes sense to use it for this section on Cisco CP.
First, launch Cisco Configuration Professional to connect to the router R1 on which we want to
configure NAT. We assume the device is already set up as shown in Chapter 9.
In the right pane select the device and press the Configure button on the top left of the main
application screen.
Figure 10-5
In the left pane, select Router > NAT. In the right pane, select the Basic NAT radio button and
press the Launch the selected task button.
Figure 10-6
The Basic NAT wizard is launched and you just have to press the Next button to proceed.
Figure 10-7
Choose FastEthernet0/1 as the interface that connects to the Internet from the drop-down menu.
Also select the checkbox next to the FastEthernet0/0 network which is to share the connection to
the Internet, and press the Next button to proceed.
Figure 10-8
Press the Finish button to proceed.
Figure 10-9
Select the save running config. to device’s startup config. checkbox and press the Deliver button
to send the configuration to the router and also to save it to the startup configuration.
Figure 10-10
This finalizes your NAT Overload or PAT configuration using Cisco Configuration Professional.
Figure 10-12
I would encourage you to connect to router R1 and, using the CLI, examine the configuration
manually, paying special attention to the NAT configuration. This is the configuration that was
generated and delivered to the router by the GUI wizards of Cisco CP.
Summary
This is a really interesting chapter and you should have learned a lot about Network Address
Translation (NAT) concepts and configuration. We configured NAT in three different flavors,
namely static, dynamic, and Port Address Translation (PAT), also known as NAT Overloading.
In order to stay focused on NAT concepts and not get lost in the intricacies of a complex topology,
we used the same topology for different types of NAT configuration. It should have enabled you
to compare and contrast the three NAT types with relative ease.
We also went through some verification and troubleshooting commands before finishing the
chapter by learning how to use Cisco Configuration Professional to configure NAT the quick and
easy way.
Chapter 11: Wide Area Networks (WANs)
11-1 Introduction to Wide-Area Networks
11-2 Point-to-Point WANs: Layer 1
11-3 Point-to-Point WANs: Layer 2
11-4 PPP Concepts
11-5 PPP Configuration
11-6 Troubleshooting Serial Links
11-7 Frame Relay
11-8 LMI and Encapsulation Types
11-9 Frame Relay Congestion Control
11-10 Frame Relay Encapsulation
11-11 Frame Relay Addressing
11-12 Frame-Relay Topology Approaches
11-13 Frame Relay Configuration
11-14 Other WAN Technologies
11-15 Summary
Introduction To Wide Area Networks
A wide-area network (WAN) enables you to extend your local-area network (LAN) to other
LANs at remote sites. There is more than one way to build wide-area networks, employing
various types of connections, technologies, and devices.
Cisco IOS Software supports a number of WAN protocols. In this chapter, we will introduce you
to High-Level Data Link Control (HDLC), Point-to-Point Protocol (PPP), and Frame Relay on
serial interfaces. We will also learn how to configure these WAN protocols on Cisco routers. We
also give you a brief introduction to virtual private networks (VPNs) as an alternative to
traditional WAN solutions.
The OSI Layer 1 (physical layer) and Layer 2 (data link layer) work together to deliver data
across a wide variety of network types. Local-Area Network (LAN) standards and protocols
define how to network devices that are relatively close together, hence the term local-area in the
acronym LAN. Wide-Area Network (WAN) standards and protocols define how to network
devices that are relatively far apart, hence the term wide-area in the acronym WAN. LANs and
WANs both implement the same OSI Layer 1 and Layer 2 functions but with different mechanisms.
The big distinction between LANs and WANs relates to how far apart the devices can be and still
be capable of sending and receiving data. LANs tend to reside in a single building or at most
among nearby buildings in a campus using optical cabling approved for Ethernet. WAN
connections typically run much longer distances than Ethernet LANs: across town, between cities,
or even between continents. Usually only one or a few companies have the rights to run
cables under the ground between the sites. For example, a company may have two offices just
across a road, such that the distance between the two buildings is well within the maximum
distance supported by Ethernet technologies. However, the company still cannot simply run
a cable under the ground between the two offices due to right-of-way restrictions. When Ethernet
LANs are used to connect buildings, it is normally inside a campus like a university or office
complex.
Besides LANs and WANs, the term Metropolitan-Area Network (MAN) is sometimes used for
networks that extend between buildings and through rights-of-way. The term MAN typically
implies a network that does not reach as far as a WAN, and generally spans a single metropolitan
area. However, you should keep in mind that the distinctions between LANs, MANs, and WANs
are blurry. There is no set distance that means a link is a LAN, MAN, or WAN link. For example,
the 1000BASE-ZX Ethernet standard with extended wavelength, single-mode (SM) fiber cabling
can achieve distances up to 100 km!
A company that needs to send data over longer distances does not actually own the line or cable;
rather, it leases it from the company that actually owns it, and that’s why it is called a leased line.
The company that owns, manages, and installs such long links, or circuits, has the right-of-way to
run cables under streets, highways, rivers etc. The generic term service provider is used to refer
to a company that provides leased lines for WAN connectivity.
Point-To-Point WANs: Layer 1
The OSI Layer 1, or physical layer, defines the specifics of moving data from one device to
another over a medium. No matter what type of data is sent, eventually the sender of data needs to
actually transmit the bits to another device in the form of physical signals or waveforms. The OSI
physical layer defines the standards and protocols used to make the physical transmission of bits
across a network possible.
Point-to-point WAN links provide basic connectivity between two sites, as shown in Figure 12-1.
In order to get a point-to-point connectivity, you would engage a service provider to install a
circuit. The service provider would provision a point-to-point link or circuit and also install
devices at both ends of the circuit. This kind of point-to-point WAN connection is also called
a leased circuit or leased line because it is always available and you have the exclusive right to
use it as long as you keep paying for it.
Figure 12-1 Components of a Point-to-Point WAN Link
The technologies used by the service provider to build its network to support your point-to-point
WAN link are complex. Fortunately, you don’t need to spend time studying and learning those
technologies as they are outside the scope of your CCNA exam. You can conceptually view the
point-to-point WAN link as if the two routers R1 and R2 are connected back-to-back. Most of the
time, all you are concerned with is the type of interface provided by the CSU/DSU that connects to
your router and the speed of the leased circuit. This simplified view of a point-to-point WAN link
serves you well for your CCNA exam as well as real-life work as a network engineer. However,
you will be introduced to a few basic concepts and terms related to service providers in the
coming paragraphs.
Typically, the router connects to a device called a channel service unit/data service unit
(CSU/DSU). The CSU/DSU is usually a standalone unit installed and maintained by the service
provider and looks somewhat like an external dial-up modem. The CSU/DSU is usually placed in
the same rack as the router and connects to the router with a relatively short cable, typically less
than 50 feet long. A much longer cable runs from the central office (CO) to the customer
premises and plugs into the CSU/DSU; this cable connects the CSU/DSU to the telco switch in the
nearest CO and can be several kilometers long. Older leased line technologies used four wires or
two pairs of wires, but modern technologies allow the use of a single pair just like a telephone
line. Sometimes the CSU/DSU is integrated into the router and the leased line terminates directly
at an interface on the router.
The router and CSU/DSU typically are two separate physical devices. However, Cisco also
manufactures WAN interface cards (WICs) with integrated CSU/DSU, eliminating the need for a
separate CSU/DSU. An example is the WIC-1DSU-T1 (and WIC-1DSU-T1-V2) which is a
CSU/DSU WAN interface card by Cisco for T1 or fractional T1 service. The WIC-1DSU-T1
installed in a Cisco router provides a simple and fully integrated solution from a single vendor.
The WAN connectivity is provided through the standard RJ-45 interface connector on the card.
The configuration is performed via the familiar Cisco IOS CLI and there is no need to learn the
command syntax of an external CSU/DSU from another vendor. It obviates the external CSU/DSU
affording ease of deployment, configuration, and management.
The term customer-premises equipment (CPE) is commonly used by telcos to refer to the
equipment installed at the customer site. For example, the LAN switch, router, and CSU/DSU are
classified as CPE in Figure 12-1.
From a legal perspective, two different companies own the various components of the equipment
and lines in Figure 12-1. For instance, the router along with the cable connecting the router to
CSU/DSU is typically owned by the customer. The CSU/DSU, the wiring from CSU/DSU to the
CO and the gear inside the CO are all owned by the telco. The telco uses the term demarcation
point or demarc to refer to the point at which the telco’s responsibility is on one side and the
customer’s responsibility is on the other. The demarc is not a separate device, but rather a
concept of where the responsibilities of the telco and customer separate. There may be different
ways to establish the demarc in different countries by different service providers. In some cases,
the telco also owns and manages the router in addition to the CSU/DSU. In some cases, the
CSU/DSU and router are both owned by the customer. The demarc point shifts according to the
specific ownership terms of a scenario. The term CPE still refers to the equipment at the
customer’s location regardless of ownership.
WAN Interfaces on Cisco Routers
Cisco offers a variety of different WAN interface cards (WICs) for its routers, including
synchronous and asynchronous serial interfaces. For HDLC, PPP, or Frame Relay links in this
chapter, the router always uses an interface that supports synchronous serial communication.
As we discussed in the last section, leased circuits or lines are used to build point-to-point WAN
links between routers. Typically synchronous serial interfaces in Cisco routers are used to
connect to the CSU/DSU. The cable connecting the router to the CSU/DSU uses a connector that
fits the router serial interface on the router side and a standardized WAN connector type that
matches the CSU/DSU interface on the CSU/DSU end of the cable.
Figure 12-2 Cisco Serial Connectors
As a network engineer, you have to choose the right cable based on the connectors on the router
and the CSU/DSU. Beyond that you usually do not have to think about pinouts or other
considerations. Once you choose the right cable and secure the connection, it just works.
Point-To-Point WANs: Layer 2
We will discuss two point-to-point WAN protocols available on the serial interfaces of Cisco
routers, namely High-Level Data Link Control (HDLC) and the Point-to-Point Protocol (PPP). The
two protocols are inter-related, though PPP is significantly more feature-rich and advanced than HDLC.
Table 12-1 Encapsulation Chart
Encapsulation    Leased Line    Circuit Switched    Packet Switched
HDLC             Yes            Yes                 No
PPP              Yes            Yes                 No
Frame Relay      No             No                  Yes
We will explain how to configure leased lines between two routers, using both HDLC and PPP.
HDLC Concepts
High-Level Data Link Control (HDLC) is a simple data link protocol that performs a few basic
functions on point-to-point serial links. The standard HDLC frame does not have a protocol type
field to identify the type of packet carried inside the HDLC frame. The HDLC trailer has a Frame
Check Sequence (FCS) field that allows the receiving router to decide if the frame had errors in
transit and discard the frame if needed.
Key Concept High-Level Data Link Control (HDLC) is the default encapsulation on
serial interfaces of Cisco routers.
The absence of a protocol type field in the HDLC header posed a problem for links that carried
traffic from more than one Layer 3 protocol. Cisco, therefore, added an extra Type field to the
HDLC header, creating a Cisco-specific version of HDLC. The frame format of this Cisco
version of HDLC is shown in Figure 12-3, and it is this HDLC frame that is found on all HDLC
serial links connecting Cisco routers. Cisco routers can support multiple network layer protocols on the
same HDLC link. For example an HDLC link between two Cisco routers can forward both IPv4
and IPv6 packets because the Type field can identify which type of packet is carried inside each
HDLC frame.
Figure 12-3 HDLC Framing
The Address and Control fields do not have much work to do these days. For example, only two
routers are connected to each other on a point-to-point serial link. When a router sends a frame it
is obvious that the frame is destined for the only other router on the link. You may be wondering
why HDLC has an Address field at all. In years past, telcos offered multidrop circuits which
included more than two devices, with more than one possible destination, requiring
an Address field to identify the correct destination. Both the Address and Control fields had
important roles in those days, but today they are not important.
HDLC Configuration
Cisco IOS Software uses HDLC as the data link protocol, by default, on serial interfaces. In order
to establish a functional point-to-point leased line connection between two routers, you first need
to order a leased line. Once the leased line is provisioned, you need to complete the required
cabling between routers at the two ends and CSU/DSUs. In addition to that, you just need to
configure IP addresses and probably a no shutdown command if the interface is administratively
shut down. The point-to-point WAN connection would become functional with HDLC as the Layer
2 protocol.
However, many optional commands exist for serial links, and we will configure a point-to-point
serial link between two routers as shown in Figure 12-4, exploring some of those commands.
Figure 12-4 HDLC Configuration
First of all, let’s configure the interface IP address on R1 using the ip address command in
interface configuration mode.
If an encapsulation command already exists on the interface, for a non-HDLC protocol, we will
have to enable HDLC using the encapsulation hdlc command in interface configuration mode.
Alternatively, you can make the interface revert back to its default encapsulation by using
either the no encapsulation or the default encapsulation command to disable the currently enabled
protocol.
If the line status of the interface is administratively down, you must enable the interface using
the no shutdown command. That sort of concludes our configuration. However, there are some
optional commands that do not have any impact on whether our HDLC link works or not. It is
always a good practice to configure a description of the purpose of the interface using
the description command in interface configuration mode. You can also configure the speed of the
link using the bandwidth command, which takes its parameter in kbps. The bandwidth command
does not set the actual bandwidth of the link which is rather determined by clocking provided by
the CSU/DSU. However it is good practice to set the bandwidth equal to the actual speed of the
link. Let’s now go ahead and actually configure R1.
R1>enable
R1#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
R1(config)#interface Serial1/0
R1(config-if)#ip address 172.16.32.1 255.255.255.252
R1(config-if)#encapsulation hdlc
R1(config-if)#bandwidth 64
R1(config-if)#description Serial link to R2 with HDLC encapsulation
R1(config-if)#no shutdown
R1(config-if)#end
R1#
R2 will have a similar configuration.
R2>enable
R2#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
R2(config)#interface Serial0/0
R2(config-if)#ip address 172.16.32.2 255.255.255.252
R2(config-if)#encapsulation hdlc
R2(config-if)#bandwidth 64
R2(config-if)#description Serial link to R1 with HDLC encapsulation
R2(config-if)#no shutdown
R2(config-if)#end
R2#
Let’s now verify if the link is operational.
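A quick check is to ping the serial interface of R2 from R1; representative output (round-trip times illustrative) might look like this:
R1#ping 172.16.32.2
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 172.16.32.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 8/11/16 ms
The show interfaces Serial1/0 command would additionally confirm that the line protocol is up and that the encapsulation is HDLC.
PPP Concepts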
PPP is a versatile protocol and supports both synchronous and asynchronous links. The protocol
Type field in the header allows multiple Layer 3 protocols to be carried over the same PPP link.
PPP also supports authentication and two mechanisms are available for this purpose: Password
Authentication Protocol (PAP) and Challenge Handshake Authentication Protocol (CHAP). PPP
has control protocols for each higher-layer protocol supported by PPP, allowing easier
integration and support for those protocols.
PPP Components
Even though PPP framing is pretty similar to HDLC, PPP defines a set of Layer 2 control
protocols that perform various link control functions. These control protocols of PPP are
separated into two categories:
Link Control Protocol (LCP): It has several functions related to the data link itself
ignoring the Layer 3 protocol encapsulated by PPP.
Network Control Protocol (NCP): There is one protocol of this category for each
network layer protocol. Each protocol performs functions specific to its related Layer 3
protocol.
Key Concept PPP has two components: LCP, which is responsible for establishing, configuring,
maintaining, and terminating the connection, and NCP, which is specific to the Layer 3 protocol
encapsulated by PPP.
The Link Control Protocol (LCP) implements all those control functions that work regardless of
the Layer 3 protocol encapsulated by PPP. All functions specific to a Layer 3 protocol are
performed by the Network Control Protocol (NCP) specific to the related protocol, such as IP
Control Protocol (IPCP) for the Internet Protocol (IP). PPP uses a single instance of LCP for a
PPP link while one NCP instance is used for each Layer 3 protocol defined on the link. For
example, a PPP link that uses IPv4, IPv6, and Cisco Discovery Protocol (CDP) will use one
instance of LCP plus IPCP for IPv4, IPv6CP for IPv6, and CDPCP for CDP.
Table 12-2 Functions of Link Control Protocol (LCP)
Function                   LCP Feature                      Description
Detection of looped link   Magic number                     Disables the router interface if a looped link is detected so that rerouting takes place over a working route.
Error detection            Link-quality monitoring (LQM)    Disables a router interface that exceeds a certain error percentage threshold, and allows rerouting over better routes.
Authentication             PAP and CHAP                     Exchanges names and passwords so that each device can verify the identity of the device at the other end of the link.
Bundling multiple links    Multilink PPP                    Multiple parallel PPP links are bundled together to expand available bandwidth by load balancing traffic over those links.
Authentication
In the field of networking, authentication is a mechanism used to verify the identity of another
device. This identity verification is needed to confirm that the other device is legitimate and not
someone only appearing to be an authentic device in order to cause damage or steal information.
For example, if R1 and R2 are to form a serial link using PPP, R1 may want R2 to somehow
prove that it really is R2. This scenario is where R1 is authenticating R2, or in other words,
asking R2 to prove its identity.
PPP is used over both synchronous leased lines and asynchronous dial lines, and configuration of
authentication remains the same for both cases. PPP defines two authentication protocols:
Password Authentication Protocol (PAP) and Challenge Handshake Authentication Protocol
(CHAP). Both protocols involve exchanges of messages between the two PPP speaking devices,
but there are differences in detail. With PAP, the device to be authenticated starts the message
exchange by sending a clear-text password, claiming to be legitimate. The device at the other end
of the PPP link compares the password with its own password and, if the password is correct,
sends back an acknowledgement. The authentication process is one-way, and one or both devices
can authenticate the other separately. PAP is simple in operation as well as configuration but it is
insecure because the password is sent in clear text and can be sniffed.
Challenge Handshake Authentication Protocol (CHAP) is a much more secure option than PAP and
the password is never sent in clear text with CHAP. CHAP verifies the identity of the PPP peer by
means of a three-way handshake. The general steps performed are:
1. After Link Control Protocol (LCP) phase is complete and CHAP is negotiated between the
two devices, the authenticator sends a challenge message to the PPP peer.
2. The peer responds with a value calculated through a one-way hash function, Message Digest 5
(MD5).
3. The authenticator calculates its own hash value and compares the received response
against it. If the values match, the authentication is considered successful. Otherwise the
connection is terminated.
CHAP is a one-way authentication method, which means it involves an authenticator
authenticating its peer. In practice, both peers are configured to authenticate each other and two
separate three-way handshakes take place.
PAP is much less secure because PAP sends both the hostname and password in clear text inside a
message. These values can be easily read if someone places a tracing tool in the circuit to sniff
data. CHAP uses a one-way hash algorithm, known as MD5, with the inputs to the algorithm being
a shared random number and a password that is used locally to compute the hash and never
crosses the link.
PPP Phases: LCP, Authentication, and NCP
LCP negotiation is a PPP phase in which parameters are negotiated for establishing, configuring,
and testing the data-link connection. During LCP negotiation, the two routers agree whether to use
PAP or CHAP for authentication or whether to use authentication at all or not. The LCP
negotiation also uses a parameter called MagicNumber, which is used to determine if the link is
looped back. A random text string is sent across the link and, if the same value is received back,
the router knows that the link is looped. An LCP state of open means that LCP was successfully
completed, while an LCP state of closed indicates an LCP failure.
The authentication phase is optional as PPP authentication is not mandatory. The authentication
protocol agreed upon in the LCP negotiation (PAP or CHAP) is used to perform authentication in
this phase.
The mandatory NCP phase is used to establish and configure different network-layer protocols.
The most common network layer protocol is the Internet Protocol (IP). You know that there is a
specific NCP for each network layer protocol supported and the one for IP is IP Control Protocol
(IPCP). The two routers exchange IPCP messages to negotiate options specific to the network
layer protocol, that is, IP. IPCP negotiation can be used for IP address assignment to the peer.
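On a working PPP link these phases are reflected in the show interfaces output. A representative excerpt (most lines omitted) might look like this:
R1#show interfaces Serial1/0
Serial1/0 is up, line protocol is up
  Encapsulation PPP, LCP Open
  Open: IPCP, CDPCP, loopback not set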
PPP Configuration
Point-to-Point Protocol configuration is rather straightforward if you do not configure
authentication. Keep in mind that PPP authentication is optional and a link can come up just fine
without it. In fact, the only change here as compared with the HDLC
configuration earlier is that you have to use the encapsulation ppp command in interface
configuration mode. Several other link parameters can also be configured
like bandwidth and description of the interface. You may consider enabling the interface as well
using the no shutdown command.
We will perform a simple PPP configuration using the two routers shown in Figure 12-6, the same
internetwork used for the HDLC configuration.
Figure 12-6 PPP Configuration
Let’s now configure R1 and R2 to establish a point-to-point serial link using PPP as the Layer 2
protocol.
R1>enable
R1#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
R1(config)#interface Serial1/0
R1(config-if)#ip address 172.16.32.1 255.255.255.252
R1(config-if)#encapsulation ppp
R1(config-if)#no shutdown
R1(config-if)#end
R1#
R2>
R2>enable
R2#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
R2(config)#interface Serial0/0
R2(config-if)#ip address 172.16.32.2 255.255.255.252
R2(config-if)#encapsulation ppp
R2(config-if)#no shutdown
R2(config-if)#end
R2#
All we have done, apart from configuring an IP address, is to configure PPP as the encapsulation
method using the encapsulation ppp command. That’s all we need to successfully establish a
PPP serial link without authentication. The lack of any authentication-related configuration
does not actually prevent the link from becoming fully operational.
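Though not required for the link to come up, CHAP authentication can be layered on top of this configuration. The following is a minimal sketch assuming both routers share the password MySecret (the password itself is an assumption, not part of the original lab); each router defines a username that matches the peer’s hostname:
R1(config)#username R2 password MySecret
R1(config)#interface Serial1/0
R1(config-if)#ppp authentication chap

R2(config)#username R1 password MySecret
R2(config)#interface Serial0/0
R2(config-if)#ppp authentication chap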
Let’s verify that the PPP link has been established. The show interfaces command on R1 would report the line protocol as up with Encapsulation PPP; the show users command also lists the PPP peer:
R1#show users
Line User Host(s) Idle Location
* 0 con 0 idle 00:00:00
Interface User Mode Idle Peer Address
Se1/0 R2 Sync PPP 00:00:05 172.16.32.2
Troubleshooting Serial Links
In a perfect world, you configure a point-to-point link for HDLC or PPP and it just works.
However, you may quite often find yourself in a situation where the link fails to come up while you
strongly believe you configured everything right. In this section we will briefly discuss how to
isolate and fix problems on point-to-point WAN links.
A simple ping command is a good way to determine if a serial link configured with HDLC or PPP
can or cannot forward IP packets. If you are able to successfully ping the IP address on the serial
interface of the router at the other end of the link, it is enough proof that the link works.
If the ping does not work, you have a reason to worry. The problem may be related to functions at
Layers 1, 2, or 3 of the OSI reference model. The best way to isolate the problem to one of the
OSI layers is to use the show ip interface brief command and examine the line and protocol
status.
Table 12-3 Interface Status and Problematic Layer
Line Status              Protocol Status    Problematic Layer
Administratively down    Down               Interface is shutdown
Down                     Down               Layer 1
Up                       Down               Layer 2
Up                       Up                 Layer 3
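For example, a representative show ip interface brief output (interface names and the address reused from the HDLC example earlier, status values illustrative) might look like this:
R1#show ip interface brief
Interface                  IP-Address      OK? Method Status                Protocol
FastEthernet0/0            unassigned      YES unset  administratively down down
Serial1/0                  172.16.32.1     YES manual up                    up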
Once you have identified the problematic layer, you know where to look. We will now introduce a
few common problems you may face on point-to-point serial links.
Keepalive Failure
The keepalive feature requires routers to send keepalive messages to each other, every 10
seconds by default. Keepalive messages are treated as ordinary packets, and they exist for both
HDLC and PPP. The HDLC keepalive message is Cisco proprietary, whereas PPP defines a
keepalive message as part of Link Control Protocol (LCP).
The keepalive feature enables a router to notice a dysfunctional link. A router expects to receive
regular keepalives from its neighboring router over an HDLC or PPP link. If a router does not
receive any keepalive messages from the other router for 5 keepalive intervals (by default), the
router brings down the interface, believing the router on the other end of the link is no longer
working. This allows the routing protocol to converge and use other valid routes if they exist.
You can change the keepalive interval from the default of 10 seconds using
the keepalive command in interface configuration mode. It is possible to speed up failed link
detection by reducing the keepalive interval. But this strategy is not useful in all situations. For
example, a typical failure of a serial link involves losing the Carrier Detect (CD) signal. This sort
of failure is detected very quickly, within a few milliseconds. Reducing the keepalive interval
cannot speed things up in this case. In most cases, the default keepalive interval is used.
You can disable keepalives using the no keepalive command in interface configuration mode.
However, either both routers should use keepalives, or both should disable them. If there is a
mistake in which one end leaves keepalives enabled while the other end disables keepalives, the
link is bound to fail. This mistake only breaks HDLC links; the PPP keepalive feature can prevent
the problem.
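For reference, both adjustments are made in interface configuration mode; the 5-second value below is just an illustration:
R1(config)#interface Serial1/0
R1(config-if)#keepalive 5
Or, to disable keepalives (remembering to do the same on the router at the other end of the link):
R1(config-if)#no keepalive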
Frame Relay
Frame Relay was a very popular WAN technology in the past and is still used today to some extent.
However, it is safe to say that it is being replaced by competing technologies like Ethernet WAN,
Multi-Protocol Label Switching (MPLS), and Virtual Private Network (VPN).
VPN technology has matured to a level where it is believed to provide the same level of security
and confidentiality afforded by private WANs, using the Internet as transport medium. It is much
cheaper to deploy VPNs over the Internet than private WANs.
The service model of MPLS is the same as that of Frame Relay. However, MPLS enables service
providers to offer richer services and affords many technical advantages. MPLS is the technology
of choice over Frame Relay for private WAN deployments today. Frame Relay is far from dead
though due to the large existing installed base and its simplicity for point-to-point WANs
connecting the branch office to corporate headquarters. Frame Relay is also used in combination
with MPLS to provide Layer 2 circuits to the nearest MPLS point of presence (POP). Therefore,
despite all you have heard about Frame Relay being obsolete, it will continue to be an
important networking topic for some time at least.
Packet Switching versus Circuit Switching
WAN technologies can usually be categorized as either circuit-switching or packet-switching. An
electrical circuit is a system of conductors (wires) forming a complete path around which a
current can flow. The original telephone systems actually created an electrical circuit between
two phones in order to carry the voice signal. The leased lines used for carrying data are also
circuits, providing the ability to transfer bits as signals between two end points. In
telecommunications terminology today, a circuit refers to the physical path between two end
points providing the ability to send voice or data from one end point to the other.
Packet switching, as a technology, is more complex than circuit switching. The devices involved
in packet switching have to do more than simply passing bits as signals from one end point to
another. The devices in the service provider’s network have to be intelligent for packet switching.
This is in contrast to circuit switching where devices in the service provider’s network simply
have to carry signals without making sense of them. With packet switching, the devices read the
bits sent by customers interpreting usually some form of address field in the packet header. The
address field in the packet header is used by the devices to make choices, switching one packet to
go in one direction and the next packet to go possibly in another direction to another device.
Circuit switching is an old and expensive technology, and it is what the traditional telephone
network known as the public switched telephone network (PSTN) uses. Packet switching is more
modern and may eventually replace circuit switching completely. Meanwhile we have to live in a
world that is a hybrid of the two technologies.
We will now cover Frame Relay thoroughly describing terminology, protocol details, and
configuration.
Frame Relay Concepts
Frame Relay is a more complex technology than point-to-point WAN links but it also provides more
features and benefits. Frame Relay networks are multiaccess networks, which means that more
than two devices can connect to the network. This is similar to LANs where more than two
devices can attach to the same network and any two devices can communicate directly. However,
unlike LANs, you cannot send a broadcast at the data link layer over Frame Relay. Therefore, Frame
Relay networks are termed non-broadcast multiaccess (NBMA) networks.
Figure 12-7 presents a Frame Relay topology showing its most basic components.
Figure 12-7 Frame Relay Components
A Frame Relay network is made up of a large number of Frame Relay switches dispersed all over
the coverage area of a Frame Relay service provider. This coverage area may span a country,
region, or even the whole world. The switches are interconnected in a complex mesh topology.
Some Frame Relay switches also terminate user circuits, in addition to connecting to other
switches, and are called access switches. Other Frame Relay switches do not terminate user
circuits, connecting to other Frame Relay switches only, and make up the backbone of the Frame
Relay network.
A leased line is installed between the router at a customer site and the nearest Frame Relay
switch. This leased line is called the access link. In the context of Frame Relay, the router is the
data terminal equipment (DTE) while the Frame Relay switch is the data circuit-terminating
equipment (DCE). To ensure that the link is working, the DTE and DCE exchange regular messages
with each other. These keepalive messages, along with other messages, are defined by the Frame
Relay Local Management Interface (LMI) protocol. Please keep in mind that the terms DTE and
DCE have different meanings in different contexts and the terms here are used in the context of
Frame Relay.
The physical connectivity from a Frame Relay DTE router to the Frame Relay network is the
access link. However, the end goal is to provide end-to-end connectivity between two DTE
routers across the Frame Relay cloud. The logical end-to-end communications path between two
DTE devices is known as a virtual circuit (VC). The provisioning of virtual circuits is the
responsibility of the service provider, and these predefined virtual circuits are also known as
permanent virtual circuits (PVC). Frame Relay routers use the data link connection identifier
(DLCI) as the Frame Relay address. DLCI identifies the VC over which the frame should travel.
Key Concept Committed Information Rate (CIR) is the average rate, in bits per second,
at which the Frame Relay switch agrees to transfer data for a customer.
Let’s now formally define some important Frame Relay terms before moving forward:
Virtual circuit (VC) is a logical communications path that is used by frames travelling
between DTEs.
Permanent virtual circuit (PVC) is a permanently defined virtual circuit. PVC is
analogous to a point-to-point leased line in concept.
Switched virtual circuit (SVC) is set up dynamically when needed. An SVC is analogous
to a dial-up connection in concept.
Data terminal equipment (DTE) is a networking device like a router used by a customer
to connect to the Frame Relay network of a service provider. The DTE typically resides at
the customer site and is frequently referred to as customer premises equipment (CPE).
Data circuit-terminating equipment (DCE) are the Frame Relay access switches that
terminate customer access links and reside in the service provider network. The term DCE
is also considered to mean data communication equipment by many.
Access link is the leased line between the DTE (router) and DCE (Frame Relay switch).
Access rate (AR) is the speed at which the access link is clocked. The access rate does
not necessarily have to match the CIR. However in order to fully utilize the CIR, the
access rate must be equal to or higher than the CIR.
Committed information rate (CIR) is the speed at which the bits can be sent over a VC,
according to the service contract between the Frame Relay service provider and its
customer.
Data link connection identifier (DLCI) is a Frame Relay address present in the header of
every Frame Relay frame. DLCI is significant over a single hop only and different DLCI
values may be used on different hops along a VC for the same packet.
Non-broadcast multi-access (NBMA) is a network on which broadcasts are not
supported but more than two devices can be connected to the same network.
Local Management Interface (LMI) is the protocol used between a DCE and DTE to
manage the connection. LMI involves messages to establish SVCs, status messages for
PVCs, and keepalives to mention a few.
Virtual Circuits
Frame Relay is a cost-effective alternative to point-to-point leased lines for building enterprise WANs.
In the absence of Frame Relay, enterprises wishing to connect offices worldwide would have to
lease very expensive international leased circuits to connect LANs through routers. A Frame
Relay network is owned by a service provider offering services to companies that want to
connect their locations to each other. Frame Relay virtual circuits act like point-to-point leased
lines for the customer while providing significant cost benefits as compared with leased lines.
Figure 12-8 Frame Relay Virtual Circuit (VC)
A virtual circuit (VC) spans the access links at the two ends as well as the Frame Relay network.
For example, you can see two VCs in figure 12-8, one between R1 and R3 and the other between
R2 and R3. Bold and grayed dashed lines have been used to represent VCs. You should keep in
mind that the Frame Relay network is owned and operated by a service provider and is shared by
many customers of the same service provider. Yet virtual circuits provisioned by the service
provider for a certain customer create the illusion of a point-to-point dedicated circuit. Also, the
traffic from different customers is kept separate, and Frame Relay networks built around this model
are considered sufficiently secure.
Originally, when the world was moving from expensive private leased lines to the co-operative
model of Frame Relay, customers were concerned about bandwidth because of the contention
within the Frame Relay cloud with other customers for available capacity. In order to address
these concerns, Frame Relay uses a concept of committed information rate (CIR). Each VC has a
CIR, which is a guarantee by the provider that a particular VC would get that much bandwidth. So
you can migrate from a private leased line to Frame Relay with a CIR equal to the leased line
bandwidth.
The Frame Relay service model requires one access link from each site to the Frame Relay service
provider, regardless of the number of sites to be interconnected. This is not the case if you want to
build a WAN using private leased lines. In that case you would need N*(N-1)/2 leased lines for a
full mesh, where N is the number of sites you are trying to connect. For example, if you have 3 sites
you would require 3*(3-1)/2 = 3 leased lines, while for 10 sites the number of leased lines required
steps up to 10*(10-1)/2 = 45 leased lines. This solution simply does not scale to large deployments.
Though the access links required to connect a site to the nearest Frame Relay point of presence
(POP) are still private leased lines, they are shorter and fewer.
When a Frame Relay network is designed, there may not be a VC between every pair of sites. If
there is a PVC between every pair of sites, it is called a full-mesh topology. When not all pairs of sites
have a direct PVC, it is called a partial-mesh topology. In most practical scenarios, partial mesh
is used as not all customer sites typically need to connect to all other sites. For example, global
enterprises typically use a star topology, which is a special case of partial-mesh topology. In a
star topology a large number of remote branch offices are connected to the data center to access
the resources including data and applications.
LMI & Encapsulation Types
While the PVC is a point-to-point logical path between two customer routers, there are many
physical and logical components that work together to create the illusion of a single logical path.
Each router needs a physical access link from the router to the nearest Frame Relay switch. The
provider needs to have some kind of physical network between those switches as well. In
addition, the provider has to somehow provision those virtual circuits in order to make sure
frames sent from one end of a VC arrive at the correct destination.
Frame Relay uses the Local Management Interface (LMI) protocol to manage each physical
access link and the PVCs that use that link. The basic Frame Relay protocol format used for
carrying user data frames is also used to carry LMI messages. However, LMI messages are sent
in frames distinguished by a special LMI-specific DLCI, usually set to 1023.
Two LMI message types have been defined that flow between the router acting as DTE and the
Frame Relay switch acting as DCE. The Status-enquiry messages are sent from the router to the
switch and allow the router to ask about the status of the network. The Status messages are sent from
the switch to the router responding to status-enquiry messages. In fact, the Frame Relay switch
sends two types of messages: a status message every 10 seconds and, every 60 seconds, a full status
message in place of the regular status message. The full status message contains all the information
about known DLCIs and their state. The LMI status-enquiry messages are sent every 10 seconds from
the router to the switch, while the switch responds with a status message. These periodic LMI
messages also serve as keepalives for both the router and the switch. LMI status messages act as
a keepalive between the DTE and DCE. If the access link is having a problem, these keepalives
will be missed and the link problem will be detected. In addition to performing a keepalive function
between the DTE and DCE, LMI status messages also signal if a PVC is active or inactive. Every
PVC is predefined by the Frame Relay service provider, but its status can change due to network
conditions like failure of trunk links in the provider network. An access link may be up and
running and keepalives may be present but one or more VCs may still be down. The reason is that
a VC is an end-to-end logical connection that involves not only the access links at the two ends
but also spans the core of the provider network. The router needs to know which VCs are
functional and which are not. The router learns this information as well from the Frame Relay
switch through LMI status messages.
In addition to the common features like keepalives, LMI has several optional features defined as
LMI extensions. We will briefly introduce two LMI extensions related to global addressing and
multicasting. The basic Frame Relay specification supports DLCI values that are only locally
significant. For example, the DLCI value used on the access link between a router and Frame
Relay switch is significant only on the access link and does not in any way identify the router
globally. This DLCI value cannot serve as an address for the router due to its local significance.
In other words, globally unique Frame Relay addresses do not exist and hence cannot be discovered
by the usual address resolution methods. Therefore, static maps must be created to tell a router which DLCI to
use to reach a remote router. The global addressing extension solves this problem by allowing
DLCI values that are globally significant and hence can serve as addresses of individual end
routers. The Frame Relay network with global addressing looks much like a LAN to the end
routers that can use global addresses (DLCIs) as Frame Relay addresses similar to MAC
addresses used in a LAN.
The multicasting extension defines multicasting as another optional LMI feature. There is a series
of four reserved DLCI values (1019 to 1022) that represent multicast groups. The frames sent by
a device using one of these reserved DLCIs are replicated by the network and sent to all
destinations in the group. The LMI extension for multicasting also defines LMI messages to notify
devices of the presence, addition, and deletion of multicast groups.
Cisco routers have three options for different variations of LMI protocols: Cisco, ITU, and ANSI.
These LMI options have their differences and are incompatible with each other. For LMI to work
correctly, both the DTE and DCE devices across an access link must use the same LMI type.
LMI configuration is pretty straightforward. Most of the time, we are good with the default LMI
setting. This default setting uses something known as LMI autosense, in which the router simply
figures out on its own which LMI type the Frame Relay switch is using. You can just let the router
autosense the LMI and never bother manually configuring it on the router. However, if you choose
to configure the LMI type manually, it will automatically disable the autosense feature. Table 12-4
lists the three LMI types, the standard document, and the keyword used in the Cisco IOS
Software frame-relay lmi-type interface configuration mode command.
Table 12-4 LMI Types
LMI Type    Standard Document    Cisco IOS Keyword
Cisco       Proprietary          cisco
ANSI        T1.617 Annex D       ansi
ITU         Q.933 Annex A        q933a
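If you do need to set the LMI type manually, it is a single interface subcommand. The following is a minimal sketch, assuming a hypothetical router R1 whose provider uses the ANSI LMI type on interface Serial0/0:
R1(config)# interface Serial0/0
R1(config-if)# frame-relay lmi-type ansi
Remember that entering this command disables LMI autosense on the interface.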
Frame Relay Congestion Control
There are three flag bits inside the Frame Relay header that can be used to control what goes on
inside the Frame Relay network. Imagine a situation when one (or more) Frame Relay
sites/routers use an access link that is clocked higher than the CIR of a VC. In such a situation, the
router can send more data to the Frame Relay switch at the edge of the provider network than
what is allowed by the contracted rate or CIR between the customer and service provider. The
three bits in the Frame Relay header can influence how the switches control the network when the
network gets congested due to speed mismatches. These bits are:
Forward Explicit Congestion Notification (FECN)
Backward Explicit Congestion Notification (BECN)
Discard Eligibility (DE)
Figure 12-9 Operation of FECN and BECN
The FECN bit can be set by a router as well as a Frame Relay switch to indicate that the frame
itself has experienced congestion. In other words, FECN indicates that congestion exists in the
direction in which the frame is travelling. Keep in mind that network congestion can be
unidirectional. In other words, the network can become congested in one direction while not
being congested at all in the other direction. Referring to Figure 12-9, router R1 sends a frame out
to the switch with both FECN and BECN set to zero, shown as Step 1. The switch on the left
experiences congestion left to right, and sets the FECN bit to 1 before sending the frame out,
shown as Step 2. But what’s the point of all this? The goal is to somehow make R1 reduce the
speed at which it is sending frames in view of the congestion. But R1 needs to be informed of
network congestion before it can think of slowing down. The Frame Relay switch on the left,
knowing that it set FECN in Step 2, can now set the BECN bit in the next frame going right to left
toward R1 on that same VC, shown as Step 3. When R1 receives a frame with BECN set, it
knows that congestion occurred in the opposite direction. In other words, the BECN bit set in a
frame received by R1 says that congestion occurred for the frames sent by R1 on the same VC (to
R2). R1 can then decide to slow down a bit (it's a choice, not a compulsion, for R1).
The IOS feature used by R1 to slow down is known as Traffic Shaping. It essentially makes R1
send some packets, wait a while, send some more packets, wait again, and so on. If the router
keeps sending non-stop, it would be sending frames at the access rate or the clock rate of the
access link. Through the wait periods introduced by Traffic Shaping, the router effectively sends at a
rate lower than the access rate. We can configure Traffic Shaping with the appropriate parameters
to even make the router send exactly at the CIR when the access rate is higher than the CIR.
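As a hedged sketch of what this can look like on a Cisco router, using the legacy Frame Relay Traffic Shaping commands (the map-class name SHAPE-64K, the 64000 bps CIR, and the interface are hypothetical):
R1(config)# map-class frame-relay SHAPE-64K
R1(config-map-class)# frame-relay cir 64000
R1(config-map-class)# frame-relay adaptive-shaping becn
R1(config-map-class)# exit
R1(config)# interface Serial0/0
R1(config-if)# frame-relay traffic-shaping
R1(config-if)# frame-relay class SHAPE-64K
The frame-relay adaptive-shaping becn command makes the router throttle back toward the configured CIR when it receives frames with the BECN bit set.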
Finally, the Discard Eligibility (DE) bit allows the provider to selectively discard frames in
which the DE bit is set at times of congestion. Frame Relay service providers usually build their
networks to handle traffic loads that far exceed the collective CIRs of all VCs. As a result,
customers may be allowed to send data at rates higher than the CIR. However, if one or more
customers start sending data that far exceeds their contracted CIR, the provider can rightfully
discard some traffic sent by those customers. The provider can set the DE bit on some frames
received by such a customer that exceed the CIR. The marked frames are not discarded when
there is no congestion. However when network congestion does happen, these DE marked frames
are the first to be dropped. When the switch marks some of the frames received by a customer, it
would normally do it indiscriminately. Some high priority frames sent by a customer may get
marked and dropped ahead of some low priority frames. The customer may also want to set the
DE bit in some frames, such as for less important traffic. The customer can ensure that the more
important traffic gets through the Frame Relay network, even when the provider has to discard
traffic. When the provider’s network is not so congested, the customer can pump a lot of extra
data through the network without its being discarded.
Frame Relay Encapsulation
Frame Relay is a data link protocol and the customer router encapsulates each Layer 3 packet
inside a Frame Relay frame comprising a header and trailer before it is sent out the access link.
The header and trailer used is actually defined by the Link Access Procedure Frame Bearer
Services (LAPF) specification, ITU Q.922-A. That was quite a mouthful, but the LAPF framing,
shown in Figure 12-10, provides important functionality including error detection with the FCS in
the trailer and a DLCI field along with a few other fields in the header.
Figure 12-10 LAPF Framing
The standard LAPF header is too simplistic and does not provide all the fields needed by Frame
Relay routers. More specifically, there is no Protocol Type field in LAPF. Each data link layer
needs such a field to define the type of Layer 3 packet carried by the data link frame. If Frame
Relay uses only the LAPF header, routers cannot support multiprotocol traffic because there is no
way to identify the type of Layer 3 protocol.
The simple LAPF header was extended to compensate for the absence of a Protocol Type field:
Cisco created a proprietary additional header, which appears between the LAPF header
and the Layer 3 packet, as shown in Figure 12-11. It includes a separate 2-byte Protocol Type
field with values exactly matching the ones used in the same field Cisco uses for HDLC,
as discussed earlier in the chapter.
Internet Engineering Task Force (IETF) defined the second solution via RFC standards
1490 and later 2427. This solution is known as Multiprotocol Interconnect over Frame
Relay and it defines a header similar to the Cisco proprietary solution placed between the
LAPF header and Layer 3 packet. The additional header includes a Protocol Type field as
well as several other options.
Key Concept Frame Relay encapsulation has two types: Cisco, which is proprietary and
the default on Cisco routers, and IETF, which is standards based. Cisco encapsulation can be used
when all routers are Cisco, while IETF can be used in a multi-vendor environment.
You should keep in mind that Frame Relay encapsulation should match on the routers at the two
ends of a VC. If you fail to match the Frame Relay encapsulation (both sides cisco or both ietf) on
the two routers, the connection does not come up. However, if you have Cisco routers at both
ends of the connection (a likely scenario), and you don’t explicitly configure Frame Relay
encapsulation, both routers default to cisco and the connection does get established. Frame Relay
switches do not care about the Frame Relay encapsulation. In Cisco IOS Software configuration,
the Cisco proprietary encapsulation is called cisco while the other one is called ietf.
Figure 12-11 Cisco and IETF Framing
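As a minimal sketch (the interface, the neighbor address 192.168.1.2, and DLCI 102 are hypothetical), switching a whole interface to IETF encapsulation looks like this:
R1(config)# interface Serial0/0
R1(config-if)# encapsulation frame-relay ietf
To change the encapsulation for only one VC while the interface keeps the cisco default, the ietf keyword can instead be added to the mapping for that DLCI:
R1(config-if)# encapsulation frame-relay
R1(config-if)# frame-relay map ip 192.168.1.2 102 broadcast ietf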
Frame Relay Addressing
Frame Relay defines how to deliver frames from one router to another across the Frame Relay
network. The router uses a single physical access link to connect to the Frame Relay switch. The
single access link may have many VCs connecting it to many remote routers. There must be
something to identify each of the remote routers. That something is the data-link connection
identifier or DLCI – the Frame Relay address.
The DLCI is a 10-bit value written in decimal. The possible range of DLCIs is 0-1023, however
the low- and high-end values are usually reserved and typical DLCI values range from around 17
to a little less than 1000.
DLCIs can be simple and confusing at the same time. The most important fact about the DLCI is
that it does not identify a VC but only a single hop on the VC. A Frame Relay service provider
assigns two local DLCI values to each PVC: one for each end of the PVC to be used between the
DTE router and DCE switch.
Frame Relay Global Addressing
Global addressing is a Frame Relay addressing scheme that serves to lessen the confusion about
DLCIs. Global addressing makes DLCIs look like MAC addresses in Ethernet LANs. Global
addressing is a very simple convention of how DLCI values are assigned when planning a Frame
Relay network so that working with DLCIs is much easier. Global addressing does not change
anything inherently with DLCIs or Frame Relay addressing. It simply chooses such DLCI values
that they become more intuitive to understand and deal with.
Here is how global addressing works. The Frame Relay service provider supplies a
configuration sheet and a diagram similar to Figure 12-12, with global DLCIs shown.
Figure 12-12 Frame Relay Global DLCIs
Frame Relay global addressing as planned in Figure 12-12 is used to place DLCIs in Frame
Relay frames as shown in Figure 12-13. For example, router R1 uses DLCI 50 when sending a
frame to router R3, because router R3's global address is 50. In a similar manner, router R3 uses
DLCI 51 when sending frames over the VC to router R1. The beauty of global addressing is that it
works like addressing in a LAN with a single MAC address for each device, making it much
more logical to most people.
Figure 12-13 DLCI Values for Two PVCs
In Figure 12-13, the PVC between routers R1 and R3 has two DLCIs assigned by the service
provider, one at each end. R1 uses local DLCI 50 to identify the PVC and R3 uses local DLCI 51
to identify the same PVC. Similarly, the PVC between routers R2 and R3 also has two DLCIs
assigned, one at each end. In this case, R2 uses local DLCI 50 while R3 uses local DLCI 52.
DLCI values are only locally significant and can be reused on different links. In Figure 12-13,
both R1 and R2 use DLCI 50 to identify their respective PVCs, which is perfectly fine. However,
the local DLCIs on a single access link must be unique among all PVCs that exist on that access
link. If you work for an enterprise, you need not worry about DLCIs as their values are chosen by
the provider.
The local router is aware of only the local DLCI and it effectively identifies a PVC for the router.
When you configure a router, you only configure the local DLCI value and don't need to concern
yourself with the DLCI value at the other end of the PVC.
The Frame Relay header lists only a single DLCI field which performs the addressing function. It
does not identify both a source and destination address like the Ethernet and IP headers. The
Ethernet header has both a source and destination MAC address, while the IP header contains
both source and destination IP addresses.
The DTE router identifies a PVC with the DLCI assigned to that VC by the provider. The DTE
router will send all packets for that VC encapsulated in a Frame Relay frame with that specific
DLCI value listed in the frame header. The service provider itself assigns DLCI values to the
customer and it knows which DLCI values are to be used at the two ends of a VC to enable end-
to-end communication on a VC.
Frame Relay networks have some additional considerations when it comes to assigning subnets
and IP addresses on interfaces. You can have:
Single subnet covering all Frame Relay DTEs
One subnet per VC
A hybrid of the first two options
Frame Relay Topology Approaches
Single Subnet for all Routers
The first approach is to use a single IP subnet for the whole Frame Relay network, as shown in
Figure 12-14.
Figure 12-14 Single Subnet for all Routers
The single-subnet option is normally used when there is a full mesh of virtual circuits (VCs). In a
full mesh, every router has a virtual circuit to every other router, which means that every router
can send frames directly to every other router. This addressing scheme resembles Ethernet LANs
with the difference that IP addresses are configured on the serial interfaces of routers with Frame
Relay encapsulation. The single-subnet option is conceptually simple because it looks like what
you are used to on Ethernet LANs. However, the vast majority of Frame Relay deployments use
partial mesh and the single-subnet option is not well suited for that.
One Subnet per VC
The second alternative, having one IP subnet per VC, works better for a partially meshed Frame
Relay network, like the one shown in Figure 12-15. This is the more prevalent Frame Relay
network because most organizations have a large number of remote sites that need to connect to a
central site to access applications. Here there is no VC, for example, between R2 and R3 and so
R2 cannot communicate directly with R3.
Figure 12-15 One Subnet per VC
You may have noticed that R1 has three IP addresses associated with it. Cisco IOS software
allows you to create logical subdivisions of a physical interface, called subinterfaces.
Subinterfaces allow R1 to have three IP addresses associated with the same physical interface.
The router can treat each subinterface and the VC associated with it as a separate point-to-point
serial link.
Also, we are using private IP addresses with predictable /24 prefixes to enable you to focus on the
underlying concepts rather than numbers. However, you should keep in mind that on point-to-
point subinterfaces you would usually see /30 addresses with 255.255.255.252 as subnet mask.
This allows for only two valid IP addresses on a subnet and conserves available IP address
space.
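As a hedged sketch (the subnet 192.168.12.0/30 and the DLCI values are hypothetical), R1's end of such a point-to-point subinterface with a /30 mask could look like this:
R1(config)# interface Serial0/0.2 point-to-point
R1(config-subif)# ip address 192.168.12.1 255.255.255.252
R1(config-subif)# frame-relay interface-dlci 52
and R2's end like this:
R2(config)# interface Serial0/0.1 point-to-point
R2(config-subif)# ip address 192.168.12.2 255.255.255.252
R2(config-subif)# frame-relay interface-dlci 51
The /30 mask leaves exactly two usable host addresses, 192.168.12.1 and 192.168.12.2, which is all a point-to-point link needs.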
A Mix of Full and Partial Mesh
The third and last alternative for IP addressing is a mix of the first two alternatives. Figure 12-16
shows a trio of routers R1, R2, and R3 with VCs in full mesh among them and a single VC to R4.
In this case, you have two options for Layer 3 addressing. The first is to treat each VC as a
separate Layer 3 subnet. However, you would need four subnets for the Frame Relay network in
that case. The second option, also shown here, is to create a smaller full mesh between routers R1,
R2, and R3 while leaving R4 out. This allows R1, R2, and R3 to use a single subnet. The VC
between R1 and R4 is then treated as a separate subnet, which results in only two subnets for
the Frame Relay network rather than four.
Figure 12-16 A Mix of Full and Partial Mesh
In order to accomplish this addressing scheme, subinterfaces are used. Point-to-point
subinterfaces are used when a single subnet is mapped to a single VC, for example, between R1
and R4. Multipoint subinterfaces are used when more than two routers are in the same subnet, for
example, with R1, R2, and R3.
Multipoint interfaces can terminate more than one VC, and the term multipoint refers to the fact
that more than one remote site may be reachable off the interface.
We will provide you with full configurations for all three scenarios discussed so far in the next
section.
Frame Relay Configuration
You should have a good understanding of Frame Relay by now and it's time to get your hands dirty
with some configuration. Frame Relay configuration has many options, yet the actual configuration
you perform can be very basic depending on how many default settings can be used. Cisco IOS
Software uses the following defaults for Frame Relay:
LMI: Cisco IOS automatically senses the LMI type by default and this feature is referred
to as LMI autosense. If you manually configure the LMI using the frame-relay lmi-
type command, LMI autosense is silently disabled.
IARP: Cisco IOS automatically discovers the next-hop IP address associated with a
DLCI or VC using Inverse Address Resolution Protocol (IARP). You can also create a
mapping between a DLCI and next-hop IP address manually using the frame-relay map
ip command.
Encapsulation: Cisco IOS uses Cisco encapsulation for Frame Relay and if you are using
only Cisco routers, this default setting works fine without any additional configuration.
You are familiar with the concept of physical and logical sub-interfaces. For example, you may
configure several sub-interfaces on a single Fast Ethernet physical interface on a Cisco router.
Frame Relay is a Layer 2 WAN protocol that can be configured on physical serial links. In
addition to physical interfaces, you can also configure two types of logical interfaces for Frame
Relay – point-to-point and multipoint. We will introduce you to some of the specifics of Frame
Relay configuration for these different interface types.
In certain cases, you may have a working Frame Relay connection by just using a single
command, encapsulation frame-relay, and leaving everything else at default values. However, you
should be familiar with the many configuration options and when they are used. Frame Relay is
the source of many tricky questions on CCNA, CCNP, and beyond.
Here is your step-by-step guide to configuring Frame Relay:
The first step should always be to configure the physical interface to use Frame Relay
encapsulation using the command encapsulation frame-relay in interface configuration
mode.
Configure an IP address on the interfaces or sub-interface using the good old ip
address command.
Optionally, configure the LMI type of each physical interface using the frame-relay lmi-
type command.
Optionally, change the default Frame Relay encapsulation from cisco to ietf using the
command encapsulation frame-relay ietf. If you use the command on the interface (or sub-
interface), it will change the encapsulation for all VCs on the interface (or sub-interface). If
you want to change the encapsulation only for a specific VC, you should use
the ietf keyword with the command frame-relay interface-dlci (point-to-point sub-
interfaces) or frame-relay map.
The default is to use Inverse ARP (IARP) to map the DLCI to the IP address of the next-
hop router. However, you can also configure static mapping using the frame-relay map
ip ip-address dlci broadcast command, as sketched after these steps.
There are two ways to associate one DLCI with a point-to-point sub-interface or multiple
DLCIs with a multipoint sub-interface. The first involves using the frame-relay interface-dlci
dlci sub-interface command. The second involves using the frame-relay map ip ip-address
dlci broadcast sub-interface command.
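As a hedged sketch of the static mapping option (the neighbor address 192.168.1.2 and DLCI 102 are hypothetical), Inverse ARP can be disabled and a static map configured like this:
R1(config)# interface Serial0/0
R1(config-if)# no frame-relay inverse-arp
R1(config-if)# frame-relay map ip 192.168.1.2 102 broadcast
The broadcast keyword allows broadcast and multicast traffic, such as routing protocol updates, to be forwarded over the VC.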
We are going to present three different Frame Relay configuration examples to see all those
configuration steps in action. The examples correspond to the three Frame Relay scenarios we
presented earlier in the chapter. We will also introduce you to several show commands that are
useful to verify your configuration and troubleshoot if something is not working as expected.
Configuration – Single Subnet for all Routers
The first option involves a single IP subnet for all routers/DTEs, with IP addresses configured on
physical serial interfaces, as shown in Figure 12-17.
Figure 12-17 Configuration – Single Subnet for all Routers
We will use a single class C private subnet 192.168.1.0/24 in this example. Table 12-5 should
serve as a reference for all configuration in this section.
Table 12-5 Configuration Table
R1> enable
R1# configure terminal
R1(config)# interface Serial0/0
R1(config-if)# ip address 192.168.1.1 255.255.255.0
R1(config-if)# encapsulation frame-relay
R1(config-if)# no shutdown
R1(config-if)# exit
R1(config-if)# interface FastEthernet1/0
R1(config-if)# ip address 192.168.10.1 255.255.255.0
R1(config-if)# no shutdown
R1(config-if)# exit
R1(config)# router eigrp 100
R1(config-router)# network 192.168.1.0
R1(config-router)# network 192.168.10.0
R1(config-router)# end
R1#
The configuration for R2 is very similar.
R2> enable
R2# configure terminal
R2(config)# interface Serial0/0
R2(config-if)# ip address 192.168.1.2 255.255.255.0
R2(config-if)# encapsulation frame-relay
R2(config-if)# no shutdown
R2(config-if)# exit
R2(config-if)# interface FastEthernet1/0
R2(config-if)# ip address 192.168.20.1 255.255.255.0
R2(config-if)# no shutdown
R2(config-if)# exit
R2(config)# router eigrp 100
R2(config-router)# network 192.168.1.0
R2(config-router)# network 192.168.20.0
R2(config-router)# end
R2#
There are no surprises with the configuration of R3 either.
R3> enable
R3# configure terminal
R3(config)# interface Serial0/0
R3(config-if)# ip address 192.168.1.3 255.255.255.0
R3(config-if)# encapsulation frame-relay
R3(config-if)# no shutdown
R3(config-if)# exit
R3(config-if)# interface FastEthernet1/0
R3(config-if)# ip address 192.168.30.1 255.255.255.0
R3(config-if)# no shutdown
R3(config-if)# exit
R3(config)# router eigrp 100
R3(config-router)# network 192.168.1.0
R3(config-router)# network 192.168.30.0
R3(config-router)# end
R3#
We are done with our Frame Relay configuration here, and it's time to verify if it works as
expected. A good starting point for verification is the routing table, which we can check with the
show ip route command on R1.
R1#show ip route
<Some output omitted for brevity.>
Gateway of last resort is not set
D 192.168.30.0/24 [90/2172416] via 192.168.1.3, 00:06:46, Serial0/0
C 192.168.10.0/24 is directly connected, FastEthernet1/0
D 192.168.20.0/24 [90/2172416] via 192.168.1.2, 00:07:39, Serial0/0
C 192.168.1.0/24 is directly connected, Serial0/0
The ultimate test is to verify end-to-end connectivity across all three VCs we have, which can be
done by going to each of the three routers one by one and pinging the other two routers.
R1#ping 192.168.1.2
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.1.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 16/40/56 ms
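Beyond the routing table and ping tests, a few Frame Relay specific show commands are worth knowing (output omitted here): show frame-relay pvc displays the status and traffic statistics of each PVC, show frame-relay map displays the DLCI-to-next-hop mappings learned via Inverse ARP or configured statically, and show frame-relay lmi displays LMI statistics for the access link.
R1# show frame-relay pvc
R1# show frame-relay map
R1# show frame-relay lmi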
We should also verify connectivity between the local-area networks (LANs) attached to the routers,
for example by pinging across the Frame Relay network from one LAN to another.
Configuration – One Subnet per VC
We now move to the second scenario, with one subnet per VC and point-to-point sub-interfaces, as
shown in Figure 12-15. Here is the configuration of R1.
R1>enable
R1#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
R1(config)#interface Serial0/0
R1(config-if)#encapsulation frame-relay
R1(config-if)#no shutdown
R1(config-if)#interface Serial0/0.2 point-to-point
R1(config-subif)#ip address 192.168.12.1 255.255.255.0
R1(config-subif)#frame-relay interface-dlci 52
R1(config-fr-dlci)#interface Serial0/0.3 point-to-point
R1(config-subif)#ip address 192.168.13.1 255.255.255.0
R1(config-subif)#frame-relay interface-dlci 53
R1(config-fr-dlci)#interface Serial0/0.4 point-to-point
R1(config-subif)#ip address 192.168.14.1 255.255.255.0
R1(config-subif)#frame-relay interface-dlci 54
R1(config-fr-dlci)#interface FastEthernet1/0
R1(config-if)#ip address 192.168.10.1 255.255.255.0
R1(config-if)#no shutdown
R1(config-if)#exit
R1(config)#router eigrp 100
R1(config-router)#network 192.168.12.0
R1(config-router)#network 192.168.13.0
R1(config-router)#network 192.168.14.0
R1(config-router)#network 192.168.10.0
R1(config-router)#end
R1#
R2 has a similar configuration.
R2>enable
R2#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
R2(config)#
R2(config)#interface Serial0/0
R2(config-if)#encapsulation frame-relay
R2(config-if)#no shutdown
R2(config-if)#interface Serial0/0.1 point-to-point
R2(config-subif)#ip address 192.168.12.2 255.255.255.0
R2(config-subif)#frame-relay interface-dlci 51
R2(config-fr-dlci)#interface FastEthernet1/0
R2(config-if)#ip address 192.168.20.1 255.255.255.0
R2(config-if)#no shutdown
R2(config-if)#router eigrp 100
R2(config-router)#network 192.168.12.0
R2(config-router)#network 192.168.20.0
R2(config-router)#end
R2#
R3 has a pretty similar configuration as well.
R3>enable
R3#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
R3(config)#interface Serial0/0
R3(config-if)#encapsulation frame-relay
R3(config-if)#no shutdown
R3(config-if)#interface Serial0/0.1 point-to-point
R3(config-subif)#ip address 192.168.13.3 255.255.255.0
R3(config-subif)#frame-relay interface-dlci 51
R3(config-fr-dlci)#interface FastEthernet1/0
R3(config-if)#ip address 192.168.30.1 255.255.255.0
R3(config-if)#no shutdown
R3(config-if)#router eigrp 100
R3(config-router)#network 192.168.13.0
R3(config-router)#network 192.168.30.0
R3(config-router)#end
R3#
R4 too has a single PVC to R1, like R2 and R3. R1 happens to be the hub in this hub-and-spoke
topology. This topology is commonly used in real-world Frame Relay networks where a large
number of remote offices are connected to the company headquarters.
R4>enable
R4#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
R4(config)#interface Serial0/0
R4(config-if)#encapsulation frame-relay
R4(config-if)#no shutdown
R4(config-if)#interface Serial0/0.1 point-to-point
R4(config-subif)#ip address 192.168.14.4 255.255.255.0
R4(config-subif)#frame-relay interface-dlci 51
R4(config-fr-dlci)#interface FastEthernet1/0
R4(config-if)#ip address 192.168.40.1 255.255.255.0
R4(config-if)#no shutdown
R4(config-if)#router eigrp 100
R4(config-router)#network 192.168.14.0
R4(config-router)#network 192.168.40.0
R4(config-router)#end
R4#
Configuration – A Mix of Full and Partial Mesh
The third scenario mixes full and partial mesh, as shown in Figure 12-16: R1, R2, and R3 share the
subnet 192.168.123.0/24 on multipoint sub-interfaces, while the single PVC between R1 and R4 gets
its own subnet on point-to-point sub-interfaces. Here is the configuration of R1.
R1>enable
R1#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
R1(config)#interface Serial0/0
R1(config-if)#encapsulation frame-relay
R1(config-if)#no shutdown
R1(config-if)#interface Serial0/0.4 point-to-point
R1(config-subif)#ip address 192.168.14.1 255.255.255.0
R1(config-subif)#frame-relay interface-dlci 54
R1(config-fr-dlci)#interface Serial0/0.123 multipoint
R1(config-subif)#ip address 192.168.123.1 255.255.255.0
R1(config-subif)#frame-relay interface-dlci 52
R1(config-fr-dlci)#frame-relay interface-dlci 53
R1(config-fr-dlci)#interface FastEthernet1/0
R1(config-if)#ip address 192.168.10.1 255.255.255.0
R1(config-if)#no shutdown
R1(config-if)#router eigrp 100
R1(config-router)#network 192.168.14.0
R1(config-router)#network 192.168.123.0
R1(config-router)#network 192.168.10.0
R1(config-router)#end
R1#
R1 has a multipoint Frame Relay sub-interface connected to the subnet 192.168.123.0/24 as well as a
point-to-point sub-interface toward R4. R2 also connects to the shared subnet through a multipoint
sub-interface that terminates two PVCs.
R2>enable
R2#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
R2(config)#interface Serial0/0
R2(config-if)#encapsulation frame-relay
R2(config-if)#no shutdown
R2(config-if)#interface Serial0/0.123 multipoint
R2(config-subif)#ip address 192.168.123.2 255.255.255.0
R2(config-subif)#frame-relay interface-dlci 51
R2(config-fr-dlci)#frame-relay interface-dlci 53
R2(config-fr-dlci)#interface FastEthernet1/0
R2(config-if)#ip address 192.168.20.1 255.255.255.0
R2(config-if)#no shutdown
R2(config-if)#router eigrp 100
R2(config-router)#network 192.168.123.0
R2(config-router)#network 192.168.20.0
R2(config-router)#end
R2#
R3 also shares the subnet 192.168.123.0/24 via its Frame Relay multipoint sub-interface that
terminates two PVCs.
R3>enable
R3#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
R3(config)#interface Serial0/0
R3(config-if)#encapsulation frame-relay
R3(config-if)#no shutdown
R3(config-if)#interface Serial0/0.123 multipoint
R3(config-subif)#ip address 192.168.123.3 255.255.255.0
R3(config-subif)#frame-relay interface-dlci 51
R3(config-fr-dlci)#frame-relay interface-dlci 52
R3(config-fr-dlci)#interface FastEthernet1/0
R3(config-if)#ip address 192.168.30.1 255.255.255.0
R3(config-if)#no shutdown
R3(config-if)#router eigrp 100
R3(config-router)#network 192.168.123.0
R3(config-router)#network 192.168.30.0
R3(config-router)#end
R3#
R4 has only a point-to-point sub-interface, terminating a single PVC to R1.
R4>enable
R4#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
R4(config)#interface Serial0/0
R4(config-if)#encapsulation frame-relay
R4(config-if)#no shutdown
R4(config-if)#interface Serial0/0.1 point-to-point
R4(config-subif)#ip address 192.168.14.4 255.255.255.0
R4(config-subif)#frame-relay interface-dlci 51
R4(config-fr-dlci)#interface FastEthernet1/0
R4(config-if)#ip address 192.168.40.1 255.255.255.0
R4(config-if)#no shutdown
R4(config-if)#router eigrp 100
R4(config-router)#network 192.168.14.0
R4(config-router)#network 192.168.40.0
R4(config-router)#end
R4#
You can view the Frame Relay DLCI to IP address mappings learned via Inverse ARP by using the
show frame-relay map command on R1.
Key Concept The most widely used data link protocols on serial WAN links are
HDLC, PPP, and Frame Relay.
Other WAN Technologies
In this section, we will briefly introduce you to a handful of other WAN protocols.
Ethernet WANs
Ethernet began life as a LAN technology and remained so for quite a while because distance
limitations made it difficult to create longer links. However, Ethernet standards kept improving
with time, both in speed and distance, especially for optical fiber media. The result is that service
providers now can and do offer WAN services that employ Ethernet both on the edge for customer
access links and in the core of the provider network.
Figure 12-20 Ethernet WAN Service
Different kinds of Ethernet WAN services are commercially available with many different names
such as Wide Area Ethernet, Ethernet over MPLS (EoMPLS), Metropolitan Ethernet (MetroE),
and Virtual Private LAN Service (VPLS). In fact the provider can use any technology inside its
network to create an Ethernet WAN service for its customers. Ethernet WAN services usually
offer 100 Mbps or 1 Gbps speeds to customers.
Multiprotocol Label Switching (MPLS)
Multiprotocol Label Switching (MPLS) technology is used by service providers to offer many
types of WAN services. We will mention one of those WAN services called MPLS VPN that
happens to be very popular with enterprise customers. MPLS VPN has a familiar service model,
with customer sites connecting to the provider’s network cloud and the cloud moving data
between customer sites connected to the cloud as required. The service provider also promises to
keep data from different customers separate as it passes through its network.
MPLS VPNs have many differences from other WAN services, but the most significant difference
is that they are aware of IP packets from customers. They do not just promise to deliver bits like
leased lines or data link frames like Frame Relay and Ethernet WAN. An MPLS network is more like
an IP network, routing IP packets between customer sites. Due to this IP awareness of the MPLS
network, service providers are able to offer many interesting services to customers.
Digital Subscriber Line (DSL)
Digital Subscriber Line (DSL) has enabled much faster Internet access speeds to both homes and
businesses as compared with dial-up and ISDN technologies that DSL has almost completely
replaced now.
One limitation of DSL is that it only works at certain distances from the central office (CO) to the
home, and as the cable distance increases, the speed degrades. So if the site where you want to
have a DSL connection happens to be far from the nearest CO, the quality of the service may
become poor or the service may not be available at all. Though this is usually not a concern in
urban areas, you may occasionally see this problem.
PPP over Ethernet (PPPoE)
PPP over Ethernet (PPPoE) is one technology overlaid on top of another. You know that PPP is a
data link protocol used on serial interfaces to create point-to-point links over leased lines. PPP is
also used on those links that are created from a user to an ISP with dial-up modems. Some
features of PPP are very useful for ISPs. First, PPP supports a way to assign IP addresses to the
other end of the PPP link. PPP also supports CHAP for authentication which allows ISPs to check
their accounting records to see if the customer’s bill was paid before granting Internet access.
DSL came after dial-up and ISDN that both used PPP, so ISPs still wanted their PPP with DSL.
The customer, however, mostly used an Ethernet link between the customer PC or the router and
the DSL modem. That Ethernet link only supported Ethernet data link protocols and not PPP. ISPs
demanded a way to create the equivalent of a PPP connection between the customer router and the
ISP router over the various technologies used on DSL connections.
PPP over Ethernet (PPPoE) was created to allow the sending of PPP frames encapsulated inside
Ethernet frames. PPPoE essentially creates a tunnel between customer router and the ISP router.
PPP was originally meant for point-to-point links and there is not a single point-to-point link
between the two routers here. With PPPoE and its associated protocols, the router logically
creates a tunnel and then creates and sends PPP frames over that tunnel as if the tunnel were a
point-to-point link between the routers.
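As a hedged sketch of what the customer side can look like on a Cisco router (the interface numbers, dialer pool number, and the CHAP credentials customer@isp.example / MyChapSecret are hypothetical), a basic PPPoE client configuration ties a physical Ethernet interface to a logical dialer interface:
R1(config)# interface FastEthernet0/0
R1(config-if)# pppoe enable
R1(config-if)# pppoe-client dial-pool-number 1
R1(config-if)# no shutdown
R1(config-if)# exit
R1(config)# interface Dialer1
R1(config-if)# dialer pool 1
R1(config-if)# encapsulation ppp
R1(config-if)# mtu 1492
R1(config-if)# ip address negotiated
R1(config-if)# ppp chap hostname customer@isp.example
R1(config-if)# ppp chap password MyChapSecret
The dialer interface carries the PPP settings, the IP address is learned from the ISP, and the MTU is lowered to 1492 bytes to leave room for the 8 bytes of PPPoE and PPP overhead inside the 1500-byte Ethernet payload.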
Summary
In this chapter, you learned about the following WAN technologies: High-Level Data Link Control
(HDLC), Point-to-Point Protocol (PPP), and Frame Relay.
You learned that HDLC is a basic protocol for point-to-point serial links, but if all you need
is to connect two routers over a leased line, HDLC is just fine and it’s enabled by default on
serial interfaces of Cisco routers. If you need more features than HDLC offers or if you are using
two routers from different manufacturers, you should use PPP rather than the Cisco-proprietary
HDLC.
You were introduced to several PPP concepts including the role of LCP and different NCPs,
one for each Layer 3 protocol encapsulated by PPP. You also learned about two types of
authentication that can be used with PPP: Password Authentication Protocol (PAP) and Challenge
Handshake Authentication Protocol (CHAP).
We talked about Frame Relay in detail covering different encapsulation methods, addressing, LMI
options, Frame Relay maps, and virtual circuits. We also learned in-depth how to configure and
verify Frame Relay.
Chapter 12: Virtual Private Networks (VPNs)
12-1 VPN Concepts
12-2 Types of VPN
12-3 Encryption
12-4 IPsec VPNs
12-5 SSL VPNs & Tunneling Protocols
12-6 GRE Tunnels
12-7 Summary
VPN Concepts
A company wanting to connect two (or more) of its sites can choose from several different types
of WAN services: leased lines, Frame Relay, or more likely Multiprotocol Label Switching
(MPLS) today. All these services are typically expensive. However, another much cheaper option
exists for connecting company sites to each other. Each site can simply be connected to the
Internet using a broadband Internet access technology like digital subscriber line (DSL), cable,
WiMAX, or even 3G/4G. Different sites then can send data to each other using the public Internet
as a wide area network (WAN).
There is one problem with using the Internet as a WAN, though. The Internet is not as secure as other
WAN options. The vulnerability of the Internet is, to a great extent, due to the fact that it is a
public network. Just anyone with a computer can access the Internet and possibly attack any other
computer. Other WAN options mentioned here are relatively secure. For example, in order to
steal data flowing over a leased line, the attacker has to physically tap into the line with
specialized equipment or be present at the telco central office. These actions are punishable by
law and not easy for just anyone.
The possibility of using the Internet as a WAN is quite tempting despite the security concerns.
Virtual private network (VPN) technology provides answers to the security questions associated
with using the Internet as a private WAN service. In this chapter, we introduce you to the basic
concepts and terminology related to VPNs. We then discuss details of two main types of VPNs: IP
Security (IPsec) and Secure Sockets Layer (SSL).
VPNs have several advantages over other WAN technologies, some of which are summarized
here:
Cost: Internet VPN solutions can be much cheaper than alternate private WAN options
available today.
Security: Modern VPN solutions can be as secure as private WAN options and are being
used even by organizations with the most stringent security requirements such as credit
card companies.
Scalability: Internet VPN solutions can be scaled quickly and cost-effectively to a large
number of sites. Each location can choose from multiple options of Internet connectivity.
Main Concepts
A virtual private network (VPN) is used to transport data from a private network to another
private network over a public network, such as the Internet, using encryption to keep the data
confidential. In other words, a VPN is an encrypted connection between private networks over a
public network, most often the Internet. VPNs provide the following services:
Confidentiality: VPNs prevent anyone in the middle of the Internet from being able to
read the data. The Internet is inherently insecure as data typically crosses networks and
devices under different administrative controls. Even if someone is able to intercept data
at some point in the network they won’t be able to interpret it due to encryption.
Integrity: VPNs ensure that data was not modified in any way as it traversed the Internet.
Authentication: VPNs use authentication to verify that the device at the other end of VPN
is a legitimate device and not an attacker impersonating a legitimate device.
Anti-Replay: VPNs ensure that an attacker cannot capture legitimate packets and resend
(replay) them later to gain access or disrupt communication.
Key Concept: VPNs offer confidentiality, integrity, authentication, and anti-replay protection for
user data.
A VPN is essentially a secure channel, often called a tunnel, between two devices or end points
near the edge of the Internet. The VPN end points encrypt the whole of the original IP packet, meaning
the contents of the original packet cannot be understood even by someone who manages to see a
copy of the packet as it traverses the network. The VPN end points also append headers to the
original encrypted packet. The additional headers include fields that allow VPN devices to
perform all their functions.
The graphic below and the explanation that follows should help you grasp basic VPN operation.
Figure 12-1 VPN Concepts for a Site-to-Site VPN
Key Concept : VPNs are classified as site-to-site VPNs that connect all the computers at two
sites and remote access VPNs that connect individual users to a company network over the
Internet. Site-to-site VPNs can be either intranet or extranet VPNs depending on whether the two sites
belong to the same organization or to different partnering organizations, respectively.
Encryption
Encryption is the fundamental mechanism used to secure communications and is at the heart of any
type of VPN implementation. Encryption obscures information to make it unreadable to
unauthorized recipients. It provides a means to secure communications over an insecure medium
such as the Internet. Let’s now establish the definitions of some basic terms:
Plaintext: The original data before encryption is known as plaintext.
Ciphertext: The data after encryption is called ciphertext.
Hash: A hash, or hash value, is a binary number generated from the original data by applying
a mathematical formula. The hash serves as a compact value that uniquely identifies the
original data, much like a fingerprint.
Encryption: It is the process that transforms plaintext into ciphertext. Encryption involves
the use of an algorithmic process that uses a secret key (binary string) to transform plain
data into a secret code.
Decryption: It is the reverse process of encryption that is used to convert encrypted data
back into its original form.
Cryptography Algorithms
In general, there are three types of cryptography algorithms:
Symmetric Key Cryptography: It involves a single key that is used for both encryption
and decryption.
Asymmetric Key Cryptography: It uses a pair of two different keys, one used for
encryption and the other for decryption.
Hash Function: A hash function is a one-way mathematical function that is used to
produce a unique hash value from original data. The hash function is not reversible which
means that the original data cannot be reconstituted from the hash value even with the
knowledge of the hash function. The hash value is usually appended to the original
message as the unique identifier of the message like a fingerprint.
In Figure 12-2, we present a very simple encryption algorithm known as the Caesar cipher. This
method is named after Julius Caesar, who used it to encrypt his private correspondence. Each
letter of the alphabet is shifted right or left by a fixed number of positions. The number of positions and the
direction of shift must be known to both the sender and receiver in order to encrypt and decrypt
the message.
Caesar cipher with a left shift of three positions looks like this:
Plaintext: ABCD EFGH IJKL MNOP QRST UVWX YZ
Ciphertext: XYZA BCDE FGHI JKLM NOPQ RSTU VW
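For example, using this mapping, the plaintext HELLO encrypts to the ciphertext EBIIL; the receiver, knowing the shift, simply moves each letter three positions to the right to recover HELLO.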
Figure 12-2 Encryption Process
Please keep in mind that today’s encryption algorithms are way more complex than the Caesar
cipher and involve complicated mathematical computations that can be performed only by
computers. However, the basic principle of encryption is still the same.
Symmetric key cryptography does not require a lot of computational power and therefore is much
faster. It is well suited for encrypting large amounts of data such as data transfers over VPN
connections. It can also run on network devices even without dedicated cryptography hardware
due to being less computationally intensive. It should not be a surprise that symmetric key
cryptography is employed by the most popular cryptographic algorithms today namely DES,
3DES, and AES.
Data Encryption Standard (DES): DES is an old and common cryptographic algorithm.
It uses a 56-bit key to encrypt 64-bit data blocks. DES is no longer considered very secure
and is not recommended any more. The weakness of the protocol is primarily due to the
very short key size of 56-bits.
Triple DES (3DES): 3DES is an enhancement of DES that employs up to three 56-bit keys
(168-bits). It runs three passes of the encryption and decryption process over the same
block of data. DES was considered insecure due to its small key length of 56-bits. 3DES
was derived from DES mainly to increase the length of key to 168-bits (three times the 56-
bit key for DES) without switching over to an entirely new algorithm. 3DES also encrypts
64-bit data blocks just like DES, though it uses a 168-bit key. 3DES is the recommended
replacement protocol to use in all DES implementations.
Advanced Encryption Standard (AES): Advanced Encryption Standard (AES), based on
the Rijndael cipher, is one of the most common cryptography algorithms today. AES is
more flexible than both DES and 3DES in terms of key length: it can use key lengths of
128, 192, or 256 bits, and it operates on 128-bit data blocks. (The underlying Rijndael
cipher also supports larger block sizes, but the AES standard fixes the block size at 128
bits.) AES is gradually replacing the predecessor DES and 3DES standards.
The following table provides a comparison of the three encryption algorithms at a glance.
Table 12-1 Encryption Algorithms for VPNs
Algorithm   Key Length (bits)    Block Length (bits)   Security
DES         56                   64                    Insecure
3DES        168 (3 times 56)     64                    Relatively secure
AES         128, 192, or 256     128                   Strong
Asymmetric key cryptography, also known as public-key cryptography, uses a two-key pair: one
key is used to encrypt plaintext while the other key is used to decrypt the ciphertext. Each end
user has its own pair of public and private keys. The public key of each end user is publicly
available via a key management system. The private key is known only to the end user and is
never exchanged or revealed to anyone other than the end user.
Asymmetric key cryptography is typically used in the key management process of VPN
establishment, though it is not used to encrypt user data because it is computationally intensive. Some of the
common asymmetric key algorithms include RSA and Diffie-Hellman.
RSA: The RSA algorithm derives its name from the surnames of its three developers,
Rivest, Shamir, and Adleman. It can be used for key exchange, digital signatures, and
message encryption.
Diffie-Hellman (DH): DH is used for exchanging keys over an insecure medium, between
two end users that have no prior knowledge of each other. The secret key obtained through
DH can be used to encrypt subsequent messages using a symmetric key algorithm like
3DES or AES. The DH algorithm is used only for secret key exchange.
Hash Functions
A hash function is a mathematical formula used to compute a fixed-length hash value from the
original plaintext. A hash function is also known by a number of other names, including
hash algorithm, message digest, and one-way encryption. The original message cannot be
reconstituted from the hash value even with the knowledge of the hash function. Hash functions are
used to create a digital fingerprint of any type of data, which is then appended to the original
data. Hash functions provide data integrity, ensuring that the information has not been altered
during transmission.
We introduce two of the most common hash functions widely used today:
Message Digest (MD): Message Digest algorithms are a series of hash functions (MD2,
MD4, and MD5) that produce a 128-bit fixed-length hash value (also called a message
digest or fingerprint) from input data of arbitrary length. We cover MD5 in some detail
here. MD5 was developed by Ronald Rivest in 1991. MD5 replaced its predecessor
MD4, addressing potential weaknesses in MD4. The MD5 algorithm produces a 128-bit
(16-byte) hash value, typically expressed in text format as a 32-digit hex number. Several
vulnerabilities have been discovered in the design of MD5, and the SHA family of hash
functions is recommended as a replacement for MD5.
Secure Hash Algorithm (SHA): SHA is another series of popular hash functions that
produces a 160-bit hash value. SHA is slower than MD5 but is more secure. The first
member of the SHA family was SHA-0, introduced in 1993. SHA-1, the successor
to SHA-0, came in 1995 and is the most popular member of the SHA family. SHA-1 is
considered to be the successor to MD5 and is widely used in a variety of applications
including Secure Sockets Layer (SSL) and IPsec.
IPSec VPNs
IPsec derives its name from the title of RFC 4301, that is, Security Architecture for the Internet
Protocol. IPsec is a set of security protocols that work together to ensure security of IP traffic as it
traverses the Internet.
IPsec can be used to secure IP traffic between:
Two hosts
Two security gateways (usually routers or firewalls)
A host and a security gateway
IPsec not only provides encryption at the network/IP layer but also defines a new set of headers
that are added to the encrypted IP packet. IPsec is flexible because it is a framework of open standards:
it describes the messaging used to secure communications but relies on existing algorithms for the
actual encryption and authentication.
IPsec uses the concept of a security association (SA) to define a set of security parameters used
for various VPN functions. SAs are used by AH and ESP as well as by the IKE protocol. SAs are
created as a result of an IPsec VPN connection establishment between two hosts or two gateways.
SAs are uni-directional in nature and there will be two SAs in place with each secure connection,
one for each direction.
There are three main protocols in the IPsec framework.
Internet Key Exchange (IKE)
Internet Key Exchange (IKE) is a combination of Internet Security Association and Key
Management Protocol (ISAKMP), Oakley, and SKEME protocols. The names IKE and ISAKMP
are sometimes used interchangeably in IPsec discussions though we prefer to use IKE in this
chapter. IKE establishes authenticated keys and also negotiates security associations (SAs) that
are then used by ESP and AH protocols. IKE uses UDP port 500.
IKE is a two-phase protocol: IKE phase 1 verifies the identity of the remote peer or in other
words authenticates the remote peer. The two peers then establish an authenticated secure channel
to communicate further. IKE offers two primary methods of authenticating a remote peer:
Preshared Keys: It is the most common method and uses manually configured secret keys
on both peers. It is easy to deploy but is neither very scalable nor very secure.
Public Key Signature: It uses the Public Key Infrastructure (PKI) and is the most secure
method.
At the end of phase 1 negotiation, an ISAKMP/IKE SA (phase 1 SA) is established. Phase 2
negotiations then take place over the secure channel established in phase 1.
IKE phase 2 negotiates SAs that are used to protect actual user data. At the end of phase 2
negotiations, two unidirectional IPsec SAs (phase 2 SAs) are established for user data. One SA is
used for sending encrypted data and the other is used for receiving encrypted data.
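As a hedged sketch of IKE phase 1 on a Cisco IOS router (the policy number, the key string MyPreSharedKey, and the peer address 203.0.113.2 are hypothetical), a preshared-key ISAKMP policy might look like this:
R1(config)# crypto isakmp policy 10
R1(config-isakmp)# encryption aes
R1(config-isakmp)# hash sha
R1(config-isakmp)# authentication pre-share
R1(config-isakmp)# group 2
R1(config-isakmp)# exit
R1(config)# crypto isakmp key MyPreSharedKey address 203.0.113.2
The policy defines the encryption, hash, authentication method, and Diffie-Hellman group used to protect the phase 1 negotiation, while the crypto isakmp key command supplies the preshared key for the remote peer.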
Authentication Header (AH)
Authentication Header (AH) provides data integrity and authentication for IP packets passed
between two systems. It can be used when confidentiality is not required. It is used to verify that a
message that has been passed from router A to router B has not been modified during transit. AH
does not provide confidentiality and does not use encryption. All messages are sent in clear text
if the AH protocol is used alone, which offers only weak security. However, AH can be used in
combination with other protocols like ESP to offer more robust security features.
Encapsulating Security Payload (ESP)
Encapsulating Security Payload (ESP) is a member of the IPsec protocol suite. It is an IP-based
protocol identified by IP protocol number 50 for communication between IPsec peers. It can provide
authentication, integrity, confidentiality, and anti-replay protection of data. IP packet encryption
not only hides the contents of the packet but also conceals the identities of the real source and
destination found in the IP header in the form of source and destination IP addresses. ESP
provides authentication for the encrypted IP packet and the ESP header. Authentication ensures
data originated at a trusted source and was not modified during transit.
IPsec Modes
IPsec has two modes of operation:
Tunnel Mode: Tunnel mode secures data in site-to-site or network-to-network scenarios.
In tunnel mode, the device performing VPN functions, such as a router or security
appliance, does that on behalf of other users. In tunnel mode, the entire IP packet including
the original IP header and the payload is encrypted and a new IP header is appended.
Transport Mode: Transport mode secures data in host-to-host or end-to-end scenarios. In
transport mode each user performs VPN functions on its own. In transport mode, IPsec
protects the payload of the original IP packet but excludes the IP header. The transport
mode, unlike the tunnel mode, preserves the original IP header and inserts the IPsec header
between the original IP header and payload.
Both tunnel mode and transport mode can make use of ESP and AH protocols.
Cisco IOS defines bundles of encryption algorithms called transform sets that are used together to
secure VPN traffic. IPsec transform sets define encapsulation (ESP or AH), encryption (3DES or
AES-128), authentication/integrity algorithm (MD5 or SHA-1), and the IPsec mode (transport or
tunnel). You have the option to create your own custom transform sets though Cisco IOS also
provides some defaults.
Table 12-2 Default IPsec Transform Sets in Cisco IOS
Priority Encapsulation Encryption Algorithm Hash Algorithm
Higher ESP 3DES SHA-1
Lower ESP AES-128 SHA-1
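Configuring IPsec is beyond the scope of the CCNA R&S exams, but as a rough sketch, a custom transform set could be defined along the following lines; the name MYSET is an assumed example and the algorithms simply mirror one of the default combinations above.
R1(config)#crypto ipsec transform-set MYSET esp-aes esp-sha-hmac
R1(cfg-crypto-trans)#mode tunnel
R1(cfg-crypto-trans)#exit
The mode tunnel subcommand selects tunnel mode; mode transport would select transport mode instead.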
SSL VPNs & Tunneling Protocols
The Secure Sockets Layer (SSL) is another VPN technology that serves as an alternative to IPsec.
All modern web browsers support SSL which means it is readily available on virtually all
computers. SSL is used to create a secure connection from the web browser to a web server to
support secure online access to emails, data, and bank accounts. We will discuss a few details
about how SSL can be used to create remote access VPNs.
Web browsers use Hyper Text Transfer Protocol (HTTP) to connect to web servers that listen on
TCP port 80 by default. However HTTP is a plain text protocol which means it is relatively easy
for someone to read the data in transit and is not suited for any application that requires
confidentiality. Therefore, when the communications between web browser and server need to be
secure, the browser automatically switches to SSL. SSL uses TCP port 443, encrypting data
exchanged between the browser and the server as well as providing authentication. Normal HTTP
messages then flow over the SSL VPN thus established.
Web browsers are commonly used to create secure web browsing sessions using built-in SSL
functionality. However, the SSL technology is not limited to securing web browsing sessions. The
same technology can also be used to create remote access VPNs using, for example, the Cisco
AnyConnect VPN client, which is software that can be installed on a PC and
uses SSL to create the client side of a remote-access VPN. As a result, all packets sent to the
other end of the VPN are encrypted, not just the packets sent over a single HTTP session in a web
browser.
A web server can be the end point of an SSL connection from a web browser. However, often the
server side of the SSL tunnel terminates on specialized VPN devices such as the Cisco ASA.
Secure Sockets Layer (SSL) and IP Security (IPsec) are important security technologies and we
here present a short comparison of the two:
Table 12-3 IPsec Versus SSL
R1(config)#interface FastEthernet0/0
R1(config-if)#ip address 10.10.1.1 255.255.255.0
R1(config-if)#no shutdown
R1(config-if)#exit
R1(config)#interface Serial0/0
R1(config-if)#ip address 192.168.12.1 255.255.255.252
R1(config-if)#no shutdown
R1(config-if)#exit
R1(config)#ip route 10.10.2.0 255.255.255.0 192.168.12.2
Here is the corresponding configuration for R2:
R2(config)#interface FastEthernet0/0
R2(config-if)#ip address 10.10.2.1 255.255.255.0
R2(config-if)#no shutdown
R2(config)#interface Serial0/0
R2(config-if)#ip address 192.168.12.2 255.255.255.252
R2(config-if)#no shutdown
R2(config-if)#exit
R2(config)#ip route 10.10.1.0 255.255.255.0 192.168.12.1
At this point, hosts on Site A can ping hosts on Site B, and vice versa.
Generic Routing Encapsulation (GRE) tunnels work just like a serial link, with
virtual tunnel interfaces replacing physical serial interfaces. The routers still use physical serial
interfaces to connect to the physical network which may be a single leased line directly
connecting two routers. More commonly, the two routers would be connected across a
private WAN service provider's network or the public Internet. The GRE tunnel would operate as an
overlay network but behave like a point-to-point serial link in many ways. The IP addresses on
the two tunnel end point interfaces would be configured from a single subnet, as if they were
directly connected.
Figure 12-4 GRE Tunnel Interfaces
We summarize GRE tunnel configuration steps here:
Step 1: Create tunnel interfaces on R1 and R2 using the interface
tunnel number command. The tunnel interface number is locally significant and can
be any permissible number. We choose the number 0 on both sides only for the sake of
consistency and predictability; the numbers do not otherwise have to match.
Step 2: Choose a subnet to be used on tunnel interfaces and assign an IP address from the
subnet to both end points. We have chosen a /30 subnet as this scheme results in the most
efficient address assignment.
Step 3: Configure the source IP address of the tunnel interface using the tunnel
source interface or the tunnel source ip-address command. The source interface or IP
address must be the one that connects the router to the public part of the network. Please
note that we are using private IP addresses on the physical serial interfaces on the two
routers because we are working in a test environment. In a real deployment, the IP
addresses on the two serial interfaces would typically be public IP addresses. If you refer
to the interface in the command tunnel source, the IP address configured on the listed
interface is used. The tunnel source on one end point must match the tunnel destination at
the other end point and vice versa.
Step 4: Configure the destination IP address of the tunnel interface. The destination IP
address must be from the public part of the network though we are using private IP
addresses even for the public part of the network in our test environment as already
explained.
Step 5: Configure the routers to use the tunnel interfaces to reach remote subnets. We may
use either static routing or a dynamic routing protocol enabled on tunnel interfaces to
achieve that. In our test environment we opt for static routing as the scenario is simple and
the focus here is not on dynamic routing protocols.
The following configuration for R1 is in addition to the configuration already shown. We will
configure a virtual tunnel interface and then configure a new static route to make sure outgoing
packets to the remote subnet are diverted to the tunnel interface rather than the serial interface.
R1(config)#interface Tunnel 0
R1(config-if)#ip address 10.10.12.1 255.255.255.252
R1(config-if)#tunnel source Serial0/0
R1(config-if)#tunnel destination 192.168.12.2
R1(config-if)#no shutdown
R1(config-if)#exit
R1(config)#no ip route 10.10.2.0 255.255.255.0 192.168.12.2
R1(config)#ip route 10.10.2.0 255.255.255.0 10.10.12.2
And here is the configuration for R2:
R2(config)#interface Tunnel 0
R2(config-if)#ip address 10.10.12.2 255.255.255.252
R2(config-if)#tunnel source Serial0/0
R2(config-if)#tunnel destination 192.168.12.1
R2(config-if)#no shutdown
R2(config-if)#exit
R2(config)#no ip route 10.10.1.0 255.255.255.0 192.168.12.1
R2(config)#ip route 10.10.1.0 255.255.255.0 10.10.12.1
The routers will start using the tunnel interfaces to route packets to the remote subnets
10.10.1.0/24 and 10.10.2.0/24.
GRE Tunnel Verification
The tunnel configuration is complete but we need to test whether it can pass user traffic or not.
There are some invaluable show commands that can be used for tunnel verification. A good
starting point for GRE tunnel verification is the good old show ip interface brief command. If the
tunnel interface is in an up/up state, the tunnel is successfully established. We can also check the
routing table to confirm that the route to the remote subnet now points out the tunnel interface:
R1#show ip route
<Some output omitted>
192.168.12.0/30 is subnetted, 1 subnets
C 192.168.12.0 is directly connected, Serial0/0
10.0.0.0/8 is variably subnetted, 3 subnets, 2 masks
C 10.10.1.0/24 is directly connected, FastEthernet0/0
S 10.10.2.0/24 [1/0] via 10.10.12.2
C 10.10.12.0/30 is directly connected, Tunnel0
We can run a traceroute to verify that traffic passes through the tunnel and find out the path taken
by packets:
R1#traceroute
Protocol [ip]:
Target IP address: 10.10.2.1
Source address: 10.10.1.2
Numeric display [n]:
Timeout in seconds [3]:
Probe count [3]:
Minimum Time to Live [1]:
Maximum Time to Live [30]:
Port Number [33434]:
Loose, Strict, Record, Timestamp, Verbose[none]:
Type escape sequence to abort.
Tracing the route to 10.10.2.1
1 10.10.12.2 0 msec 4 msec 0 msec
2 10.10.2.2 4 msec 4 msec 0 msec
You may have noticed that the traceroute does not list any IP addresses on the serial interfaces of
routers though the traffic physically passes through them. The reason is that the packets sent
by traceroute are encapsulated before being sent from R1 to R2. Any other user packets between
Site A and Site B are also treated in a similar fashion.
Summary
In this chapter, you learned about virtual private networks (VPNs) as an alternative to expensive
private WAN solutions providing confidentiality, integrity, authentication, and anti-replay
protection for user data.
We introduced basic VPN concepts covering encryption techniques that are at the core of any
VPN implementation. We then introduced IPsec and SSL VPNs as two of the most popular VPN
technologies deployed today.
We closed the chapter by mentioning a few other IP tunneling protocols including L2F, PPTP,
L2TP, and GRE, and covered the configuration of GRE tunnels building on a simple serial link
configuration.
Chapter 13: IPv6
13-1 IPv6 Introduction
13-2 IPv6 Address Configuration
13-3 OSPF Version 3
13-4 EIGRP for IPv6
13-5 Summary
IPv6 Introduction
IP version 4 (IPv4) has been a core part of the TCP/IP protocol suite and has served well during
tremendous growth of the Internet. IPv4, mostly called just IP, defines addressing and routing for
most corporate networks that use TCP/IP as well as the public Internet. Though IPv4 has been a
long-time companion, it has its shortcomings that created the need for a protocol that could
replace IPv4. That protocol is Internet Protocol version 6 or IPv6 for short. IPv6 defines the same
general functions that are defined by IPv4. However, there are differences in detail that we will
explore in this chapter.
IP version 6 (IPv6) serves as the protocol that will eventually replace IP version 4 (IPv4). The
most obvious reason for migrating TCP/IP networks from IPv4 to IPv6 is growth. IPv4 uses a 32-
bit address, which allows for a little over four billion addresses. It may seem like a pretty large
number of addresses but the immense growth of networks and the Internet has almost exhausted
our stock of available IPv4 addresses for new deployments. IPv6 uses 128-bit addresses and
increases the number of available addresses to 2^128, a number so large that we don’t have a
word for it.
The change from IPv4 to IPv6 is not just about one protocol being replaced by another; it impacts
many other protocols as well. In this super-sized chapter, we start by introducing IPv6 addressing
and routing, also discussing troubleshooting of the same. We then cover OSPFv3 (Open Shortest
Path First Version 3) and EIGRPv6 (Enhanced Interior Gateway Routing Protocol for IPv6), in
detail.
The IPv6 Protocols
The core IPv6 protocol, defined in RFC 2460, describes the concept of a packet, addressing for
those packets, and the role of hosts and routers. The end objective of IPv6, like IPv4, is to enable
devices to forward packets sourced by hosts through multiple routers so that they arrive at the
correct destination. However, because IPv6 affects several other functions in a network as well
beyond addressing and packet forwarding, many more RFCs must define other details. For
example, some RFCs describe how to migrate from IPv4 to IPv6, while some other RFCs define
newer versions of familiar IPv4 protocols for IPv6:
OSPF Version 3: The older version 2 of OSPF works for IPv4 but not for IPv6. So, a
newer version known as OSPF version 3 (OSPFv3) was created to support IPv6.
EIGRP for IPv6: EIGRP for IPv4 runs over IPv4 as the transport protocol, communicates
only with IPv4 peers, and advertises only IPv4 routes. EIGRP for IPv6 follows the same
model but it can propagate IPv6 prefixes to route IPv6 packets.
ICMP Version 6: Internet Control Message Protocol (ICMP) worked well with IPv4 to
provide feedback to senders on packet forwarding especially when packets could not be
forwarded by a router. ICMP was changed into what is known as ICMP version 6
(ICMPv6) to support IPv6.
Neighbor Discovery Protocol: Address Resolution Protocol (ARP) is used to discover
the MAC addresses of hosts whose IPv4 addresses are known. IPv6 replaces ARP with
the more general Neighbor Discovery Protocol (NDP).
IPv6 is a Layer 3 (network layer) protocol and defines a header that holds both the source and destination
address fields, just like IPv4. The IPv6 header is not identical to the IPv4 header, and the
differences go beyond just bigger source and destination addresses. The IPv6 header is bigger
than the IPv4 header, though it is otherwise simpler in order to reduce the
computational overhead on routers that process IPv6 packets. The following diagram displays the
IPv6 header:
Figure 13-1 IPv6 Header Format
IPv6 Addressing
There are three types of IP version 6 (IPv6) addresses:
Unicast: A unicast address identifies a single interface and a packet sent to a unicast
address is delivered to the one interface identified by that address.
Anycast: An anycast address is for a set of interfaces typically on different nodes. A
packet sent to an anycast address is delivered to only one of the interfaces identified by
that address. The nearest interface, according to the routing protocol metric, gets the
packet delivered to it.
Multicast: A multicast address identifies a set of interfaces typically on different nodes
just like an anycast address. However, a packet sent to a multicast address is delivered
to all interfaces identified by that address.
If you are wondering what happened to broadcast addresses, they don’t exist in the IPv6 world.
Multicast addresses take over the role that broadcast addresses played in IPv4.
Key Concept There are no broadcast addresses in IPv6; multicast addresses are used instead.
Just like IPv4, IPv6 addresses of all types are assigned to interfaces, not nodes. The IPv6 unicast
address of any of a node’s interfaces can be used as an identifier for that node.
A single interface may have multiple IPv6 addresses of any type (unicast, anycast, and multicast)
and scope.
Representation of IPv6 Addresses and Prefixes
An IP version 6 address is a 128-bit value written as xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx
where each x is a hexadecimal digit. There are 8 groups of 4 hexadecimal digits in the above
representation, with each digit representing four binary digits (bits). It is not necessary to write
the leading zeros in an individual group of four hex digits.
An example of an IPv6 address is:
2001:0DB8:0000:0000:0006:0600:300D:527B
The same IPv6 address can also be written as:
2001:DB8:0:0:6:600:300D:527B
There must be at least one digit in every group of four digits. That is why the all-zero groups in
the address above are written as a single zero rather than omitted entirely.
In order to make writing IPv6 addresses with long strings of zero bits easier, a method is
available to compress zeros. The special symbol :: can be used to represent one or more adjacent
groups of 16 bits of zeros. The symbol :: can appear only once in an address. The :: can also
represent leading or trailing groups of zeros, so it may appear at the beginning or at the end of an
address. For example, the same address we mentioned can be squeezed further as:
2001:DB8::6:600:300D:527B
Note that the :: in the above representation replaces the two all-zero groups, while the leading
zeros in the groups that follow (0006 becomes 6, 0600 becomes 600) are dropped by the leading-zero rule described earlier.
The table below first shows the full representation of a few IPv6 addresses, and then shows the
short representation making use of the :: according to rules we described.
Table 13-1 IPv6 Addresses Representation
Full Representation Short Representation
2001:0DB8:0000:85A4:0000:08B3:0370:7348 2001:DB8:0:85A4::8B3:370:7348
FF01:0000:0000:0000:0000:0000:0000:0212 FF01::212
0000:0000:0000:0000:0000:0000:0000:0001 ::1
0000:0000:0000:0000:0000:0000:0000:0000 ::
IPv4 divides its address space into three classes: A, B, and C. IPv6 does not have a concept of
classful networks like IPv4. IPv6 subnets or prefixes can be of arbitrary length without any
classful boundaries. An IPv6 address prefix is represented by:
IPv6 address / Length of prefix
For example, the following are three different but valid representations of the same 64-bit prefix:
2001:0DB8:0000:BCD0:0000:0000:0000:0000/64
2001:0DB8:0:BCD0:0:0:0:0/64
2001:0DB8:0:BCD0::/64
The last of the above three formats is most commonly used to represent IPv6 prefixes. A node can
have an address like 2001:0DB8:0:BCD0:123:4567:89AB:CDEF for the IPv6 prefix
2001:0DB8:0:BCD0::/64.
IPv6 provides two main options for unicast addressing:
Global Unicast
IPv6 global unicast addresses are similar to public IPv4 addresses. These addresses are
allocated by the Internet Assigned Numbers Authority (IANA) to the Regional Internet Registries
(RIRs). RIRs have the task of allocating addresses to service providers and other local registries. IANA
maintains an official list of the current state of IPv6 address allocation
at http://www.iana.org/assignments/ipv6-address-space/ipv6-address-space.xhtml
So, each company is assigned a unique IPv6 address block called a global routing prefix – a set
of addresses that only one company can use. The company subnets its assigned IPv6 address
block using addresses only from the block. As a result, IPv6 global unicast addresses are unique
across the globe.
Key Concept IPv6 global unicast addresses are globally unique, and are analogous to IPv4
public addresses.
The term global routing prefix refers to the idea that routers can have one route that covers all the
addresses inside an address block, without the need to have individual routes for smaller portions
of that block. All IPv6 addresses in a company should begin with the global routing prefix
assigned to it. IPv6 has plenty of space to allow all companies to have a global routing prefix
with plenty of addresses, thanks to its 128-bit address size.
Global unicast addresses make up the majority of IPv6 address space.
Unique Local
IPv6 unique local addresses are similar to private IPv4 addresses. These addresses can be used
by companies that do not plan to connect to the Internet, and companies that plan to use IPv6
Network Address Translation (NAT).
These addresses are readily available, and you can simply read documentation and start assigning
IPv6 addresses without worrying about registration with IANA or another authority. Multiple
companies can possibly end up using the exact same IPv6 unique local addresses, and that works
just fine, exactly as it does with private IPv4 addresses.
Key Concept IPv6 unique local addresses are only locally significant and don’t have to be
globally unique, just like IPv4 private addresses.
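Address configuration is covered in detail later in this chapter, but as a quick preview, a unique local address is assigned with the ordinary ipv6 address command; the prefix FD00:1:1:1::/64 below is an assumed, locally chosen example.
R1(config)#interface FastEthernet0/0
R1(config-if)#ipv6 address FD00:1:1:1::1/64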
Address Range of Global Unicast Addresses
Global unicast addresses make up the majority of the IPv6 address space. Originally, Internet
Assigned Numbers Authority (IANA) reserved all IPv6 addresses that begin with hex 2 or 3 as
global unicast addresses. This address range can be written concisely as 2000::/3. Later, the
global unicast address range was made wider by a series of RFCs. The present state of affairs is
that all IPv6 addresses not otherwise allocated for other purposes are included in the global
unicast address space.
Because the number of addresses that sit within the global unicast address space is astonishingly
large, IANA does not assign prefixes from all over the address range.
The type of an IPv6 address can be identified by the initial bits of the address, as explained in
below table.
Table 13-2 IPv6 Address Type Identification
Address Type Binary Prefix IPv6 Notation
Unspecified 000…0 (128 bits) ::/128
Loopback 000…1 (128 bits) ::1/128
Multicast 1111 1111 FF00::/8
Link-Local Unicast 1111 1110 10 FE80::/10
Global Unicast (everything else) (everything else)
Anycast addresses are taken from the global unicast address space, and are not otherwise
distinguishable from unicast addresses.
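As another small preview of configuration covered later in this chapter, Cisco IOS lets you mark a unicast address as anycast with the anycast keyword on the ipv6 address command; the address below is an assumed example.
R1(config)#interface FastEthernet0/0
R1(config-if)#ipv6 address 2001:DB8:1:1::10/64 anycast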
The Unspecified Address
The address 0:0:0:0:0:0:0:0 is called the unspecified address and indicates the absence of an
address. It serves as a place holder and must never actually be assigned to a node. As an example
of its use, the Source Address field of an IPv6 packet sent by an initializing host before it has
learned its own address carries the unspecified address.
The Loopback Address
The unicast address 0:0:0:0:0:0:0:1 is known as the loopback address, and can be used by an
IPv6 host to send a packet to itself. The loopback address may be considered the IPv6 equivalent of
the IPv4 loopback address 127.0.0.1. The address is reserved for the special purpose of
addressing self and must not be actually assigned to an interface. You can think of the loopback
interface as a virtual interface to an imaginary link that goes nowhere. The loopback address is
treated as having a link-local scope and an IPv6 packet with a destination address of loopback
must never be sent outside of a single node.
Global Unicast Address Format
The general format of global unicast address is shown in Figure 13-2.
Figure 13-2 Global Unicast Address Format
Link-Local IPv6 Unicast Address Format
The format of link-local addresses that are for use on a single link is given in Figure 13-3.
Figure 13-3 Link-Local Address Format
Link-local addresses are meant to be used for addressing on a single link for purposes such as
automatic address configuration and neighbor discovery. Routers do not forward packets with
link-local source or destination addresses to other links, thus respecting and enforcing link-local
scope of these addresses.
Site-local addresses were originally designed to be used for addressing inside a site without the
need for a global routing prefix. They are now deprecated.
IPv6 Address Configuration
Having come this far in your CCNA studies, you should be well familiar with the concepts of
static and dynamic IPv4 addresses. Static IPv4 addresses are permanently assigned to devices
like servers and routers that need to have persistent addresses. Other devices like PCs can live
with IPv4 addresses assigned dynamically by DHCP (Dynamic Host Configuration Protocol).
IPv6 uses the same general scheme, with devices like servers and routers using pre-configured
IPv6 addresses, while user devices make use of dynamically learned IPv6 addresses. In this
section, we will configure different types of IPv6 addresses used by routers to participate in
different protocols.
Enterprise networks have been using IPv4 as the exclusive protocol for quite some time now. In
other words, TCP/IP has been the only protocol stack in use in company networks. IPv6 is the
new protocol that is supposed to replace IPv4 over time, requiring end-user hosts, servers,
routers, and all other networked devices to implement IPv6. You probably can understand that the
world cannot migrate all IPv4 devices to IPv6 in a week or month. The migration will rather be a
process that will occur gradually, and one that has already started. Most companies will gradually
migrate from IPv4 to IPv6 and the process may span years. In the meantime, most enterprise
networks will be a mix of IPv4 and IPv6 protocol stacks. Our guess is that the process will be
quite slow and you will still have to deal with IPv4 for the rest of your working life.
You will hear a lot about the dual-stack strategy for implementing IPv6 in enterprise networks. The
strategy offers a gradual migration path from IPv4 to IPv6, letting IPv4 and IPv6 coexist. The
routers are configured with IPv6 addresses on their interfaces, and they route IPv6 packets just
like they route IPv4 packets. The hosts can implement IPv6 when ready, running dual stack, that
is, running both IPv4 and IPv6.
IPv6 Static Address Configuration
There are two methods of configuring static IPv6 addresses on Cisco routers:
Configuring the full 128-bit address
Configuring a 64-bit prefix only, and letting the router derive the rest of address
You can use the ipv6 address address/prefix-length command to configure the full 128-bit global
unicast and unique local addresses. You can use the full 32-digit hex address as well as the
abbreviated address in the ipv6 address address/prefix-length command.
Figure 13-4 IPv6 Address Configuration
We are going to configure 128-bit IPv6 addresses on R1 and R2 and you will see the
configuration is quite simple.
R1:
ipv6 unicast-routing
!
interface FastEthernet0/0
ipv6 address 2001:0DB8:0001:0001:0000:0000:0000:0001/64
!
interface Serial0/0
ipv6 address 2001:0DB8:0001:0012:0000:0000:0000:0001/64
R2:
ipv6 unicast-routing
!
interface FastEthernet0/0
ipv6 address 2001:DB8:1:2::1/64
!
interface Serial0/0
ipv6 address 2001:DB8:1:12::2/64
We have used the unabbreviated address format in the configuration for R1 while using the
abbreviated address format on R2.
Key Concept You must use the ipv6 unicast-routing command to enable IPv6 routing on the
router.
We used an easily forgotten command while configuring IPv6 addresses on R1 and R2 and that
is ipv6 unicast-routing. We are used to configuring IPv4 addresses on routers and you don’t have
to enable IPv4 processing on routers as it is enabled by default. That’s not the case with IPv6 yet
and you must enable IPv6 routing using the single ipv6 unicast-routing command. If you configure
IPv6 addresses on router interfaces but leave out the ipv6 unicast-routing command, the router
can still be configured with interface IPv6 addresses, but it acts more like a host and cannot route
IPv6 packets.
You can use the show ipv6 interface brief or show ipv6 interface commands to verify IPv6
addresses configured on router interfaces. The router always displays IPv6 addresses in the
abbreviated format even if you had configured them in the unabbreviated format. The output
of show ipv6 interface brief command executed on R1 and R2 demonstrates the fact, as shown
below.
R1#show ipv6 interface brief
FastEthernet0/0 [up/up]
FE80::C000:18FF:FE28:0
2001:DB8:1:1::1
Serial0/0 [up/up]
FE80::C000:18FF:FE28:0
2001:DB8:1:12::1
FastEthernet0/1 [administratively down/down]
unassigned
Serial0/1 [administratively down/down]
unassigned
R2#show ipv6 interface brief
FastEthernet0/0 [up/up]
FE80::C001:18FF:FE28:0
2001:DB8:1:2::1
Serial0/0 [up/up]
FE80::C001:18FF:FE28:0
2001:DB8:1:12::2
FastEthernet0/1 [administratively down/down]
Serial0/1 [administratively down/down]
You can also use the show ipv6 interface command that provides more detailed information.
Key Concept The ipv6 address command gives the interface a unicast IPv6 address, defines the
IPv6 prefix for the interface, enables routing of IPv6 packets in/out that interface, and tells the
router to add a connected route for that prefix to the IPv6 routing table when the interface is
up/up.
In the IPv6 world, end-user devices use DHCP (Dynamic Host Configuration Protocol) or
SLAAC (Stateless Address Auto Configuration) to dynamically learn IPv6 addresses, while
routers use static IPv6 addresses. There is just one way to configure a static IPv4 address: the
complete address is hard-coded in the router configuration. IPv6 is a little different and
there are actually two options to configure static IPv6 addresses on router interfaces.
The first method that uses the ipv6 address command to define the entire 128-bit address has
already been discussed in this chapter. The second method uses the same ipv6 address command
to configure only the 64-bit IPv6 prefix for the interface letting the router automatically generate a
unique interface ID. This second method uses a mechanism called EUI-64 (extended unique
identifier). The configuration includes the eui-64 keyword to inform the router that it has to use
EUI-64 rules to create the interface ID portion of the IPv6 address. The EUI-64 rules are as follows:
Split the 12-hex-digit (6-byte/48-bit) MAC address into two halves of 6 hex digits each.
Insert hexadecimal FFFE in between the two halves making a total of 16 hex digits or 8
bytes/64 bits.
Invert the seventh bit of the first byte in the 64-bit string, reading from left to right.
The following graphic elaborates these concepts for a router interface that has C200.1B2C.0000
as its MAC address.
Figure 13-5 EUI-64 Interface ID Calculation
The final step in the EUI-64 process requires you to convert the first byte (two hex digits) from
hex to binary, invert the seventh bit, and convert the bits back to hex. Inverting the bit means
making it a 1 if it’s 0 and making it a 0 if it’s 1.
The best way to master these EUI-64 interface IDs is to calculate some yourself. You may find out
the burned-in address of a router interface using the show ipv6 interface command.
R1:
ipv6 unicast-routing
!
interface FastEthernet0/0
ipv6 address 2001:DB8:1:1::/64 eui-64
!
interface Serial0/0
ipv6 address 2001:DB8:1:12::/64 eui-64
You may list the EUI-64 address using the show ipv6 interface brief command and compare the
interface ID portion C000:1BFF:FE2C:0 calculated by the router to the one you calculated
yourself from the burned-in address (C200.1B2C.0000).
A router interface can also learn its IPv6 address dynamically, either from a DHCPv6 server using
the ipv6 address dhcp command or through stateless autoconfiguration using the ipv6 address
autoconfig command, as in the following example for R1:
R1:
interface FastEthernet0/0
ipv6 address dhcp
!
interface FastEthernet0/1
ipv6 address autoconfig
IPv6 Link-Local Address Configuration
IPv6 link-local addresses are a special kind of unicast address. They are not used for regular
user traffic flows; rather, they are used by overhead protocols and for routing. Each IPv6 host,
including routers, uses an additional unicast address called a link-local
address. The most important fact to remember about link-local addresses is that routers do not
forward packets that have a link-local address as its destination address. Many IPv6 protocols
function between directly connected routers and need to send messages on a single subnet only.
These IPv6 protocols, such as NDP (Neighbor Discovery Protocol), use link-local addresses.
IPv6 routers also use link-local addresses as the next-hop address in IPv6 routes. IPv6 hosts have
the concept of a default gateway (router) similar to IPv4, but hosts refer to the link-local address
of the gateway instead of the router address in the same subnet. The show ipv6 route command
lists the link-local address of the next hop router and not the global unicast or unique local unicast
address.
The following list summarizes important information about link-local addresses:
Unicast: Link-local addresses are unicast and packets sent to a link-local address reach a
single IPv6 host.
Forwarding Scope: Packets sent to a link-local address never leave the local data link as
routers never forward packets sent to a link-local address.
Automatic: These addresses are available for use even before hosts can dynamically
learn a global unicast address. Every interface on an IPv6 router automatically generates
its own link-local address.
Uses: IPv6 link-local addresses are used by several overhead protocols and as the next-hop
address of IPv6 routes.
IPv6 hosts and routers can autonomously calculate their own link-local addresses, for each
interface. There are two parts to a link-local address: a prefix and the interface ID. The first ten
bits of a link-local address, by definition, match FE80::/10, while the next 54 bits are binary
0. As a result, a link-local address always starts with FE80:0000:0000:0000, which covers
the first 64 bits of the address. The second half of a link-local address can be formed with
different rules depending on the platform. Cisco routers use EUI-64 for the interface ID part of link-
local addresses. Host operating systems have their own ways of generating interface IDs. For
example, Microsoft Windows variants use a random process to choose the interface ID and
change it over time as well.
The Cisco IOS Software automatically configures a link-local address for any interface that has at
least one unicast address configured using the ipv6 address command. So, no separate
configuration is needed for link-local addresses. The usual show ipv6
interface and show ipv6 interface brief commands can be used to display link-local addresses as well.
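Although link-local addresses are generated automatically, Cisco IOS also lets you override the automatically generated value with the link-local keyword of the ipv6 address command, which can make addresses easier to recognize in show output and in routing tables. The address FE80::1 below is simply an assumed example.
R1(config)#interface Serial0/0
R1(config-if)#ipv6 address FE80::1 link-local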
R1 Configuration:
ipv6 unicast-routing
!
ipv6 router ospf 1
router-id 1.1.1.1
exit
!
interface Serial 0/0
ipv6 address 2001:DB8:1:12::1/64
ipv6 ospf 1 area 0
no shutdown
!
interface Serial0/1
ipv6 address 2001:DB8:1:13::1/64
ipv6 ospf 1 area 0
no shutdown
R2 Configuration:
ipv6 unicast-routing
!
ipv6 router ospf 1
router-id 2.2.2.2
!
interface Serial 0/0
ipv6 address 2001:DB8:1:12::2/64
ipv6 ospf 1 area 0
no shutdown
!
interface FastEthernet0/0
ipv6 address 2001:DB8:1:23::2/64
ipv6 ospf 1 area 0
no shutdown
R3 Configuration:
ipv6 unicast-routing
!
ipv6 router ospf 1
router-id 3.3.3.3
!
interface Serial 0/0
ipv6 address 2001:DB8:1:13::3/64
ipv6 ospf 1 area 0
no shutdown
!
interface FastEthernet0/0
ipv6 address 2001:DB8:1:23::3/64
ipv6 ospf 1 area 0
no shutdown
The ipv6 router ospf process-id command creates the OSPFv3 process and gives it a number
called the process ID. We have used the same process ID on all three routers, but please keep in mind
that the OSPFv3 process ID is only locally significant. We could have used different process IDs on the
routers without any problems. In practice, most enterprises would use the same process ID on all
routers for the sake of consistency.
The ipv6 ospf process-id area area-id command enables OSPFv3 on individual interfaces and also assigns
the area number. In this case, we are configuring a single-area OSPFv3 domain so all interfaces
in all routers are placed in area 0, the backbone area.
There is one completely optional feature that is relevant to your CCNA version 2 exams: OSPFv3
passive interface. This feature is quite similar in concepts and configuration to its OSPFv2
counterpart. If you want a router not to form OSPFv3 neighbor relationships on an interface, that
interface may be made passive. In this case, we do not want any interface to be passive because we want
each router to form two OSPFv3 neighbor relationships, so there is no passive-interface
configuration.
Finally, you need to configure OSPFv3 RID (router ID) to identify the router in the OSPFv3
routing domain. In this case, we set the OSPFv3 router ID on all three routers using the router-
id command in router configuration mode.
OSPFv3 Multi-Area Configuration
We will now build on the single-area configuration to create a multi-area OSPFv3 configuration
as shown in Figure 13-7. The router R1 is the ABR (area border router), with OSPFv3 process
ID 1, and OSPFv3 enabled on four interfaces:
Area 0: Serial0/0 and Serial0/1
Area 1: FastEthernet0/0
Area 14: FastEthernet0/1
There is nothing in the configuration of R1 that explicitly sets it as an ABR. Configuring
interfaces of R1 in different OSPFv3 areas automatically makes R1 an ABR. We will also
configure FastEthernet0/0 of R1 as passive because there are no OSPFv3 routers off this interface
and R1 will not form any neighbor relationships on this interface anyway.
Figure 13-7 OSPFv3 Multi-Area Domain
We add two OSPFv3 areas and a single OSPFv3 router R4 to the single-area scenario presented
in the last section, as shown in Figure 13-7. You have to perform additional configuration on
FastEthernet0/0 and FastEthernet0/1 of R1. You also have to configure R4 from scratch, assigning
OSPF areas to interfaces as shown in Figure 13-7. We will configure FastEthernet0/1 of R4 as
passive as there are no OSPFv3 neighbors off this interface.
R1 Configuration:
interface FastEthernet0/0
ipv6 address 2001:DB8:1:1::1/64
ipv6 ospf 1 area 1
no shutdown
!
interface FastEthernet0/1
ipv6 address 2001:DB8:1:14::1/64
ipv6 ospf 1 area 14
no shutdown
!
ipv6 router ospf 1
passive-interface FastEthernet0/0
R4 Configuration:
ipv6 unicast-routing
!
ipv6 router ospf 1
router-id 4.4.4.4
passive-interface FastEthernet0/1
!
interface FastEthernet0/0
ipv6 address 2001:DB8:1:14::4/64
ipv6 ospf 1 area 14
no shutdown
!
interface FastEthernet0/1
ipv6 address 2001:DB8:1:4::4/64
ipv6 ospf 1 area 14
no shutdown
Additional OSPFv3 Configuration
The core OSPFv3 configuration is complete by now. However, we will configure some
additional OSPFv3 features that are very similar to corresponding OSPF features for IPv4.
OSPFv3 Interface Cost
OSPFv3 is very much like OSPFv2 when it comes to calculation of route metric, with minor
differences in concepts, configuration commands, and verification commands.
The SPF (shortest path first) algorithm on a router finds all possible routes to a subnet and then
calculates the cost of each route by adding the OSPF interface cost for all outgoing interfaces on
the path from the router to the subnet. You can influence OSPFv3 route selection using methods
that are similar to the corresponding rules for OSPFv2 (a brief configuration sketch follows this list):
1. Set the interface cost explicitly using the ipv6 ospf cost x command in interface
configuration mode. The permissible values of interface cost are between 1 and 65,535,
both inclusive.
2. Change the nominal interface bandwidth in kbps (kilo bits per second) using
the bandwidth speed command, and let the router calculate the OSPFv3 interface cost by
the formula reference-bandwidth / interface-bandwidth.
3. Change the reference bandwidth in Mbps (mega bits per second) for the interface cost
calculation formula, using the auto-cost reference-bandwidth ref-bw command.
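The following brief sketch illustrates all three options; the cost of 100, the bandwidth of 100000 kbps, and the reference bandwidth of 10000 Mbps are arbitrary example values, not recommendations.
R1(config)#interface Serial0/0
R1(config-if)#ipv6 ospf cost 100
R1(config-if)#exit
R1(config)#interface FastEthernet0/0
R1(config-if)#bandwidth 100000
R1(config-if)#exit
R1(config)#ipv6 router ospf 1
R1(config-rtr)#auto-cost reference-bandwidth 10000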
OSPFv3 Load Balancing
The OSPFv3 load balancing concept is again similar to the corresponding OSPFv2 concept.
Also, the exact same command is used to make equal-cost load balancing happen. When an
OSPFv3 router has multiple routes to reach one subnet, each with the same metric, the router can
put multiple equal-cost routes in the routing table. The maximum-paths number command is used
in router configuration mode to define just how many such routes can be added by OSPFv3 to the
IPv6 routing table. For example, if a network has six equal-cost routes, and you want all routes to
be used, you should configure the router with the maximum-paths 6 subcommand under the ipv6
router ospf command.
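As a minimal sketch, assuming six equal-cost routes as in the example just mentioned:
R1(config)#ipv6 router ospf 1
R1(config-rtr)#maximum-paths 6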
Injecting Default Routes
OSPFv3 can advertise a default route and the feature again works much like OSPFv2. This
feature allows an OSPFv3 router to have a default route and then tell all other routers to use that
default route.
If a company has a single IPv6-enabled Internet connection, it can use a default IPv6 route to send
all Internet-bound IPv6 traffic out that one link. All the internal routers need to forward such traffic to the
single Internet-facing router:
All routers learn specific routes to subnets inside the company network, so the default
route is not used for destinations inside the company.
The router facing the Internet has a static default IPv6 route that points all IPv6 traffic not
matching any other specific route to the Internet.
All routers learn the default route from the Internet-facing router over OSPFv3 and send
all IPv6 packets not matching specific routes to the Internet-facing router that, in turn,
sends them to the Internet.
The default-information originate command is used in OSPFv3 configuration mode to originate
a default route from the router facing the Internet. The IPv6 default route is represented as ::/0,
with a prefix length of 0, and it is analogous to 0.0.0.0/0, the default route used with IPv4.
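A minimal sketch of the Internet-facing router follows; the outgoing interface Serial0/1 and the next-hop address 2001:DB8:FFFF::1 are assumed values used only for illustration.
R1(config)#ipv6 route ::/0 Serial0/1 2001:DB8:FFFF::1
R1(config)#ipv6 router ospf 1
R1(config-rtr)#default-information originate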
This completes our discussion of OSPFv3 configuration. You may have noticed that we frequently
referred to OSPFv2 when introducing OSPFv3 concepts. The reason was that OSPFv3 is so
similar to OSPFv2 with certain differences in some areas. It is probably the right time to recap
the similarities and differences between OSPFv3 and OSPFv2.
OSPFv3 works much like OSPFv2 with regard to:
Design of areas and related terminology.
The general idea of enabling the OSPF process and assigning areas to individual router
interfaces.
The neighbor discovery process making use of Hello messages
Transitioning through neighbor states and exchange of topology information through LSAs
(Link State Advertisements)
The role of full and 2-way states as the normal stable states for functional neighbor
relationships, with other states being either temporary or pointing to some problem.
The ideas of Type 1, 2, and 3 LSAs and the LSDB (link-state database).
The formula used by SPF (shortest-path first) algorithm to calculate interface cost.
OSPF messages sent to reserved multicast addresses (FF02::5 for all OSPFv3 routers and
FF02::6 for all DR/BDR routers) similar to the use of 224.0.0.5 and 224.0.0.6 with
OSPFv2.
You understand now how similar the two protocols are, but still they are not exactly the same.
There are many differences as presented in the following list, but the good news for you as a
CCNA candidate is that most of the differences are outside the scope of this book and your CCNA
exams:
The name of the Type 3 LSA (called a summary LSA in OSPFv2 but an inter-area prefix LSA in OSPFv3).
The OSPFv3 neighbors do not have to have their IPv6 addresses in the same IPv6 subnet,
while OSPFv2 neighbors must have their IP addresses in the same subnet to form neighbor
relationships.
There are some new LSA types used by OSPFv3 only and not by OSPFv2, but these are
beyond scope as we mentioned.
The details inside LSA types 1, 2, and 3 are also different for OSPFv3 and OSPFv2 but
these are again outside scope here.
So, as you can see, there are more similarities than differences, and the few differences that exist
are mostly out of scope for your CCNA exams. You can therefore reuse most of your OSPFv2
concepts for OSPFv3.
In the next section, we will cover OSPFv3 verification and troubleshooting with associated
concepts.
OSPFv3 Verification and Troubleshooting
The OSPFv3 show commands used for verification and troubleshooting are also very similar to the
OSPFv2 commands, with mostly just the ip keyword being replaced by ipv6. When the OSPFv3 process
first comes up on a router, the IOS reads the OSPFv3 configuration and then enables OSPFv3 on
interfaces. So, we will start our OSPF verification and troubleshooting by examining OSPFv3
interfaces. If the interfaces look good, you can move on to OSPFv3 neighbors, and then to the
OSPFv3 topology database, and finally to OSPFv3 routes added to the IPv6 routing table.
For verification and troubleshooting examples, we will use the OSPFv3 multi-area topology
presented and configured earlier in this chapter.
OSPFv3 Interfaces
The ipv6 ospf process-id area area-id command is used in interface configuration mode to make
OSPFv3 run on an interface. You can quickly scan the output of show running-config to identify
the OSPFv3 interfaces as well as the area number for each.
You can be much more effective at verification and troubleshooting by using the
respective show commands than by trying to read the configuration. And that’s not the only reason to
favor specific show commands over the show running-config command. Many simlet questions
on the CCNA exams do not even let you into the enable mode of the router, so you just cannot use
the show running-config command to see the configuration. So, if you think your good
configuration skills alone can help you verify and troubleshoot OSPFv3 for CCNA, think again.
There are three show commands that can provide you useful information about interfaces enabled
for OSPFv3:
show ipv6 protocols
show ipv6 ospf interface brief
show ipv6 ospf interface
All three commands list the interfaces, both non-passive and passive, on which OSPFv3 has been
enabled, but with different levels of detail. In our multi-area scenario, FastEthernet0/0 of R1 is
passive; you can make any interface passive by using the passive-interface command in OSPFv3
router configuration mode.
R1 and R4 are the only two routers in area 14 and each generates one Type 1 LSA. There is a
single subnet (between R1 and R4) in area 14 that has a DR, so a single Type 2 LSA is generated
by the DR, that is, R4 for that subnet. Finally, the router R1, being the ABR, generates four Type 3
LSAs into area 14 to represent the four subnets (prefixes) in areas 0 and 1.
You can view these LSAs using the show ipv6 ospf database command.
EIGRP for IPv6
R1 Configuration:
ipv6 unicast-routing
!
ipv6 router eigrp 1
router-id 1.1.1.1
no shutdown
!
interface Serial 0/0
ipv6 address 2001:DB8:1:12::1/64
ipv6 eigrp 1
no shutdown
!
interface Serial0/1
ipv6 address 2001:DB8:1:13::1/64
ipv6 eigrp 1
no shutdown
!
interface FastEthernet0/0
ipv6 address 2001:DB8:1:1::1/64
ipv6 eigrp 1
no shutdown
!
interface FastEthernet0/1
ipv6 address 2001:DB8:1:14::1/64
ipv6 eigrp 1
no shutdown
R2 Configuration:
ipv6 unicast-routing
!
ipv6 router eigrp 1
router-id 2.2.2.2
no shutdown
!
interface Serial 0/0
ipv6 address 2001:DB8:1:12::2/64
ipv6 eigrp 1
no shutdown
!
interface FastEthernet0/0
ipv6 address 2001:DB8:1:23::2/64
ipv6 eigrp 1
no shutdown
R3 Configuration:
ipv6 unicast-routing
!
ipv6 router eigrp 1
router-id 3.3.3.3
no shutdown
!
interface Serial 0/0
ipv6 address 2001:DB8:1:13::3/64
ipv6 eigrp 1
no shutdown
!
interface FastEthernet0/0
ipv6 address 2001:DB8:1:23::3/64
ipv6 eigrp 1
no shutdown
R4 Configuration:
ipv6 unicast-routing
!
ipv6 router eigrp 1
router-id 4.4.4.4
no shutdown
!
interface FastEthernet0/0
ipv6 address 2001:DB8:1:14::4/64
ipv6 eigrp 1
no shutdown
!
interface FastEthernet0/1
ipv6 address 2001:DB8:1:4::4/64
ipv6 eigrp 1
no shutdown
You should carefully review the configuration for each router shown above. The number used in
the ipv6 router eigrp asn command is the ASN (autonomous system number) and not the process
ID as used by OSPFv3. The ASN must match for all the routers in the routing domain or
autonomous system (AS). We have used the number 1 for the ASN on all routers. After this, all
routers must explicitly set the RID (router ID) using the eigrp router-id command in router
configuration mode. EIGRPv6 also uses a 32-bit RID like OSPFv3, with identical rules for how
the router picks the value. The rest of the configuration simply enables EIGRPv6 on all interfaces
using the ipv6 eigrp asn command, which associates each interface with the ASN.
Please keep in mind that if you do not use the same ASN on two EIGRP routers, they can never
become neighbors. The Cisco IOS allows the EIGRPv6 routing process to be disabled and
enabled using the shutdown and no shutdown commands in router configuration mode. The
default status of the EIGRPv6 routing process when it’s created may depend on the IOS version
you are using. The newer IOS versions usually have the EIGRPv6 routing process enabled by
default. But we have included the explicit no shutdown command in router configuration mode to
make sure EIGRPv6 works even if you happen to use an older IOS version to practice the
example.
Table 13-6 Comparison of EIGRPv4 and EIGRPv6 Configuration Commands
Function (configuration mode): EIGRPv4 command / EIGRPv6 command
Create routing process and assign ASN (global configuration mode): router eigrp asn / ipv6 router eigrp asn
Define router ID explicitly (router configuration mode): eigrp router-id rid / eigrp router-id rid
Change number of multiple routes to the same subnet (router configuration mode): maximum-paths num / maximum-paths num
Set the variance (router configuration mode): variance multiplier / variance multiplier
Set interface bandwidth and delay to influence metric calculation (interface configuration mode): bandwidth value and delay value / bandwidth value and delay value
Change hello and hold timers (interface configuration mode): ip hello-interval eigrp asn time and ip hold-time eigrp asn time / ipv6 hello-interval eigrp asn time and ipv6 hold-time eigrp asn time
Enable EIGRP on an interface: network ip [wildcard-mask] (router configuration mode) / ipv6 eigrp asn (interface configuration mode)
Additional EIGRPv6 Configuration
We have covered the core configuration of EIGRPv6 so far. In this section, we are going to cover
several additional configuration options for EIGRPv6.
Bandwidth and Delay to Influence EIGRPv6 Metric
EIGRPv6 uses the exact same parameters, specifically the interface bandwidth and delay, which
are used by EIGRPv4 to calculate the metric for each route. The IOS configuration commands to
set those parameters, specifically the bandwidth and delay commands used in interface
configuration mode, are also the same for EIGRPv4 and EIGRPv6. The similarities do not end
here and the exact same formula is used by EIGRPv4 and EIGRPv6 to calculate the metric for a
route.
Let us consider a design in which all the routers are dual-stack, running both IPv4 and IPv6, with
EIGRPv4 and EIGRPv6 enabled on all interfaces. In some conditions, the EIGRPv4 metric for a
route to an IPv4 subnet will be the same as the EIGRPv6 metric from the same router to an IPv6
subnet in the same location.
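As a quick sketch, the following interface subcommands would influence the EIGRPv6 (and EIGRPv4) metric; the bandwidth of 1544 kbps and the delay of 2000 tens of microseconds are arbitrary example values.
R1(config)#interface Serial0/0
R1(config-if)#bandwidth 1544
R1(config-if)#delay 2000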
EIGRP Load Balancing
EIGRPv6 and EIGRPv4 use the same concepts and configuration commands for equal-cost and
unequal-cost load balancing. The configuration settings are made with the maximum-
paths and variance commands in EIGRPv6 router configuration mode, reached with the ipv6
router eigrp command. EIGRPv4 also uses the same commands in EIGRPv4 router configuration
mode, reached with the router eigrp command. Please keep in mind that these settings have to be
configured separately for EIGRPv4 and EIGRPv6 on dual-stack routers despite the fact that same
configuration commands are used.
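A minimal EIGRPv6 sketch, assuming ASN 1 as used in our examples; the maximum-paths and variance values are arbitrary.
R1(config)#ipv6 router eigrp 1
R1(config-rtr)#maximum-paths 4
R1(config-rtr)#variance 2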
EIGRP Timers
EIGRPv6 uses the same concepts for the Hello and Hold timers as does EIGRPv4. The
commands used to set these parameters are entered in interface configuration mode. In order to
set these parameters separately for EIGRPv6 and EIGRPv4, Cisco IOS uses the ipv6 keyword
for the EIGRPv6 commands and the ip keyword for the EIGRPv4 commands.
R1#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
R1(config)#interface FastEthernet 0/0
R1(config-if)#ipv6 hello-interval eigrp 1 3
R1(config-if)#ip hello-interval eigrp 2 5
R1(config-if)#end
R1#
In the above example, we have set the Hello timer to 3 and 5 seconds for EIGRPv6 and
EIGRPv4 respectively. Please note that these values are arbitrary; in a real network you would
likely use the same values for both EIGRPv4 and EIGRPv6.
EIGRPv6 Verification and Troubleshooting
We have talked a lot about similarities between EIGRPv6 and EIGRPv4 concepts and
configuration commands. In this section, we look at EIGRPv6 verification and troubleshooting,
discovering even more similarities between EIGRPv6 and its predecessor EIGRPv4.
There are a lot more similarities than differences, so it makes sense to list the few differences that
exist between the two protocols. The list is pretty short:
EIGRPv6 advertises IPv6 prefixes, while EIGRPv4 not surprisingly advertises IPv4
subnets.
EIGRPv6 show commands for verification and troubleshooting use the ipv6 keyword as
compared with the ip keyword used by EIGRPv4 show commands.
EIGRPv6 routers use the same checks for deciding whether to become neighbors, except
that EIGRPv6 routers may become neighbors even if they are in different subnets. You may
recall that EIGRPv4 neighbors must be in the same IPv4 subnet to become neighbors.
In this section on EIGRPv6 verification and troubleshooting, we follow the same sequence that is
used by EIGRPv6 itself when bringing up the EIGRPv6 routing process. We will examine
EIGRPv6 interfaces, neighbors, topology and finally routes installed by EIGRPv6 in the IPv6
routing table.
The verification and troubleshooting examples in this section will all use the topology shown
earlier, composed of routers R1, R2, R3 and R4.
EIGRPv6 Interfaces
When you enable EIGRPv6 on an interface using the ipv6 eigrp asn command, the router starts
discovering neighbors off that interface. The first step in EIGRPv6 verification is to
make sure it is enabled on all the right interfaces. One of the most common problems associated
with EIGRPv6 probably is that it is not enabled on an interface.
Before we jump into the specifics of the relevant show commands here, let’s first make FastEthernet
0/0 of R1 a passive interface.
R1#configure terminal
R1(config)#ipv6 router eigrp 1
R1(config-rtr)#passive-interface FastEthernet 0/0
R1(config-rtr)#end
R1#
You may use the show ipv6 eigrp interfaces command to verify if EIGRPv6 has been enabled on
correct interfaces:
R1#show ipv6 eigrp interfaces
IPv6-EIGRP interfaces for process 1
Xmit Queue Mean Pacing Time Multicast Pending
Interface Peers Un/Reliable SRTT Un/Reliable Flow Timer Routes
Se0/0 1 0/0 24 0/15 99 0
Fa0/1 1 0/0 40 0/2 216 0
Se0/1 1 0/0 25 0/15 103 0
The show ipv6 eigrp interfaces command shows three interfaces that have EIGRPv6 enabled.
But it does not list FastEthernet 0/0, despite the fact that it has EIGRPv6 enabled, because it is
configured as a passive interface. You can use the show ipv6 protocols command to list all
EIGRPv6-enabled interfaces including passive interfaces.
R4#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
R4(config)#interface FastEthernet0/1
R4(config-if)#no ipv6 eigrp 1
R4(config-if)#end
R4#
You may display the IPv6 routing table of R1 again, and the subnet corresponding to
FastEthernet0/1 of R4 will no longer be listed.
You have to check all these conditions if EIGRPv6 routers are not able to become neighbors.
The show ipv6 protocols commands lists a lot of useful information for troubleshooting
EIGRPv6, including ASN (autonomous system number), K-values, and EIGRPv6 interfaces.
Because the virtual router uses the IP address of the physical Ethernet interface of R1, R1
assumes the role of virtual router master. The virtual router master is also known as the IP
address owner. There can be multiple virtual router backups; in the figure above, routers
R2 and R3 are the virtual router backups. If the virtual router master fails, the virtual router backup
configured with the highest priority will become the virtual router master. As a result, client hosts
on the LAN receive uninterrupted connectivity through their default gateway (192.168.1.1).
R1:
R1#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
R1(config)#interface FastEthernet0/0
R1(config-if)#ip address 192.168.1.1 255.255.255.0
R1(config-if)#vrrp 10 ip 192.168.1.1
R1(config-if)#
*Mar 1 00:29:06.095: %VRRP-6-STATECHANGE: Fa0/0 Grp 10 state Init -> Master
R1(config-if)#end
R1#
R2:
R2#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
R2(config)#interface FastEthernet0/0
R2(config-if)#ip address 192.168.1.2 255.255.255.0
R2(config-if)#vrrp 10 priority 110
R2(config-if)#vrrp 10 ip 192.168.1.1
R2(config-if)#end
R2#
*Mar 1 00:32:02.859: %VRRP-6-STATECHANGE: Fa0/0 Grp 10 state Init -> Backup
R2#
R3:
R3#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
R3(config)#interface FastEthernet0/0
R3(config-if)#ip address 192.168.1.3 255.255.255.0
R3(config-if)#vrrp 10 priority 100
R3(config-if)#vrrp 10 ip 192.168.1.1
R3(config-if)#end
R3#
*Mar 1 00:33:54.715: %VRRP-6-STATECHANGE: Fa0/0 Grp 10 state Init -> Backup
We can verify VRRP configuration using the show vrrp command.
R1#show vrrp
FastEthernet0/0 – Group 10
State is Master
Virtual IP address is 192.168.1.1
Virtual MAC address is 0000.5e00.010a
Advertisement interval is 1.000 sec
Preemption enabled
Priority is 255
Master Router is 192.168.1.1 (local), priority is 255
Master Advertisement interval is 1.000 sec
Master Down interval is 3.003 sec
You can see from above output that the priority of R1 is 255 and it is the master. As a matter of
fact, we never explicitly changed the priority on R1 from the default of 100. The highest priority
(255) assignment to R1 is a consequence of using the physical IP address of R1 as the virtual
group IP address.
The output of show vrrp on R2 below shows that it is a virtual router backup having priority 110.
R2#show vrrp
FastEthernet0/0 – Group 10
State is Backup
Virtual IP address is 192.168.1.1
Virtual MAC address is 0000.5e00.010a
Advertisement interval is 1.000 sec
Preemption enabled
Priority is 110
Master Router is 192.168.1.1, priority is 255
Master Advertisement interval is 1.000 sec
Master Down interval is 3.570 sec (expires in 2.806 sec)
The output of show vrrp on R3 below indicates that it is also a backup, with a priority of 100. The default VRRP priority is 100 anyway; we configured it explicitly just for the sake of demonstration.
R3#show vrrp
FastEthernet0/0 – Group 10
State is Backup
Virtual IP address is 192.168.1.1
Virtual MAC address is 0000.5e00.010a
Advertisement interval is 1.000 sec
Preemption enabled
Priority is 100
Master Router is 192.168.1.1, priority is 255
Master Advertisement interval is 1.000 sec
Master Down interval is 3.609 sec (expires in 2.633 sec)
If router R1 becomes unavailable, the backup with the higher priority, R2, should assume the role of master. Let's simulate the failure of R1 by manually shutting down its FastEthernet0/0 interface.
R1(config)#interface FastEthernet0/0
R1(config-if)#shutdown
R1(config-if)#end
As a result, R2 becomes the master while R3 stays a backup, as indicated by the output of the show vrrp command on R2 and R3.
R2#show vrrp
FastEthernet0/0 – Group 10
State is Master
Virtual IP address is 192.168.1.1
Virtual MAC address is 0000.5e00.010a
Advertisement interval is 1.000 sec
Preemption enabled
Priority is 110
Master Router is 192.168.1.2 (local), priority is 110
Master Advertisement interval is 1.000 sec
Master Down interval is 3.570 sec
R3#show vrrp
FastEthernet0/0 – Group 10
State is Backup
Virtual IP address is 192.168.1.1
Virtual MAC address is 0000.5e00.010a
Advertisement interval is 1.000 sec
Preemption enabled
Priority is 100
Master Router is 192.168.1.2, priority is 110
Master Advertisement interval is 1.000 sec
Master Down interval is 3.609 sec (expires in 3.165 sec)
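Because preemption is enabled, as shown in the show vrrp output above, R1 should reclaim the master role as soon as its interface comes back up. A quick sketch of the recovery step:
R1(config)#interface FastEthernet0/0
R1(config-if)#no shutdown
R1(config-if)#end
Once the interface is up again, show vrrp on R1 should once more report it as the master with priority 255, and R2 should go back to the backup state.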
Hot Standby Router Protocol
HSRP (Hot Standby Router Protocol) is a Cisco proprietary FHRP (first-hop redundancy
protocol) that is available in two versions. The newer version 2 improves upon version 1 and is
now the preferred choice. These two versions of HSRP are not compatible with each other.
HSRP Operation
Two or more routers on a LAN segment form an HSRP group, also known as a standby group. One router in the group assumes the role of the active router and handles all requests from clients. The other router or routers become standby and take over if the active router fails. The multicast address 224.0.0.102 is used to send HSRP version 2 hello messages. These messages communicate HSRP parameters to other members of the group and also serve as a keepalive.
The limitation of HSRP is that only one router is active at a time. The other routers in the standby group just sit there watching the show until the active router fails. This is not very efficient: if you have redundant uplinks connected to the standby routers, the additional bandwidth provided by those uplinks goes unused.
HSRP Configuration
The figure below shows a basic HSRP topology with two routers forming an HSRP or standby group. The router R1 is configured with a priority of 110, which is higher than the default priority of 100. The router R2 is configured with the default priority of 100. The Ethernet interfaces of R1 and R2 are configured with IP addresses 192.168.1.1 and 192.168.1.2, respectively. The IP address assigned to HSRP group 10 is 192.168.1.10, which is configured on both group members using the standby ip command.
Figure 14-2 HSRP Topology
R1:
R1#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
R1(config)#interface FastEthernet0/0
R1(config-if)#ip address 192.168.1.1 255.255.255.0
R1(config-if)#standby version 2
R1(config-if)#standby 10 preempt
R1(config-if)#standby 10 priority 110
R1(config-if)#standby 10 ip 192.168.1.10
R1(config-if)#end
R1#
R2:
R2#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
R2(config)#interface FastEthernet0/0
R2(config-if)#ip address 192.168.1.2 255.255.255.0
R2(config-if)#standby version 2
R2(config-if)#standby 10 preempt
R2(config-if)#standby 10 priority 100
R2(config-if)#standby 10 ip 192.168.1.10
R2(config-if)#end
R2#
It is time for verification using the show standby command. You can see from the output for R1
below that it is the active router.
R1#show standby
FastEthernet0/0 – Group 10 (version 2)
State is Active
5 state changes, last state change 00:08:23
Virtual IP address is 192.168.1.10
Active virtual MAC address is 0000.0c9f.f00a
Local virtual MAC address is 0000.0c9f.f00a (v2 default)
Hello time 3 sec, hold time 10 sec
Next hello sent in 0.948 secs
Preemption enabled
Active router is local
Standby router is 192.168.1.2, priority 100 (expires in 9.412 sec)
Priority 110 (configured 110)
Group name is “hsrp-Fa0/0-10” (default)
The output of show standby command on R2 below indicates that it is the standby router.
R2#show standby
FastEthernet0/0 – Group 10 (version 2)
State is Standby
7 state changes, last state change 00:00:12
Virtual IP address is 192.168.1.10
Active virtual MAC address is 0000.0c9f.f00a
Local virtual MAC address is 0000.0c9f.f00a (v2 default)
Hello time 3 sec, hold time 10 sec
Next hello sent in 2.756 secs
Preemption enabled
Active router is 192.168.1.1, priority 110 (expires in 8.760 sec)
MAC address is c200.09ac.0000
Standby router is local
Priority 100 (default 100)
Group name is “hsrp-Fa0/0-10” (default)
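As with VRRP, you can test HSRP failover by shutting down the LAN interface of the active router. The following is a quick sketch based on the topology configured above:
R1(config)#interface FastEthernet0/0
R1(config-if)#shutdown
R1(config-if)#end
R2 should then transition to the active state, which you can confirm with show standby on R2. When FastEthernet0/0 on R1 is brought back up with no shutdown, R1 should take over again because standby 10 preempt is configured and R1 has the higher priority of 110.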
Gateway Load Balancing Protocol
GLBP (Gateway Load Balancing Protocol) prevents a single point of failure, like HSRP and
VRRP, but also allows load-sharing among a group of redundant routers. Multiple first-hop
routers on the LAN form a group to offer a single virtual router, also sharing the IP packet
forwarding load.
HSRP and VRRP also allow multiple routers to form a virtual router group with a virtual IP address, but only one member of the group is elected as the active router that forwards packets sent to the virtual IP address of the group. The other routers in the group stay idle until the active router fails. In other words, the bandwidth of the standby routers is not utilized and goes to waste. It is possible to configure multiple virtual router groups to achieve load balancing with HSRP and VRRP, but this requires configuring different default gateways on different hosts, which is an extra administrative burden, as sketched below.
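For comparison, here is a rough sketch of that multiple-group approach using HSRP, assuming two standby groups 10 and 20 with virtual IP addresses 192.168.1.10 and 192.168.1.20 (illustrative values only). R1 is made active for group 10 and R2 for group 20, and half of the hosts would have to be configured with each virtual IP address as their default gateway:
R1(config)#interface FastEthernet0/0
R1(config-if)#standby 10 priority 110
R1(config-if)#standby 10 preempt
R1(config-if)#standby 10 ip 192.168.1.10
R1(config-if)#standby 20 ip 192.168.1.20
R2(config)#interface FastEthernet0/0
R2(config-if)#standby 10 ip 192.168.1.10
R2(config-if)#standby 20 priority 110
R2(config-if)#standby 20 preempt
R2(config-if)#standby 20 ip 192.168.1.20
The load is shared only as long as the hosts are split between the two default gateways, which is exactly the administrative burden GLBP avoids.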
The advantage of GLBP is that it provides load balancing in addition to redundancy without
requiring configuration of different default gateways on different clients.
GLBP Operation
The routers participating in GLBP communicate with each other through hello messages sent
every 3 seconds to the multicast address 224.0.0.102, UDP port 3222 (both source and
destination). GLBP supports up to 1024 GLBP groups on each physical interface, and up to four
active virtual forwarders per group.
Routers participating in GLBP form a group and elect one router as the AVG (active virtual
gateway) for that group. Other members of the group provide backup for the AVG if it goes down.
The AVG controls all members of the group by assigning a virtual MAC address to each member.
Each router takes responsibility for forwarding packets sent to the virtual MAC address assigned to it by the AVG, and is called the AVF (active virtual forwarder) for that virtual MAC address. The AVG also responds to ARP (Address Resolution Protocol) requests for the
virtual IP address. This is the key to GLBP operation as load balancing is actually achieved by
the AVG replying to ARP requests from different hosts with different virtual MAC addresses.
When a client sends an ARP message for the IP address of its default gateway, the AVG responds
with the virtual MAC address of one of the AVFs. When another client sends an ARP message for
default gateway address resolution, the AVG returns the virtual MAC address of the next AVF. So
each client gets a different virtual MAC address for the same virtual IP address of the default
gateway. As a result, different clients send their traffic to different routers even though they are all configured with the same default gateway.
GLBP Configuration
The figure below shows a basic GLBP topology with R1 and R2 forming a GLBP group. The
router R1 is the AVG for the GLBP group and is responsible for the virtual IP address
192.168.1.10. Router R1 is also the AVF for the virtual MAC address 0007.b400.0a01. Router
R2 is a member of the same GLBP group and is the designated AVF for the virtual MAC address
0007.b400.0a02. Client 1 has a default gateway of 192.168.1.10 and a gateway MAC address of
0007.b400.0a01. Client 2 has the same default gateway 192.168.1.10 but receives the gateway
MAC address 0007.b400.0a02 because router R2 is sharing the traffic load with R1.
Figure 14-3 GLBP Topology
R1:
R1#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
R1(config)#interface FastEthernet0/0
R1(config-if)#ip address 192.168.1.1 255.255.255.0
R1(config-if)#glbp 10 ip 192.168.1.10
R1(config-if)#end
R1#
R2:
R2#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
R2(config)#interface FastEthernet0/0
R2(config-if)#ip address 192.168.1.2 255.255.255.0
R2(config-if)#glbp 10 ip
R2(config-if)#end
R2#
Notice that the virtual IP address is omitted from the glbp 10 ip command on R2; R2 learns it from the AVG, which is why the R2 output below lists the virtual IP address as learnt. You can verify the GLBP configuration and find out which role each router is playing using the show glbp command.
R1#show glbp
FastEthernet0/0 – Group 10
State is Active
2 state changes, last state change 00:07:32
Virtual IP address is 192.168.1.10
Hello time 3 sec, hold time 10 sec
Next hello sent in 0.488 secs
Redirect time 600 sec, forwarder timeout 14400 sec
Preemption disabled
Active is local
Standby is 192.168.1.2, priority 100 (expires in 9.888 sec)
Priority 100 (default)
Weighting 100 (default 100), thresholds: lower 1, upper 100
Load balancing: round-robin
Group members:
c200.140c.0000 (192.168.1.1) local
c201.140c.0000 (192.168.1.2)
There are 2 forwarders (1 active)
Forwarder 1
State is Active
1 state change, last state change 00:07:22
MAC address is 0007.b400.0a01 (default)
Owner ID is c200.140c.0000
Redirection enabled
Preemption enabled, min delay 30 sec
Active is local, weighting 100
Forwarder 2
State is Listen
2 state changes, last state change 00:00:10
MAC address is 0007.b400.0a02 (learnt)
Owner ID is c201.140c.0000
Redirection enabled, 598.188 sec remaining (maximum 600 sec)
Time to live: 14398.188 sec (maximum 14400 sec)
Preemption enabled, min delay 30 sec
Active is 192.168.1.2 (primary), weighting 100 (expires in 8.188 sec)
Similarly, you can use the show glbp command on R2.
R2#show glbp
FastEthernet0/0 – Group 10
State is Standby
1 state change, last state change 00:05:21
Virtual IP address is 192.168.1.10 (learnt)
Hello time 3 sec, hold time 10 sec
Next hello sent in 2.740 secs
Redirect time 600 sec, forwarder timeout 14400 sec
Preemption disabled
Active is 192.168.1.1, priority 100 (expires in 7.468 sec)
Standby is local
Priority 100 (default)
Weighting 100 (default 100), thresholds: lower 1, upper 100
Load balancing: round-robin
Group members:
c200.140c.0000 (192.168.1.1)
c201.140c.0000 (192.168.1.2) local
There are 2 forwarders (1 active)
Forwarder 1
State is Listen
MAC address is 0007.b400.0a01 (learnt)
Owner ID is c200.140c.0000
Time to live: 14397.456 sec (maximum 14400 sec)
Preemption enabled, min delay 30 sec
Active is 192.168.1.1 (primary), weighting 100 (expires in 8.888 sec)
Forwarder 2
State is Active
1 state change, last state change 00:05:07
MAC address is 0007.b400.0a02 (default)
Owner ID is c201.140c.0000
Preemption enabled, min delay 30 sec
Active is local, weighting 100
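The show glbp output above also shows the load-balancing method, which is round-robin by default. The method can be changed on the group if needed; here is a minimal sketch using the host-dependent option (one of the methods IOS offers alongside round-robin and weighted):
R1(config)#interface FastEthernet0/0
R1(config-if)#glbp 10 load-balancing host-dependent
R1(config-if)#end
With host-dependent load balancing, a given client always receives the same virtual MAC address in ARP replies, whereas round-robin hands out the AVF MAC addresses in turn.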
The table below rounds off our coverage of first-hop redundancy protocols in this chapter by
presenting a comparison of VRRP, HSRP, and GLBP.
Table 14-1 Comparison of VRRP, HSRP, and GLBP
Feature | VRRP | HSRP | GLBP
Router Role | 1 master, 1 (or more) backup | 1 active, 1 standby, 1 or more listening | 1 AVG, 2 (or more) AVF
IP Address | Real | Virtual | Virtual
Election | Highest priority, then highest IP as tiebreaker | Highest priority, then highest IP as tiebreaker | Highest priority, then highest IP as tiebreaker
Load Balancing | No | No | Yes
Cisco proprietary | No (IETF standard) | Yes | Yes
Cisco IOS NetFlow
NetFlow is a Cisco IOS application that provides statistics on packets flowing through routers.
NetFlow is primarily used for network accounting and identifies flows of packets coming in and
going out of an interface. The beauty of NetFlow is that it does not involve any additional
protocol setup between network devices or hosts. NetFlow is completely transparent to the
existing network devices, hosts, and applications. NetFlow can be enabled individually on some
network devices like routers and switches, without having to enable it on all devices in the
network.
NetFlow Operation
The components used in a complete NetFlow system include a router enabled with NetFlow and a
NetFlow collector. A number of free software packages like Caida (www.caida.org) and
NetFlow Monitor (netflow.cesnet.cz) are available to act as a NetFlow collector.
The figure below shows basic traffic monitoring with NetFlow.
Figure 14-4 Basic NetFlow
NetFlow provides near real-time statistics that can be used for visualization and analysis by the collector software. The software can present NetFlow statistics with the help of bar charts, pie charts, and other visualizations.
The concept of a flow is fundamental to NetFlow. A flow is defined as a unidirectional stream of packets from a source to a destination. NetFlow considers a number of key fields in the packets being monitored, including source IP address, destination IP address, source port number, destination port number, layer 3 protocol, and ToS (Type of Service). These fields are used to classify packets into separate flows: if two packets differ in any key field, they are considered to belong to two different flows.
NetFlow Configuration
NetFlow is enabled per interface on a router. First, in interface configuration mode, you specify whether you want to monitor ingress traffic, egress traffic, or both. Second, in global configuration mode, you specify the IP address of the NetFlow collector and the UDP port on which the collector is listening. Cisco has been improving NetFlow continually, and the most recent version is 9; you have to configure the NetFlow export version as well.
Here is an example of NetFlow configuration on a Cisco router.
R1#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
R1(config)#interface FastEthernet0/0
R1(config-if)#ip flow ingress
R1(config-if)#ip flow egress
R1(config-if)#exit
R1(config)#ip flow-export destination 192.168.1.2 9996
R1(config)#ip flow-export version 9
R1(config)#end
R1#
The above configuration assumes that a NetFlow collector is available at IP address 192.168.1.2
and is listening on UDP port number 9996. The Cisco default port number on which NetFlow
collectors listen for NetFlow packets is 9996. The verification of NetFlow can come directly
from analyzing data collected on the NetFlow collector. However, you may also verify NetFlow
operation using relevant show commands on the NetFlow router itself.
The show ip flow interface command tells you the interfaces and directions for which NetFlow
has been enabled.
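Based on the configuration above, the output should look similar to the following sketch (exact formatting can vary between IOS versions):
R1#show ip flow interface
FastEthernet0/0
  ip flow ingress
  ip flow egress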