A Map of The Networking Code
M. Rio et al.
31 March 2004
www.datatag.org
EU grant IST 2001-32459
Abstract
In this technical report, we document how the networking code is structured in release 2.4.20
of the Linux kernel, the first sub-release of the stable 2.4 branch
to support network interrupt mitigation via a mechanism known as NAPI. We
describe the main data structures, the sub-IP layer, the IP layer, and two
transport layers: TCP and UDP. This material is meant for people who are
familiar with operating systems but are not Linux kernel experts.
Contents
1 Introduction
2 Networking Code: The Big Picture
3 General Data Structures
3.1 Socket buffers
3.2 sock
3.3 TCP options
4 Sub-IP Layer
4.1 Memory management
4.2 Packet Reception
4.3 Packet Transmission
4.4 Commands for monitoring and controlling the input and output network queues
4.5 Interrupt Coalescence
5 Network layer
5.1 IP
5.2 ARP
5.3 ICMP
6 TCP
6.1 TCP Input
6.2 SACKs
6.3 QuickACKs
6.4 Timeouts
6.5 ECN
6.6 TCP output
6.7 Changing the congestion window
7 UDP
8 The socket API
8.1 socket()
8.2 bind()
8.3 listen()
8.4 accept() and connect()
8.5 write()
8.6 close()
9 Conclusion
Acknowledgments
Acronyms
References
Biographies
1 Introduction
When we investigated the performance of gigabit networks and end-hosts in the DataTAG
testbed, we soon realized that some losses occurred in end-hosts, and that it was not clear where
these losses occurred. To get a better understanding of packet losses and buffer overflows, we
gradually built a picture of how the networking code of the Linux kernel works, and instrumented
parts of the code where we suspected that losses could happen unnoticed.
This report documents our understanding of how the networking code works in Linux kernel
2.4.20 [1]. We selected release 2.4.20 because, at the time we began writing this report, it was the
latest stable release of the Linux kernel (2.6 had not been released yet), and because it was the
first sub-release of the 2.4 tree to support NAPI (New Application Programming Interface [4]),
which supports network interrupt mitigation and thereby introduces a major change in the way
packets are handled in the kernel. Until 2.4.20 was released, NAPI was one of the main novelties
in the development branch 2.5 and was only expected to appear in 2.6; it was not supported by
the 2.4 branch up to and including 2.4.19. For more introductory material on NAPI and the new
networking features expected to appear in Linux kernel 2.6, see Cooperstein’s online tutorial [5].
In this document, we describe the paths through the kernel followed by IP (Internet Protocol)
packets when they are received by or transmitted from a host. Other protocols such as X.25 are not
considered here. In the lower layers, often known as the sub-IP layers, we concentrate on the
Ethernet protocol and ignore other protocols such as ATM (Asynchronous Transfer Mode).
Finally, in the IP code, we describe only the IPv4 code and leave IPv6 for future work. Note that the
IPv6 code is not vastly different from the IPv4 code as far as networking is concerned (larger
address space, no packet fragmentation, etc).
The reader of this report is expected to be familiar with IP networking. For a primer on the
internals of the Internet Protocol (IP) and Transmission Control Protocol (TCP), see Stevens [6]
and Wright and Stevens [7]. Linux kernel 2.4.20 implements a variant of TCP known as
NewReno, with the congestion control algorithm specified in RFC 2581 [2], and the selective
acknowledgment (SACK) option, which is specified in RFCs 2018 [8] and 2883 [9]. The classic
introductory books to the Linux kernel are Bovet and Cesati [10] and Crowcroft and Phillips [3].
For Linux device drivers, see Rubini et al. [11].
In the rest of this report, we follow a bottom-up approach to investigate the Linux kernel. In
Section 2, we give the big picture of the way the networking code is structured in Linux. A brief
introduction to the most relevant data structures is given in Section 3. In Section 4, the sub-IP
layer is described. In Section 5, we investigate the network layer (IP unicast, IP multicast, ARP,
ICMP). TCP is studied in Section 6 and UDP in Section 7. The socket Application Programming
Interface (API) is described in Section 8. Finally, we present some concluding remarks in
Section 9.
2 Networking Code: The Big Picture
Figure 1 depicts where the networking code is located in the Linux kernel. Most of the code is in
net/ipv4. The rest of the relevant code is in net/core and net/sched. The header files can be found in
include/linux and include/net.
Figure 1: Location of the networking code in the Linux kernel source tree (net/, with net/core, net/ipv4, net/ipv6 and net/sched; header files in include/linux and include/net).
The networking code of the kernel is sprinkled with netfilter hooks [16] where developers can
hang their own code and analyze or change packets. These are marked as “HOOK” in the
diagrams presented in this document.
Figure 2 and Figure 3 present an overview of the packet flows through the kernel. They indicate
the areas where the hardware and driver code operate, the role of the kernel protocol stack and the
kernel/application interface.
Figure 2: TCP packet reception: the NIC transfers packets by DMA into the rx_ring, an interrupt schedules the receive softirq, tcp_v4_rcv() places the data in the socket's receive buffer (or backlog), and the application reads it through the kernel/user interface (IP firewall and IP routing are traversed on the way).
Figure 3: TCP packet transmission: the application writes into the socket send buffer (send_msg), TCP and IP build the packet (checksum, routing, filtering), the qdisc queues it (qdisc_run, qdisc_restart, net_tx_action), and the driver hands it to the NIC for DMA (dev_xmit).
3 General Data Structures
The networking part of the kernel uses mainly two data structures: one to keep the state of a
connection, called sock (for “socket”), and another to keep the data and status of both incoming
and outgoing packets, called sk_buff (for “socket buffer”). Both of them are described in this
section. We also include a brief description of tcp_opt, a structure that is part of the sock structure
and is used to maintain the TCP connection state. The details of TCP will be presented in
section 6.
3.1 Socket buffers
The sk_buff structure holds a packet and its associated meta-data as it travels through the stack; its header pointers are grouped into one union per layer. The transport header is a union that points to the corresponding transport-layer header structure (TCP, UDP, ICMP, etc.):
/* Transport layer header */
union
{
struct tcphdr *th;
struct udphdr *uh;
struct icmphdr *icmph;
struct igmphdr *igmph;
struct iphdr *ipiph;
struct spxhdr *spxh;
unsigned char *raw;
} h;
The network layer header points to the corresponding data structures (IPv4, IPv6, ARP, raw, etc).
/* Network layer header */
union
{
struct iphdr *iph;
struct ipv6hdr *ipv6h;
struct arphdr *arph;
struct ipxhdr *ipxh;
unsigned char *raw;
} nh;
The link layer is stored in a union called mac. Only a special case for Ethernet is included. Other
technologies will use the raw fields with appropriate casts.
/* Link layer header */
union
{
struct ethhdr *ethernet;
unsigned char *raw;
} mac;
Extra information about the packet such as length, data length, checksum, packet type, etc. is
stored in the structure as shown below.
char cb[48];
unsigned int len; /* Length of actual data */
unsigned int data_len;
unsigned int csum; /* Checksum */
unsigned char __unused, /* Dead field, may be reused */
cloned, /* head may be cloned (check refcnt
to be sure) */
pkt_type, /* Packet class */
ip_summed; /* Driver fed us an IP checksum */
__u32 priority; /* Packet queueing priority */
atomic_t users; /* User count - see datagram.c,tcp.c */
unsigned short protocol; /* Packet protocol from driver */
unsigned short security; /* Security level of packet */
unsigned int truesize; /* Buffer size */
unsigned char *head; /* Head of buffer */
unsigned char *data; /* Data head pointer */
unsigned char *tail; /* Tail pointer */
unsigned char *end; /* End pointer */
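As an illustration of how these unions are used, the following sketch (not taken from the kernel; example_inspect() is a hypothetical helper written for this report) shows how code holding an sk_buff can reach the Ethernet, IP and TCP headers of a received packet through the mac, nh and h fields shown above.
#include <linux/skbuff.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <linux/tcp.h>
#include <linux/in.h>

/* Hypothetical helper: inspect the headers of a received frame. */
static void example_inspect(struct sk_buff *skb)
{
    struct ethhdr *eth = skb->mac.ethernet;  /* link-layer header, set by the driver */
    struct iphdr *iph = skb->nh.iph;         /* network-layer header */

    if (eth->h_proto == htons(ETH_P_IP) && iph->protocol == IPPROTO_TCP) {
        struct tcphdr *th = skb->h.th;       /* transport-layer header (TCP) */
        /* e.g., look at ntohs(th->dest), skb->len, skb->data ... */
    }
}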
3.2 sock
The sock data structure keeps data about a specific TCP connection (e.g., TCP state) or virtual
UDP connection. Whenever a socket is created in user space, a sock structure is allocated.
The first fields contain the source and destination addresses and ports of the socket pair.
struct sock {
/* Socket demultiplex comparisons on incoming packets. */
__u32 daddr; /* Foreign IPv4 address */
__u32 rcv_saddr; /* Bound local IPv4 address */
__u16 dport; /* Destination port */
unsigned short num; /* Local port */
int bound_dev_if; /* Bound device index if != 0 */
Among many other fields, the sock structure contains protocol-specific information. These fields
contain state information about each layer.
union {
struct ipv6_pinfo af_inet6;
} net_pinfo;
union {
struct tcp_opt af_tcp;
struct raw_opt tp_raw4;
struct raw6_opt tp_raw;
struct spx_opt af_spx;
} tp_pinfo;
};
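The TCP code reaches its per-connection state through the tp_pinfo union shown above. The following one-line sketch shows the idiom commonly used in the 2.4 TCP sources (the variable name tp is only a convention):
/* Obtain the TCP state (tcp_opt) embedded in a struct sock. */
struct tcp_opt *tp = &sk->tp_pinfo.af_tcp;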
3.3 TCP options
The tcp_opt structure, reached through the tp_pinfo union of sock shown above, maintains the state of a TCP connection. Some of its most relevant fields are listed below.
__u32 snd_wl1; /* Sequence for window update */
__u32 snd_wnd; /* The window we expect to receive */
__u32 max_window; /* Maximal window ever seen from peer */
__u32 pmtu_cookie; /* Last pmtu seen by socket */
__u16 mss_cache; /* Cached effective mss, not including SACKS */
__u16 mss_clamp; /* Maximal mss, negotiated at connection setup */
__u16 ext_header_len; /* Network protocol overhead (IP/IPv6 options) */
__u8 ca_state; /* State of fast-retransmit machine */
__u8 retransmits; /* Number of unrecovered RTO timeouts */
/* RTT measurement */
/* Slow start and congestion control (see also Nagle, and Karn & Partridge) */
__u8 rcv_wscale; /* Window scaling to send to receiver */
__u8 nonagle; /* Disable Nagle algorithm? */
__u8 keepalive_probes; /* num of allowed keep alive probes */
/* PAWS/RTTM data */
/* SACKs data */
};
4 Sub-IP Layer
This section describes the reception and handling of packets by the hardware and the Network
Interface Card (NIC) driver. This corresponds to layers 1 and 2 in the classical 7-layer network
model. The driver and the IP layer are tightly bound, with the driver using methods from both the
kernel and the IP layer.
As well as containing data for the higher layers, the packets are associated with descriptors that
provide information on the physical location of the data, the length of the data, and extra control
and status information. Usually the NIC driver sets up the packet descriptors and organizes them
as ring buffers when the driver is loaded. Separate ring buffers are used by the NIC’s Direct
Memory Access (DMA) engine to transfer packets to and from main memory. The ring buffers
(both the tx_ring for transmission and the rx_ring for reception) are just arrays of skbuff’s,
managed by the interrupt handler (allocation is performed on reception and deallocation on
transmission of the packets).
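As a rough illustration (not taken from any particular driver; the names, ring size and buffer length are invented for this sketch), a receive ring can be pictured as an array of sk_buff pointers that the driver pre-allocates with dev_alloc_skb() and that the NIC fills by DMA:
#include <linux/netdevice.h>
#include <linux/skbuff.h>

#define RX_RING_SIZE 256      /* hypothetical ring size */
#define RX_BUF_LEN   1536     /* hypothetical buffer length, one Ethernet frame */

static struct sk_buff *rx_ring[RX_RING_SIZE];

/* Hypothetical helper: (re)allocate the buffers that the NIC will fill by DMA.
 * Real drivers also map the buffers for DMA and write their addresses into
 * the hardware descriptors. */
static void refill_rx_ring(void)
{
    int i;

    for (i = 0; i < RX_RING_SIZE; i++)
        if (rx_ring[i] == NULL)
            rx_ring[i] = dev_alloc_skb(RX_BUF_LEN);
}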
Figure 4: Packet reception with the old API, up to Linux kernel 2.4.19: packets are placed in the per-CPU backlog queue, and the receive softirq (net_rx_action()) dequeues them and hands them to the IP layer (ip_rcv()).
1. packet count;
2. drop count;
3. the time squeeze counter, i.e. the number of times the softirq took too much time to handle
the packets from the device. When the budget of the softirq (i.e., the maximum number of
packets it can dequeue in a row, which depends on the device, max = 300) reaches zero
or when its execution time lasts more than one jiffy (10 ms, the smallest time unit in the
Linux scheduler), the softirq stops dequeuing packets, increments the time squeeze
counter of the CPU and reschedules itself for later execution;
4. number of times the backlog entered the throttle state;
5. number of hits in fast routes;
6. number of successes in fast routes;
7. number of defers in fast routes;
8. number of defers out in fast routes;
9. The right-most column indicates either latency reduction in fast routes or CPU collision,
depending on a #ifdef flag.
An example of backlog statistics is shown below:
$ cat /proc/net/softnet_stat
94d449be 00009e0e 000003cd 0000000e 00000000 00000000 00000000 00000000 0000099f
000001da 00000000 00000000 00000000 00000000 00000000 00000000 00000000 0000005b
000002ca 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000b5a
000001fe 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000010
4.2.3 Step 3
When the softirq is scheduled, it executes net_rx_action() (net/core/dev.c, line 1558). Softirqs are
scheduled in do_softirq() (arch/i386/kernel/irq.c) when do_IRQ() is called to handle any pending
interrupts. They can also be scheduled through the ksoftirq process when do_softirq() is
interrupted by an interrupt, or when a softirq is scheduled outside an interrupt or a bottom-half of
a driver. The do_softirq() function processes softirqs in the following order: HI_SOFTIRQ,
NET_TX_SOFTIRQ, NET_RX_SOFTIRQ and TASKLET_SOFTIRQ. More details about
scheduling in the Linux kernel can be found in [10]. Because step 2 differs between the older
network subsystem and NAPI, step 3 does too.
For kernel versions prior to 2.4.20, net_rx_action() polls all the packets in the backlog queue and
calls the ip_rcv() procedure for each of the data packets ( net/ipv4/ip_input.c, line 379). For other
types of packets (e.g., ARP), the corresponding protocol handler (e.g., arp_rcv()) is called.
For NAPI, the CPU polls the devices present in its poll_list (including the backlog for legacy
drivers) to get all the received packets from their rx_ring. The poll method of any device (poll(),
implemented in the NIC driver) or of the backlog (process_backlog() in net/core/dev.c, line 1496)
calls netif_receive_skb() (net/core/dev.c, line 1415) for each received packet, which then calls
ip_rcv().
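The following sketch shows the general shape of a NAPI poll method. It is not the code of any real driver (ring_has_packet() and take_packet_from_rx_ring() are hypothetical helpers), but the calls to eth_type_trans(), netif_receive_skb() and netif_rx_complete() correspond to the NAPI interface described above.
/* Sketch of a NAPI-style poll method, called from net_rx_action()
 * with a budget to respect. */
static int example_poll(struct net_device *dev, int *budget)
{
    int work = 0;
    int quota = (dev->quota < *budget) ? dev->quota : *budget;
    struct sk_buff *skb;

    while (work < quota && ring_has_packet(dev)) {       /* hypothetical helper */
        skb = take_packet_from_rx_ring(dev);             /* hypothetical helper */
        skb->protocol = eth_type_trans(skb, dev);
        netif_receive_skb(skb);     /* hands the packet to ip_rcv() and friends */
        work++;
    }

    dev->quota -= work;
    *budget -= work;

    if (!ring_has_packet(dev)) {
        netif_rx_complete(dev);     /* leave the poll_list; the driver then re-enables rx interrupts */
        return 0;                   /* all done */
    }
    return 1;                       /* more work pending: stay on the poll_list */
}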
The NAPI network subsystem is a lot more efficient than the old system, especially in a high
performance context (in our case, gigabit Ethernet). The advantages are:
4.3 Packet Transmission
Figure 6: Packet transmission: dev_queue_xmit() enqueues the data packet in the qdisc (dropping it if the queue is full); qdisc_restart() dequeues packets while the queue is not empty and the device is not stopped, and hard_start_xmit() places the packet descriptor in the tx_ring in kernel memory.
The kernel provides multiple queuing disciplines (RED, CBQ, etc.) between the IP layer and the
driver; they are intended to provide QoS support. The default queuing discipline, or qdisc, consists of
three FIFO queues with strict priorities and a default length of 100 packets for each queue
(ether_setup(): dev->tx_queue_len ; drivers/net/net_init.c, line 405).
Figure 6 shows the different data flows that may occur when a packet is to be transmitted. The
following steps are followed during transmission.
4.3.1 Step 1
For each packet to be transmitted from the IP layer, the dev_queue_xmit() procedure
(net/core/dev.c, line 991) is called. It queues the packet in the qdisc associated with the output
interface (as determined by the routing). Then, if the device is not stopped (e.g., due to link
failure or the tx_ring being full), all packets present in the qdisc are handled by qdisc_restart()
(net/sched/sch_generic.c, line 77).
4.3.2 Step 2
The hard_start_xmit() virtual method is then called. This method is implemented in the driver
code. The packet descriptor, which contains the location of the packet data in kernel memory, is
placed in the tx_ring and the driver tells the NIC that there are some packets to send.
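The loop in qdisc_restart() can be pictured as follows. This is a simplified paraphrase of net/sched/sch_generic.c, not the exact kernel code (locking and return-value handling are omitted, and q and dev denote the device's qdisc and net_device, assumed to be in scope):
/* Simplified view of qdisc_restart(): dequeue packets from the qdisc and
 * hand them to the driver while the device accepts them. */
struct sk_buff *skb;

while ((skb = q->dequeue(q)) != NULL) {
    if (netif_queue_stopped(dev) || dev->hard_start_xmit(skb, dev) != 0) {
        q->ops->requeue(skb, q);   /* device busy or stopped: put the packet back */
        netif_schedule(dev);       /* retry later from the NET_TX softirq */
        break;
    }
}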
4.3.3 Step 3
Once the card has sent a packet or a group of packets, it communicates to the CPU that the
packets have been sent out by asserting an interrupt. The CPU uses this information
(net_tx_action() in net/core/dev.c, line 1326) to put the packets into a completion_queue and to
schedule a softirq for later deallocating (i) the meta-data contained in the skbuff struct and (ii) the
packet data if we are sure that we will not need this data anymore (see Section 4.1). This
communication between the card and the CPU is card and driver dependent.
4.4 Commands for monitoring and controlling the input and output
network queues
The ifconfig command can be used to override the length of the output packet queue using the
txqueuelen option. It is not possible to get statistics for the default output queue. The trick is to
replace it with the same FIFO queue using the tc command:
to replace the default qdisc: tc qdisc add dev eth0 root pfifo limit 100
to get stats from this qdisc: tc -s -d qdisc show dev eth0
to recover to default state: tc qdisc del dev eth0 root
5 Network layer
The network layer provides end-to-end connectivity in the Internet across heterogeneous
networks. It provides the common protocol (IP – Internet Protocol) used by almost all Internet
traffic. Since Linux hosts can act as routers (and they often do as they provide an inexpensive
way of building networks), an important part of the code deals with packet forwarding.
The main files that deal with the IP network layer are located in net/ipv4:
ip_input.c – processing of the packets arriving at the host
ip_output.c – processing of the packets leaving the host
ip_forward.c – processing of the packets being routed by the host
Other files include:
ip_fragment.c – IP packet fragmentation
ip_options.c – IP options
ipmr.c – IP multicast
ipip.c – IP over IP
5.1 IP
5.1.1 IP Unicast
Figure 7 describes the path that an IP packet traverses inside the network layer. Packet reception
from the network is shown on the left hand side and packets to be transmitted flow down the right
hand side of the diagram. When the packet reaches the host from the network, it goes through the
functions described in Section 4; when it reaches net_rx_action(), it is passed to ip_rcv(). After
passing the first netfilter hook (see Section 2), the packet reaches ip_rcv_finish(), which verifies
whether the packet is for local delivery. If it is addressed to this host, the packet is given to
ip_local_delivery(), which in turn will give it to the appropriate transport layer function.
A packet can also reach the IP layer coming from the upper layers (e.g., delivered by TCP, or
UDP, or coming directly to the IP layer from some applications). The first function to process the
packet is then ip_queue_xmit(), which passes the packet to the output part through ip_output().
In the output part, the last changes to the packet are made in ip_finish_output() and the function
dev_queue_xmit() is called; the latter enqueues the packet in the output queue. It also tries to
run the network scheduler mechanism by calling qdisc_run(). The qdisc's function pointers point to different
functions, depending on the scheduler installed. A FIFO scheduler is installed by default, but this
can be changed with the tc utility, as we have seen already.
The scheduling functions (qdisc_restart() and dev_queue_xmit_init()) are independent of the rest
of the IP code.
When the output queue is full, q->enqueue returns an error which is propagated upward on the IP
stack. This error is further propagated to the transport layer (TCP or UDP) as will be seen in
Sections 6 and 7.
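The corresponding step in dev_queue_xmit() can be paraphrased as follows (simplified; the actual code in net/core/dev.c also handles devices without a queue and takes the appropriate locks, and skb, q and dev are assumed to be in scope):
/* Paraphrase of the enqueue step in dev_queue_xmit(): the return value of
 * q->enqueue() (e.g., NET_XMIT_DROP when the queue is full) is what
 * eventually propagates back up to TCP or UDP. */
if (q->enqueue) {
    int ret = q->enqueue(skb, q);   /* may fail if the qdisc is full */
    qdisc_run(dev);                 /* try to send what is queued */
    return ret;                     /* propagated to the IP and transport code */
}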
Figure 7: IP layer data path. Reception (left): netif_rx(), ip_rcv(), netfilter hooks (HOOK), ip_rcv_finish(), then either ip_local_delivery() or ip_forward()/ip_forward_finish(); routing decisions use ip_route_input(), ip_route_input_slow() or ip_route_input_mc(), rt_hash_code(), fib_lookup() and the other fib*.c functions. Transmission (right): ip_queue_xmit(), ip_queue_xmit2(), ip_output(), ip_finish_output(), dev_queue_xmit(), q->enqueue(), qdisc_restart() and dev->hard_start_xmit() (dev.c, sch_generic.c).
5.1.2 IP Routing
If an incoming packet has a destination IP address other than that of the host, the latter acts as a
router (a frequent scenario in small networks). If the host is configured to execute forwarding
(this can be seen and set via /proc/sys/net/ipv4/ip_forward), it then has to be processed by a set of
complex but very efficient functions. If the ip_forward variable is set to zero, the packet is not forwarded.
The route is calculated by calling ip_route_input(), which (if a fast hash does not exist) calls
ip_route_input_slow(). The ip_route_input_slow() function calls the FIB (Forward Information
Base) set of functions in the fib*.c files. The FIB structure is quite complex [3].
If the packet is a multicast packet, the function that calculates the set of devices to transmit the
packet to is ip_route_input_mc(). In this case, the IP destination is unchanged.
After the route is calculated, ip_rcv_finish() stores the route, i.e. the next hop and the output
device, in the sk_buff structure. The packet is then passed to the forwarding functions
(ip_forward() and ip_forward_finish()) which send it to the output components.
5.1.3 IP Multicast
The previous section dealt with unicast packets. With multicast packets, the system gets
significantly more complicated. The user level (through a daemon like gated) uses the
setsockopt() call on the UDP socket or netlink to instruct the kernel that it wants to join the group.
The sys_setsockopt() function calls ip_setsockopt(), which calls ip_mc_join_group()
(or ip_mc_leave_group() when it wants to leave the group).
This function calls ip_mc_inc_group(), which sets a timer; when the timer expires,
igmp_timer_expire() is called, which in turn calls igmp_send_report().
When a host receives an IGMP (Internet Group Management Protocol) packet (that is, when we
are acting as a multicast router), net_rx_action() delivers it to igmp_rcv(), which builds the
appropriate multicast routing table information.
A more complex operation occurs when a multicast packet arrives at the host (router) or when the
host wants to send a multicast packet. The packet is handled by ip_route_output_slow() (via
ip_route_input() if the packet is coming in or via ip_queue_xmit() if the packet is going out),
which in the multicast case calls ip_mr_input().
Next, ip_mr_input() (net/ipv4/ipmr.c, line 1301) calls ip_mr_forward(), which calls
ipmr_queue_xmit() for all the interfaces it needs to replicate the packet. This calls
ipmr_forward_finish(), which calls ip_finish_output(). The rest can be seen on Figure 7.
5.2 ARP
Because ARP (Address Resolution Protocol) converts layer-3 addresses to layer-2 addresses, it is
often said to be at layer 2.5. ARP is defined in RFC 826 and is the protocol that allows IP to run
over a variety of lower layer technologies. Although we are mostly interested in Ethernet in this
document, it is worth noting that ARP can resolve IP addresses for a wide variety of technologies,
including ATM, Frame Relay, X.25, etc.
When an ARP packet is received, it is given by net_rx_action() to arp_rcv() which, after some
sanity checks (e.g., checking if the packet is for this host), passes it on to arp_process(). Then,
arp_process() checks which type of ARP packet it is and, if appropriate (e.g., when it is an ARP
request), sends a reply using arp_send().
The decision of sending an ARP request deals with a much more complex set of functions
depicted in Figure 8. When the host wants to send a packet to a host in its LAN, it needs to
convert the IP address into the MAC address and store the latter in the skb structure. When the
host is not in the LAN, the packet is sent to a router in the LAN. The function ip_queue_xmit()
(which can be seen in Figure 7) calls ip_route_output(), which calls rt_intern_hash(). This calls
arp_bind_neighbour(), which calls neigh_lookup_error().
The function neigh_lookup_error() tries to see if there is already any neighbor data for this IP
address with neigh_lookup(). If there is not, it triggers the creation of a new one with
neigh_create(). The latter triggers the creation of the ARP request by calling arp_constructor().
Then the function arp_constructor() starts allocating space for the ARP request and calls the
function neigh->ops->output(), which points to neigh_resolve_output(). When
neigh_resolve_output() is called, it invokes neigh_event_send(). This calls neigh->ops->solicit(),
which points to arp_solicit(). The latter calls arp_send(), which sends the ARP message. The skb
to be resolved is stored in a list. When the reply arrives (in arp_rcv()), it resolves the skb and
removes it from the list.
Figure 8: ARP request generation: ip_queue_xmit() calls ip_route_output() and rt_intern_hash(), which calls arp_bind_neighbour() and neigh_lookup_error(); if no neighbour entry exists, neigh_create() and arp_constructor() build one, and neigh_resolve_output(), arp_solicit() and arp_send() emit the ARP request.
5.3 ICMP
The Internet Control Message Protocol (ICMP) plays an important role in the Internet. Its
implementation is quite simple. Conceptually, ICMP is at the same level as IP, although ICMP
datagrams use IP packets.
Figure 9 depicts the main ICMP functions. When an ICMP packet is received, net_rx_action()
delivers it to icmp_rcv(), where the ICMP type field is checked; depending on the type, the appropriate
function is called (this is done by calling icmp_pointers[icmp->type].handler()). In Figure 10, we
can see the description of the main functions and types. Two of these functions, icmp_echo() and
icmp_timestamp(), require a response to be sent to the original source. This is done by calling
icmp_reply().
Sometimes, a host needs to generate an ICMP packet that is not a mere reply to an ICMP request
(e.g., the IP layer, the UDP layer and users—through raw sockets—can send ICMP packets). This
is done by calling icmp_send().
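The dispatch in icmp_rcv() essentially boils down to the following (a paraphrase of net/ipv4/icmp.c; checksum verification and statistics are omitted, and skb is assumed to be in scope):
/* icmp_pointers[] maps each ICMP type to its handler (icmp_echo(),
 * icmp_unreach(), etc.); out-of-range types are discarded. */
struct icmphdr *icmph = skb->h.icmph;

if (icmph->type <= NR_ICMP_TYPES)
    icmp_pointers[icmph->type].handler(skb);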
Figure 9: Main ICMP functions: icmp_rcv() dispatches to handlers such as icmp_discard(), icmp_unreach(), icmp_redirect(), icmp_echo(), icmp_address() and icmp_address_reply(); icmp_send() is used by IP, UDP and user-level raw sockets to generate ICMP packets.
Figure 10: Description of the main ICMP handler functions and message types.
6 TCP
This section describes the implementation of the Transmission Control Protocol (TCP), which is
probably the most complex part of the networking code in the Linux kernel.
TCP accounts for the vast majority of the traffic in the Internet. It fulfills two important
functions: it provides reliable communication between a sender and a receiver by
retransmitting unacknowledged packets, and it implements congestion control by reducing the
sending rate when congestion is detected.
Although both ends of a TCP connection can be sender and receiver simultaneously, we separate
our code explanations for the “receiver” behavior (when the host receives data and sends
acknowledgments) and the “sender” behavior (when the host sends data, receives
acknowledgments, retransmits lost packets and adjusts congestion window and sending rate). The
complexity of the latter is significantly higher.
The reader is assumed to be familiar with the TCP state machine, which is described in [6].
The main files of the TCP code are all located in net/ipv4, except header files which are in
include/net. They are:
Figure 11 and Figure 12 depict the TCP data path and are meant to be viewed side by side. Input
processing is described in Figure 11 and output processing is illustrated by Figure 12.
Figure 11: TCP input processing: tcp_v4_rcv() (reached through the handler pointer set by the IP code) leads to tcp_rcv_established() or tcp_rcv_state_process(); data is handled by tcp_data_queue() (with tp->out_of_order_queue and copies to user space via tp->ucopy.iov), ACKs by tcp_ack(), tcp_clean_rtx_queue(), tcp_cong_avoid(), tcp_fast_retrans_alert() and related functions; ACKs are sent with tcp_send_ack() or tcp_send_delayed_ack().
Figure 12: TCP output processing: sys_write() and sock_sendmsg() lead through sock->ops->sendmsg() and sk->prot->sendmsg() to tcp_sendmsg(), then tcp_push(), tcp_push_pending_frames(), tcp_write_xmit() and tcp_transmit_skb(), which hands the segment to ip_queue_xmit() via tp->af_specific->queue_xmit (tcp_output.c).
6.1 TCP Input
TCP input is mainly implemented in net/ipv4/tcp_input.c. This is the largest portion of the TCP
code. It deals with the reception of a TCP packet. The sender and receiver code is tightly coupled
since a host can act as both at the same time.
Incoming packets are made available to the TCP routines from the IP layer by ip_local_delivery()
shown on the left side of Figure 11. This routine gives the packet to the function pointed by
ipproto->handler (see structures in Section 2). For the IPv4 protocol stack, this is tcp_v4_rcv(),
which calls tcp_v4_do_rcv(). The function tcp_v4_do_rcv() in turn calls another function
depending on the TCP state of the connection (for more details, see [6]).
If the connection is established (state is TCP_ESTABLISHED), it calls tcp_rcv_established().
This is the main case that we will examine from now on. If the state is TIME_WAIT, it calls
tcp_timewait_process(). All other states are processed by tcp_rcv_state_process(). For example,
this function calls tcp_rcv_synsent_state_process() if the state is SYN_SENT.
For some TCP states (e.g., during connection setup), tcp_rcv_state_process() and tcp_timewait_process()
have to initialize the TCP structures. They call tcp_init_buffer_space() and tcp_init_metrics().
The latter initializes the congestion window by calling tcp_init_cwnd().
The following subsections describe the actions of the functions shown in Figure 11 and Figure
12. The function tcp_rcv_established() has two modes of operation: fast path and slow path. We
first describe the slow path, which is easier to understand, and present the fast path afterward.
Note that in the code, the fast path is dealt with first.
STEP 6: It checks the URG (urgent) bit. If this bit is set, it calls tcp_urg(). This makes the
receiver tell the process listening to the socket that the data is urgent.
STEP 7, part 1: It processes data on the packet. This is done by calling tcp_data_queue() (more
details in Section 6.1.2 below).
STEP 7, part 2: It checks if there is data to send by calling tcp_data_snd_check(). This function
calls tcp_write_xmit() in the TCP output code.
STEP 7, part 3: It checks if there are ACKs to send with tcp_ack_snd_check(). This may result in
sending an ACK straight away with tcp_send_ack() or scheduling a delayed ACK with
tcp_send_delayed_ack(). The delayed ACK is recorded in tp->ack.pending.
6.1.3 tcp_ack()
Every time an ACK is received, tcp_ack() is called. The first thing it does is to check if the ACK
is valid: if it acknowledges data beyond the right-hand edge of the sliding window (tp->snd_nxt) or
is older than previous ACKs, it is ignored (with goto uninteresting_ack and goto old_ack,
respectively) and 0 is returned.
If everything is normal, it updates the sender’s TCP sliding window with
tcp_ack_update_window() and/or tcp_update_wl(). An ACK may be considered “normal” if it
acknowledges the next section of contiguous data starting from the pointer to the last fully
acknowledged block of data.
If the ACK is dubious, it enters fast retransmit with tcp_fastretrans_alert() (see Section 6.1.4
below). If the ACK is normal and the number of packets in flight is not smaller than the
congestion window, it increases the congestion window by entering slow start/congestion
avoidance with tcp_cong_avoid(). This function implements both the exponential increase in
slow start and the linear increase in congestion avoidance as defined in RFC 2581 [2]. When we are in
congestion avoidance, tcp_cong_avoid() utilizes the variable snd_cwnd_cnt to determine when to
linearly increase the congestion window.
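The core of tcp_cong_avoid() can be sketched as follows. This is a paraphrase of the 2.4 logic rather than a verbatim copy; in particular, the snd_cwnd_clamp checks and some bookkeeping (such as the timestamp update) may differ slightly from the actual source, and the function name is ours.
/* Sketch of tcp_cong_avoid(): exponential growth in slow start,
 * one-segment-per-window growth in congestion avoidance. */
static void example_cong_avoid(struct tcp_opt *tp)
{
    if (tp->snd_cwnd <= tp->snd_ssthresh) {
        /* Slow start: cwnd grows by one segment per ACK. */
        if (tp->snd_cwnd < tp->snd_cwnd_clamp)
            tp->snd_cwnd++;
    } else {
        /* Congestion avoidance: cwnd grows by one segment per cwnd ACKs,
         * counted in snd_cwnd_cnt. */
        if (tp->snd_cwnd_cnt >= tp->snd_cwnd) {
            if (tp->snd_cwnd < tp->snd_cwnd_clamp)
                tp->snd_cwnd++;
            tp->snd_cwnd_cnt = 0;
        } else {
            tp->snd_cwnd_cnt++;
        }
    }
}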
Note that tcp_ack() should not be confused with tcp_send_ack(), which is called by the "receiver"
to send ACKs using tcp_write_xmit().
6.1.4 tcp_fastretrans_alert()
Under certain conditions, tcp_fastretrans_alert() is called by tcp_ack() (it is only called by
this function). To understand these conditions, we have to go through the Linux {NewReno,
SACK, FACK, ECN} finite state machine. This section is copied almost verbatim from a
comment in tcp_input.c. Note that this finite state machine (also known as the ACK state machine)
has nothing to do with the TCP finite state machine. The TCP state is usually
TCP_ESTABLISHED.
The Linux finite state machine can be in any of the following states:
with tcp_copy_to_iovec(), the timestamp is stored with tcp_store_ts_recent(),
tcp_event_data_recv() is called, and an ACK is sent in case we are the receiver.
6.2 SACKs
Linux kernel 2.4.20 fully implements SACKs (Selective ACKs) as defined in RFC 2018 [8]. The
connection SACK capabilities are stored in the tp->sack_ok field (FACKs are enabled if the 2nd
bit is set and DSACKs (duplicate SACKs) are enabled if the 3rd bit is set; see the sketch below). When a TCP connection
is established, the sender and receiver negotiate different options, including SACK.
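The bit tests on tp->sack_ok are wrapped in small macros in tcp_input.c; they can be sketched as follows (a paraphrase, with macro names of our own; the exact names in the kernel may differ slightly):
/* Capability bits negotiated at connection setup and stored in tp->sack_ok. */
#define Is_Sack(tp)   ((tp)->sack_ok & 1)   /* plain SACK enabled      */
#define Is_Fack(tp)   ((tp)->sack_ok & 2)   /* FACK enabled (2nd bit)  */
#define Is_DSack(tp)  ((tp)->sack_ok & 4)   /* DSACK enabled (3rd bit) */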
The SACK code occupies a surprisingly large part of the TCP implementation. More than a
dozen functions and significant parts of other functions are dedicated to implementing SACK. It
is still fairly inefficient code, because the lookup of non-received blocks in the list is an expensive
process due to the linked-list structure of the sk_buff’s.
When a receiver gets a packet, it checks in tcp_data_queue() if the skb overlaps with the previous
one. If it does not, it calls tcp_sack_new_ofo_skb() to build a SACK response.
On the sender side (or receiver of SACKs), the most important function in the SACK processing
is tcp_sacktag_write_queue(); it is called by tcp_ack().
6.3 QuickACKs
At certain times, the receiver enters QuickACK mode, that is, delayed ACKs are disabled. One
example is in slow start, when delaying ACKs would delay the slow start considerably.
The function tcp_enter_quick_ack_mode() is called by tcp_rcv_synsent_state_process() because, at
the beginning of the connection, the TCP state should be SYN_SENT.
6.4 Timeouts
Timeouts are vital for the correct behavior of the TCP functions. They are used, for instance, to
infer packet loss in the network. The events related to registering and triggering the retransmit
timer are depicted in Figure 13 and Figure 14.
Figure 13: Registering the retransmit timer: tcp_push_pending_frames() calls tcp_check_probe_timer(), which may call tcp_reset_xmit_timer().
The setting of the retransmit timer happens when a packet is sent. The function
tcp_push_pending_frames() calls tcp_check_probe_timer(), which may call tcp_reset_xmit_timer().
This schedules a software interrupt, which is dealt with by non-networking parts of the kernel.
When the timeout expires, a software interrupt is generated. This interrupt calls timer_bh(),
which calls run_timer_list(). This calls timer->function(), which will in this case be pointing to
tcp_write_timer(). This calls tcp_retransmit_timer(), which finally calls tcp_enter_loss(). The
state of the Linux machine is then set to CA_Loss, and tcp_fastretrans_alert() schedules the
retransmission of the packet.
Figure 14: Triggering the retransmit timer: timer_bh(), run_timer_list(), tp->retransmit_timer.function, tcp_write_timer(), tcp_retransmit_timer(), tcp_enter_loss().
6.5 ECN
Linux kernel 2.4.20 fully implements ECN (Explicit Congestion Notification) to allow ECN-
capable routers to report congestion before dropping packets. Almost all the code is in the file
tcp_ecn.h in the include/net directory. It contains the code to receive and send the different ECN
packet types.
In tcp_ack(), when the ECN bit is on, TCP_ECN_rcv_ecn_echo() is called to deal with the ECN
message. This calls the appropriate ECN message handling routine.
When an ECN congestion notification arrives, the Linux host enters the CWR state. This makes
the host reduce the congestion window by one on every other ACK received. This can be seen in
tcp_fastretrans_alert() when it calls tcp_cwnd_down().
ECN messages can also be sent by the kernel when the function TCP_ECN_send() is called in
tcp_transmit_skb().
Check sysctl() flags for timestamps, window scaling and SACK.
Build TCP header and checksum.
Set SYN packets.
Set ECN flags.
Clear ACK event in the socket.
Increment TCP statistics through TCP_INC_STATS (TcpOutSegs).
Call ip_queue_xmit().
If there is no error, the function returns; otherwise, it calls tcp_enter_cwr(). This error may
happen when the output queue is full. As we saw in Section 4.3.2, q->enqueue returns an error
when this queue is full. The error is then propagated until here and the congestion control
mechanisms react accordingly.
The sender receives three duplicate ACKs. This is detected in tcp_fastretrans_alert() using the
is_dupack variable.
A timeout occurs, which causes tcp_enter_loss() to be called (see Section 6.4). In this
case, the congestion window is set to 1 and ssthresh (the slow-start threshold) is set to
half of the congestion window at the time the packet was lost. This last operation is done in
tcp_recalc_ssthresh() (a sketch is given after this list).
TX Queue is full. This is detected in tcp_transmit_skb() (the error is propagated from
q->enqueue in the sub-IP layer) which calls tcp_enter_cwr().
SACK detects a hole.
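The ssthresh computation mentioned above is tiny; the following sketch paraphrases the inline function in the 2.4 TCP headers (the function name is ours):
/* Sketch of tcp_recalc_ssthresh(): ssthresh becomes half the congestion
 * window at the time of the loss, but never less than 2 segments. */
static __u32 example_recalc_ssthresh(struct tcp_opt *tp)
{
    __u32 half = tp->snd_cwnd >> 1;
    return (half > 2) ? half : 2;
}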
Apart from these situations, the Linux kernel modifies the congestion window in several more
places; some of these changes are based on standards, others are Linux specific. In the following
sections, we describe these extra changes.
6.7.2 Congestion Window Moderation
Linux implements the function tcp_moderate_cwnd(), which reduces the congestion window
whenever it thinks that there are more packets in flight than there should be based on the value of
snd_cwnd. This feature is specific to Linux and is specified neither in an IETF RFC nor in an
Internet Draft. The purpose of the function is to prevent large transient bursts of packets from
being sent out during “dubious conditions”. This is often the case when an ACK acknowledges
more than three packets. As a result, the magnitude of the congestion window reduction can be
very large at large congestion window sizes, and hence reduce throughput.
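The moderation itself can be sketched as follows (a paraphrase; the burst allowance of three segments is how we read the 2.4 code, and the exact expression may differ slightly; the function name is ours):
/* Sketch of tcp_moderate_cwnd(): clamp the congestion window to the number
 * of packets currently in flight plus a small burst allowance. */
static void example_moderate_cwnd(struct tcp_opt *tp)
{
    __u32 limit = tcp_packets_in_flight(tp) + 3;   /* 3 = assumed maximum burst */

    if (tp->snd_cwnd > limit)
        tp->snd_cwnd = limit;
}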
The primary calling functions for tcp_moderate_cwnd() are tcp_undo_cwr(),
tcp_try_undo_recovery(), tcp_try_to_open() and tcp_fastretrans_alert(). In all cases, the function
call is triggered by conditions being met in tcp_fastretrans_alert().
7 UDP
This section reviews the UDP part of the networking code in the Linux kernel. This is a
significantly simpler piece of code than the TCP part. The absence of reliable delivery and
congestion control allows for a very simple design.
Most of the UDP code is located in one file: net/ipv4/udp.c
The UDP layer is depicted in Figure 15. When a packet arrives from the IP layer through
ip_local_delivery(), it is passed on to udp_rcv() (this is the equivalent of tcp_v4_rcv() in the TCP
part). The function udp_rcv() hands the packet to udp_queue_rcv_skb(), which puts it in the
socket's receive queue for the user application. This is the end of the delivery of the packet.
When the user reads the packet, e.g. with the recvmsg() system call, inet_recvmsg() is called,
which in this case calls udp_recvmsg(), which calls skb_recv_datagram(). The function
skb_recv_datagram() then gets the packets from the queue and fills the data structure that will be
read in user space.
When a packet arrives from the user, the process is simpler. The function inet_sendmsg() calls
udp_sendmsg(), which builds the UDP datagram with information taken from the sk structure
(this information was put there when the socket was created and bound to the address).
Once the UDP datagram is built, it is passed to ip_build_xmit(), which builds the IP packet with
the possible help of ip_build_xmit_slow(). If, for some reason, the packet could not be transmitted
(e.g., if the outgoing ring buffer is full), the error is propagated to udp_sendmsg(), which updates
statistics (nothing else is done because UDP is a non-reliable protocol).
Once the IP packet has been built, it is passed on to ip_output(), which finalizes the delivery of
the packet to the lower layers.
Figure 15: UDP data path. Reception: ip_local_delivery(), udp_rcv(), udp_queue_rcv_skb() and sock_put() place the packet in the socket queue, from which inet_recvmsg(), udp_recvmsg() and skb_recv_datagram() read it. Transmission: inet_sendmsg(), udp_sendmsg(), ip_build_xmit() (with ip_build_xmit_slow()), then skb->dst->output and ip_output().
8 The socket API
8.1 socket()
When a user invokes the socket() system call, this calls sys_socket() inside the kernel (see file
net/socket.c). The sys_socket() function does two simple things. First, it calls sock_create(), which
allocates a new sock structure where all the information about the socket/connection is stored.
Second, it calls sock_map_fd(), which maps the socket to a file descriptor. In this way, the
application can access the socket as if it were a file—a typical Unix feature.
8.2 bind()
The bind() system call triggers sys_bind(), which simply puts information about the local
address and port in the sock structure.
8.3 listen()
The listen() system call, which triggers sys_listen(), calls the appropriate listen function for this
protocol. This is reached through the sock->ops->listen(sock, backlog) pointer. In the case of TCP, the listen
function is inet_listen(), which in turn calls tcp_listen_start().
8.5 write()
Every time a user writes to a socket, this goes through the socket linkage to inet_sendmsg(). The
function sk->prot->sendmsg() is called, which in turn calls tcp_sendmsg() in the case of TCP or
udp_sendmsg() in the case of UDP. The next chain of events was described in the previous
sections.
8.6 close()
When the user closes the file descriptor corresponding to this socket, the file system code calls
sock_close(), which calls sock_release() after checking that the inode is valid. The function
sock_release() calls the appropriate release function, in our case inet_release(), before updating
the number of sockets in use. The function inet_release() calls the appropriate protocol-closing
function, which is tcp_close() in the case of TCP. The latter function sends an active reset with
tcp_send_active_reset() and sets the state to TCP_CLOSE_WAIT.
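To tie these subsections together, here is a minimal user-space sketch (error handling omitted, port number arbitrary) showing which kernel entry point each call reaches according to the descriptions above:
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
    int fd, client;
    struct sockaddr_in addr;

    fd = socket(AF_INET, SOCK_STREAM, 0);                /* -> sys_socket()                  */

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5001);                         /* arbitrary example port           */
    bind(fd, (struct sockaddr *) &addr, sizeof(addr));   /* -> sys_bind()                    */

    listen(fd, 5);                                       /* -> sys_listen(), inet_listen()   */
    client = accept(fd, NULL, NULL);                     /* -> accept path (Section 8.4)     */

    write(client, "hello\n", 6);                         /* -> inet_sendmsg(), tcp_sendmsg() */

    close(client);                                       /* -> sock_close(), sock_release()  */
    close(fd);
    return 0;
}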
9 Conclusion
In this technical report, we have documented how the networking code is structured in release
2.4.20 of the Linux kernel. First, we gave an overview, showing the relevant branches of the code
tree and explaining how incoming and outgoing TCP segments are handled. Next, we reviewed
the general data structures (sk_buff and sock) and detailed TCP options. Then, we described the
sub-IP layer and highlighted the difference in the handling of interrupts between NAPI-based and
pre-NAPI device drivers; we also described interrupt coalescence, an important technique for
gigabit end-hosts. In the next section, we described the network layer, which includes IP, ARP
and ICMP. Then we delved into TCP and detailed TCP input, TCP output, SACKs, QuickACKs,
timeouts and ECN; we also documented how TCP’s congestion window is adjusted. Next, we
studied UDP, whose code is easier to understand than TCP’s. Finally, we mapped the socket API,
well-known to Unix networking programmers, to kernel functions.
The need for such a document arises from the current gap between the abundant literature aimed
at Linux beginners and the Linux kernel mailing list where Linux experts occasionally distil some
of their wisdom. Because the technology evolves quickly and the Linux kernel code frequently
undergoes important changes, it would be useful to keep up-to-date descriptions of different parts
of the kernel (not just the networking code). We have found this to be a time-consuming
endeavor, but documenting entangled code (the Linux kernel code notoriously suffers from a lack
of code clean-up and reengineering) is the only way for projects like ours to understand in detail
what the problems are, and to devise a strategy for solving them.
For the sake of conserving time, several important aspects have not been considered in this
document. It would be useful to document how the IPv6 code is structured, as well as the Stream
Control Transmission Protocol (SCTP). The description of SACK also deserves more attention,
as we have realized that this part of the code is sub-optimal and causes problems in long-distance
gigabit networks. Last, it would be useful to update this document to a 2.6.x version of the kernel.
Acknowledgments
We would like to thank Antony Antony, Gareth Fairey, Marc Herbert, Éric Lemoine and Sylvain
Ravot for their useful feedback. Part of this research was funded by the FP5/IST Program of the
European Union (DataTAG project, grant IST-2001-32459).
Acronyms
ACK Acknowledgment
API Application Programming Interface
I/O Input/Output
IP Internet Protocol
IPv4 IP version 4
IPv6 IP version 6
PAWS Protect Against Wrapped Sequence numbers
References
[1] Linux kernel 2.4.20. Available from The Linux Kernel Archives at:
http://www.kernel.org/pub/linux/kernel/v2.4/patch-2.4.20.bz2
[2] M. Allman, V. Paxson and W. Stevens, RFC 2581: TCP Congestion Control, IETF, April
1999.
[3] J. Crowcroft and I. Phillips, TCP/IP & Linux Protocol Implementation: Systems Code for the
Linux Internet, Wiley, 2002.
[4] J.H. Salim, R. Olsson and A. Kuznetsov, “Beyond Softnet”. In Proc. Linux 2.5 Kernel
Developers Summit, San Jose, CA, USA, March 2001. Available at
<http://www.cyberus.ca/~hadi/usenix-paper.tgz>.
[5] J. Cooperstein, Linux Kernel 2.6 – New Features III: Networking. Axian, January 2003.
Available at <http://www.axian.com/pdfs/linux_talk3.pdf>.
[6] W.R. Stevens. TCP/IP Illustrated, Volume 1: The Protocols, Addison-Wesley, 1994.
[7] G.R. Wright and W.R. Stevens, TCP/IP Illustrated, Volume 2: The Implementation, Addison-
Wesley, 1995.
[8] M. Mathis, J. Mahdavi, S. Floyd and A. Romanow, RFC 2018: TCP Selective
Acknowledgment Options, IETF, October 1996.
[9] S. Floyd, J. Mahdavi, M. Mathis and M. Podolsky, RFC 2883: An Extension to the Selective
Acknowledgement (SACK) Option for TCP, IETF, July 2000.
[10] Daniel P. Bovet and Marco Cesati, Understanding the Linux Kernel, 2nd Edition, O’Reilly,
2002.
[11] A. Rubini and J. Corbet, Linux Device Drivers, 2nd Edition, O’Reilly, 2001.
[12] http://tldp.org/HOWTO/KernelAnalysis-HOWTO-5.html
[13] http://www.netfilter.org/unreliable-guides/kernel-hacking/lk-hacking-guide.html
[14] V. Jacobson, R. Braden and D. Borman, RFC 1323: TCP Extensions for High
Performance, IETF, May 1992.
[15] M. Handley, J. Padhye and S. Floyd, RFC 2861: TCP Congestion Window Validation,
IETF, June 2000.
[16] http://www.netfilter.org/
[17] J. C. Mogul and K. K. Ramakrishnan. “Eliminating Receive Livelock in an Interrupt-
Driven Kernel”. In Proc. of the 1996 Usenix Technical Conference, pages 99–111, 1996.
Biographies
Miguel Rio is a Lecturer at the Department of Electronic and Electrical Engineering, University
College London. He previously worked on Performance Evaluation of High Speed Networks in
the DataTAG and MBNG projects and on Programmable Networks on the Promile project. He
holds a Ph.D. from the University of Kent at Canterbury, as well as M.Sc. and B.Sc. degrees from
the University of Minho, Portugal. His research interests include Programmable Networks,
Quality of Service, Multicast and Protocols for Reliable Transfers in High-Speed Networks.
Mathieu Goutelle is a Ph.D. student in the INRIA RESO team of the LIP Laboratory at ENS
Lyon. He is a member of the DataTAG Project and currently works on the behavior of TCP over
a DiffServ-enabled gigabit network. In 2002, he graduated as a generalist engineer (equiv. to an
M.Sc. in electrical and mechanical engineering) from Ecole Centrale in Lyon, France. In 2003, he
received an M.Sc. in Computer Science from ENS Lyon.
Tom Kelly received a Mathematics degree from the University of Oxford in July 1999. His Ph.D.
research on "Engineering Internet Flow Controls" was completed in February 2004 at the
University of Cambridge. He has held research positions as an intern at AT&T Labs Research in
1999, an intern at the ICSI Center for Internet Research in Berkeley during 2001, and an IPAM
research fellowship at UCLA in 2002. During the winter of 2002–03 he worked for CERN on the
EU DataTAG project implementing the Scalable TCP proposal for high-speed wide area data
transfer. His research interests include middleware, networking, distributed systems and computer
architecture.
Richard Hughes-Jones leads e-science and Trigger and Data Acquisition development in the
Particle Physics group at Manchester University. He has a Ph.D. in Particle Physics and has
worked on Data Acquisition and Network projects for over 20 years, including evaluating and
field-testing OSI transport protocols and products. He is secretary of the Particle Physics
Network Coordinating Group which has the remit to support networking for PPARC funded
researchers. Within the UK GridPP project he is deputy leader of the network workgroup and is
active in the DataGrid networking work package (WP7). He is also responsible for the High
Throughput investigations in the UK e-Science MB-NG project to investigate QoS and various
traffic engineering techniques including MPLS. He is a member of the Global Grid Forum and is
co-chair of the Network Measurements Working Group. He was a member of the Program
Committee of the 2003 PFLDnet workshop, and is a member of the UKLIGHT Technical
Committee. His current interests are in the areas of real-time computing and networking
including the performance of transport protocols over LANs, MANs and WANs, network
management and modeling of Gigabit Ethernet components.
J.P. Martin-Flatin is Technical Manager of the European FP5/IST DataTAG Project at CERN,
where he coordinates research activities in gigabit networking, Grid networking and Grid
middleware. Prior to that, he was a principal technical staff member with AT&T Labs Research
in Florham Park, NJ, USA, where he worked on distributed network management, information
modeling and Web-based management. He holds a Ph.D. degree in Computer Science from the
Swiss Federal Institute of Technology in Lausanne (EPFL). His research interests include
software engineering, distributed systems and IP networking. He is the author of a book, Web-
Based Management of IP Networks and Systems, published in 2002 by Wiley. He is a senior
member of the IEEE and a member of the ACM. He is a co-chair of the GGF Data Transport
Research Group and a member of the IRTF Network Management Research Group. He was a co-
chair of GNEW 2004 and PFLDnet 2003.
Yee-Ting Li received an M.Sc. degree in Physics from the University of London in August 2001.
He is now studying for a Ph.D. with the Centre of Excellence in Networked Systems at
University College London, UK. His research interests include IP-based transport protocols,
network monitoring, Quality of Service (QoS) and Grid middleware.