Cns Mod 4

The document discusses key aspects of transport layer protocols including the transport service, elements of transport protocols such as addressing, connection establishment, connection release, and congestion control. It also covers Internet transport protocols UDP and TCP as well as services provided to upper layers like applications.

Uploaded by

Vivek Tg

CHAPTER 6:

THE TRANSPORT
LAYER

• The Transport Service


• Elements of Transport Protocols
• Congestion Control
• The Internet Transport Protocols : UDP
• The Internet Transport Protocols : TCP
• Services Provided to the Upper Layers
• Transport Service Primitives
• Berkeley Sockets
• An Example of Socket Programming:
– An Internet File Server
• Transport layer services:
– To provide efficient, reliable, and
cost-effective service to its users, normally
processes in the application layer.
– To make use of the services provided by
the network layer.
• The transport entity: the hardware and/or software
within the transport layer that does the work. Its
positions:
– In the OS kernel, in a separate user process, in a
library package bound to network applications, or
– On the network interface card.
The network, transport and application layers
• There are two types of transport services
– Connection-oriented transport service
– Connectionless transport service
• The similarities between transport services and
network services
– The connection–oriented service is similar to the
connection-oriented network service in many
ways:
• Establishment, data transfer, release;
• Addressing;
• Flow control
– The connectionless transport service is also similar
to the connectionless network service.
• The differences between transport services and network

services. Why are there two distinct layers?

– The transport code runs entirely on the user’s


machines, but the network layer mostly runs on the
routers which are usually operated by the carrier.
– Network layer has problems (losing packets,
router crashing, …)
– The transport layer improves the QOS of the network
layer.
– The transport service is more reliable than the
network service.
– Application programmers can write code according
to a standard set of transport service primitives and
have these programs work on a wide variety of
networks.
• The transport layer provides some operations to
applications programs, i.e., a transport service interface.
• Each transport service has its own interface.
• The transport service is similar to the network service,
but there are also some important differences:

– The network service models the real network, which
is unreliable; the transport service improves on it to
provide a reliable service.


– Network services are for network
developers. Transport services are for
application developers.
How to use these primitives for an application?
• Server App: LISTEN
• Client App: CONNECT
• Client ↔ Server: SEND/RECEIVE (messages)
• Client Transport ↔ Server Transport: TPDUs
• Client network layer ↔ Server network layer: Packets
• Client data link layer ↔ Server data link layer: Frames
• Client/Server: DISCONNECT
Nesting of Segments, packets, and frames
Connection establishment and connection release
• The C/S Application
– Server: SOCKET/BIND/LISTEN/ACCEPT
– Client: SOCKET/CONNECT

– C/S: SEND/RECEIVE

– C/S: CLOSE (symmetric)


• IFServer.c
• IFClient.c
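The SOCKET/BIND/LISTEN/ACCEPT and SOCKET/CONNECT sequence above can be sketched in Python (the original example files IFServer.c and IFClient.c are in C; this simplified stand-in uses an OS-assigned port and a thread in place of a separate server process):

```python
import socket
import threading

# Server side: SOCKET / BIND / LISTEN (port 0: let the OS pick a free port)
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

def serve():
    conn, _ = srv.accept()          # ACCEPT
    data = conn.recv(1024)          # RECEIVE
    conn.sendall(b"echo:" + data)   # SEND
    conn.close()                    # CLOSE

t = threading.Thread(target=serve)
t.start()

# Client side: SOCKET / CONNECT
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port))
cli.sendall(b"hello")               # SEND
reply = cli.recv(1024)              # RECEIVE
cli.close()                         # CLOSE (each side releases its direction)
t.join()
srv.close()
print(reply)  # b'echo:hello'
```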
ELEMENTS OF
TRANSPORT
PROTOCOLS
• Addressing
• Connection Establishment
• Connection Release
• Error Control and Flow Control
• Multiplexing
• Crash Recovery
Elements of Transport Protocols
Transport protocol and data link protocol
• Similarities: Error control, sequencing, and flow control
• Differences
– Environment: physical communication channel/subnet
– Addressing: implicit/explicit
– Connection establishment: simple/complicated
– Storage capacity: no/yes (unpredictable/predictable)
– Buffering and flow control: amount
How to address?
• NSAP: Network service access points
– IP (32 bits)
• TSAP: Transport service access point
– Port (16 bits)
• How to find out the server’s TSAP
– Some TSAP addresses are so famous that they
are fixed.
– Process server
• To listen to a set of TSAPs at the same time.
• To dispatch the request to the right server.
– Process server + name server (directory server,
114)
Connection establishment sounds easy, but …
• Naïve approach:
– One sends a CONNECTION REQUEST TPDU to the
other and wait for a CONNECTION ACCEPTED
reply.
– If ok, done; otherwise, retry.
– Possible nightmare:
• A user establishes a connection with a bank, sends
messages telling the bank to transfer a large amount
of money to the account of a not-entirely-trustworthy
person, and then releases the connection.
• Moreover, assume each packet in the scenario is
duplicated and stored in the subnet.
– This leads to the problem of delayed duplicates.
Possible solutions for solving delayed duplicates
• To give each connection a connection identifier (i.e.,
a sequence number incremented for each connection
established) chosen by the initiating party and put in
each TPDU, including the one requesting the
connection.
• To use throwaway transport addresses. Each
time a transport address is needed, a new one is
generated. When a connection is released, the
address is discarded and never used again.
• To use sequence number and
age
– To limit packet lifetime.
– To use the sequence number
• Packet lifetime can be restricted to a known
maximum using one of the following techniques
– Restricted subnet design.
– Putting a hop counter in each packet.
– Timestamping each packet (router
synchronization problem).

• In practice, we will need to guarantee not only that a


packet is dead, but also that all acknowledgements to
it are also dead, so we will now introduce T, which is
some small multiple of the true maximum packet
lifetime.
To ensure that two identically numbered
TPDUs are never outstanding at the same
time:
1. To equip each host with a time-of-day clock

• Each clock is assumed to take the form of a


binary counter that increments itself at uniform
intervals.
• The number of bits in the counter must equal or
exceed the number of bits in the sequence
numbers.
• The clock is assumed to continue running even if
the host goes down.
• The clocks at different hosts need not
be synchronized.
To ensure that two identically numbered
TPDUs are never outstanding at the same
time:
2. When a connection is set up, the low-order k bits of
the clock are used as the initial sequence number
(also k bits). Each connection starts numbering its
TPDUs with a different initial sequence number.
The sequence space should be so large that by the
time sequence numbers wrap around, old TPDUs
with the same sequence number are long gone.
Once a connection has chosen its initial
sequence number, any sliding window protocol
can be used for data flow control.
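The clock-based initial sequence number idea can be sketched directly (the width k and the clock values here are illustrative assumptions, not a real TCP implementation):

```python
K = 32  # sequence numbers are k bits wide (illustrative choice)

def initial_seq(clock_ticks: int, k: int = K) -> int:
    # Use the low-order k bits of the clock as the initial sequence number,
    # so connections opened at different times start numbering differently.
    return clock_ticks & ((1 << k) - 1)

# Two connections opened one tick apart get different initial numbers.
assert initial_seq(100) != initial_seq(101)
# Wraparound: clock values 2^k ticks apart produce the same number, so
# the sequence space must be large enough that such old TPDUs are gone.
assert initial_seq(5) == initial_seq(5 + (1 << K))
```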
A problem occurs when a host crashes. When
it comes up again, its transport entity does
not know where it was in the sequence space.
– To require the transport entities to be idle for T
seconds after a recovery to let all old TPDUs die
off. (In a complex internetwork, T may be large,
so this strategy is unattractive.)
– To avoid requiring T sec of dead time after a crash,
it is necessary to introduce a new restriction on the
use of sequence numbers.
Restriction on the sequence numbers?
• Let T (the maximum packet lifetime) be 60 sec and let
the clock tick once per second. The initial sequence
number for a connection opened at time x will be x.
• At t=30 sec, an ordinary data TPDU being sent on (a
previously opened) connection 5 (call it TPDU X) is
given sequence number 80.
• After sending TPDU X, the host crashes and then
quickly restarts.
• At t = 60sec, it begins reopening connections 0 through 4.
• At t = 70 sec, it reopens connection 5 using initial
sequence number 70 as required.
• During the next 15 sec it sends data TPDUs 70 through 80.
Thus at t=85sec, a new TPDU with sequence number 80 and
connection 5 has been injected into the subnet.
• Now two TPDUs with sequence number 80 are outstanding
on connection 5: the old TPDU X and the new TPDU 80.
• To prevent sequence numbers from being used for a
time T before their potential use as initial sequence
numbers. The illegal combinations of time and sequence
number are called the forbidden region. Before
sending any TPDU on any connection, the transport
entity must read the clock and check to see that it is
not in the forbidden region.
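A minimal sketch of the forbidden-region check, under the simplified model used in the text (clock ticks once per second, ISN = clock value mod 2^k; the values of T and k here are illustrative assumptions):

```python
def in_forbidden_region(seq: int, now: int, T: int = 60, k: int = 8) -> bool:
    # Simplified model: the initial sequence number of a connection
    # opened at time t is t mod 2^k. A TPDU carrying number `seq` is in
    # the forbidden region if some connection opened within the next
    # T seconds would start numbering at `seq`.
    space = 1 << k
    for dt in range(T + 1):
        if (now + dt) % space == seq % space:
            return True
    return False

# With T = 60 s: sequence number 70 sent at t = 65 is forbidden,
# because a connection opened at t = 70 would start at 70.
assert in_forbidden_region(70, now=65)
assert not in_forbidden_region(200, now=65)  # 200 is outside [65, 125]
```

The sender's two escapes in the text map onto this check: either wait until `now` has advanced past the collision, or pick a different (resynchronized) sequence number.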
Connection Establishment
Connection Establishment: The forbidden region
• Too fast
• Too slow

• Solution for the delayed TPDU problem: before
sending every TPDU, the transport entity must check
to see if it is about to enter the forbidden region,
and if so, either delay the TPDU for T sec or
resynchronize the sequence numbers.
Connection Establishment
• Three-way handshake for connection establishment
– Normal operation.
– Old duplicate CONNECTION REQUEST
appearing out of nowhere.
– Duplicate CONNECTION REQUEST and
duplicate ACK.
• Conclusion: there is no combination of old
CONNECTION REQUEST,
CONNECTION
ACCEPTED, or other TPDUs that can cause the
protocol to fail and have a connection setup by
accident when no one wants it.
Connection Release
• Asymmetric release and symmetric release
– Asymmetric release: When one party hangs up,
the connection is broken.
– Symmetric release: to treat the connection as two
separate unidirectional connections and require
each one to be released separately.
• Asymmetric release is abrupt and may result in data
loss (See the next slide)
• One way to avoid data loss is to use symmetric release.
• Abrupt disconnection with loss of data
• Symmetric release does the job when each process has a
fixed amount of data to send and clearly knows when it has
sent it.
• Symmetric release has its problems if determining that all
the work has been done and the connection should be
terminated is not so obvious.
• Two-army problem: A white army is encamped in a
valley. On both of the surrounding hillsides are blue
armies.
one blue army < white army < two blue armies
• Does a protocol exist that allows the blue armies to win?
– Two-way handshake
The commander of blue army #1: “I propose we attack at
dawn on March 29. How about it?”
The commander of blue army #2: “OK.”
Will the attack happen?

– Three-way handshake,

–…
– N-way handshake
– No protocol exists that works.
• Substitute disconnect for attack. If neither side is prepared to
disconnect until it is convinced that the other side is prepared to
disconnect too, the disconnection will never happen.
(a) Normal case of three-way handshake
(b) Final ACK lost
(c) Response lost
(d) Response lost and subsequent DRs Lost
• Automatic disconnect rule
– If no TPDUs have arrived for a certain number
of seconds, the connection is automatically
disconnected.
– Thus, if one side ever disconnects, the other side
will detect the lack of activity and also disconnect.
• Conclusion: releasing a connection without data loss
is not nearly as simple as it first appears.
Error control is ensuring that the data is delivered with the
desired level of reliability, usually that all of the data is
delivered without any errors.
• Similarity: In both layers, error control has to be performed.
• Difference: The link layer checksum protects a frame while
it crosses a single link. The transport layer checksum
protects a segment while it crosses an entire network path. It
is an end-to-end check, which is not the same as having a
check on every link.
Flow control in data link layer and transport layer
• Similarity: In both layers a sliding window or other scheme
is needed on each connection to keep a fast transmitter
from overrunning a slow receiver.
• Difference: A router usually has relatively few lines, whereas
a host may have numerous connections.
Buffering
• The sender: The sender must buffer all TPDUs sent if the
network service is unreliable. The sender must buffer all
TPDUs sent if the receiver cannot guarantee that every
incoming TPDU will be accepted.
• The receiver: If the receiver has agreed to do the buffering,
there still remains the question of the buffer size.
Flow control and buffering: (The receiver
buffering)
Buffer sizes
(a) Chained fixed-size buffers.

(b) Chained variable-sized buffers.

(c) One large circular buffer per connection.


Flow control and buffering
• The optimum trade-off between sender buffering
and receiver buffering depends on the type of
traffic carried by the connection.
– For low-bandwidth bursty traffic, it is better to
buffer at the sender,
– For high-bandwidth smooth traffic, it is better
to buffer at the receiver.
– As connections are opened and closed, and as the
traffic pattern changes, the sender and receiver need
to dynamically adjust their buffer allocation.
– Dynamic buffer allocation.
Flow control and buffering
Dynamic buffer allocation (receiver’s buffering
capacity).
Flow control and buffering
• When buffer space no longer limits the maximum
flow, another bottleneck will appear: the carrying
capacity of the subnet.
– The sender dynamically adjusts the window size
to match the network's carrying capacity.
– In order to adjust the window size
periodically, the sender could monitor both
parameters and then compute the desired
window size.
Multiplexing
• Multiplexing and demultiplexing

– Multiplexing: Application layer → Transport layer
→ Network layer → Data link layer → Physical layer
– Demultiplexing: Physical layer → Data link layer
→ Network layer → Transport layer → Application
layer
• Two multiplexing:
– Upward multiplexing
– Downward multiplexing (See the next slide)
Multiplexing
• Recovery from network and router crash is
straightforward.
• Recovery from host crash is difficult.
• Assume that a client is sending a long file to a
file server using a simple stop-and-wait protocol.
– Part way through the transmission, the server crashes.
– The server might send a broadcast TPDU to all other
hosts, announcing that it had just crashed and
requesting that its clients inform it of the status of
all open connections.
– Each client can be in one of two states: one
TPDU outstanding (S1), or no TPDUs outstanding (S0).
• Some situations
– The client should retransmit only if it has an
unacknowledged TPDU outstanding (S1) when
it learns of the crash.
• If a crash occurs after the acknowledgement has
been sent but before the write has been done, the
client will receive the acknowledgement and thus
be in state S0; it will not retransmit, and the data
is LOST. Problem!
• If a crash occurs after the write has been done but
before the acknowledgement has been sent, the
client will not receive the acknowledgement and
thus be in state S1; it will retransmit, producing a
duplicate. Problem!

• For more, see the next slide


Crash Recovery
Crash Recovery

• No matter how the sender and receiver are


programmed, there are always situations where the
protocol fails to recover properly.
• In more general terms, recovery from a layer N
crash can only be done by layer N+1 and only if the
higher layer retains enough status information.
Congestion Control*

• Desirable Bandwidth Allocation

• Regulating the sending rate

• Wireless Issues
• Efficiency and Power
(a) Goodput and (b) delay as a function of offered load
• Power
– will initially rise with offered load, as delay remains small
and roughly constant,
– but will reach a maximum and
– fall as delay grows rapidly.
• The load with the highest power represents an
efficient load for the transport entity to place on the
network.
Max-Min Fairness
• How to divide bandwidth between different transport senders:
– The first consideration is to ask what this problem has to
do with congestion control.
– A second consideration is what a fair portion means for
flows in a network. The form of fairness that is often desired
for network usage is max-min fairness.
Desirable Bandwidth Allocation
• Convergence: A final criterion is that the
congestion control algorithm converge quickly to a
fair and efficient allocation of bandwidth.
(a) A fast network feeding a low-capacity
receiver.
(b) A slow network feeding a high-capacity
receiver.
• Signals of some congestion control protocol
• Used in TCP
• Analyses by Padhye et al. (1998) show that the throughput
goes up as the inverse square-root of the packet loss rate.
• What this means in practice is that the loss rate for fast TCP
connections is very small; 1% is a moderate loss rate, and by
the time the loss rate reaches 10% the connection has
effectively stopped working.
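The inverse square-root relation can be illustrated numerically; constants cancel out, so only relative throughput matters here:

```python
import math

def relative_throughput(loss_rate: float) -> float:
    # Throughput is proportional to 1 / sqrt(p) (Padhye et al., 1998);
    # the proportionality constant cancels when comparing two loss rates.
    return 1.0 / math.sqrt(loss_rate)

# Going from 1% to 10% loss cuts throughput by a factor of sqrt(10) ~ 3.16,
# which is why loss-as-congestion-signal schemes collapse on lossy links.
ratio = relative_throughput(0.01) / relative_throughput(0.10)
print(round(ratio, 2))  # 3.16
```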
• However, for wireless networks such as 802.11 LANs,
frame loss rates of at least 10% are common.
• This difference means that, absent protective measures,
congestion control schemes that use packet loss as a signal will
unnecessarily throttle connections that run over wireless links
to very low rates.
• There are two aspects to note. First, the sender does not necessarily know that
the path includes a wireless link, since all it sees is the wired link to which it
is attached.
THE INTERNET
TRANSPORT
PROTOCOLS: UDP, RPC,
RTP
• Two main transport protocols in the
Internet
– Connectionless protocol (UDP)
– Connection-oriented protocol (TCP)
• UDP (User Datagram protocol)
• RPC (Remote Procedure Call)
• RTP (Real-time Transport Protocol)
The Internet Transport Protocols: UDP
• UDP (RFC768) provides a way for applications to
– send encapsulated IP datagrams and
– send them without having to establish a connection.
• UDP transmits segments consisting of an 8-byte
header followed by the payload.

■ UDP
◆ Multiplexing and demultiplexing using ports.
◆ No flow control, error control or retransmission.
◆ Applications: RPC, RTP, DNS (Domain Name
System)
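These UDP properties are easy to see in a few lines of Python: datagrams are demultiplexed purely by port, and there is no connection setup before sending (loopback is used here, where loss is unlikely; port 0 asks the OS for a free port):

```python
import socket

# Server: bind to a port; no LISTEN/ACCEPT, since UDP is connectionless.
srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 0))
port = srv.getsockname()[1]

# Client: just send a datagram; no connection establishment at all.
cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cli.sendto(b"query", ("127.0.0.1", port))

data, addr = srv.recvfrom(1024)    # each datagram arrives whole, with its source
srv.sendto(b"reply:" + data, addr)  # demultiplexing: reply to the sender's port
resp, _ = cli.recvfrom(1024)
print(resp)  # b'reply:query'
```

Note that nothing here retransmits or acknowledges anything; if a datagram were lost, the application would simply hang in `recvfrom`.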
• RPC (Remote Procedure Call) allows programs to
call procedures located on remote hosts.
– When a process on machine 1 calls a procedure on
machine 2, the calling process on 1 is suspended
and execution of the called procedure takes place on
2
– Information can be transported from the caller to
the callee in the parameters and can come back in
the procedure result.
– No message passing is visible to the programmer.
• The idea behind RPC is to make a remote
procedure call look as much as possible like a local
one.
C Proc ↔ C Stub ↔ S Stub ↔ S Proc
The steps in making an RPC
• Step 1 is the client calling the client stub.
• Step 2 is the client stub packing the parameters into a
message (marshaling) and making a system call to
send the message.
• Step 3 is the kernel sending the message from the
client machine to the server machine.
• Step 4 is the kernel passing the incoming packet to
the server stub and unpacking the packet to extract the
parameters (unmarshaling).
• Step 5 is the server stub calling the server procedure
with the unmarshaled parameters.

• The reply traces the same path in the other direction.
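The marshaling steps above can be sketched without a real network: the client stub packs the call into a message, a stand-in for the two kernels carries it, and the server stub unpacks it and invokes the procedure. All names here are illustrative, not a real RPC library:

```python
import pickle

def server_procedure(a, b):
    # The real procedure living on the "remote" machine.
    return a + b

# Steps 4-5: server stub unmarshals the message and calls the procedure.
def server_stub(message):
    name, args = pickle.loads(message)
    result = {"server_procedure": server_procedure}[name](*args)
    return pickle.dumps(result)          # marshal the result for the reply

def transport(message):
    # Step 3: stands in for the kernels sending the message across the net.
    return server_stub(message)

# Steps 1-2: client calls the client stub, which marshals name + parameters.
def client_stub(name, *args):
    message = pickle.dumps((name, args))
    reply = transport(message)
    return pickle.loads(reply)           # unmarshal the result

print(client_stub("server_procedure", 2, 3))  # 5
```

The caller never sees the message passing, which is exactly the RPC illusion; the "snakes in the grass" below are the cases where this illusion leaks.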


• A few snakes hiding under the grass (RPC)
– The use of pointer parameters.
– Some problems for weakly-typed
languages (The length of an array).
– It is not always possible to deduce the types of the
parameters, not even from a formal specification or
the code itself (e.g., printf).
– The use of global variables.
• □Some restrictions are needed to make RPC
work well in practice.
• RPC/TCP vs RPC/UDP
• Multimedia applications such as Internet radio, Internet
telephony, music-on-demand, videoconferencing, and
video-on-demand require real-time transport protocols:
RTP (Real-time Transport Protocol) (RFC 1889)
• The RTP is in user space and runs over UDP. The RTP
Ops:
– A multimedia application consists of multiple audio,
video, text, and possibly other streams. These are fed
into the RTP library.
– This library then multiplexes the streams and encodes them.
– UDP packets are generated and embedded in IP packets.
– The IP packets are then put in frames for transmission.
–…
(a) The position of RTP in the protocol
stack
(b) packet nesting
• RTP is a transport protocol realized in the application layer.
• RTP is to multiplex several real-time data streams onto a
single stream of UDP packets and unicast or multicast the
UDP packets.
• Each RTP packet is given a number one higher than its
predecessor. RTP has no flow control, no error control,
no acknowledgements, and no retransmission support.
• Each RTP payload may contain multiple samples and they
can be coded any way that the application wants. For
example a single audio stream may be encoded as 8-bit
PCM samples at 8kHz, delta encoding, predictive encoding,
GSM encoding, MP3, and so on.
• RTP allows timestamping.
• Version: 2 bits, already at 2.
• P bit: padded to a multiple of 4 bytes or not.
• X bit: an extension header or not.
• CC: how many contributing sources are present (0-15).
• M bit: marker bit.
• Payload type: which encoding algorithm has been used.
• Sequence number: incremented on each RTP packet
sent.
• Timestamp: reducing jitter.
• Synchronization source identifier: which stream
the packet belongs to.
• Contributing source identifiers.
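The fixed header fields listed above can be packed with `struct`; this sketch covers only the 12-byte fixed header, and the field values in the example are illustrative:

```python
import struct

def rtp_header(seq, timestamp, ssrc, payload_type=0, marker=0,
               version=2, padding=0, extension=0, cc=0):
    # First byte: version (2 bits), P bit, X bit, CC (4 bits).
    byte0 = (version << 6) | (padding << 5) | (extension << 4) | cc
    # Second byte: M bit, then 7-bit payload type.
    byte1 = (marker << 7) | payload_type
    # 12-byte fixed header in network byte order: two bytes, a 16-bit
    # sequence number, a 32-bit timestamp, and the 32-bit SSRC.
    return struct.pack("!BBHII", byte0, byte1, seq, timestamp, ssrc)

hdr = rtp_header(seq=1, timestamp=160, ssrc=0xDEADBEEF)
assert len(hdr) == 12
assert hdr[0] >> 6 == 2   # version field is already at 2
```

Contributing-source identifiers (CSRC list) and extension headers would follow this fixed part when the CC field or X bit is nonzero.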
• RTCP (Real-time Transport Control Protocol) is a
little sister protocol (little sibling protocol?) for
RTP.
– Does not transport any data
– To handle feedback, synchronization, and
the user interface
– To handle inter stream synchronization.
– To name the various sources.
• For more information about RTP, see RTP: Audio and
Video for the Internet (Perkins, C.E., 2002,
Addison-Wesley).
Smoothing the output stream by buffering packets
THE INTERNET TRANSPORT
PROTOCOLS: TCP
• Introduction to TCP
• The TCP Service Model
• The TCP Protocol
• The TCP Segment Header
• TCP Connection Establishment
• TCP Connection Release
• TCP Connection Management Modeling
• TCP Sliding Window
• TCP Timer Management
• TCP Congestion Control
• The Future of TCP
TCP: Introduction
• TCP (Transmission Control Protocol) provides a
reliable end-to-end byte stream over an unreliable
internetwork. For TCP, see RFCs 793, 1122, 1323,
2108, 2581, 2873, 2988, 3168, and 4614.
• The communication between TCP entities
• A TCP entity accepts user data streams from local
processes, breaks them up into pieces not exceeding
64 KB (in practice, often 1500 − 20 − 20 = 1460 data
bytes), and sends each piece as a separate IP datagram.
• When datagrams containing TCP data arrive at a
machine, they are given to the TCP entity,
which constructs the original byte streams.
• TCP must furnish the reliability that most users want
and that IP does not provide.
• For any TCP service to be obtained, a connection must
be explicitly established between a socket on the
sending machine and a socket on the receiving
machine.
– Connections are identified by the socket identifiers
at both ends, that is, (socket1, socket2)
– A socket number (address) consists of the IP
address of the host and a 16-bit number local to
that host, called a port.
– Port numbers below 1024 are called well-known
ports and are reserved for standard services. (see
the next slide.)
• Some well-known ports
• To have many daemons standby
• To have one master daemon inetd or xinetd standby
– The master daemon attaches itself to multiple ports
and waits for the first incoming connection
– When one incoming connection request arrives,
inetd or xinetd forks off a new process and executes
the appropriate daemon on it, letting that daemon
handle the request.

• All TCP connections are full duplex and point-to-point.


• A TCP connection is a byte stream, not a message
queue. Message boundaries are not preserved end to
end.

(a) Four 512-byte segments sent as separate IP datagrams.


(b) The 2048 bytes of data delivered to the application in a
single READ CALL.
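The byte-stream property in the figure can be demonstrated locally; here a connected stream-socket pair (`socketpair`) stands in for a TCP connection, and the four write boundaries vanish on the way to the reader:

```python
import socket

a, b = socket.socketpair()        # connected pair of stream sockets
for i in range(4):
    a.sendall(bytes([i]) * 512)   # four separate 512-byte "messages"
a.close()                          # no more data in this direction (like FIN)

received = b""
while True:
    chunk = b.recv(4096)
    if not chunk:                  # peer closed: the stream is exhausted
        break
    received += chunk
b.close()

# 2048 bytes arrive, but nothing marks where one write ended and the
# next began; the application sees one undifferentiated byte stream.
print(len(received))  # 2048
```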
• When an application passes data to TCP, TCP may
send it immediately or buffer it at its own discretion.
– To force data out, applications can use the PUSH
flag, which tells TCP not to delay the transmission.
• TCP supports urgent data (now rarely used).
– When the urgent data are received at the destination,
the receiving application is interrupted (e.g., given a
signal in UNIX terms) so it can stop whatever it was
doing and read the data stream to find the urgent
data.
• The end of the urgent data is marked so the
application knows when it is over.
• The start of the urgent data is not marked. It is
up to the application to figure that out.
• Every byte on a TCP connection has its own
32-bit sequence number.
• The sending and receiving TCP entities exchange
data in the form of segments.
– Each segment, including the TCP header, must
fit in the 65515 (=65535-20) byte IP payload.
– Each network has a maximum transfer unit, or
MTU (1500 bytes for Ethernet).
• The TCP entities use the sliding window protocol
– Segments can arrive out of order.
– Segments can be delayed.
– The retransmissions may include different
byte ranges than the original transmission.
• Source port and destination port: to identify the local
end points of the connection. A port plus its host’s IP
address forms a 48-bit unique end point. The source
and destination end points together identify the
connection.
• Sequence number and acknowledgement number:
32 bits long (every byte of data is numbered in a
TCP stream).
• TCP header length: how many 32-bit words
are contained in the TCP header.
• 4-bit field not used.
• ECE: ECN (Explicit Congestion Notification)-Echo to
a sender
• URG bit: the Urgent pointer is in use or not.
• ACK bit: the Acknowledgement number is valid or not.
• PSH bit: PUSHed data or not.
• RST bit: to reset a connection that has become
confused due to a host crash or some other reason.
• SYN bit: used to establish connections.
– SYN for CONNECTION REQUEST,
– SYN+ACK for CONNECTION ACCEPTED.
• FIN bit: used to release a connection.
• Window size: to tell how many bytes may
be sent starting at the byte acknowledged.
• Urgent pointer: used to indicate a byte offset from the
current sequence number at which urgent data are to be
found.
• Options:
– To allow each host to specify the maximum TCP payload
it is willing to accept
– Window scale option
– To use selective repeat instead of the go-back-n protocol
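The header fields described above can be pulled out of a raw 20-byte TCP header with `struct` (an illustrative parser for the fixed header only, with no options, and hand-made example values):

```python
import struct

def parse_tcp_header(data: bytes) -> dict:
    # Fixed 20-byte header in network byte order: ports, sequence and
    # acknowledgement numbers, offset/flags word, window, checksum, urgent.
    (src, dst, seq, ack, off_flags,
     window, checksum, urgent) = struct.unpack("!HHIIHHHH", data[:20])
    return {
        "src_port": src, "dst_port": dst,
        "seq": seq, "ack": ack,
        "header_len": (off_flags >> 12) * 4,   # length field counts 32-bit words
        "flags": {name: bool(off_flags & bit) for name, bit in
                  [("ECE", 0x40), ("URG", 0x20), ("ACK", 0x10),
                   ("PSH", 0x08), ("RST", 0x04), ("SYN", 0x02),
                   ("FIN", 0x01)]},
        "window": window, "checksum": checksum, "urgent_ptr": urgent,
    }

# A hand-made SYN segment: ports 1234 -> 80, header length 5 words, SYN set.
raw = struct.pack("!HHIIHHHH", 1234, 80, 1000, 0, (5 << 12) | 0x02,
                  65535, 0, 0)
hdr = parse_tcp_header(raw)
assert hdr["flags"]["SYN"] and not hdr["flags"]["ACK"]
assert hdr["header_len"] == 20
```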
• TCP connections are full duplex and can be treated as
a pair of simplex connections.
• Each simplex connection is released independently of
its sibling.
– To release a connection, a party can send a TCP
segment with the FIN bit set, which means that it
has no more data to transmit.
– When the FIN is acknowledged, that direction is
shut down for new data.
– When both directions have been shut down,
the connection is released.
• To avoid the two-army problem, timers are used.
TCP: Connection Management Policy
TCP connection
management finite
state machine (FSM).

The heavy solid line is the


normal path for a client.

The heavy dashed line is


the normal path for a
server.

The light lines are unusual


events.

Each transition is labeled by
the event causing it and the
action resulting from it,
separated by a slash.
• 6.15
Why does UDP exist? Would it not have been enough to just let
user processes send raw IP packets?

• 6.17
A client sends a 128-byte request to a server located 100 km away
over a 1-gigabit optical fiber. What is the efficiency of the line
during the remote procedure call?

• 6.23
Datagram fragmentation and reassembly are handled by IP and are
invisible to TCP. Does this mean that TCP does not have to worry
about data arriving in the wrong order?

• 6.28
The maximum payload of a TCP segment is 65,495 bytes. Why
was such a strange number chosen?
• 6.32
If the TCP round-trip time, RTT, is currently 30 msec and the
following acknowledgements come in after 26, 32, and 24 msec,
respectively, what is the new RTT estimate using the Jacobson
algorithm? Use α=0.9.
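The smoothing update referred to in 6.32 is the exponentially weighted moving average RTT = αRTT + (1 − α)M; a quick sketch with the exercise's numbers as sample data (the variable names are illustrative):

```python
def rtt_update(rtt: float, sample: float, alpha: float = 0.9) -> float:
    # EWMA smoothing: keep alpha of the old estimate, blend in the
    # fraction (1 - alpha) of the newly measured sample M.
    return alpha * rtt + (1 - alpha) * sample

rtt = 30.0                   # current estimate, in msec
for m in (26, 32, 24):       # the three measured acknowledgement times
    rtt = rtt_update(rtt, m)
print(round(rtt, 3))  # 29.256
```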

• 6.36
In a network whose max segment is 128 bytes, max segment lifetime
is 30 sec, and has 8-bit sequence numbers, what is the maximum
data rate per connection?

• 6.39
To get around the problem of sequence numbers wrapping around

while old packets still exist, one could use 64-bit sequence numbers.

However, theoretically, an optical fiber can run at 75 Tbps. What


maximum packet lifetime is required to make sure that future 75-Tbps
networks do not have wrap around problems even with 64-bit
sequence numbers? Assume that each byte has its own sequence
number, as TCP does.

• 6.4
In both parts of Fig. 6-6 there is a comment that the value of
SERVERPORT must be the same in both client and server. Why is this
so important?
