Internet Protocol Television (IPTV) Services
Fred Biko Otieno Mboya
Thesis
20 April 2016
Instructor: Dr. Tero Nurminen, Principal Lecturer

Abstract
This thesis deals mainly with IPTV (Internet Protocol TV) technology and how it changes the business of television: its development and architectural design, its applications, and its progress into the future. The goal of the project is to enhance professional networking on both live TV and radio platforms, to understand how IPTV functions and how it differs from internet TV, and to see how content is formatted, transported and delivered to the end users and, equally important, how providers charge for it and make a living.
The study itself was carried out by retrieving information from different sources such as the
library, the Internet, through self-observation, and discussions with the chief supervisor
and instructor. Different aspects of IPTV are discussed in different phases of the thesis.
First, the study introduces IPTV technology, its background and means of transmission. Then the study covers the architectural design of IPTV, multimedia methods and applications, and compression techniques, and finally its purpose and role in growing technology services.
The purpose of the project was to gain adequate practical experience, skills, techniques and theory by applying previous classroom knowledge to actual working situations in a strategic, organized and supervised environment. The conclusion is that in the near future IPTV is likely to replace traditional TV technology, since it delivers a good supplementary business model for service providers, offers better quality of service to consumers, and plays a significant role in fast growing and evolving interactive TV applications such as VOD.
1 Introduction
Until about a decade ago, the only way to watch television was through over-the-air broadcast and cable signals. The emergence of satellite, digital cable, and High Definition Television (HDTV) services has made it possible for telecommunication providers to develop new technology for the television broadcast system. The digitization of television technology around the globe has facilitated access to multiple services, with better quality of service on all devices at all times.

Internet Protocol Television (IPTV) provides the means to securely deliver high quality triple play services to the end users over a private or managed network. IPTV functions much like a standard pay TV (Television) service, and one of its key benefits is to offer IP (Internet Protocol) based services in one integrated package, for example receiving and displaying live or pre-recorded audio and video, as well as covering live TV or Video on Demand (VOD).
This thesis is based on a project the author worked on during an internship period at
Streamafrik. The study itself was carried out successfully by retrieving information from
different sources such as the library, the Internet, through self-observation, and discus-
sions with the chief supervisor.
2 Theoretical Background
The aim of the project was to explore the development of internet protocol television
and its different phases, as well as the transmission distribution mechanism that allows
for immediate interactivity and multimedia experience.
IPTV is often mistaken for internet TV, since both use IP technology for video delivery. This section discusses the key differences between these two IP technology services, as shown in Table 1. IPTV services are delivered via a private, managed network using the internet protocol suite, whilst Internet TV services are distributed over the open, public, global internet [1, 21-25]. This enables IPTV to deliver higher quality content securely to the end users. Internet TV video delivery, by contrast, can be subject to longer waiting times due to lower bandwidth, high traffic or poor connection quality.

Table 1 illustrates some of the key differences between IPTV and Internet TV. Modified from [1, 26].

As Table 1 shows, both IPTV and internet TV play a significant role in delivering video across a network platform. Although both rely on IP technology for delivery, their approaches differ in the way the signal travels and how the content is delivered over the network. The Internet TV model is open to any rights holder: anyone can create an endpoint and publish on a global basis, offering direct communication between the provider and the consumers.
Mobile IPTV is a wireless mobile transmission platform that enables users to receive
multimedia content such as audio, graphics, video, and text over a wireless IP network
to a mobile medium with support for mobility, security, quality of service (QoS), quality
of experience, and reliability functions [2].
The Next Generation Networks (NGN) enable unrestricted access for users to networks with a wide range of services offered by different service providers, as shown in Figure 1. In this scenario both the sender and the receiver are assumed to be using a mobile device [33]. This mobility capability enables communication between the sender (the service provider) and the receiver (at the mobile terminal) over a wireless interface.
There are two main approaches used to deliver mobile TV, that is, across a cellular
network and across a dedicated broadcast network.
There are many advantages that mobile IPTV offers, such as the following:
- Mobile IPTV provides a variety of real-time streaming services, such as VOD and video services, well suited to mobile phones.
- It brings mobility of services, based on the IP Multimedia Subsystem (IMS), and wireless characteristics to IPTV.
- It provides a digital television experience with interactivity.
- Mobile IPTV allows channel switching and casting.
- It provides information access and entertainment for the users.
- It allows for content synchronization and offers an opportunity for watching TV everywhere and anytime.

It is easy to see that mobile IPTV provides a variety of new interaction levels between Internet, voice and video. Its wireless capability helps to speed up deployments and reduce costs while reaching out to users. The users only need a normal color-screen phone with a fair display resolution and mobile data connectivity to use this technology service.
Signaling is carried out with the help of signals that indicate to the connected end device what data is requested across a media platform. It plays a significant role in receiving information such as video, audio or encoded data.

Dubbing analog signals can often be skewed by just a few frames or by several seconds, and every dub might be different due to generational loss. For example, in Figure 2 the digital clock indicates the time specifically as 7.00, while the analog signal time appears to be either 7.00 or closer to 7.01 [5].

An analog signal is a continuous signal with time-varying amplitude, voltage, current, and frequency. Information is converted into an analog signal whose physical property (such as voltage or current) varies between high and low values over time. For example, sound, voice and temperature vary continuously in frequency and amplitude.

In Figure 3, the sine wave's amplitude value can be seen to be either positive or negative at the higher and lower points of the wave respectively, while the frequency (time) is represented by the sine wave's physical length from left to right. Each time the signal is amplified, the noise is also amplified.
Figure 3. Sine wave with varying amplitude plotted against frequency (time).

Signals can be periodic or non-periodic when analyzed over frequency (time). Sine waves and square waves are the most common representations of analog signals. A square wave is distinguished from a digital signal by its negative minimum value.
- Analog signaling suffers less attenuation than a digital signal over long distances [6].
- Analog signaling offers an infinite amount of signal resolution; analog devices are equipped to handle the infinite values between 1 and 0 [6].
- It is simpler to implement, allowing easy processing and reproducibility.
- Analog signaling has a much higher density and can be multiplexed to increase bandwidth while making good use of the available bandwidth [34, 8].
- Analog is better for higher frequency applications where low cost and portable computation are required in real time.
Most sounds such as music and speech are analog signals. The main advantage of
analog signal is the potential for an infinite amount of signal resolution. Compared to
digital signals, analog signals are of higher density.
- Analog systems are less immune to noise, that is, random unwanted variation, over long distances. The noise becomes dominant, creating disturbance and distortion [34, 8].
- Analog systems are more likely to be affected by generation loss.

Even though analog signals are used in many systems today, their use is declining with the introduction of the more reliable digital signal.

Each pulse represents a signal element and is monitored periodically by the network. Binary data are transmitted by the presence or absence of signal elements: '1' represents the presence of a transition and '0' represents the absence of a transition [6].

- In digital signaling, the quality of the signal is maintained thanks to its higher immunity to external background noise. Noise does not accumulate on a digital signal as it does on an analog signal during transmission.
- Digital signaling is compatible with integrated digital data and telephone signaling, which can be implemented at a relatively low equipment cost.
- Digital signaling offers various transmission options over long distances due to its linear and nonlinear capabilities.
It is often recommended to convert analog signals to digital signals for more effective
signal processing. Video and audio transmissions are often transferred or recorded
using analog signals.
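As a concrete illustration of this conversion, the following short Python sketch samples a continuous sine tone and quantizes it to 8-bit values; the 440 Hz tone, the 8 kHz sample rate and the 8-bit resolution are arbitrary illustrative assumptions, not values taken from the sources above:

import numpy as np

SAMPLE_RATE = 8000        # samples per second (sampling step)
DURATION = 0.01           # seconds of signal to convert
BITS = 8                  # quantizer resolution

t = np.arange(0, DURATION, 1 / SAMPLE_RATE)   # discrete sampling instants
analog = np.sin(2 * np.pi * 440 * t)          # continuous-valued 440 Hz tone

levels = 2 ** BITS
# Map the [-1, 1] amplitude range onto the integer codes 0..255 (quantization)
digital = np.round((analog + 1) / 2 * (levels - 1)).astype(np.uint8)

print(len(digital), "samples, first five codes:", digital[:5])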
In DSL, the computer connects to the phone line, which then connects to the DSL modem that has filters in place for the different frequencies of voice and data. In other words, the data connection is on the same line as the phone while travelling at different frequencies. The data and the voice go back and forth at the same time. This way, DSL makes it possible for the user to experience a high speed internet connection even when talking on the phone.

DSL modems establish a connection from one end of a copper wire by utilizing more of the bandwidth of the analog line, thus allowing for greater bandwidth to the other end of that copper wire and connecting digitally on both the uplink and the downlink. DSL modems can enable downlink connection speeds greater than 6 Mbps and use up to about 1 MHz of bandwidth, with the upstream and downstream bands kept separate to prevent the signals from interfering with each other [7, 2]. Figure 5 illustrates a connection setup of the DSL network.

A DSL modem's digital signal is not limited to the 4 kHz of voice frequencies, making it much faster than 56K analog modems in bandwidth capacity. The available bandwidth rates are more consistent for end users that are within 18,000 feet of the central office [7, 2]. Longer distances must operate at lower bit rates to allow more subscribers to be served from a single central office at a lower price.
Variants/subtypes of DSL

There are many different DSL service type options for broadband and IPTV, such as ADSL, HDSL, SDSL, and VDSL, referred to collectively as xDSL. These variants of DSL technology provide different data communication capabilities to different users. The following sections briefly state how each variant functions.
ADSL is a subtype of DSL technology that uses a single pair and transmits a higher data rate downstream than upstream, implying that the download speed is greater than the upload speed [17, 273]. Upstream in this scenario refers to the transmission of data from the subscriber back to the network or central office, and downstream refers to the transmission of data in the direction towards the subscriber. An example of this type of application is VOD. Figure 6 shows a connection setup of the ADSL network.

Figure 6. Connection setup of the ADSL network. Copied from [16, 11]

ADSL Lite, standardized by the International Telecommunication Union (ITU), does not require a filter at the customer premises; it can reach speeds of up to 1.5 Mbps downstream and an upstream rate of 640 Kbps [17, 275], enough to provide internet surfing access, remote LAN access, multimedia access, software downloads, video-on-demand and home shopping. ADSL is ideally suited to home and small office users who download more content than they upload.

ADSL is the most common form of DSL and by far the most stable and affordable way to access broadband internet. It uses the existing telephone line but splits it into two channels, one for voice and one for data. This way the user can use the phone while accessing the web at the same time.
HDSL is a particular type of SDSL, symmetrically delivering 1.544 Mb/s in both the downstream and upstream directions over two copper twisted pairs of up to 12,000 feet, which is the same rate as a T1 digital line connection. It is possible to extend the distance by using repeaters along the line to the customer.

HDSL is a better way of provisioning and transmitting T1/E1 over copper wires, using less bandwidth and requiring no repeaters up to the standard range [9]. It is heavily used in cellular telephone build-outs.

SDSL is similar to HDSL since it transmits the same data rate (1.544 Mb/s) on the upstream and downstream channels simultaneously in both directions across a single telephone line. SDSL connections typically allow transmission of up to 6 Mbps in both directions, but usually require a 4-wire connection, which limits SDSL's reach to approximately 3 km. SDSL service is more expensive than ADSL and is therefore ideally suited to individual subscriber premises, for connecting LANs over short distances and for video conferencing.
VDSL provides the highest data rates of the DSL technologies, being able to deliver data at a transmission speed of up to 52 Mbps across a single copper cable, but only over short distances; VDSL is limited to distances of up to 2 km [17, 276]. This type of DSL technology is particularly useful for supplying high data rate services to hotels, university campuses and business parks that are close to the telephone company's central office.

VDSL can also be used to connect the premises distribution network to the optical network unit to handle a whole range of high bandwidth applications, such as multichannel high definition TV broadcasting, VPNs, file downloading or uploading, video on demand and surveillance systems.
This section covers the requirements of QoS, its conceptual model, and the implementation and management of various QoS mechanisms that enable network administrators and architects to deliver good quality traffic, full duplex communication and low levels of delay, and to allocate bandwidth in a way that improves application performance across a network in both directions.

QoS is a crucial element of any administrative policy, since it measures the ability of a network to deliver data (end-to-end) with predictable results and of computing systems to provide different levels of service to networked applications such as video conferencing, internet telephony or voice over IP applications and their associated network flows. These service levels cover error rates, network traffic loads, up-time, latency, and bandwidth [12, 89]. Such applications require explicit quality of service guarantees in terms of good quality traffic, full duplex communication, and low levels of delay.

The Internet Engineering Task Force (IETF) formed a new working group to develop a framework for defining the services and service model and, at the same time, an architecture for an internet which can give quality of service guarantees [12, 89]. There are three different types of service models for providing QoS on a network, namely Best-effort, IntServ, and DiffServ.
The internet routing architecture is based on a best effort approach in which the network treats the delivery of all IP packets in the same way, to provide a scalable and reliable network foundation. In the best effort model QoS is not applied to packets: packets arrive at any time, in any order, and no preferential treatment is guaranteed. For example, critical data is treated the same as email. Best effort is the most suitable model for non-real-time applications such as telnet, file transfer, web browsing or email, in which QoS is not necessarily needed.

The Integrated Services (IntServ) model was designed to supplement best-effort delivery by reserving bandwidth, buffer space and central processing unit time for applications that require guaranteed bandwidth and packet delivery, providing end-to-end QoS over the network [12, 89]. IntServ expects applications to signal their requirements to the network in order to provide very high QoS to IP packets.

The signaling protocol used to set up the resource reservation in the IntServ model is the Resource Reservation Protocol (RSVP). RSVP is a general signaling protocol with which the receiver can specify its traffic characteristics and reserve network resources through the network for an IntServ service. These quality of service attributes can be applied either at the individual application flow level or at an aggregate level.

One drawback of this type of service model is that state for individual applications and flows must be maintained in the intermediate nodes and routers, which makes it impractical to maintain a large number of states. A new architectural framework known as the differentiated services model was introduced to solve this problem.
The DiffServ framework was designed to overcome the limitations of both the best-effort and IntServ models. It provides an almost guaranteed QoS implementation for a variety of end-to-end services across the network's IP packets while still being flexible, cost-effective and highly scalable [31, 585]. The Differentiated Services model provides the ability to assign different levels of service and QoS treatment to different network traffic.

With Differentiated Services, the scaling properties are achieved by marking each packet's header with one of the standardized code points, so that the packet is delivered a particular kind of service based on the QoS it specifies.
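As an illustration of this marking step, the hedged Python sketch below sets a DiffServ code point on an outgoing UDP socket via the IP type-of-service byte. The EF code point, the destination address and the port are illustrative assumptions, and the IP_TOS option is not available on every operating system:

import socket

DSCP_EF = 46                  # Expedited Forwarding, often used for voice/video
TOS = DSCP_EF << 2            # DSCP occupies the upper six bits of the ToS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS)

# Placeholder destination and payload; routers along the path can now apply
# the per-hop behaviour associated with the marked code point.
sock.sendto(b"probe", ("192.0.2.10", 5004))
sock.close()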
At the system level, the main components of VOD are a local database and server to store and provide access to programs, and a standard TV receiver along with a set-top box that allows users to browse and play back a selected video as if they were watching from videotape or a video player. The main types of VOD systems are Quasi Video-On-Demand (QVOD), True Video-On-Demand (TVOD), and Near Video-On-Demand (NVOD). These systems are classified based on the amount of interactivity allowed [11, 4].
Quasi Video-on-Demand
QVOD is a service in which users are grouped based on their interest. Programming
will only be presented if a minimum number of subscribers sign up for it. Users can
choose between different programs by switching to a different group.
True Video-on-Demand

True video-on-demand is a service in which the user receives an individual video stream and has full control over the requested playback media item. The user has full control of continuous interactions such as start, stop, pause, forward and reverse at different speeds, and full-function virtual video cassette recording capabilities [1, 36]. True video-on-demand is achieved by paying a fee for each service request.

Near Video-on-Demand

NVOD broadcasts the same program on several channels at staggered start times, so the user can begin watching after a short wait rather than at a single fixed broadcast time.

The concept of a triple play service refers to delivering telephony Voice over IP (VOIP), IPTV and blended IP multimedia streaming services, and high speed internet services over a single network using either fiber optic cable, copper cable, or a satellite transmitter.
Triple play services can be useful in multiple ways:
- Delivery of multiple services such as voice, video, and data over one single network [12, 203-204].
- Flexibility to adapt to the next generation of multimedia-enabled networks and scalability for future upgrades and maintenance.
- Triple play is cost effective enough to reduce operational and management costs, enabling investors to maximize profit and increase return on investment.
- Triple play ensures a flawless user experience by offering mobility that enables subscribers to do what they want anytime, anywhere.
To deploy a viable triple play service, the network must be more distributed to cost ef-
fectively deliver video and broadband; reliable, to allocate bandwidth to provide optimal
quality of experience for the subscriber; and flexible, to adapt to the next generation of
multimedia-enabled networks.
A STB is a device that decrypts incoming signals into a synchronized format and connects directly to an end-to-end IPTV service, enabling subscribers to access a variety of different types of digital entertainment content and video-on-demand programming [12, 53-56]. A STB has a variety of TV interfaces at the back and front for connectivity to different networking infrastructures [14]. The set-top box back channel allows two-way communication to support interactive features such as adding premium channels, the option to play or stop a live transmission, and the ability to record or save programs for future viewing. Figure 7 shows a typical example of an IPTV set-top box.

A typical digital set-top box has a physical height of 2.5 inches and a width of 18 inches. As can be seen in Figure 7, the installation of a STB is simple and involves plugging one end of an HDMI cable into the TV set and the other into the STB interface. Subscribers are provided with a handheld remote or a wireless keyboard to choose what they want and gain access to the different channels and content supported by the STB.

There are many different types of STBs based on different standards and geographical locations. The most commonly known types are IP Set-Top Boxes (STBs), Hybrid IP STBs, Hybrid IP Satellite STBs, Hybrid IP Cable STBs, Multicast and Unicast IP STBs, and Digital STBs [14].
4 Multimedia over IP
The benefits and the most common purposes of videoconferencing are listed below.
- Video conferencing enhances face-to-face communication and collaboration in real time between two or more people regardless of location.
- It reduces the cost of travel, maximizes productivity, improves work life and the learning experience, and accelerates decision making.
- Video conferencing provides live sharing of full-motion video images, text, and high quality audio between two or more geographical locations, providing an experience that is effective.
It is easy to see that there is a growing need for videoconferencing both in business
and the educational fields.
1. Camera

A webcam is used to record and send the video signal that is required for adequate interactivity with other distant people during a live video conferencing session. Cameras range from a simple desktop camera or a small Universal Serial Bus (USB) camera to higher definition camera systems equipped with remote-controlled pan, autofocus, status indicator, and automatic pan and zoom features.

2. Monitor

The monitor displays the far-end images, received from the videoconferencing codec, to the connected distant people. Monitor devices come in multiple options, that is, plasma screens, liquid crystal displays, projectors, and cathode ray tubes.

3. Codec

The codec unit is used to digitize and compress video information into a digital signal and to decompress the received transmission for playback [18]. Common video codecs used in video conferencing applications are H.261, H.263, H.264, MPEG-2, and MPEG-4.

4. Microphone

Many stand-alone video conference systems come with either small USB or analog microphones attached to a computer to enhance the audio capabilities of the system and help with larger group interaction. Microphones can be of two types: a unidirectional microphone picks up sound from one direction and an omnidirectional microphone picks up sound from all directions [19, 68].

5. Computer

A computer with a fast processor is needed to run the videoconferencing software. It compresses and decompresses the video streams and maintains the data link to the network [18].
6. Speakers
A good set of speakers is essential to enable one to hear the audio from the far end of the videoconference. Videoconferencing systems require guaranteed symmetric bandwidth for a point-to-point connection and for multipoint video performance (explained in section 4.1.3).
There are two main types of videoconferencing systems: point-to-point videoconferencing and multipoint videoconferencing. This section briefly covers how these two types of videoconferencing systems function.

Multipoint video conferencing is most commonly designed for interaction between more than two sites. It includes integrated audio, life-sized images, large flat panel display devices and visual enhancements, which enhance the reality of the interaction. Multipoint conferences give the participants the feeling of being present in an actual meeting, as well as the ability to see any content being shared during the meeting even though the participants are geographically dispersed.

Multipoint conferences are created using a multipoint control unit (MCU). The multipoint control unit either sends or receives calls from participants, who dial the network ID of the MCU to initiate a multipoint videoconference [20].
There are three methods used to transmit packets over a network: unicast, multicast,
and broadcast. They are introduced in detail below.
4.2.1 Unicast
During a VOD session the end user has the ability to pause, rewind and otherwise control the video being streamed, because a direct session is established with the source server. A big concern for video on demand services is the higher bandwidth requirement caused by the multiple individual direct connections between the source server and the requesting end users. Figure 8 illustrates how data flows under unicasting.

The information source sends a separate packet to each single host over the IP network, as shown in Figure 8. In this scenario, the server creates a separate transmission channel for each host (E, D, and B), and a packet is received only if the destination address matches one of the host's own IP addresses.
Unicast routing

Unicast routing is the process of forwarding unicast traffic from a source to a unique address on an internetwork [21]. Its goal is to determine a good path (sequence of routers) through the network from source to destination. There are two main types of unicast routing in use. In distance vector routing, each node only knows the distance (cost) between itself and its directly connected neighbors, and it shares its routing table with its immediate neighbors periodically.

In link state routing every node obtains a copy of the topology, including the type, cost (metric), and condition of the links (up or down), and computes its own routes. All nodes run the same algorithm (Dijkstra's algorithm) concurrently to compute their forwarding tables in the same distributed setting as for distance vector routing.

Initially the nodes know only who they are connected to, that is, their neighbors and the cost to reach them; they do not know the whole topology. The nodes can talk only to their neighbors, using messages to find out what is going on in the network at large, since they have no other way to gain information about the network. The Open Shortest Path First (OSPF) protocol is based on link state routing.
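To make the link state computation concrete, the following minimal Python sketch runs Dijkstra's algorithm over a small hypothetical topology; the router names and link costs are invented for illustration only.

import heapq

def dijkstra(graph, source):
    """Compute least-cost paths from source over a link state topology.

    graph maps each node to a dict of {neighbour: link cost}; the result can
    be turned into a forwarding table by following the previous-hop entries.
    """
    dist = {node: float("inf") for node in graph}
    prev = {node: None for node in graph}
    dist[source] = 0
    queue = [(0, source)]
    while queue:
        d, u = heapq.heappop(queue)
        if d > dist[u]:
            continue                      # stale queue entry, already improved
        for v, cost in graph[u].items():
            if d + cost < dist[v]:
                dist[v] = d + cost
                prev[v] = u
                heapq.heappush(queue, (dist[v], v))
    return dist, prev

# Hypothetical four-router topology with symmetric link costs.
topology = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}
print(dijkstra(topology, "A")[0])   # {'A': 0, 'B': 1, 'C': 3, 'D': 4}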
4.2.2 Multicast
IPTV provides the ability to set up multicast and stream videos via multicast as opposed to unicast. This capability maintains a single session with the same stream of data, which results in lower network congestion, a lower bandwidth requirement and a reduction in the load on the sender and the overall demands on the source server.

Multicast runs over a controlled and managed network, which makes it much easier to predict the bandwidth capacity of a given backbone going through the network. Multicast sessions offer a couple of advantages over unicast sessions. Multicasting support is optional in IPv4 but mandatory in IPv6. It supports the User Datagram Protocol (UDP) only, and a multicast address can only be used as a destination address. Figure 9 illustrates how data flows under multicasting.
Suppose that hosts E, D and B need to receive information from the source server; they need to join a receiver set, or multicast group, as shown in Figure 9. The routers on the network duplicate and forward the information based on the distribution of the receivers in this set [23].

The information source sends packets over the IP network to the hosts (E, D, and B) that have joined the multicast group, as shown in Figure 9. In this scenario, the server creates a single transmission channel for all the hosts.

IP multicast addresses

IP multicast addresses are defined in RFC 1112, and Class D addresses are used in the destination IP address field to identify a multicast packet. The Class D address range is 224.0.0.0 to 239.255.255.255, and the range 224.0.0.0 through 224.0.0.255 is reserved for local network control protocols.
Table 2 shows common multicast addresses reserved for various communications pro-
tocols. Modified from [23].
Reserved multicast addresses are used by network protocols on network routers for different purposes, as listed in Table 2. For example, an Open Shortest Path First (OSPF) router sends a "hello" packet to the assigned multicast address 224.0.0.5, and the other routers respond. All multicast-capable hosts must join the all-hosts group 224.0.0.1 at start-up on all of their multicast-capable interfaces, so pinging 224.0.0.1 reaches every multicast-capable host on the local network. A group of clients listening to the same multicast address is known as a host group.
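To show what joining a host group looks like in practice, here is a minimal Python sketch that subscribes a UDP socket to a multicast group; the group address 239.1.1.1 (an administratively scoped address, not one of the reserved 224.0.0.x addresses) and the port are illustrative assumptions.

import socket
import struct

GROUP = "239.1.1.1"
PORT = 5004

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# IP_ADD_MEMBERSHIP causes the host to send an IGMP membership report for GROUP
# on the default interface (0.0.0.0).
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

data, sender = sock.recvfrom(2048)   # blocks until a packet arrives for the group
print(len(data), "bytes received from", sender)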
In broadcasting, a single packet is delivered from one sender to all connected receivers on the local network simultaneously. Each device that receives a broadcast packet must process the packet in case there is a message for the device [24, 250].

Broadcast packets are normally restricted to the local network segment and are undesirable for streaming media, since even a small stream could flood every device on the local network with packets that are of no interest to the device.

In the broadcast scenario, the source server sends a packet to all hosts (A, B, C, D, and E) on the network segment. Suppose that hosts A, C and D do not need the information and only B and E need it: the source still broadcasts the information to all the hosts, but only the hosts that need it respond. Broadcast transmission is used to map upper layer addresses to lower layer addresses, to send a query to request an address, and to exchange routing information between routing protocols [23].
IP Broadcast Addresses

IP broadcast addresses can be used only as the destination IP address for single-packet, one-to-everyone delivery within the same LAN [24]. There are two different types of IP broadcast addresses:

1. Limited Broadcast

The limited broadcast address is a broadcast limited to a single LAN and is represented by setting all 32 bits of the IP address to 1 (255.255.255.255) [24]. It is used as the destination address of an IP datagram during automatic configuration processes such as the Bootstrap Protocol (BOOTP) or DHCP, and when the host does not know its subnet mask or network ID. For example, with DHCP packets, the client must use the limited broadcast address for all traffic sent until the DHCP server acknowledges the IP address lease [25]. This datagram is never forwarded by routers; it only appears on the local network segment. The destination MAC address for such frames is FF:FF:FF:FF:FF:FF [25].
2. Directed Broadcast

The directed broadcast address is the local subnet broadcast address, delivered to all hosts on the target subnet in Ethernet frames addressed to FF:FF:FF:FF:FF:FF [24]. The directed broadcast address is the highest address in the subnet's address range. For instance, if the subnet network ID is 192.168.0.0/16, the directed broadcast address will be 192.168.255.255, which will be heard by all hosts in the same subnet. The NetBIOS Name Service (NBNS) uses directed broadcast packets.
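The broadcast addresses above can be derived directly with the Python standard library, as the small sketch below shows for the /16 example used in the text.

import ipaddress

subnet = ipaddress.ip_network("192.168.0.0/16")

print(subnet.broadcast_address)                  # 192.168.255.255 (directed broadcast)
print(ipaddress.ip_address("255.255.255.255"))   # limited broadcast, never routed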
There are many types of IPTV protocols in use. The most commonly used are introduced below.

RTP usually runs over UDP and does not reserve bandwidth or guarantee QoS. RTP is designed to support end-to-end delivery of real-time data such as voice and video from the source to the receiver, and it also supports a wide variety of media-on-demand applications such as internet telephony, IPTV services and online games. RTP typically uses an even-numbered UDP (transport layer) port and encapsulates the voice or video data packets.
The fixed RTP header carries the following fields in its first two 32-bit words: version (V), padding (P), extension (X), CSRC count (CC), marker (M), payload type (PT) and sequence number, followed by the timestamp.
RTP is therefore responsible for payload type identification, source identification, sequence numbering and time stamping. Translators and mixers usually reside between senders and receivers to translate and forward RTP packets from one payload format to another. A mixer combines RTP streams from different sources into a single stream, assigns itself as the sender of the packet and then forwards a new RTP packet.
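To make the header layout concrete, the short Python sketch below unpacks the fixed 12-byte RTP header defined in RFC 3550; the sample packet at the end is hand-built purely for illustration.

import struct

def parse_rtp_header(packet: bytes) -> dict:
    """Parse the fixed 12-byte RTP header from the start of a packet."""
    if len(packet) < 12:
        raise ValueError("packet shorter than the fixed RTP header")
    b0, b1, seq, timestamp, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": b0 >> 6,            # V, should be 2
        "padding": (b0 >> 5) & 0x1,    # P
        "extension": (b0 >> 4) & 0x1,  # X
        "csrc_count": b0 & 0x0F,       # CC
        "marker": b1 >> 7,             # M
        "payload_type": b1 & 0x7F,     # PT
        "sequence_number": seq,
        "timestamp": timestamp,
        "ssrc": ssrc,                  # synchronization source identifier
    }

# A hand-built header: version 2, payload type 96, sequence number 1.
sample = struct.pack("!BBHII", 0x80, 96, 1, 3000, 0xDEADBEEF)
print(parse_rtp_header(sample))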
RTCP is used alongside RTP during multicast audio or video transmission: while RTP carries the media streams from one or more sources, RTCP carries control and reception-quality information about those streams. RTCP packets are distributed to all the participants using IP multicast, and RTCP is distinguished from RTP through the use of distinct port numbers.

It is up to the application to make use of RTCP packets, and different applications may come up with different algorithms and mechanisms to make the best use of them.

RTCP monitors the periodic transmission of control packets, reports quality of service (QoS) feedback, and helps to synchronize multiple streams. For example, if RTCP packets from the receiver are getting lost, it is a clear sign that the network is congested, and it is recommended to adapt the sending rate or change the quantization levels appropriately.
RTSP establishes and controls the delivery of multimedia streams with real-time properties, such as audio and video, across IP networks between a client and a server. RTSP links are commonly used on web sites to point to streaming media files [1, 225]. Acting as a network remote control, RTSP allows the client to tell the server how the media should be delivered, for example over UDP, multicast UDP or the Transmission Control Protocol (TCP) [12, 92].
2. Options

An options request tells the client which request types the server accepts.
3. Describe
A describe request provides the client with a description of the media to start the ap-
propriate media applications. Figure 11 shows an example of a describe request.
4. Play

A play request tells the server to begin sending the bit stream to play the media file. Figure 12 illustrates an example of a play request.

5. Pause

A pause request temporarily halts delivery of the media stream without tearing down the session.

Announce

An announce control request is used in both directions: sent from the client, it registers a new entry or description of a media stream with the server; sent from the server, it updates the description of the media in real time.
6. Setup

A setup request tells the server how to transport the media for an identified media stream and which port to use.

7. Teardown

A teardown request terminates the media streaming session and frees all network resources associated with the session.

8. Get_parameter

A get_parameter request retrieves the value of a parameter of a stream specified in the URL.

9. Set_parameter

A set_parameter request changes a parameter of a stream specified in the URL, enabling the client to request that the value of a parameter be set for that stream.

10. Redirect

A redirect request informs the client that it must connect to a different server and then moves it to that server.
RTSP enhances interaction between the client and the server by acting as a network remote control, and it adds a number of new requests to the existing HTTP requests, as shown in the list above. The client first requests the description of the media using the DESCRIBE method, then requests that the session is SET UP and receives a session identifier in return. The client requests that the media streams of the session are PLAYED, and at any point the client may PAUSE the media stream temporarily. When the client has finished, it issues a TEARDOWN request to terminate the media streaming session.
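The sketch below walks through that request sequence from a Python client; the server address, port and stream URL are hypothetical, and a real server's responses (for example the Session header parsed here) may differ, so this is only an outline of the exchange, not a production client.

import socket

SERVER, PORT = "192.0.2.20", 554
URL = "rtsp://192.0.2.20:554/movie"            # hypothetical stream

def rtsp_request(sock, method, cseq, extra=""):
    """Send one RTSP request and return the raw reply text."""
    message = f"{method} {URL} RTSP/1.0\r\nCSeq: {cseq}\r\n{extra}\r\n"
    sock.sendall(message.encode())
    return sock.recv(4096).decode()

with socket.create_connection((SERVER, PORT)) as s:
    print(rtsp_request(s, "OPTIONS", 1))
    print(rtsp_request(s, "DESCRIBE", 2, "Accept: application/sdp\r\n"))
    reply = rtsp_request(s, "SETUP", 3,
                         "Transport: RTP/AVP;unicast;client_port=5004-5005\r\n")
    # Pull the session identifier out of the SETUP reply (assumes it is present).
    session_line = [l for l in reply.splitlines() if l.startswith("Session:")][0]
    session = session_line.split()[1].split(";")[0]
    print(rtsp_request(s, "PLAY", 4, f"Session: {session}\r\n"))
    print(rtsp_request(s, "TEARDOWN", 5, f"Session: {session}\r\n"))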
Protocol Independent Multicast (PIM) is a type of multicast routing protocol that does not depend on any particular unicast routing protocol for its operation but can leverage whichever unicast routing protocol is used to populate the unicast routing table. PIM routers exchange control messages either by multicasting them to a well-known group or by unicasting them to a specific destination [30]. There are two main modes of PIM that allow one-to-many and many-to-many transmission of information:
Figure 13 PIM-SM mode design with sparsely distributed streams being sent. Cop-
ied from [35].
- One router is elected the 'querier' on each local/physical network; the querier periodically sends membership query messages to the 'all systems group' (224.0.0.1) with TTL = 1.
- When active receivers request to join a specific multicast group, the routers along the path of these receivers register to join that group. A host sends a leave group message to group address G if it was the most recent host to report membership in that group. Routers send join messages towards the RP and sources register with the RP; intermediate routers update their state and forward the join messages. The RP can send stop messages to a source if no receivers have joined the group.
1. Designated Router

There must be one PIM Designated Router (DR) in each subnet of the network. The PIM-SM interfaces on the subnet elect as DR the interface with the highest DR priority. If there is more than one router with the same priority, or no priority, they choose the interface with the highest IP address. If the current DR becomes unavailable, the remaining switches elect a new DR for the subnet by DR priority or IP address [32]. The DR on the subnet containing a multicast source sends the multicast packets towards the Rendezvous Point (RP), and DRs with connected group members send join messages towards the group's RP.

2. Rendezvous Point

Each multicast group must have an RP. The RP for a group or range of groups is chosen by an election process: the candidate RP with the lowest preference value is elected for each range of multicast group addresses. To create a routing tree for a group with the rendezvous point as the root of the tree, receivers send join messages towards the RP and senders send register messages towards the RP.

3. Bootstrap Router

Each PIM-SM network must have at least one Bootstrap Router (BSR) candidate. The Bootstrap Router for a network is chosen by election; the candidate with the highest priority is elected as the BSR. The elected BSR listens to PIM candidate RP bootstrap messages to determine the RP for each multicast group.
PIM dense mode, also known as push mode, assumes that all downstream systems want to receive or view the multicast feed. PIM dense mode floods traffic across the network and uses the Reverse Path Forwarding (RPF) interface check when receiving multicast traffic. It forwards the multicast traffic through every single segment, even segments that have no group members interested in the multicast feed. Packets arriving via a non-RPF interface are discarded, and PIM-DM prunes off branches where no receivers want the data packets destined for the group by instantiating prune state.

PIM dense mode is recommended for small networks, since it avoids extra configuration and is easy to manage. Figure 14 shows a PIM-DM mode flooding example, pruning unwanted traffic.
Figure 14 PIM-DM mode flooding example, pruning unwanted traffic. Copied from
[35].
In Figure 14, (S, G) state is created in every router in the network and multicast traffic is flooded throughout the entire network. Figure 15, on the other hand, illustrates the PIM-DM result after pruning.
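The RPF check that drives this flood-and-prune behaviour can be sketched in a few lines of Python; the unicast table and the simplified /16 prefix matching below are hypothetical stand-ins for a real routing table lookup.

unicast_table = {
    "10.1.0.0/16": "eth0",   # best path back towards sources in 10.1.0.0/16
    "10.2.0.0/16": "eth1",
}

def rpf_interface(source_ip: str) -> str:
    """Toy longest-prefix lookup: match only on the /16 of the source address."""
    prefix = ".".join(source_ip.split(".")[:2]) + ".0.0/16"
    return unicast_table.get(prefix, "")

def accept_multicast(source_ip: str, arrival_interface: str) -> bool:
    """Accept traffic only if it arrived on the interface used to reach the source."""
    return arrival_interface == rpf_interface(source_ip)

print(accept_multicast("10.1.5.9", "eth0"))   # True  -> flood/forward downstream
print(accept_multicast("10.1.5.9", "eth1"))   # False -> discard (non-RPF interface)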
Other modes of PIM are source-specific multicast (SSM) and bidirectional PIM, which are not as widely used throughout a multicast domain.
The Internet Group Management Protocol (IGMP) is used, for example through IGMP snooping in switches, to constrain multicast traffic to only those ports that need it. IGMP operates on a physical network consisting of a single Ethernet segment. It is used by multicast routers to manage membership in IP multicast groups, and it supports joining a multicast group, querying membership and sending membership reports.

The multicast router sends queries to the hosts from time to time once a host has joined a multicast group. A report needs to be sent only by the first host that responds for a given multicast group on the neighboring router interface. This means that a host has the right to respond or not respond to queries about receiving transmissions addressed to a specific multicast group. If there is no response, or the response time expires, the router treats it as if the hosts have left the group and will not answer the next query, and it therefore removes that group from the router interface.
The IGMP protocol is widely used in online streaming video and gaming. In IPTV it is used to connect to a TV channel and to change from one TV channel to another.
IGMP defines three types of messages: membership queries (general or group-specific), membership reports, and leave reports.
A host may join a multicast group at will by sending a report message, and there is no restriction on leaving: a host can choose to leave a group at any time by sending a leave report. Hosts can join as many groups as they want at a time. Membership queries are used to discover which hosts are members of a particular multicast group.
ICMP resides in the IP layer and is part of the Internet protocol suite, used for error handling in the network layer. Since different types of errors can occur in the network, ICMP provides information messages concerning the routing of IP datagrams. It monitors and controls network traffic and reports errors in order to diagnose problems within the network layer.

For example, if a router cannot send a packet to a particular destination, ICMP, which communicates network layer information between end hosts and routers, sends an error message indicating that it cannot deliver the message to the end destination. Error reporting works as follows: when an end host or router wants to report an error using ICMP, it puts the information that it wants to send back to the source into an ICMP payload and delivers it to IP to be sent as a datagram.
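As an illustration of how such a message is formed, the Python sketch below builds an ICMP echo request (the packet used by ping) and computes its checksum; actually transmitting it would need a raw socket with administrator rights, which is deliberately left out.

import struct

def internet_checksum(data: bytes) -> int:
    """One's-complement checksum over 16-bit words, as used by ICMP."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    total = (total & 0xFFFF) + (total >> 16)
    total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_echo_request(identifier: int, sequence: int, payload: bytes) -> bytes:
    # Type 8 (echo request), code 0, checksum field initially zero.
    header = struct.pack("!BBHHH", 8, 0, 0, identifier, sequence)
    checksum = internet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, checksum, identifier, sequence) + payload

packet = build_echo_request(0x1234, 1, b"ping")
print(packet.hex())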
The goal of this section is to provide an understanding of how compression works and of the types of compression techniques used. Good compression involves removing information from a file without the user being able to tell the difference between the compressed file and the original file. Compression techniques have made it possible to transmit multimedia signals via the internet.

Compression algorithms come in two different kinds, lossless and lossy compression. Both are detailed below.
In lossless compression, data is compressed in such a way that not a single bit of data is lost when the file is uncompressed. The algorithm stores and transmits the data as a smaller encoded file from which the original information can be restored exactly. Lossless data compression is ideal for situations where any loss of textual information cannot be tolerated. Examples of this type of compression are ZIP compression and LZW compression, and formats such as GIF and PNG use lossless compression.
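The round trip of lossless compression can be demonstrated with Python's built-in zlib module, as in the minimal sketch below; the repeated sentence is only sample data.

import zlib

original = b"IPTV delivers triple play services over a managed IP network. " * 40

compressed = zlib.compress(original, level=9)   # smaller encoded representation
restored = zlib.decompress(compressed)          # exact reconstruction

print(len(original), "->", len(compressed), "bytes")
assert restored == original                     # not a single bit was lost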
This compression technique is used for Digitally Sampled Analog Data (DSAD), where some loss of data quality can be tolerated. DSAD consists of picture, audio or video files and graphics. Lossy compression is delivered in formats such as Advanced Audio Coding (AAC), MP3, MP2, and many more. Digital audio is most often served in formats that use lossy compression to save bandwidth transmission costs and storage space.

The following compression types, spatial and temporal compression, are commonly used to compress audio and video.
Spatial compression is applied only to individual frames and is mainly used to compress still images, such as JPEG, by removing the spatial redundancy that exists within each individual frame. When a JPEG is created, the color information of the image is reduced in a process called chroma sub-sampling, and then the image is split into blocks of 8 by 8 pixels. A discrete cosine transform and quantization are then applied to further reduce the file size.
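A minimal Python sketch of that step is shown below: one 8 by 8 block is transformed with a 2-D DCT and then quantized, which is where information is discarded. The random block and the flat quantization step of 16 are illustrative assumptions, not the JPEG standard tables.

import numpy as np

N = 8
k = np.arange(N).reshape(-1, 1)
n = np.arange(N).reshape(1, -1)
C = np.sqrt(2 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
C[0, :] /= np.sqrt(2)                       # DCT-II basis matrix

block = np.random.randint(0, 256, (N, N)).astype(float) - 128   # centred pixels
coeffs = C @ block @ C.T                    # 2-D DCT of the block
quantized = np.round(coeffs / 16)           # coarse, uniform quantization

print(int(np.count_nonzero(quantized)), "non-zero coefficients out of 64")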
Freeze frame is the pausing of a moving video, which results in the view resting on one frame. The computer reviews the video in slow motion, looks at each frame individually, and goes through a process of compare and contrast. It then takes note of the elements that are similar or not and of the new changes that are taking place. Frames that do not change do not have to be repeated, making it much easier to reduce the overall file size for upload.
In temporal compression a series of frames is examined to see what changes over the following frames. This compression technique looks at the data of the current frame and then moves to the succeeding frame. It does not keep track of every single pixel if the succeeding frames are identical or correlated, but only keeps track of the changes over time, thereby reducing redundancy. Temporal compression underlies motion compensation and mainly uses lossy compression techniques, such as temporal prediction, to reduce the file size significantly without too much quality loss.
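A toy Python sketch of this idea is shown below: instead of storing every frame in full, only the first frame and the pixels that change between consecutive frames are kept. The tiny 4 by 4 frames are purely illustrative.

import numpy as np

frames = np.zeros((3, 4, 4), dtype=np.uint8)
frames[1, 1, 1] = 200                       # one pixel changes in frame 1
frames[2] = frames[1]                       # frame 2 is identical to frame 1

stored = [frames[0]]                        # the key frame is kept in full
for prev, cur in zip(frames[:-1], frames[1:]):
    diff = cur.astype(int) - prev.astype(int)
    changed = np.argwhere(diff != 0)        # coordinates of changed pixels only
    stored.append([((int(r), int(c)), int(cur[r, c])) for r, c in changed])

print(stored[1])   # [((1, 1), 200)] -> one changed pixel instead of sixteen
print(stored[2])   # []              -> an identical frame costs almost nothing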
The key goal of compression is to get the highest possible image and video quality from the smallest possible bit rate. Image and video quality is a balance of five factors:
- Codec: different codecs require different efficiency settings to achieve similar quality.
- Larger frame sizes require higher bit rates.
- Faster frame rates require higher bit rates.
- More movement between frames in the master file, from camera motion, transitions and effects, requires a higher bit rate to compress the frames properly at high quality; video compression yields the smallest file sizes when there is very little movement between the frames.
- Higher bit rates yield higher quality, but much larger file sizes.
The majority of television broadcasts consist of 30 frames per second. YouTube, for example, broadcasts video at 25 frames per second; discarding those 5 frames per second (about a sixth of the frames) reduces the overall size of the video file. Many live video streams, such as Skype, run at 15 frames per second, which is half of the original frame rate.

In general, intra-frame compression will always give better quality than inter-frame compression as long as the bandwidth is affordable. Inter-frame compression becomes important in situations where there is not adequate bandwidth.
Audio and video compression is a very important aspect of IPTV and underlies how audio and video streaming services such as YouTube work. Video compression works by minimizing redundancy in the video data, and a number of standard encodings are available, as shown in Table 4 below.
Table 4 Image and Video Compression Standards. Copied from [36, 33].
Audio compression is typically the compression of audio signals into formats such as MP3 or AAC, which reduces the size of the audio signal or file. Audio compression removes redundant audio data across the signal's dynamic range, from its absolute high to its absolute low.
While cable companies worked on moving their content to web devices, telecommunication companies streamed television content over a private network to a set-top box connected to a TV. IPTV is an important aspect of television today, dramatically changing the way people communicate and operate, especially once its full potential is implemented and converged with mobile TV. It is a highway through which end users will see whatever video they want, whenever they want, and on whichever device they prefer, be it a television, a computer or a mobile device.

With the emergence of the internet, the TV experience has undergone a fundamental shift, enabling cable operators and phone services to offer a triple play of cable TV, high speed internet and digital voice services. With triple play services, consumers are now also able to watch high quality programming over the internet using free or low priced over-the-top services from the likes of Hulu, Netflix, YouTube and Apple iTunes.

To match cable quality, IPTV providers rely on content delivery networks that store content across geographically distributed network servers rather than in just one location. Distributing content close to the end customer helps reduce the bottlenecks that result in start-up delays or streaming video at a lower quality resolution. For instance, when a request is made from Finland to view a video that originates from the United States, it can take a long time to cover that distance compared to a video served from Sweden. This underlines the importance of QoS for the end user's satisfaction and experience.
To assure the highest quality viewing experience, IPTV providers must monitor their
content delivery networks to isolate and quickly fix problems that might lead to sub-
scriber defections. Phone companies such as Sonera can optimize streaming video
performance, reduce costs and display an encoded video stream of IP packets via a
STB. Some Telco network operators such as AT&T utilize DSL technologies to deliver
IPTV and broadband services to users over their access network.
IPTV systems have an advantage over traditional cable TV services in that they are relatively independent of the point of operation and offer quality-of-service network solutions. The compression of digital television data allows more content to be stored in less space. IPTV can be applied in different institutions and organizations, such as the education, finance and medical fields, to stream live operations with high quality of content delivery, protection of content and control of video quality over a private network. It frees up bandwidth for distributors, allowing them to deliver more content to their customers.
References
2. Palau C.E, Mares J, Molina B, Esteve M [online]. Wireless CDN video stream-
ing architecture for IPTV. Valencia, Spain: Universidad Politécnica de Valencia;
April 2010.
URL: https://pdfs.semanticscholar.org/e226/eb152960609581c14d0eff5038e97e69623f.pdf
Accessed 25 February 2016
7. U.S. Robotics Corporation. Digital Subscriber Line (DSL): Using Next Genera-
tion Technologies to Expand Traditional Infrastructures [online]. Schaumburg;
2001.
URL: http://support.usr.com/download/whitepapers/8500-wp.pdf
Accessed 2 April 2016
8. Fish R.C, Rakib S.S. Home network gateway [online]. Terayon communication
Systems; July 2001.
URL: https://www.google.com/patents/EP1117214A2?cl=en
Accessed 2 April 2016
10. Nasser N.K, Hussein S.A. Triple Play Services Transmission over VDSL
Broadband Access Network in MDU [online]. Baghdad, Iraq:Al-Nahrain Univer-
sity; 2014.
URL: http://www.ijcsit.com/docs/Volume%205/vol5issue03/ijcsit20140503284.pdf
Accessed 20 April 2016
12. Weber J, Newberry T. IPTV Crash Course. The McGraw-Hill Companies; 2007.
13. Alencar M.S. Digital Television Systems. USA: Cambridge University Press; 2009.
14. O’Driscoll G. Next Generation IPTV Services and Technologies [online] 2008.
URL:
http://site.ebrary.com/lib/metropolia/reader.action?docID=10226816&ppg=249#
Accessed 29 March 2016.
16. Allied Telesis. DSL White Paper [online]. USA, Singapore, Switzerland.
URL: http://www.alliedtelesis.com/media/pdf/dsl_wp.pdf
Accessed 18 April 2016
19. Reese D.E, Gross L.S, Gross B. Audio Production Worktext: Concepts, Techniques, and Equipment [online]. USA: Elsevier; 2009.
URL: https://books.google.fi/books?id=NH1Q0BI2YPcC&pg=PA68
Accessed 21 February 2016
24. Simpson W. Video Over IP: IPTV, Internet Video, H.264, P2P, Web TV, and
Streaming: A Complete Guide to Understanding the Technology. United King-
dom: Focal Press; 2013.
Accessed 12 March 2016
25. Davis J. Unicast Routing Principles [online]. Microsoft TechNet; April, 2018.
URL: https://technet.microsoft.com/en-us/library/bb726995.aspx
Accessed 21 February 2016
28. Zurawski R, editor. The industrial information technology handbook. Boca Ra-
ton: CRC Press; 2005.
Accessed 10 April 2016
29. Schulzrinne H. Real Time Streaming Protocol [online]. Columbia: The Internet
Society; April 1998.
URL: https://www.ietf.org/rfc/rfc2326.txt
Accessed 10 April 2016
32. Allied Telesis. PIM-SM Introduction and Configuration [online]. USA, Singapore,
Switzerland.
URL:
http://www.alliedtelesis.com/nsp/documents/SBx8112_542/pimsm_conf.html
Accessed 2 April 2016
33. Park S, Jeong S.H. Mobile IPTV: Approaches, Challenges, Standards, and QoS
Support [online] 2009;13(3): 24.
URL: http://crystal.uta.edu/~kumar/CSE4340_5349MSE/Mobile%20IPTV.pdf
Accessed: 22 April 2016
36. Apostolopoulos J.G. Video Compression: Principles, Practice, and Standards [online]. Streaming Media Systems Group; September 27, 2005.
URL: http://sites.ieee.org/scv-ces/files/2015/06/video_coding_overview_IEEESantaClara_Sept05.pdf
Accessed 23 April 2016