7. Data Management:
Definition: The processes related to handling and storing data collected by
sensor nodes.
Functions: Data storage, retrieval, and organization for efficient processing
and analysis. Databases or data aggregation points may be utilized.
8. Security:
Definition: Measures to protect sensor network data and operations from
unauthorized access, tampering, or attacks.
Aspects: Encryption, authentication, access control, and intrusion detection
systems to ensure the integrity, confidentiality, and availability of sensor
network data.
9. Network Design Issues:
Issues: Challenges and considerations in designing a robust sensor network.
Examples: Energy efficiency to prolong node battery life, scalability for
accommodating a growing number of nodes, fault tolerance for maintaining
network functionality in the presence of failures, and adaptability to dynamic
environmental conditions.
Low cost: WSNs consist of small, low-cost sensors that are easy to deploy,
making them a cost-effective solution for many applications.
Limited processing power: WSNs use low-power devices, which may have
limited processing power and memory, making it difficult to perform complex
computations or support advanced applications.
Challenges in WSN
1. Quality of Service (QoS): Ensuring reliable and timely data delivery in WSNs is
challenging due to the dynamic and unpredictable nature of the environment.
Maintaining QoS metrics such as latency, reliability, and packet delivery ratio
becomes crucial, especially in applications where real-time data is essential.
6. Ability to Cope with Node Failure: Due to the harsh and dynamic deployment
environments, sensor nodes in WSNs may fail or become unreliable. Developing
fault-tolerant mechanisms and adaptive protocols to handle node failures without
compromising the overall network performance is a significant challenge.
Applications :-
Wireless Sensor Networks (WSNs) find applications across various domains due to their
ability to collect and transmit data from the physical world in a wireless and distributed
manner.
2. Surveillance and Monitoring for Security, Threat Detection: WSNs are extensively
used in surveillance systems for security and threat detection. They can monitor areas
for unauthorized intrusions, detect abnormal activities, and provide real-time alerts.
WSNs contribute to enhancing the overall security of various environments, including
public spaces, critical infrastructure, and military installations.
4. Noise Level Monitoring: WSNs can be employed to monitor and measure noise
levels in urban or industrial areas. This is important for assessing noise pollution,
ensuring compliance with regulations, and implementing measures to mitigate the
impact of excessive noise on the environment and human health.
Here are some key components and features commonly found in sensor nodes:
1. Sensors: Sensor nodes are equipped with one or more sensors to collect data from
the environment. These sensors can measure various parameters such as
temperature, humidity, pressure, light intensity, motion, gas concentration, etc.
2. Processing Unit: Sensor nodes include a microcontroller or a microprocessor
responsible for processing the data collected by the sensors. This processing may
involve filtering, aggregation, or even simple analysis of the data before transmission.
3. Communication: Sensor nodes are designed to communicate with each other or
with a central network gateway. They typically use wireless communication protocols
such as Wi-Fi, Bluetooth, Zigbee, LoRa, or cellular networks to transmit data to a
central server or other nodes in the network.
4. Power Management: Since sensor nodes are often deployed in remote or
inaccessible locations, power efficiency is crucial. They may include power
management systems to optimize energy consumption, such as low-power modes,
sleep modes, or energy harvesting techniques.
5. Compact Design: Sensor nodes are designed to be compact, lightweight, and often
rugged to withstand harsh environmental conditions. This allows for easy
deployment in various locations and applications.
6. Scalability and Flexibility: Sensor node networks can vary in size from a few nodes
to thousands or even millions. The technology is designed to be scalable, allowing
for easy addition or removal of nodes as needed. Additionally, sensor nodes are
often programmable, allowing for flexibility in adapting to different use cases and
environments.
Overall, sensor node technology plays a crucial role in enabling the collection of real-
time data from the physical world, facilitating applications ranging from
environmental monitoring and smart agriculture to industrial automation and smart
infrastructure.
Sensor taxonomy
Sensor taxonomy is essentially a hierarchical classification system that organizes
sensors into categories based on their characteristics, functionality, and application
domains. This classification aids in understanding the diverse range of sensors
available and helps in selecting the appropriate sensor for a specific use case. Here's
an explanation of how sensor taxonomy works:
1. Classification Criteria:
Physical Phenomenon: Sensors can be categorized based on the type of
physical phenomenon they detect, such as temperature, pressure, light,
motion, etc.
Radio Technology
Radio technology is used in a wide range of applications, and the list is growing all the time.
Some of the earliest applications were to enable communications where wired links were not
possible. Marconi, one of the early pioneers, saw a need for radio communications between ships
and the shore, and of course radio is still used for this today. However as radio became more
established people started to use the medium for broadcasting. Today a huge number of stations
broadcast both sound and vision using radio to deliver the programmes to the listener.
There are many more applications for radio. Apart from being used for ship to shore
communications, radio is used for other forms of communications. Short wave radio was one of
the first applications for radio. With ships sailing over vast distances it was seen that radio could
provide a means for them to communicate when they were in the middle of an ocean. By
"bouncing" the signals off reflecting layers in the upper atmosphere, great distances could be
achieved. Once it was seen that this could be done, many others also started to use the short wave bands, where long-distance communications could be made. It was used by everyone from
the military to news agencies, weather stations, and even radio hams.
Radio is also used for telecommunications links. Signals with frequencies in the microwave
region are normally used. These signals have frequencies much higher than those in the short
wave band and are not affected by the ionosphere. Instead, they provide reliable direct line
of sight links that are able to carry many telephone conversations or other forms of traffic.
However as they are only line of sight, they require towers on which to mount the antennas to
enable them to transmit over sufficiently long distances.
Satellites
Satellites are also used for radio communication. As short wave communications are unreliable,
and cannot carry the level of traffic required, higher frequencies must be used. It is possible to
transmit signals up to satellites in outer space. These can receive the signals and broadcast
them back down to Earth. Using this concept it is possible to transmit signals over vast distances,
such as over the oceans. Additionally it is possible to use the satellites for broadcasting.
A signal transmitted up to the satellite is then relayed on a different frequency and can give coverage over a whole country using just one satellite. A land-based system may require many
transmitters to cover the whole country.
Satellites may also be used for many other applications. One of these is for observation. Weather
satellites, for example, take images of the Earth and relay them back to Earth using radio signals.
Another application for satellites is for navigation. GPS, the Global Positioning System uses a
number of satellites in orbit around the Earth to provide very accurate positioning. Now further
systems including Galileo (a European based system) and Glonass (a Russian based system)
are being planned and put into operation.
Radar
Radar is an application of radio technology that has proved to be very useful. It was first used by
the British in the Second World War (1939 - 1945) to detect incoming enemy bombers. By
knowing where they were, it was possible to send up fighters to intercept them and thereby gain
a significant advantage. The system operates by sending out a short burst of wireless energy.
The signal is sent out and reflects back from the objects in the area that is 'illuminated' by the
radio signal. By knowing the angle at which the signal is returned, and the time it takes for the
reflection to be received, it is possible to pinpoint the object that reflected the signal.
Mobile communications
In recent years there has been an explosion in personal communications. One of the first major
applications was the mobile phone. Since their introduction in the last 20 years of the 20th
century, their use has mushroomed. Their growth has shown the value of mobile
communications and mobile connectivity. Accordingly, other wireless technologies such as Bluetooth and Wi-Fi have been developed and are now part of the wireless scene.
Network architectures
There is infinite knowledge sitting in the palm of our hands. With a few swipes, we
can log on to any website and get the information we want in seconds. It’s so
convenient that we often take for granted the complex and incredible mechanisms –
the wires, cables, and servers – that make it all possible.
This is what network architecture is all about. It’s how data flows efficiently from one
computer to another. And for businesses with an online component, it’s an important
concept that has a significant impact on their operation. Let’s start with the
networking architecture definition.
There are many ways to approach network architecture design, which depend on the
purpose and size of the network. A wide area network (WAN), for example, refers to a group of interconnected networks often spanning large distances. Its network architecture will be vastly different from that of a local area network (LAN) of a
smaller office branch.
Planning the network architecture is vital because it either enhances or hinders the
performance of the entire system. Choosing the wrong transmission media or
equipment for a particular expected server load, for instance, can cause slowdowns
on the network.
Most network architectures adopt the Open Systems Interconnection Model or OSI.
This conceptual model separates the network tasks into seven logical layers, from
lowest to highest abstraction.
The Physical layer, for instance, deals with the wire and cable connections of the
network. The highest layer, the Application layer, involves APIs that deal with
application-specific functions like chat and file sharing.
The OSI model makes it easier to troubleshoot the network by isolating problem
areas from each other.
While there are myriad ways to design your network architecture, you'll find that
most fall into one of two types. These are the peer-to-peer and client/server
architectures.
Most large networks, such as WANs, often use the client/server model. The web
server you’re accessing this article on, for instance, is a perfect example. In this
case, your computer or smartphone is the client device. Client/server is also the
preferred enterprise network architecture.
There’s also a hybrid architecture called edge computing, which is becoming more
popular with the Internet of Things (IoT). It’s similar to a client/server architecture.
However, instead of the server being responsible for all storage and processing
tasks, some of it is delegated to computers located closer to the client machine,
called edge devices.
The design of any digital network architecture involves optimizing its building blocks.
These include:
Hardware
This is the equipment that forms the components of a network, such as user
devices (laptops, computers, mobile phones), routers, servers, and gateways. So, in
a way, the goal of any network architecture is to find the most efficient way to get
data from one hardware point to another.
Transmission Media
Transmission media refers to the physical connections between the hardware
devices on a network. Different media have various properties that determine how
fast data travels from one point to another.
They come in two forms: wired and wireless. Wired media involve physical
cables for connection. Examples include coaxial and fiber optic. Wireless
media, on the other hand, rely on microwave or radio signals. The most
popular examples are WiFi and cellular.
Protocols
Protocols are the rules and models that govern how data transfers between devices
in a network. It’s also the common language that allows different machines in a
network to communicate with each other. Without protocols, your iPhone couldn’t
access a web page stored on a Linux server.
There are many network protocols, depending on the nature of the data.
Examples include the Transmission Control Protocol / Internet Protocol
(TCP/IP) used by networks to connect to the Internet, the Ethernet protocol for
connecting one computer to another, and the File Transfer Protocol for
sending and receiving files to and from a server.
Topology
How the network is wired together is just as important as its parts. Optimizing this is
the goal of network topology.
Topology is the structure of the network. This is important because factors like
distance between network devices will affect how fast data can reach its
destination, impacting performance. There are various network topologies,
each with strengths and weaknesses.
A star topology, for example, describes a layout where all devices in the
network are connected to a central hub. The advantage of this layout is that
it’s easy to connect devices to the network. However, if the central hub fails,
the whole network goes down.
On the other hand, a bus topology is where all network devices are connected
to a single pathway, called the bus. The bus acts like a highway that carries
data from one part of the network to the other. While cheap and easy to
implement, its performance tends to slow down as more devices are added to
the network.
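To make the fault-tolerance difference concrete, here is a minimal Python sketch (node names and the chain model of the bus are hypothetical) that represents a star and a bus as adjacency lists and checks whether the surviving devices still form one connected network after a failure.

    # Minimal sketch: a star and a bus topology as adjacency lists (hypothetical node names).

    def star(hub, leaves):
        """Every leaf device connects only to the central hub."""
        topo = {hub: set(leaves)}
        for leaf in leaves:
            topo[leaf] = {hub}
        return topo

    def bus(nodes):
        """All devices share one pathway, modelled here as a chain of links."""
        topo = {n: set() for n in nodes}
        for a, b in zip(nodes, nodes[1:]):
            topo[a].add(b)
            topo[b].add(a)
        return topo

    def connected(topo, failed):
        """True if every surviving node can still reach every other surviving node."""
        alive = [n for n in topo if n not in failed]
        if not alive:
            return False
        seen, stack = set(), [alive[0]]
        while stack:
            n = stack.pop()
            if n in seen:
                continue
            seen.add(n)
            stack.extend(m for m in topo[n] if m not in failed)
        return len(seen) == len(alive)

    s = star("hub", ["pc1", "pc2", "pc3"])
    print(connected(s, failed={"pc1"}))                        # True: losing one leaf is tolerable
    print(connected(s, failed={"hub"}))                        # False: losing the hub partitions the star
    print(connected(bus(["a", "b", "c", "d"]), failed={"b"}))  # False: a break in the shared pathway splits the bus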
Different network architectures have their pros and cons; and knowing them is the
key to picking out the right one for your needs.
Peer-to-peer models are often inexpensive and easy to set up because you don't
need to invest in a powerful server. Theoretically, all you need are network cables or
a router, and you’re good to go. It’s also quite robust; if one computer goes down,
the network stays up. The distributed nature also lessens or at least spreads out the
network load to prevent congestion.
Client/server models, on the other hand, are easier to manage because they take on
a centralized approach. You can set up access privileges, firewalls, and proxy
servers to boost the network’s security. Thus, a client/server setup is best for large
networks over larger distances.
But the biggest drawback of a client/server model is that the server is a single point of failure: if the server goes down, the entire network shuts down. For this reason, security is often most robust at and near the server.
Let’s take a look at how network architecture works in practice. Let’s use a
manufacturing company with various locations globally as an example.
Each location, such as a factory, will have its own network. If the manufacturing site
uses Internet of Things (IoT) sensors on its equipment, it will most likely use edge
computing. These sensors will be connected via WiFi to an edge gateway device or
an on-site server. This gateway or server can also accept user devices at the factory, such as
employee workstations and mobile phones.
These mini networks will then be connected to the company’s wide area network
(WAN), often using a client/server architecture. Corporate headquarters will often
house the central server, although a server on the cloud is also a possibility these
days. Regardless, network administrators at HQ can monitor and manage the whole
WAN infrastructure.
The enterprise WAN is also connected to the Internet via a broadband connection,
courtesy of their service provider.
1. Energy Efficiency: Since sensor nodes are often battery-powered and have limited
energy resources, energy efficiency is crucial. Design protocols and algorithms that
minimize energy consumption through techniques such as duty cycling, data
aggregation, and energy-aware routing.
2. Scalability: WSNs may need to accommodate a varying number of nodes and adapt
to changes in network size. Design protocols and architectures that can scale
effectively without significant performance degradation.
3. Reliability: Develop mechanisms to ensure reliable data transmission despite the
challenges posed by wireless communication, such as interference, fading, and node
failures. This may involve techniques like error detection and correction, redundancy,
and reliable routing protocols.
4. Fault Tolerance: WSNs should be resilient to node failures and environmental
disturbances. Employ redundancy and self-healing mechanisms to ensure continued
operation even in the presence of faults.
5. Low Latency: Depending on the application, minimizing communication delays
(latency) can be critical. Design protocols and algorithms with low latency to support
real-time or near-real-time applications such as industrial monitoring or emergency
response systems.
6. Security: Protect data confidentiality, integrity, and availability against various
security threats such as eavesdropping, tampering, and denial of service attacks.
Implement encryption, authentication, and access control mechanisms to secure
communication and data storage.
7. Localization: In many WSN applications, knowing the physical location of sensor
nodes is essential. Develop localization algorithms and techniques that accurately
determine node positions using available information such as signal strength, time-
of-flight, or GPS.
8. Data Aggregation: Minimize the amount of data transmitted over the network by aggregating and summarizing information at intermediate nodes. This reduces communication overhead and conserves energy (a small sketch after this list illustrates the idea).
9. Adaptability: WSNs often operate in dynamic and unpredictable environments.
Design protocols and algorithms that can adapt to changes in network conditions,
such as node mobility, varying traffic loads, and environmental changes.
10. Interoperability: Ensure that different components of the WSN, including sensors,
gateways, and applications, can communicate seamlessly by adhering to
standardized communication protocols and data formats.
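As a rough illustration of the Energy Efficiency and Data Aggregation goals above (items 1 and 8), the sketch below shows an intermediate node summarizing several child readings into one packet before forwarding it toward the sink; the field names are hypothetical.

    # In-network data aggregation at an intermediate node: several child readings
    # are summarized into one packet before forwarding, cutting the number of
    # transmissions toward the sink (field names are hypothetical).

    readings = [
        {"node": "n1", "temp_c": 21.4},
        {"node": "n2", "temp_c": 22.1},
        {"node": "n3", "temp_c": 20.9},
    ]

    def aggregate(samples):
        temps = [s["temp_c"] for s in samples]
        return {
            "count": len(temps),
            "min": min(temps),
            "max": max(temps),
            "avg": round(sum(temps) / len(temps), 2),
        }

    packet = aggregate(readings)   # one summary packet instead of three raw ones
    print(packet)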
Service interfaces in wireless sensor networks (WSNs) define the interaction points
between different layers, components, or entities within the network. These interfaces
enable communication, data exchange, and control between various elements of the
WSN architecture. Here are some common service interfaces found in WSNs:
1. Physical Layer Interface: This interface defines how the physical layer interacts with
the hardware components of the sensor nodes, including transceivers, antennas, and
sensors. It includes specifications for modulation, coding, transmission power control,
and channel access mechanisms.
2. Data Link Layer Interface: The data link layer interface governs communication
between neighboring nodes and manages data transmission over the wireless
medium. It includes services such as frame synchronization, error detection and
correction, medium access control (MAC), and link quality estimation.
3. Network Layer Interface: This interface provides services for routing and forwarding
data packets between sensor nodes within the network. It includes functionalities
such as addressing, routing protocol interactions, packet forwarding, and congestion
control.
4. Transport Layer Interface: The transport layer interface ensures end-to-end
communication between sensor nodes and higher-level network entities. It includes
services such as segmentation and reassembly of data packets, flow control, and
reliability mechanisms.
5. Application Layer Interface: At the highest level, the application layer interface
facilitates interaction between sensor nodes and application-specific services or
middleware. It includes services for data aggregation, event detection, localization,
and interfacing with external applications or services.
6. Management Interface: The management interface provides services for
configuring, monitoring, and maintaining the operation of the WSN. It includes
functionalities such as node configuration, network monitoring, energy management,
and fault detection.
7. Security Interface: In secure WSNs, the security interface governs interactions
related to authentication, encryption, access control, and key management. It ensures
the confidentiality, integrity, and authenticity of data exchanged within the network.
8. Time Synchronization Interface: For applications requiring synchronized operation,
the time synchronization interface provides services for synchronizing the clocks of
sensor nodes within the network. It includes protocols for time distribution and clock
synchronization accuracy.
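The layered service interfaces listed above can be pictured with a minimal Python sketch in which each layer exposes a small service to the layer above and delegates to the layer below. The class and method names are hypothetical and do not model any particular WSN stack.

    # Each layer offers a small service interface to the layer above and uses the
    # layer below (class and method names are hypothetical).

    class PhysicalLayer:
        def transmit(self, payload: bytes):
            print(f"PHY: radiating {len(payload)} bytes")

    class DataLinkLayer:
        def __init__(self, phy):
            self.phy = phy
        def send_frame(self, payload, next_hop):
            frame = f"[to:{next_hop}]{payload}"   # framing plus MAC addressing
            self.phy.transmit(frame.encode())

    class NetworkLayer:
        def __init__(self, link):
            self.link = link
        def send_packet(self, data, destination):
            next_hop = "n7"                       # routing decision (placeholder)
            self.link.send_frame(f"<dst:{destination}>{data}", next_hop)

    # The application only ever touches the network-layer interface.
    net = NetworkLayer(DataLinkLayer(PhysicalLayer()))
    net.send_packet("temp=21.4", destination="sink")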
Gateway concepts :-
In wireless sensor networks (WSNs), gateways serve as intermediaries between the
sensor nodes deployed in the field and higher-level networks or systems, such as the
internet or local area networks (LANs). Here are some key concepts related to
gateways in WSNs:
1. Data Aggregation: Gateways collect data from multiple sensor nodes within the
network and aggregate it before transmitting it to the higher-level network. This
aggregation reduces the amount of data sent over the network, conserving
bandwidth and energy.
2. Protocol Translation: Gateways often perform protocol translation to enable communication between the sensor nodes, which typically use low-power, resource-constrained protocols such as Zigbee or Bluetooth, and the higher-level network, which may use standard internet protocols such as TCP/IP (the sketch after this list illustrates this role together with aggregation).
3. Interfacing with External Systems: Gateways provide interfaces for connecting
WSNs with external systems, applications, or services. This enables integration with
existing infrastructure and allows for seamless data exchange between the sensor
network and other systems.
4. Network Management: Gateways play a role in network management by
monitoring the status and performance of the sensor nodes, configuring network
parameters, and facilitating software updates or reprogramming of sensor nodes
remotely.
5. Security Gateway: In secure WSNs, gateways may act as security gateways,
enforcing security policies, performing encryption and decryption of data, and
authenticating sensor nodes before allowing data transmission to higher-level
networks.
6. Scalability: Gateways facilitate the scalability of WSNs by serving as access points for
adding new sensor nodes to the network or expanding the coverage area. They
manage the communication and coordination between sensor nodes and ensure
efficient data routing and delivery.
7. Reliability and Redundancy: Gateways can improve the reliability of WSNs by
providing redundancy and failover mechanisms. Multiple gateways can be deployed
to ensure continuous operation even in the event of gateway failure or network
partitioning.
8. Edge Processing: In some cases, gateways may perform edge processing tasks, such
as data filtering, aggregation, or preprocessing, before transmitting data to higher-
level networks. This reduces the amount of data transferred and can enable faster
response times for real-time applications.
9. Energy Efficiency: Gateways may employ energy-efficient communication protocols
and algorithms to minimize energy consumption, especially in battery-powered
WSNs. They can schedule communication activities and optimize transmission
parameters to conserve energy.
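A small sketch of the data-aggregation and protocol-translation roles described in items 1 and 2 above: the gateway summarizes readings that arrived over a low-power radio link and re-encodes the result as JSON, the kind of format an IP-based backend might expect. The message layout and identifiers are hypothetical.

    import json

    # The gateway aggregates readings that arrived over a low-power radio link and
    # translates them into a JSON document for an IP-based backend (formats and
    # identifiers are hypothetical).

    sensor_frames = [
        ("node-01", 21.4),
        ("node-02", 22.1),
        ("node-03", 20.9),
    ]

    def to_backend_message(frames):
        temps = [t for _, t in frames]
        return json.dumps({
            "gateway_id": "gw-entrance",
            "samples": len(frames),
            "avg_temp_c": round(sum(temps) / len(temps), 2),
        })

    payload = to_backend_message(sensor_frames)
    print(payload)   # this string would then travel upstream, e.g. over TCP/IP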
Wireless Sensor Network (WSN) operating systems are specialized platforms designed to
manage the resources and tasks of sensor nodes within a WSN. These operating systems are
tailored to the unique requirements and constraints of sensor networks, such as limited
processing power, memory, energy, and communication bandwidth. Here are some popular
WSN operating systems:
1. TinyOS:
Design Philosophy: TinyOS is designed for highly resource-constrained
sensor nodes. It follows a component-based architecture in which functionality is implemented as reusable components written in the nesC language.
Concurrency Model: TinyOS uses an event-driven programming model,
where tasks are executed in response to events such as sensor readings or
incoming messages. This model helps conserve energy by allowing nodes to
remain in low-power sleep modes when not actively processing tasks.
Networking Support: It provides support for various networking protocols,
including low-power wireless protocols like IEEE 802.15.4. TinyOS applications
typically use the TinyOS network stack for communication between nodes.
Energy Efficiency: TinyOS emphasizes energy efficiency through techniques
such as duty cycling, where nodes alternate between active and sleep modes
to conserve power.
2. Contiki:
Multi-Threading: Contiki provides a multi-threaded kernel that allows
applications to run concurrent tasks. This enables more complex applications
and better utilization of resources compared to event-driven models.
IPv6 Support: Unlike many other WSN operating systems, Contiki offers full
support for IPv6 networking, enabling seamless integration with the Internet
and facilitating communication with other devices and services.
Low-Power Modes: Contiki supports low-power operation through
mechanisms like power-aware scheduling and low-power sleep modes. It
allows nodes to dynamically adjust their power consumption based on
workload and environmental conditions.
Rich Set of Protocols: Contiki includes a comprehensive set of networking
protocols, including CoAP, MQTT, RPL, and 6LoWPAN, making it suitable for a
wide range of IoT applications beyond traditional WSNs.
3. RIOT:
Preemptive Kernel: RIOT features a preemptive multitasking kernel that
allows tasks to be preempted and scheduled based on priority. This enables
real-time responsiveness and predictable behavior, crucial for many WSN
applications.
Supported Platforms: RIOT supports a wide range of hardware platforms,
from low-power microcontrollers to more capable single-board computers,
providing developers with flexibility in choosing hardware for their WSN
deployments.
Energy Management: RIOT includes energy-efficient scheduling algorithms
and power management features to optimize energy consumption. It allows
developers to specify power-saving policies based on application
requirements and device capabilities.
Networking Stack: RIOT provides support for various networking protocols,
including IEEE 802.15.4, 6LoWPAN, and IPv6, enabling seamless
communication between sensor nodes and integration with other IoT devices
and networks.
These are just a few examples of WSN operating systems, each with its own strengths
and capabilities tailored to the specific requirements of wireless sensor networks.
Developers often choose an operating system based on factors such as resource
constraints, application complexity, real-time requirements, and interoperability with
existing systems.
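The event-driven concurrency model described for TinyOS can be approximated, purely conceptually, by a loop that waits for events and dispatches them to registered handlers. The sketch below is plain Python with made-up event names, not TinyOS/nesC code.

    # Conceptual event-driven node loop: the node stays idle until an event occurs,
    # then runs the handler registered for it (made-up event names, not TinyOS code).

    handlers = {}

    def on(event):
        def register(fn):
            handlers[event] = fn
            return fn
        return register

    @on("sensor_read")
    def handle_sensor_read(value):
        print(f"got reading {value}, scheduling a transmission")

    @on("radio_rx")
    def handle_radio_rx(packet):
        print(f"received packet: {packet}")

    def event_loop(events):
        for name, payload in events:   # a real node would block here in low-power sleep
            handlers[name](payload)

    event_loop([("sensor_read", 21.4), ("radio_rx", "hello"), ("sensor_read", 22.0)])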
Because the devices in the ad hoc network can access each other's
resources directly through basic peer-to-peer (P2P) or point-to-multipoint
modes, central servers are unnecessary for functions such as file sharing
or printing. In a WANET, a collection of devices, or nodes -- such as a
wireless-capable PC or smartphone -- is responsible for network
operations, such as routing, security, addressing and key management.
It's important to note that not all ad hoc networks are built using a PC or
smartphone. In fact, Wi-Fi access points can be configured to work in either
ad hoc or infrastructure mode as well. Typically, Wi-Fi networks configured
for infrastructure mode are created and managed using equipment such as
Wi-Fi routers or a combination of WAPs and wireless controllers that
provide the necessary network intelligence. Ad hoc networks are also
commonly set up to provide temporary wireless network access created by
a computer or smartphone. The use of more sophisticated network protocols and network services found on infrastructure-based wireless networks, such as IEEE 802.1X authentication, is usually not suitable or necessary for short-lived ad hoc networks.
Common types of wireless ad hoc network include:
MANET: mobile ad hoc network, the general term for a self-configuring network of mobile devices.
IMANET: internet-based mobile ad hoc network, in which mobile nodes connect through fixed internet gateways.
VANET: vehicular ad hoc network. This network type involves devices in vehicles that are used for communicating between them and roadside equipment. An example is the in-vehicle safety and security system OnStar.
WMN: wireless mesh network, in which nodes cooperate to relay each other's traffic.
Ad hoc mode can be easier to set up than infrastructure mode when just
connecting a handful of devices without requiring a centralized access
point. For example, if a user has two laptops and is in a hotel room without
Wi-Fi, they can be connected directly in ad hoc mode to create a temporary
Wi-Fi network without a router. The Wi-Fi Direct standard -- a specification
that allows devices certified for Wi-Fi Direct to exchange data without an
internet connection or a wireless router -- also builds on ad hoc mode. It
enables devices to communicate directly over Wi-Fi signals.
Devices can only use the internet if one of them is connected to and
sharing it with the others, such as a cellular-connected smartphone
operating in "hotspot" mode, which is a variation of an ad hoc network.
When internet sharing is enabled, the client performing this function may
face performance problems, especially if there are many interconnected
devices. Ad hoc mode requires the use of more endpoint system
resources, as the physical network layout changes when devices are
moved around; in contrast, an access point in infrastructure mode typically
remains stationary from an endpoint perspective.
As mentioned, many ad hoc networks suffer from the fact that they were
built to be temporary and thus lack many of the advanced security features
often found in stationary, infrastructure WLANs. As such, many types of ad
hoc networks can only be configured with basic security functionality.
That said, because this type of ad hoc network is used temporarily, covers
a smaller area and often moves, the likelihood of an attacker gaining
access to it is far lower compared to a wireless infrastructure that is
stationary and operational at all times.
Characteristics:
Challenges:
Addressing these characteristics and challenges requires innovative research and the
development of specialized protocols and algorithms tailored to the unique
requirements of ad-hoc networks.
Energy efficiency is a critical consideration in ad-hoc networks due to the limited power resources of the individual nodes, which are often battery-powered. Security is another major concern, and security mechanisms must be designed with these energy constraints in mind. Here are some key security considerations in ad-hoc networks (a short authentication sketch follows the list):
1. Authentication: Ensuring that nodes can authenticate each other before establishing
communication is essential for preventing unauthorized access and attacks such as
spoofing or impersonation. Authentication mechanisms, such as digital signatures or
pre-shared keys, can be used to verify the identity of communicating nodes.
2. Encryption: Encrypting data transmissions can prevent eavesdropping and ensure
the confidentiality of communication in ad-hoc networks. Techniques such as
symmetric or asymmetric encryption can be employed to encrypt data packets and
protect them from unauthorized access.
3. Key Management: Managing cryptographic keys securely is crucial for maintaining
the effectiveness of encryption techniques in ad-hoc networks. Key distribution
protocols and mechanisms for key establishment and revocation help ensure that
only authorized nodes have access to encryption keys.
4. Secure Routing: Securing the routing process is essential for preventing attacks such
as routing attacks or packet spoofing. Secure routing protocols incorporate
mechanisms for authenticating routing information and detecting malicious nodes
attempting to disrupt or manipulate routing paths.
5. Intrusion Detection and Prevention: Deploying intrusion detection and prevention
systems can help detect and mitigate security threats in ad-hoc networks. These
systems monitor network traffic for suspicious activity and take proactive measures
to prevent attacks or isolate compromised nodes.
6. Privacy-Preserving Protocols: Protecting the privacy of users' data and identities is
critical in ad-hoc networks, where nodes may have limited trust in each other.
Privacy-preserving protocols employ techniques such as anonymization,
pseudonymization, or data obfuscation to prevent the disclosure of sensitive
information.
7. Secure Group Communication: Supporting secure group communication is
essential for applications such as multicast or collaborative computing in ad-hoc
networks. Group key management protocols enable secure communication among
members of a group while preventing unauthorized access by non-members.
8. Trust Management: Establishing trust relationships among nodes in ad-hoc
networks is challenging due to the lack of centralized authorities. Trust management
mechanisms allow nodes to assess the trustworthiness of their neighbors based on
past interactions, reputation systems, or recommendations from trusted sources.
9. Denial-of-Service (DoS) Mitigation: Protecting ad-hoc networks against DoS
attacks is essential for maintaining network availability and performance. DoS
mitigation techniques such as rate limiting, packet filtering, or traffic shaping help
prevent attackers from overwhelming network resources or disrupting
communication.
10. Energy-Aware Security: Considering the energy constraints of ad-hoc network
nodes is crucial when designing security mechanisms. Energy-aware security
protocols minimize the energy overhead associated with security operations to
prolong the battery life of individual nodes.
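As a sketch of the authentication idea in item 1, the fragment below uses Python's standard hmac module with a pre-shared key to tag and verify a message. Key distribution, replay protection, and the message format are deliberately omitted or simplified.

    import hashlib
    import hmac

    # Pre-shared-key message authentication between two ad-hoc nodes. A real
    # deployment also needs key distribution, replay protection, and so on.

    PRE_SHARED_KEY = b"demo-key-not-for-production"

    def tag(message: bytes) -> bytes:
        return hmac.new(PRE_SHARED_KEY, message, hashlib.sha256).digest()

    def verify(message: bytes, received_tag: bytes) -> bool:
        return hmac.compare_digest(tag(message), received_tag)

    msg = b"temp=21.4;node=n1"
    t = tag(msg)                            # the sender attaches this tag to the message
    print(verify(msg, t))                   # True: untampered message from a key holder
    print(verify(b"temp=99.9;node=n1", t))  # False: a modified message is rejected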
• Ad hoc deployment. Sensor nodes are deployed randomly. This requires that the
system be able to cope with the resultant distribution and form connections between
the nodes. Thus, the system should be adaptive to changes in network connectivity
as a result of node failure.
• Energy consumption without losing accuracy. Sensor nodes can use up their
limited supply of energy performing computations and transmitting information in a
wireless environment. As such, energy-conserving forms of communication and
computation are essential. Sensor node lifetime shows a strong dependence on
battery lifetime. In a multihop WSN, each node plays a dual role as data sender and
data router. The malfunctioning of some sensor nodes because of power failure can
cause significant topological changes and might require rerouting packets and
reorganizing the network.
• Fault tolerance. Some sensor nodes may fail or be blocked due to lack of power,
physical damage, or environmental interference. The failure of sensor nodes should
not affect the overall task of the sensor network. If many nodes fail, MAC and routing
protocols must accommodate formation of new links and routes to the data collection
base stations. This may require actively adjusting transmit powers and signaling
rates on the existing links to reduce energy consumption, or rerouting packets
through regions of the network where more energy is available. Therefore, multiple
levels of redundancy may be needed in a fault-tolerant sensor network.
• Scalability. The number of sensor nodes deployed in the sensing area may be in
the order of hundreds or thousands or more. Any scheme must be able to work with
this huge number of sensor nodes. Also, change in network size, node density, and
topology should not affect the task and operation of the sensor network. In addition,
sensor network routing protocols should be scalable enough to respond to events in
the environment. Until an event occurs, most of the sensors can remain in the sleep
state, with data from the few remaining sensors providing a coarse quality. Once an
event of interest is detected, the system should be able to configure so as to obtain
very high-quality results.
The communication architecture of the sensor network is shown in Figure 6.2. The
sensor nodes are usually scattered in a sensor field — an area in which the sensor
nodes are deployed. The nodes in these networks coordinate to produce high-quality
information about the physical environment.
S-MAC Protocol in WSNs
S-MAC (Sensor MAC) is a low-power, duty-cycled MAC (medium access
control) protocol designed for wireless sensor networks. It tries to save
energy by reducing the time a node spends in the active (transmitting) state
and lengthening the time it spends in the low-power sleep state. S-MAC
achieves this by implementing a schedule-based duty cycling mechanism. In
this system, nodes coordinate their sleeping and waking times with their
neighbors and send the data only at predetermined time slots. As a result of
this mechanism, there are fewer collisions and idle listening events, which
leads to low energy usage.
S-MAC (Sensor MAC) is a wireless sensor network (WSN) MAC protocol designed to reduce the overhead and power consumption of conventional MAC protocols.
Sensor-MAC (S-MAC) is a MAC protocol created specifically for wireless sensor networks. Although reducing energy consumption is its main objective, it also achieves good scalability and collision avoidance by combining a scheduling-based approach with contention-based access. To reach the primary goal of energy efficiency, the main causes of wasted energy (collisions, overhearing, control-packet overhead, and idle listening) must be identified, along with the trade-offs, such as increased latency, that can be accepted in order to lower energy usage.
S-MAC saves energy mostly by avoiding overhearing and by sending long messages efficiently. Periodic sleep is crucial for energy conservation because idle listening accounts for the majority of total energy usage, and S-MAC's energy usage is largely unaffected by the volume of traffic. To reduce the number of transmissions and the amount of data sent in the network, S-MAC also supports capabilities such as message passing (fragmenting a long message and sending the fragments in a burst), which improves the network's scalability and helps to reduce overhead.
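A very rough sketch of S-MAC-style duty cycling: nodes that have adopted the same schedule are awake during the same short listen window of every frame and sleep for the rest, so a sender simply waits for the next common listen window. The frame length and duty cycle below are arbitrary illustrative values, not S-MAC defaults.

    # A node following a shared S-MAC-style schedule is awake only during a short
    # listen window at the start of each frame. The values are illustrative only.

    FRAME_MS = 1000          # one listen/sleep frame
    DUTY_CYCLE = 0.10        # fraction of the frame spent listening
    LISTEN_MS = FRAME_MS * DUTY_CYCLE

    def is_awake(t_ms, schedule_offset_ms=0):
        """True if a node on this schedule is inside its listen window at time t_ms."""
        return (t_ms - schedule_offset_ms) % FRAME_MS < LISTEN_MS

    # Two neighbours that adopted the same schedule (offset 0) are awake together,
    # so data exchange is only attempted in the common active slots.
    node_a_offset = 0
    node_b_offset = 0
    for t in (50, 120, 980, 1030):
        print(t, is_awake(t, node_a_offset), is_awake(t, node_b_offset))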
Benefits of S-MAC
Limitations of S-MAC
Despite its many advantages, there are some limitations to S-MAC that
should be considered when evaluating its suitability for a particular
application:
Complexity: S-MAC requires a solid understanding of its scheduling mechanism and a higher level of technical knowledge to implement and operate, which also increases the cost of deploying it.
Scalability: In large-scale networks its performance degrades for high-rate communication, and scaling it up can become expensive.
Latency: Because S-MAC relies on duty cycling to save energy, it increases latency and reduces per-hop fairness, which can affect real-time applications that require low latency.
Interference: Although it includes mechanisms to avoid collisions, it copes poorly with high levels of external interference in the environment surrounding the sensor nodes.
Overhead: Its synchronization and control messaging add overhead compared with simpler MAC protocols.
Overhearing: A node may still receive packets destined for another node and must then stay silent (or asleep) until that exchange completes, wasting energy and time.
Security: It has no built-in security mechanism, so it is vulnerable to attacks.
RTS/CTS/ACK exchanges add further control-packet overhead.
Challenges in S-MAC
IEEE 802.15.4e:
Two notable amendments extend the original standard: 802.15.4e for industrial applications and 802.15.4g for smart utility networks (SUN).
802.15.4e improves the old standard by introducing mechanisms such as time-slotted access, multichannel communication, and channel hopping.
IEEE 802.15.4e introduces the following general functional
enhancements:
1. Low Energy (LE): This mechanism is intended for applications that can
trade latency for energy efficiency. It allows a node to operate with a very low
duty cycle.
2. Information Elements (IE): An extensible mechanism to exchange information at the MAC sublayer.
3. Enhanced Beacons (EB): Enhanced Beacons are an extension of the 802.15.4 beacon frames and provide greater flexibility. They allow the creation of application-specific frames.
4. Multipurpose Frame: This mechanism provides a flexible frame format that
can address a number of MAC operations. It is based on IEs.
5. MAC Performance Metric: It is a mechanism to provide appropriate
feedback on the channel quality to the networking and upper layers, so that
appropriate decision can be taken.
6. Fast Association (FastA): The original 802.15.4 association procedure introduces a significant delay in order to save energy. For time-critical applications, where latency has priority over energy efficiency, FastA allows devices to associate with much less delay.
IEEE 802.15.4e defines five new MAC behavior modes.
1. Time Slotted Channel Hopping (TSCH): It targets application domains such as industrial automation and process control, providing support for multi-hop and multichannel communications through a TDMA approach (a small channel-hopping sketch follows this list).
2. Deterministic and Synchronous Multi-channel Extension (DSME): It is
aimed to support both industrial and commercial applications.
3. Low Latency Deterministic Network (LLDN): Designed for single-hop and
single channel networks
4. Radio Frequency Identification Blink (BLINK): It is intended for application
domains such as item/people identification, location and tracking.
5. Asynchronous multi-channel adaptation (AMCA): It is targeted to
application domains where large deployments are required, such as smart
utility networks, infrastructure monitoring networks, and process control
networks.
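As a small illustration of the TSCH mode listed above (item 1), the sketch below derives the channel used in a timeslot from the absolute slot number (ASN) and a link's channel offset, following the commonly described hopping rule channel = hop_sequence[(ASN + offset) mod sequence length]. The hopping sequence itself is a made-up example.

    # TSCH-style channel hopping: the channel for a timeslot is derived from the
    # absolute slot number (ASN) and the link's channel offset. The hopping
    # sequence below is an arbitrary example, not the list from the standard.

    HOP_SEQUENCE = [11, 14, 17, 20, 23, 26]   # example subset of IEEE 802.15.4 channels

    def channel_for_slot(asn, channel_offset):
        return HOP_SEQUENCE[(asn + channel_offset) % len(HOP_SEQUENCE)]

    # A link assigned channel offset 2, observed over the first six timeslots:
    for asn in range(6):
        print(f"ASN {asn}: channel {channel_for_slot(asn, 2)}")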
Properties:
IEEE 802.15.4
Abstract. This paper evaluates three routing strategies for wireless sensor
networks: source, shortest path, and hierarchical-geographical, which are the
three most commonly employed by wireless ad-hoc and sensor networks
algorithms. Source routing was selected because it does not require costly
topology maintenance, while shortest path routing was chosen because of its
simple route discovery approach, and hierarchical-geographical routing was selected because it uses location information via the Global Positioning System
(GPS). The performance of these three routing strategies is evaluated by
simulation using OPNET, in terms of latency, End to End Delay (EED), packet
delivery ratio, routing overhead, overhead and routing load.
1 Introduction
Recent advances in micro-electro-mechanical systems (MEMS) technology have
made the deployment of wireless sensor nodes a reality [1] [2], in part, because they
are small, inexpensive and energy efficient. Each node of a sensor network consists of
three basic subsystems: a sensor subsystem to monitor local environment parameters,
a processing subsystem to give computation support to the node, and a
communication subsystem to provide wireless communications to exchange
information with neighboring nodes. Because each individual sensor node can only
cover a relatively limited area, it needs to be connected with other nodes in a
coordinated manner to form a sensor network (SN), which can provide large amounts
of detailed information about a given geographic area. Consequently, a wireless
sensor network (WSN) can be described as a collection of intercommunicated
wireless sensor nodes which coordinate to perform a specific action. Unlike
traditional wireless networks, WSNs depend on dense deployment and coordination to
carry out their task. Wireless sensor nodes measure conditions in the environment surrounding them and then transform these measurements into signals that can be processed.
SPIN (Sensor Protocols for Information via Negotiation) nodes use three types of messages:
ADV - new data advertisement. When a SPIN node has data to share, it can advertise this fact by transmitting an ADV message containing meta-data.
REQ - request for data. A SPIN node sends a REQ message when it wishes to
receive some actual data.
DATA - data message. DATA messages contain actual sensor data with a meta-data
header.
Unlike traditional networks, a sensor node does not necessarily require an identity
(e.g. an address). Instead, applications focus on the different data generated by the
sensors. Because data is identified by its attributes, applications request data by
matching certain attribute values. One of the most popular data-centric protocols is directed diffusion, which bases its routing strategy on shortest paths [11]. A sensor network based on directed diffusion exhibits the following properties: each
sensor node names data that it generates with one or more attributes, other nodes may
express interests based on these attributes, and network nodes propagate interests.
Interests establish gradients that direct the diffusion of data. In its simple form, a
gradient is a scalar quantity. Negative gradients inhibit the distribution of data along a
particular path, and positive gradients encourage the transmission of data along the
path.
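As a toy illustration of the attribute-based, data-centric matching described above (not the actual directed diffusion implementation), a node can check whether a named reading satisfies an interest expressed as attribute constraints:

    # An interest is a set of attribute constraints; a reading matches if it
    # satisfies all of them (a toy model, not the directed diffusion code).

    interest = {"type": "temperature", "region": "north-field", "min_value": 30.0}

    def matches(reading, interest):
        return (reading["type"] == interest["type"]
                and reading["region"] == interest["region"]
                and reading["value"] >= interest["min_value"])

    reading = {"type": "temperature", "region": "north-field", "value": 31.2}
    print(matches(reading, interest))   # True: this reading would flow along the gradient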
The Energy-Aware Routing protocol is a destination-initiated reactive protocol that increases the network lifetime by using only one path at a time; it is very similar to source routing [12]. Rumor routing [13] is a variation of directed diffusion that is mainly intended for applications where geographic routing is not feasible. Gradient-based routing is another variant of directed diffusion [14]. The key idea of gradient-
based routing is to memorize the number of hops when the interest is diffused
throughout the network. Constrained Anisotropic Diffusion Routing (CADR) is a general form of directed diffusion [15], and lastly, Active Query Forwarding in Sensor
Networks (ACQUIRE) [16] views the network as a distributed database, where
complex queries can be further divided into several sub queries.
Hierarchical protocols are based on clusters because clusters can contribute to more
scalable behavior as the number of nodes increases, provide improved robustness, and
facilitate more efficient resource utilization for many distributed sensor coordination
tasks.
Low-Energy Adaptive Clustering Hierarchy (LEACH) is a cluster-based protocol
that minimizes energy dissipation in sensor networks by randomly selecting sensor
nodes as cluster-heads [17]. Power-Efficient Gathering in Sensor Information System
(PEGASIS) [18] is a near optimal chain-based protocol. The basic idea of the protocol
is to extend network lifetime by allowing nodes to communicate exclusively with
their closest neighbors, employing a turn-taking strategy to communicate with the
Base Station (BS). Threshold-sensitive Energy Efficient protocol (TEEN) [19] and
Adaptive Periodic TEEN (APTEEN) [20] have also been proposed for time-critical
applications. In TEEN, sensor nodes continuously sense the medium, but data
transmission is done less frequently. APTEEN, on the other hand, is a hybrid protocol
that changes the periodicity or threshold values used in the TEEN protocol, according
to user needs and the application type.
Geographic Adaptive Fidelity (GAF) is an energy-aware, location-based routing algorithm primarily designed for ad-hoc networks that can also be applied to
sensor networks. GAF conserves energy by turning off unnecessary nodes in the
network without affecting the level of routing fidelity. Finally, Geographic and
Energy Aware Routing [24] uses energy-awareness and geographically informed
neighbor selection heuristics to route a packet toward the destination region.
The IEEE 802.15.4-2003 standard defines the lower two layers: the physical (PHY)
layer and the medium access control (MAC) sub-layer. The ZigBee alliance builds on
this foundation by providing the network (NWK) layer and the framework for the
application layer, which includes the application support sub-layer (APS), the ZigBee
device objects (ZDO) and the manufacturer-defined application objects.
IEEE 802.15.4-2003 has two PHY layers that operate in two separate frequency
ranges: 868/915 MHz and 2.4 GHz. The 2.4 GHz mode specifies a Spread Spectrum
modulation technique with processing gain equal to 32. It handles a data rate of 250
kbps, with Offset-QPSK modulation, and a chip rate of 2 Mcps.
The 868/915 MHz mode specifies a DSSS modulation technique with data rates of
20/40 kbps and chip rates of 300/600 kcps. The digital modulation is BPSK and the
processing gain is equal to 15.
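These rate figures are internally consistent, as the short check below suggests: in the 2.4 GHz band, O-QPSK maps 4 bits onto a 32-chip symbol, so 250 kbps corresponds to 62.5 ksymbol/s and 62.5 k x 32 = 2 Mcps, while in the 868 MHz band BPSK spreads each bit with 15 chips, so 20 kbps x 15 = 300 kcps.

    # Consistency check of the IEEE 802.15.4-2003 PHY figures quoted above.

    # 2.4 GHz O-QPSK: 4 bits per symbol, 32 chips per symbol (processing gain 32)
    bit_rate = 250e3
    symbol_rate = bit_rate / 4
    print(symbol_rate * 32)   # 2000000.0 chips/s -> 2 Mcps

    # 868 MHz BPSK: 1 bit per symbol, 15 chips per bit (processing gain 15)
    print(20e3 * 15)          # 300000.0 chips/s -> 300 kcps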
On the other hand, the MAC sub-layer controls access to the radio channel using a
CSMA-CA mechanism. Its responsibilities may also include transmitting beacon
frames, synchronizing transmissions and providing a reliable transmission
mechanism.
The responsibilities of the ZigBee NWK layer include mechanisms to join and leave a network, to apply security to frames, and to route frames to their intended destinations based on a shortest-path strategy. In addition, the discovery and maintenance of routes between devices are handled by the NWK layer. Also, the
discovery of one-hop neighbors and the storing of pertinent neighbor information are
done at the NWK layer. The NWK layer of a ZigBee coordinator is responsible for
starting a new network, when appropriate, and assigning addresses to newly
associated devices.
The responsibilities of the APS sub-layer include maintaining tables for binding, which is the ability to match two devices together based on their services and needs, and forwarding messages between bound devices. The responsibilities of the
ZDO include defining the role of the device within the network, initiating and/or
responding to binding requests and establishing a secure relationship between
network devices. The ZDO is also responsible for discovering devices on the network
and determining which application services they provide.
3 Scenarios Simulated
The routing protocols described in section 2 make use of one, or a combination of the
following strategies: source, shortest path or hierarchical-geographical routing
strategies. The performance of these basic strategies is evaluated using the following
metrics:
• Route discovery time (Latency): is the time the sink has to wait before
actually receiving the first data packet.
• Average end-to-end delay of data packets: includes all possible delays caused by queuing, retransmission at the MAC layer, and propagation and transfer times.
• Packet delivery ratio: is the ratio of the number of data packets delivered to
the destination and the number of data packets sent by the transmitter. Data
packets may be dropped en route for several reasons: e.g. the next hop link is
broken when the data packet is ready to be transmitted or one or more
collisions have occurred.
• Routing load: is measured in terms of routing packets transmitted per data
packets transmitted. The latter includes only the data packets finally
delivered at the destination and not the ones that are dropped. The
transmission at each hop is counted once for both routing and data packets.
This provides an idea of network bandwidth consumed by routing packets
with respect to "useful" data packets.
• Routing overhead: is the total number of routing packets transmitted during
the simulation. For packets sent over multiple hops, each packet transmission
(hop) counts as one transmission.
• Overhead (packets): is the total number of routing packets generated divided
by the sum of total number of data packets transmitted and the total number
of routing packets
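The following small helper functions (names and example counters are illustrative, not taken from the paper) compute three of the metrics defined above directly from simulation counts, which makes the definitions concrete.

    # Helpers computing three of the metrics defined above from raw simulation
    # counters (names and example numbers are illustrative, not from the paper).

    def packet_delivery_ratio(data_delivered, data_sent):
        """Data packets delivered to the destination / data packets sent by the transmitter."""
        return data_delivered / data_sent

    def routing_load(routing_tx, data_delivered):
        """Routing packets transmitted per data packet actually delivered."""
        return routing_tx / data_delivered

    def overhead(routing_tx, data_tx):
        """Routing packets as a fraction of all packets transmitted."""
        return routing_tx / (data_tx + routing_tx)

    print(packet_delivery_ratio(970, 1000))   # 0.97
    print(routing_load(250, 970))             # ~0.26 routing packets per delivered data packet
    print(overhead(250, 1000))                # 0.2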
In source routing, each packet header carries the complete ordered list of nodes
through which the packet must pass. The key advantage of source routing is that
intermediate nodes do not need to maintain up-to-date routing information in order to
route the packets they forward, since the packets themselves already contain all the
routing information. This fact, coupled with the on-demand nature of the protocol,
eliminates the need for the periodic route advertisement and neighbor detection
packets present in other protocols such as the Energy Aware Routing.
In the shortest path strategy, when a node S needs a route to destination D, it
broadcasts a route request message to its neighbors, including the last known
sequence number for that destination. The route request is flooded in a controlled
manner through the network until it reaches a node that has a route to the destination.
Each node that forwards the route request creates a reverse route for itself back to
node S. Examples are SPIN, Direct Diffusion, MECN, and the ZigBee standard.
When the route request reaches a node with a route to D, that node generates a route
reply containing the number of hops necessary to reach D and the sequence number
for D most recently seen by the node generating the reply. Importantly, each node that
forwards this reply back toward the originator of the route request (node S) creates a
forward route to D. The state created in each node remembers only the next hop and
not the entire route, as would be done in source routing.
Hierarchical-geographical strategy improves the traditional routing strategies based
on non-positional routing by making use of location information provided by GPS as
it minimizes flooding of its Location Request (LREQ) packets. Flooding is therefore directed, for traffic-control purposes, through selected nodes, called gateway nodes, which diffuse the LREQ messages. The purpose of gateway nodes is to minimize the flooding of broadcast messages in the network by reducing duplicate retransmissions in the same region.
Member nodes are converted into gateways when they receive messages from more
than one cluster-head. All the members of the cluster read and process the packet, but
do not retransmit the broadcast message. This technique significantly reduces the
number of retransmissions in a flooding or broadcast procedure in dense networks.
Therefore, only the gateway nodes retransmit packets between clusters (hierarchical
organization). Moreover, gateways only retransmit a packet from one gateway to
another in order to minimize unnecessary retransmissions, and only if the gateway
belongs to a different cluster-head.
We decided to evaluate source, shortest path and hierarchical-geographical routing
strategies since they represent the foundation of all of the above mentioned routing
protocols.
The simulator for evaluating the three routing strategies for our wireless sensor
network is implemented in OPNET 11.5, and the simulation models a network of 225
MICAz sensor nodes [2]. This configuration represents a typical scenario where
nodes are uniformly placed within an area of 1.5 km². We used the 2405-2480 MHz frequency range and a 250 kbps data rate for our
simulation, with a MICAz sensor node separation of 75 m. This scenario represents a
typical wireless sensor network with one sink node acting as a gateway to
communicate the WSN with a separate network (Internet). In our scenario one sensor
node communicates with the sink, and the sensor node sends a packet every second
(constant bit rate).
Figure 1 shows the latency between the sink and the source in milliseconds. Source
and shortest path routing strategies show a similar behavior. However, hierarchical-
geographical routing shows the poorest behavior due to the transmission of position information via hello packets, which produces more collisions in the wireless medium; in addition, the cluster formation mechanism also increases the latency.
[Figure 1: Latency (ms) vs. distance from sink to source]
Figure 2 shows the End-to-End Delay (EED) between the sink and the source in milliseconds. The hierarchical-geographical routing strategy shows the worst behavior because the static nature of the wireless sensor nodes causes synchronization of the packets. Synchronization arises from the simultaneous transmission of packets between neighbors. As a result, the frequent transmission of hello packets produces more collisions with data packets.
[Figure 2: End-to-End Delay (ms) vs. distance from sink to source (m)]
The three routing mechanisms show a similar behavior in terms of packet delivery
ratio because of their static nature, as illustrated in figure 3.
[Figure 3: Packet delivery ratio (%) vs. distance from sink to source (m)]
Figure 4 shows the Routing Overhead between the sink and the source. Routing
overhead is the total number of routing packets transmitted during the simulation.
Again, the shortest path routing strategy performs the best and the hierarchical-
geographical strategy the worst. This is due to the Hello packets used for the cluster
formation mechanism.
[Figure 4: Routing overhead (packets) vs. distance from sink to source (m)]
Figure 5 shows the overhead between the sink and the source. The shortest path
technique also has the best performance, with source routing and hierarchical-
geographical mechanism performing in a similar fashion.
[Figure 5: Overhead (packets) vs. distance from sink to source (m)]
Figure 6 shows the Routing Load between the sink and the source. This metric
provides an idea of how much network bandwidth is consumed by routing packets in
relation to the useful data packets actually received. Once again, the shortest path
routing strategy has the best performance, and the hierarchical-geographical
mechanism the worst.
In this paper, we have evaluated three basic routing strategies widely used in routing protocols for wireless sensor networks. Source routing outperforms shortest-path and hierarchical-geographical routing only in terms of latency. The main disadvantage of source routing is that it lacks a hop-count metric, which can frequently result in longer path selection. Shortest-path routing behaves well in terms of EED, routing overhead, overhead and routing load. Hierarchical-geographical routing performs the worst because it must send hello packets in order to acquire and transmit location information. This makes hierarchical-geographical routing in wireless sensor networks more costly, because it transmits hello packets more frequently, requiring greater bandwidth and energy resources. However, despite these significant disadvantages, hierarchical-geographical routing remains the routing option most often used in health, military, agriculture, robotics, environmental and structural monitoring. An important area of future research is to optimize the hierarchical-geographical routing algorithm to facilitate its use in large geographical areas requiring dense sensor distribution.
Acknowledgements
The authors acknowledge the support received by the National Council of Science
and Technology (CONACYT) under project grant No. 48391.
References
1. V. Rajaravivarma, Yi Yang, and Teng Yang. An Overview of Wireless Sensor Network and Applications. Proceedings of the 35th Southeastern Symposium on System Theory, pp. 432-436, 2003.
2. http://www.xbow.com/Products/Wireless_Sensor_Networks.htm
3. Stephan Olariu and Qingwen Xu. Information Assurance in Wireless Sensor Networks.
Proceedings of the 19th IEEE International Parallel and Distributed Processing Symposium,
pp. 236 -240, 2005.
4. Alan Mainwaring, Joseph Polastre, Robert Szewczyk, David Culler, John Anderson. Wireless Sensor Networks for Habitat Monitoring. Proceedings of the 1st ACM International Workshop on Wireless Sensor Networks and Applications, pp. 88-97, 2002.
5. ZigBee Specification. ZigBee Document 053474r06, version 1.0. December 2004.
6. Nirupama Bulusu, John Heidemann, Deborah Estrin. GPS-less Low Cost Outdoor Localization for Very Small Devices. IEEE Personal Communications, vol. 7, issue 5, pp. 28-34, 2000.
7. Yanjun Li, Jiming Chen, Ruizhong Lin, Zhi Wang. A Reliable Routing Protocol Design for Wireless Sensor Networks. International Conference on Mobile Ad-hoc and Sensor Systems, 2005.
8. I.F. Akyildiz, W. Su, Y. Sankarasubramaniam, E. Cayirci. Wireless sensor networks: a
survey. Computer Networks, pp. 393-422, 2002.
9. Wendi Rabiner Heinzelman, Joanna Kulik, and Hari Balakrishnan. Adaptive Protocols for Information Dissemination in Wireless Sensor Networks. Proceedings of the 5th Annual ACM/IEEE International Conference on Mobile Computing and Networking (MOBICOM), pp. 174-185, 1999.
10. Jamal N. Al-Karaki, Ahmed E. Kamal. Routing Techniques in Wireless Sensor Networks: A Survey. IEEE Wireless Communications, pp. 6-28, 2004.
11.Deborah Estrin, Ramesh Govindan, John Heidemann, Satish Kumar. Next Century
Challenges: Scalable Coordination in Sensor Networks. Proceedings of the 5th ACM/IEEE
International Conference on Mobile Computing and Networking, pp. 263-270, 1999.
12. Shah, R. C., Rabaey, J. M. Energy Aware Routing for Low Energy Ad Hoc Sensor Networks. IEEE Wireless Communications and Networking Conference, vol. 1, pp. 350-355, 2002.
13. David Braginsky, Deborah Estrin. Rumor Routing Algorithm for Sensor Networks. International Conference on Distributed Computing Systems (ICDCS-22), 2002.
14. Curt Schurgers, Mani B. Srivastava. Energy Efficient Routing in Wireless Sensor Networks. Proceedings of Communications for Network-Centric Operations: Creating the Information Force, 2001.
15.Maurice Chu, Horst Haussecker, and Feng Zhao. Scalable Information-Driven Sensor
Querying and Routing for ad hoc Heterogeneous Sensor Networks. International Journal of
High Performance Computing Applications, 2002.
16.Narayanan Sadagopan, Bhaskar Krishnamachari, and Ahmed Helmy. The ACQUIRE
Mechanism for Efficient Querying in Sensor Networks. Proceedings of the IEEE
International Workshop on Sensor Network Protocols and Applications (SNPA), in
conjunction with IEEE ICC, pp. 149-155, 2003.
17.Heinzelman, W. R. Chandrakasan, A. Balakrishnan, H. Energy-efficient communication
protocol for wireless microsensor networks. Proceedings of the 33rd Annual Hawaii
International Conference on System Sciences, vol. 2, pp. 1-10, 2000.
18. Stephanie Lindsey and Cauligi S. Raghavendra. PEGASIS: Power-Efficient GAthering in Sensor Information Systems. Proceedings of the IEEE Aerospace Conference, vol. 3, pp. 1125-1130, 2002.
19.Arati Manjeshwar and Dharma P. Agrawal. TEEN: A Routing Protocol for Enhanced
Efficiency in Wireless Sensor Networks. Proceedings of the 15th International Symposium
on Parallel and Distributed Processing, pp. 2009-2015, 2001.
20. Arati Manjeshwar and Dharma P. Agrawal. APTEEN: A Hybrid Protocol for Efficient Routing and Comprehensive Information Retrieval in Wireless Sensor Networks. Proceedings of the 16th International Symposium on Parallel and Distributed Processing, pp. 195-202, 2002.
21.Volkan Rodoplu and Teresa H. Meng. Minimum Energy Mobile Wireless Networks. IEEE
Journal on selected areas in communications, vol. 17, issue 8, pp. 1333-1344, 1999.
22.Li Li, Joseph Y. Halpern. Minimum-Energy Mobile Wireless Networks Revisited. IEEE
International Conference on Communications, vol. 1, pp. 278-283, 2001.
23. Ya Xu, John Heidemann, and Deborah Estrin. Geography-informed Energy Conservation for Ad Hoc Routing. Proceedings of the ACM/IEEE International Conference on Mobile Computing and Networking, pp. 70-84, 2001.
24.Yan Yu, Ramesh Govindan, Deborah Estrin. Geographic and Energy Aware Routing: a
recursive data dissemination protocol for wireless sensor networks. UCLA Computer
Science Department Technical Report UCLA/CSD-TR-01-0023, 2001.
Traditional TCP
Transmission Control Protocol (TCP) is the transport layer protocol that serves as an interface between client and server. TCP hands its segments to the network layer, which carries them across the network. The transport protocol was mainly designed for fixed end systems and fixed, wired networks; in simple terms, traditional (classical) TCP assumes a wired network, and its mechanisms have to be adapted before they work well over wireless links. The main mechanisms of traditional TCP that matter here are listed below.
2. Slow start: Slow start is the behavior TCP shows when a connection starts and after congestion has been detected. The sender always maintains a congestion window for the receiver. At first the sender sends one segment and waits for the acknowledgement. Once the acknowledgement arrives, it doubles the congestion window and sends two segments. After receiving the two acknowledgements, one for each segment, the sender again doubles the congestion window, and this process continues. This is called exponential growth. It is dangerous to keep doubling the congestion window, because the steps quickly become too large. The exponential growth therefore stops at the congestion threshold; once the window reaches this threshold, the increase in transmission rate becomes linear (i.e., the window grows by only one segment at a time). Linear increase continues until the sender notices a gap between the acknowledgements, i.e., a loss. In this case, the sender sets the congestion threshold to half of the current congestion window, starts slow start again, and the process continues.
Example: Assume that packets of data are being transferred from sender to receiver, with the sender operating at 2 Mbps and the receiver at 1 Mbps. The packets being transferred from sender to receiver create a traffic jam inside the network, and because of this the network may drop some of the packets. When these packets are lost, the acknowledgements the receiver sends let the sender identify the missing data; reacting to this is called congestion control. Now the slow-start mechanism takes over: the sender slows down the packet transfer, and the traffic is slightly reduced. After some time it requests fast retransmission, through which the missing packets can be sent again as quickly as possible. After all these mechanisms, the transfer of the next packets begins.
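The following minimal Python sketch (textbook behaviour in units of segments, not a real TCP implementation) shows how the congestion window grows exponentially up to the threshold, then linearly, and how a detected loss halves the threshold and restarts slow start:

def next_cwnd(cwnd, ssthresh, loss_detected):
    if loss_detected:
        # Loss: set the threshold to half the current window and restart.
        return 1, max(cwnd // 2, 2)
    if cwnd < ssthresh:
        return cwnd * 2, ssthresh   # exponential growth (slow start)
    return cwnd + 1, ssthresh       # linear growth (congestion avoidance)

cwnd, ssthresh = 1, 16
for rtt in range(10):
    print(f"RTT {rtt}: cwnd={cwnd} segments, ssthresh={ssthresh}")
    cwnd, ssthresh = next_cwnd(cwnd, ssthresh, loss_detected=(rtt == 6))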
What is Transport Layer?
The Transport Layer in the Open Systems Interconnection (OSI) model is responsible for end-to-end delivery of the complete message over a network, whereas the network layer is concerned with the end-to-end delivery of individual packets and does not recognize any relationship between those packets.
This figure shows the relationship of the transport layer to the network and
session layer.
Computers often run many programs at the same time. Because of this, source-to-destination delivery means delivery from a specific process (a currently running program) on one computer to a specific process on the other computer, not merely from one computer to the next.
For this reason, the transport layer adds a specific type of address to its header, referred to as a service-point address or port address.
Using this address, each packet reaches the correct computer, and the transport layer also delivers the complete message to the correct process on that computer.
3. Connection Control
5. Flow control
The transport layer is also responsible for the flow control mechanism, but unlike the data link layer it does not perform flow control across a single link; it performs it end to end, between the two end nodes.
By imposing flow control techniques, data loss caused by a fast sender and a slow receiver can be prevented.
For instance, it uses the sliding window protocol: the receiver sends a window back to the sender to advertise how much data it can receive.
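As an illustration (window and segment sizes are made up), a sender respecting a receiver-advertised window could be sketched like this:

def sender_can_send(bytes_in_flight, advertised_window):
    # The sender may transmit only while unacknowledged data fits in the window.
    return bytes_in_flight < advertised_window

advertised_window = 4096   # receiver tells the sender how much it can buffer
segment_size = 1024
bytes_in_flight = 0

while sender_can_send(bytes_in_flight, advertised_window):
    bytes_in_flight += segment_size
    print(f"sent {segment_size} B, now {bytes_in_flight} B unacknowledged")
# The sender now stalls until ACKs shrink bytes_in_flight or the window grows.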
6. Error Control
Error control is also performed end to end, as in the data link layer; at this layer it ensures that the entire message arrives at the receiving transport layer without error (damage, loss or duplication). Error correction is achieved through retransmission of the packet.
To confirm whether the data has arrived and to check its integrity, the layer uses ACK and NACK services to inform the sender.
i. The middleware generally gathers information from both the application and network protocols, determines how to support the connected applications, and at the same time adjusts network protocol parameters.
ii. Sometimes the middleware goes under the network protocols layer and interfaces
with the operating system directly. WSN middleware needs to dynamically adjust
network protocol parameters and configure the sensor nodes based on application
requirements in terms of performance improvement, QoS, and energy conservation.
iii. The WSN middleware can abstract the common properties of applications and offer
general purpose services that can be used by a wide range of applications.
1. Random Access Protocol: In this, all stations have the same priority, that is, no station has more priority than another. Any station can send data depending on the medium's state (idle or busy). It has two features:
1. There is no fixed time for sending data
2. There is no fixed sequence of stations sending data
The Random access protocols are further subdivided as:
(a) ALOHA – It was designed for wireless LANs but is also applicable to any shared medium. In this, multiple stations can transmit data at the same time, which can lead to collisions and garbled data.
Pure Aloha:
When a station sends data, it waits for an acknowledgement. If the acknowledgement does not arrive within the allotted time, the station waits for a random amount of time called the back-off time (Tb) and re-sends the data. Since different stations wait for different amounts of time, the probability of further collisions decreases.
Slotted Aloha:
It is similar to pure aloha, except that we divide time into slots and
sending of data is allowed only at the beginning of these slots. If a station
misses out the allowed time, it must wait for the next slot. This reduces
the probability of collision.
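The classic throughput estimates for the two ALOHA variants can be computed directly (G is the offered load in frames per frame time); the short calculation below illustrates why slotted ALOHA roughly doubles the peak throughput:

import math

def pure_aloha_throughput(G):
    return G * math.exp(-2 * G)    # peaks at about 18.4% when G = 0.5

def slotted_aloha_throughput(G):
    return G * math.exp(-G)        # peaks at about 36.8% when G = 1.0

for G in (0.25, 0.5, 1.0):
    print(G, round(pure_aloha_throughput(G), 3), round(slotted_aloha_throughput(G), 3))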
(b) CSMA – Carrier Sense Multiple Access ensures fewer collisions as the
station is required to first sense the medium (for idle or busy) before
transmitting data. If the medium is idle, the station sends data; otherwise it waits till the channel becomes idle. However, there is still a chance of collision in CSMA due to propagation delay. For example, if station A wants to send data, it will first sense the medium. If it finds the channel idle, it will start sending data. However, before the first bit of data from station A reaches station B (delayed due to propagation delay), station B may also want to send data; sensing the medium, it will also find it idle and will also send data. This will result in a collision between the data from stations A and B.
CSMA access modes:
1-persistent: The node senses the channel; if it is idle it sends the data, otherwise it continuously keeps checking the medium for being idle and transmits unconditionally (with probability 1) as soon as the channel becomes idle.
Non-persistent: The node senses the channel; if it is idle it sends the data, otherwise it checks the medium again after a random amount of time (not continuously) and transmits when it is found idle.
P-persistent: The node senses the medium; if it is idle it sends the data with probability p. If the data is not transmitted (probability 1-p), it waits for some time and checks the medium again; if it is found idle, it again sends with probability p. This process repeats until the frame is sent. It is used in Wi-Fi and packet radio systems.
O-persistent: The transmission order of the nodes is decided beforehand, and transmission occurs in that order. If the medium is idle, a node waits for its time slot to send data.
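A small sketch of the p-persistent decision rule (the channel-sensing function and the value of p are placeholders) may help:

import random

def p_persistent_attempt(channel_is_idle, p=0.3):
    # Returns 'backoff' (busy), 'transmit' (idle, prob. p) or 'defer' (idle, prob. 1-p).
    if not channel_is_idle():
        return 'backoff'
    if random.random() < p:
        return 'transmit'
    return 'defer'   # wait a slot, then sense the channel again

# Example with a channel that always appears idle:
print([p_persistent_attempt(lambda: True) for _ in range(5)])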
(c) CSMA/CD – Carrier sense multiple access with collision detection.
Stations can terminate transmission of data if collision is detected.
(d) CSMA/CA – Carrier sense multiple access with collision avoidance. Collision detection relies on the sender comparing the received signal with its own: if there is just one signal (its own), the data was sent successfully, but if there are two signals (its own and the one it collided with), a collision has occurred. To distinguish between these two cases, a collision must have a large impact on the received signal. This is true in wired networks but not in wireless ones, where the colliding signal can be too weak to detect, so CSMA/CA is used instead of collision detection.
CSMA/CA avoids collision by:
1. Interframe space – Station waits for medium to become idle and if found
idle it does not immediately send data (to avoid collision due to
propagation delay) rather it waits for a period of time called Interframe
space or IFS. After this time it again checks the medium for being idle.
The IFS duration depends on the priority of station.
2. Contention Window – It is an amount of time divided into slots. A sender that is ready to send data chooses a random number of slots as its wait time; this number doubles every time the medium is not found idle. If the medium is found busy, the station does not restart the entire process; rather, it pauses the countdown and resumes the timer when the channel is found idle again.
3. Acknowledgement – The sender re-transmits the data if
acknowledgement is not received before time-out.
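For example, the doubling contention window can be sketched as follows (CW_MIN and CW_MAX are illustrative values, not taken from a particular standard):

import random

CW_MIN, CW_MAX = 15, 1023

def backoff_slots(retry):
    # The window doubles on every retry until it reaches CW_MAX.
    cw = min((CW_MIN + 1) * (2 ** retry) - 1, CW_MAX)
    return random.randint(0, cw)   # random number of idle slots to wait

for retry in range(5):
    print(f"retry {retry}: wait {backoff_slots(retry)} slots")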
2. Controlled Access:
In this, the data is sent by that station which is approved by all other stations.
Reservation
In the reservation method, a station needs to make a reservation before
sending data.
The time line has two kinds of periods:
1. Reservation interval of fixed time length
2. Data transmission period of variable frames.
If there are M stations, the reservation interval is divided into M slots, and
each station has one slot.
Suppose station 1 has a frame to send; it transmits a 1 bit during slot 1. No other station is allowed to transmit during this slot.
In general, the i-th station may announce that it has a frame to send by inserting a 1 bit into the i-th slot. After all M slots have been checked, each station knows which stations wish to transmit.
The stations that have reserved their slots transfer their frames in that order.
After the data transmission period, the next reservation interval begins.
Since everyone agrees on who goes next, there will never be any
collisions.
The following figure shows a situation with five stations and a five-slot
reservation frame. In the first interval, only stations 1, 3, and 4 have made
reservations. In the second interval, only station 1 has made a reservation.
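The reservation frame itself is just a bitmap; the sketch below reproduces the five-station example (stations 1, 3 and 4 reserve slots in the first interval):

M = 5                                          # number of stations = number of slots
has_frame = {1: True, 3: True, 4: True}        # stations 1, 3 and 4 want to send

reservation_frame = [1 if has_frame.get(i + 1) else 0 for i in range(M)]
print("reservation frame:", reservation_frame)  # [1, 0, 1, 1, 0]

transmit_order = [i + 1 for i, bit in enumerate(reservation_frame) if bit]
print("stations transmit in this order:", transmit_order)  # collision-free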
Polling
Polling process is similar to the roll-call performed in class. Just like the
teacher, a controller sends a message to each node in turn.
In this, one acts as a primary station(controller) and the others are
secondary stations. All data exchanges must be made through the
controller.
The message sent by the controller contains the address of the node
being selected for granting access.
Although all nodes receive the message, only the addressed one responds to it and sends data, if any. If there is no data, usually a "poll reject" (NAK) message is sent back.
Problems include high overhead of the polling messages and high
dependence on the reliability of the controller.
Token Passing
In token passing scheme, the stations are connected logically to each other in
form of ring and access to stations is governed by tokens.
A token is a special bit pattern or a small message, which circulate from one
station to the next in some predefined order.
In a token ring, the token is passed from one station to the adjacent station in the ring, whereas in the case of a token bus, each station uses the bus to send the token to the next station in some predefined order.
In both cases, the token represents permission to send. If a station has a frame queued for transmission when it receives the token, it can send that frame before it passes the token to the next station. If it has no queued frame, it simply passes the token on.
After sending a frame, each station must wait for all N stations (including itself)
to send the token to their neighbours and the other N – 1 stations to send a
frame, if they have one.
Problems such as duplication of the token, loss of the token, and the insertion or removal of a station need to be tackled for correct and reliable operation of this scheme.
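A rough sketch of one rotation of the token around a logical ring (station names and queued frames are invented):

from collections import deque

stations = deque(["A", "B", "C", "D"])           # predefined token order
queued = {"A": ["frame-1"], "C": ["frame-2"]}    # frames waiting at A and C

for _ in range(len(stations)):
    holder = stations[0]                         # station currently holding the token
    if queued.get(holder):
        print(holder, "sends", queued[holder].pop(0))
    stations.rotate(-1)                          # token passed to the next station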
3. Channelization:
In this, the available bandwidth of the link is shared in time, frequency and
code to multiple stations to access channel simultaneously.
Frequency Division Multiple Access (FDMA) – The available bandwidth
is divided into equal bands so that each station can be allocated its own
band. Guard bands are also added so that no two bands overlap to avoid
crosstalk and noise.
Time Division Multiple Access (TDMA) – In this, the bandwidth is
shared between multiple stations. To avoid collision time is divided into
slots and stations are allotted these slots to transmit data. However, there is an overhead of synchronization, as each station needs to know its time slot. This is resolved by adding synchronization bits to each slot. Another
issue with TDMA is propagation delay which is resolved by addition of
guard bands.
Code Division Multiple Access (CDMA) – One channel carries all
transmissions simultaneously. There is neither division of bandwidth nor
division of time. For example, if there are many people in a room all speaking at the same time, perfect reception of the data is still possible provided each pair of people converses in a different language. Similarly, data from different stations can be transmitted simultaneously using different code languages.
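A tiny numerical sketch of this idea, using two orthogonal two-chip codes (values chosen only for illustration):

code_a = [+1, +1]          # station A's chip sequence
code_b = [+1, -1]          # station B's chip sequence (orthogonal to A's)

def spread(bit, code):     # bit is +1 or -1
    return [bit * chip for chip in code]

def despread(signal, code):
    return sum(s * c for s, c in zip(signal, code)) / len(code)

# Both stations transmit at once; the channel simply adds the chips together.
channel = [a + b for a, b in zip(spread(+1, code_a), spread(-1, code_b))]
print(despread(channel, code_a))   # 1.0  -> A sent a 1
print(despread(channel, code_b))   # -1.0 -> B sent a 0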
Ethernet protocol definition: The most popular and oldest LAN technology is the Ethernet protocol, so it is frequently used in LAN environments and appears in almost all networks such as offices, homes, public places, enterprises, and universities. Ethernet has gained huge popularity because of its high data rates over long distances using optical media.
The Ethernet protocol uses a star or linear bus topology and is the foundation of the IEEE 802.3 standard. The main reasons Ethernet is so widely used are that it is simple to understand, maintain, and implement, provides flexibility, and permits low-cost network implementation.
The data link layer in the network system mainly addresses the way that
data packets are transmitted from one type of node to another. Ethernet
uses an access method called CSMA/CD (Carrier Sense Multiple
Access/Collision Detection). This is a system where every computer listens
to the cable before transmitting anything throughout the network.
Applications
The applications of Ethernet protocol include the following.
Standard Ethernet
Standard Ethernet is also referred to as Basic Ethernet. It uses 10Base5 coaxial cables for
communications. Ethernet provides service up to the data link layer. At the data link layer,
Ethernet divides the data stream received from the upper layers and encapsulates it into
frames, before passing them on to the physical layer.
100-Base-T4
o This has four pairs of UTP of Category 3, two of which are bi-directional
and the other two are unidirectional.
o In each direction, three pairs can be used simultaneously for data
transmission.
o Each twisted pair is capable of transmitting a maximum of 25Mbaud
data. Thus the three pairs can handle a maximum of 75Mbaud data.
o It uses the encoding scheme 8B/6T (eight binary/six ternary).
100-Base-TX
o This has either two pairs of unshielded twisted pairs (UTP) category 5
wires or two shielded twisted pairs (STP) type 1 wires. One pair
transmits frames from hub to the device and the other from device to
hub.
o Maximum distance between hub and station is 100m.
o It has a data rate of 100 Mbps (125 Mbaud signalling rate).
o It uses MLT-3 encoding scheme along with 4B/5B block coding.
100-BASE-FX
o This has two pairs of optical fibers. One pair transmits frames from hub
to the device and the other from device to hub.
o Maximum distance between hub and station is 2000m.
o It has a data rate of 100 Mbps (125 Mbaud signalling rate).
o It uses NRZ-I encoding scheme along with 4B/5B block coding.
Gigabit Ethernet
In computer networks, Gigabit Ethernet (GbE) is the family of Ethernet technologies that achieve theoretical data rates of 1 gigabit per second (1 Gbps). It was introduced in 1998-1999 and is defined by the IEEE 802.3z (fiber) and 802.3ab (twisted pair) standards.
1000BASE-SX
Defined by IEEE 802.3z standard
Uses a pair of fibre optic cables operating at a shorter wavelength of 770 – 860 nm
The maximum segment length varies from 220 – 550 metres depending upon the
fiber properties.
Uses NRZ line encoding and 8B/10B block encoding
1000BASE-LX
Defined by IEEE 802.3z standard
Uses a pair of fibre optic cables operating at a longer wavelength of 1270 – 1355 nm
Maximum segment length is about 550 metres over multimode fibre
Can cover distances up to 5 km over single-mode fibre
Uses NRZ line encoding and 8B/10B block encoding
1000BASE-T
Defined by IEEE 802.3ab standard
Uses four lanes (pairs) of twisted-pair cable (Cat-5, Cat-5e, Cat-6, Cat-7)
Maximum segment length is 100 metres
10-Gigabit Ethernet
The IEEE 802.3 committee that produced Gigabit Ethernet was asked to start work on 10-Gigabit Ethernet, and the work pattern was the same as for the previous Ethernet standards. 10 Gbps is a truly great speed, 1000 times faster than the original Ethernet. It is needed inside data centers and exchanges to connect high-end routers, switches, and servers, as well as in long-distance, high-bandwidth trunks between offices that enable entire metropolitan-area networks based on Ethernet and fiber. Short-distance connections may use copper or fiber, while long connections use optical fiber. 10-Gigabit Ethernet supports only full-duplex operation; CSMA/CD is no longer part of the design, and the standards concentrate on the details of physical layers that can run at very high speed. Compatibility still matters, though, so 10-Gigabit Ethernet interfaces auto-negotiate and fall back to the highest speed supported by both ends of the line. The main kinds of 10-Gigabit Ethernet are listed below. Multimode fiber at the 0.85 µm (short) wavelength is used for medium distances, and single-mode fiber at 1.3 µm (long) and 1.5 µm (extended) wavelengths is used for long distances. To make it suitable for wide-area applications, one can use 10GBase-ER, which can run for distances of 40 km. All the versions of 10-Gigabit Ethernet send a serial stream of information that is produced by scrambling the data bits and then encoding them with a 64B/66B code.
10GBase-SR
Defined by IEEE 802.3ae standard
Uses fiber optic cables
Maximum segment length is 300 m
Deployed using multimode fibers at a 0.85 μm (short) wavelength
10GBase-LR
Defined by IEEE 802.3ae standard
Uses fiber optic cables
Maximum segment length is 10 km
Deployed using single-mode fibers at a 1.3 μm (long) wavelength
10GBase-ER
Defined by IEEE 802.3ae standard
Uses fiber optic cables
Maximum segment length is 40 km
Deployed using single-mode fibers at a 1.5 μm (extended) wavelength
10GBase-CX4
Defined by IEEE 802.3ak standard
Uses 4 pairs of twin-axial cables
Maximum segment length is 15 m
Uses 8B/10B coding
10GBase-T
Defined by IEEE 802.3an standard
Uses 4 pairs of unshielded twisted pair cables
Maximum segment length is 100 m
Uses low-density parity-check code (LDPC code)
Telephone Network
Telephone Network is used to provide voice communication. Telephone
Network uses Circuit Switching. Originally, the entire network was referred to
as a plain old telephone system (POTS), which used analog signals. With the advancement of technology in the computer era came the ability to carry data in addition to voice. Today's network is both analog and digital.
Major Components of Telephone Network: There are three major
components of the telephone network:
1. Local loops
2. Trunks
3. Switching Offices
There are various levels of switching offices such as end offices, tandem
offices, and regional offices. The entire telephone network is as shown in
the following figure:
Local Loops: Local Loops are the twisted pair cables that are used to
connect a subscriber telephone to the nearest end office or local central
office. For voice purposes, its bandwidth is 4000 Hz. It is very interesting to
examine the telephone number that is associated with each local loop. The office is defined by the first three digits, and the local loop number is defined by the next four digits.
Trunks: It is a type of transmission medium used to handle the
communication between offices. Through multiplexing, trunks can handle
hundreds or thousands of connections. Mainly transmission is performed
through optical fibers or satellite links.
Switching Offices: It would be impractical to have a permanent physical link between every pair of subscribers. To avoid this, the telephone company uses switches located in switching offices. A switch is able to connect several local loops or trunks and allows a connection between different subscribers.
1. STS Multiplexer:
Performs multiplexing of signals
Converts electrical signal to optical signal
2. STS Demultiplexer:
Performs demultiplexing of signals
Converts optical signal to electrical signal
3. Regenerator:
It is a repeater that takes an optical signal and regenerates it (increases its strength).
4. Add/Drop Multiplexer:
It allows signals coming from different sources to be added into a given path, or removes a signal from it.
SONET Connections:
Section: Portion of network connecting two neighbouring devices.
Line: Portion of network connecting two neighbouring multiplexers.
Path: End-to-end portion of the network.
SONET Layers:
Working of ATM:
ATM standard uses two types of connections. i.e., Virtual path connections
(VPCs) which consist of Virtual channel connections (VCCs) bundled
together which is a basic unit carrying a single stream of cells from user to
user. A virtual path can be created end-to-end across an ATM network, because the network does not route the cells on the basis of individual virtual circuits. In case of a major failure, all cells belonging to a particular virtual path are re-routed the same way through the ATM network, thus helping in faster recovery.
Switches connected to subscribers use both VPIs and VCIs to switch the cells. Virtual path and virtual connection switches can have different virtual channel connections between them, serving the purpose of creating a virtual trunk between the switches that can be handled as a single entity. The basic operation is straightforward: the switch looks up the connection value in its local translation table, determines the outgoing port of the connection, and assigns the new VPI/VCI value of the connection on that link.
ATM Layers:
1. ATM Adaptation Layer (AAL) –
It is meant for isolating higher-layer protocols from the details of ATM processes, and it prepares user data for conversion into cells by segmenting it into 48-byte cell payloads. The AAL protocol accepts transmissions from upper-layer services and helps map applications, e.g., voice and data, to ATM cells.
2. Physical Layer –
It manages the medium-dependent transmission and is divided into two
parts physical medium-dependent sublayer and transmission
convergence sublayer. The main functions are as follows:
It converts cells into a bitstream.
It controls the transmission and receipt of bits in the physical medium.
It can track the ATM cell boundaries.
It looks after the packaging of cells into the appropriate type of frames.
3. ATM Layer –
It handles transmission, switching, congestion control, cell header processing, sequential delivery, etc. It is responsible for simultaneously sharing the virtual circuits over the physical link (known as cell multiplexing) and for passing cells through an ATM network (known as cell relay), making use of the VPI and VCI information in the cell header.
ATM Applications:
1. ATM WANs –
It can be used as a WAN to send cells over long distances, with a router serving as an end-point between the ATM network and other networks; such a router has two protocol stacks.
1. Frequency Bands: RF spectrum is divided into various frequency bands, each with
distinct properties and applications. These bands range from very low frequencies
(VLF) to extremely high frequencies (EHF), covering a wide range of electromagnetic
waves. Each band has its specific characteristics, such as propagation range, ability to
penetrate obstacles, and bandwidth.
2. Waveform Characteristics: The frequency of a radio transmission directly influences
its waveform characteristics. Lower frequencies produce longer wavelengths, while
higher frequencies have shorter wavelengths.
3. Propagation: Different frequency bands exhibit different propagation characteristics.
Lower frequencies, such as those in the LF and MF bands, tend to propagate over
long distances and penetrate obstacles like buildings and foliage. However, they
typically require larger antennas for efficient transmission and reception. Higher
frequencies, like those in the VHF, UHF, and above bands, are more susceptible to
attenuation by obstacles but offer higher data rates and require smaller antennas.
4. Regulatory Considerations: The use of radio frequencies is regulated by
governmental bodies to prevent interference and ensure efficient spectrum
utilization. Regulatory agencies allocate specific frequency bands for various
purposes, such as broadcasting, mobile communication, aviation, military, and
amateur radio. Users must adhere to these regulations to operate radio equipment
legally.
5. Modulation Techniques: Radio signals carry information through modulation,
where the characteristics of the electromagnetic wave are varied to represent data.
Common modulation techniques include Amplitude Modulation (AM), Frequency
Modulation (FM), and Phase Modulation (PM). The choice of modulation scheme
depends on factors like bandwidth efficiency, noise resilience, and compatibility with
existing systems.
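As a simple illustration of amplitude modulation (frequencies, duration and modulation index below are arbitrary example values):

import numpy as np

fs = 100_000                              # sample rate in Hz
t = np.arange(0, 0.01, 1 / fs)            # 10 ms of samples
message = np.sin(2 * np.pi * 1_000 * t)   # 1 kHz message signal
carrier = np.sin(2 * np.pi * 10_000 * t)  # 10 kHz carrier
m = 0.5                                   # modulation index

am_signal = (1 + m * message) * carrier   # classic AM waveform
print(am_signal[:5])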
1. Very Low Frequency (VLF): 3 kHz to 30 kHz - Used for submarine communication due
to its ability to penetrate seawater.
2. Low Frequency (LF): 30 kHz to 300 kHz - Used for navigation beacons and AM radio
broadcasting.
3. Medium Frequency (MF): 300 kHz to 3 MHz - Used for AM radio broadcasting and
aviation communications.
4. High Frequency (HF): 3 MHz to 30 MHz - Used for long-distance communication,
including amateur radio, international broadcasting, and military communications.
5. Very High Frequency (VHF): 30 MHz to 300 MHz - Used for FM radio broadcasting,
television broadcasting, aircraft communication, and air traffic control.
6. Ultra High Frequency (UHF): 300 MHz to 3 GHz - Used for terrestrial television
broadcasting, satellite communication, and mobile phones.
7. Super High Frequency (SHF): 3 GHz to 30 GHz - Used for satellite communication,
microwave links, and radar systems.
8. Extremely High Frequency (EHF): 30 GHz to 300 GHz - Used for satellite
communication and some types of radar systems.
The specific frequency within each band may vary depending on the application and
regulatory requirements in a particular region. Different frequency bands offer
different propagation characteristics and are suitable for different purposes.
Signals
Wireless signals, also known as radio signals or electromagnetic waves, are a form of
energy that travels through space without the need for a physical medium like wires.
These signals are crucial for various types of wireless communication, including radio
communication, Wi-Fi, cellular networks, and satellite communication. Here are some
key aspects of wireless signals:
What is an antenna?
An antenna is a specialized transducer that converts electric current into
electromagnetic (EM) waves or vice versa. Antennas are used to transmit and
receive nonionizing EM fields, which include radio waves, microwaves, infrared
radiation (IR) and visible light.
Radio wave antennas and microwave antennas are used extensively throughout
most industries and in our day-to-day lives. Infrared and visible light antennas are
less common. They're still deployed in a variety of settings, although their use
tends to be more specialized.
A receiving antenna intercepts EM waves transmitted through the air. From these
waves, the antenna generates a small amount of current, which varies depending on
the strength of the signal. The current is passed to the receiving device, where it is
transformed for its specific environment. For example, a car's antenna might pick
up the FM signal from the radio station. The antenna converts the signal's radio
waves to current, which is fed to the car's radio. The radio amplifies the current and
in other ways transforms it and delivers it as music to the speakers.
Signal Propagation
Signal propagation refers to the process by which electromagnetic signals travel
from a transmitter to a receiver through a medium, such as air, water, or space.
Understanding signal propagation is essential in designing and optimizing wireless
communication systems. Here are some key aspects of signal propagation:
1. Path Loss: Path loss refers to the reduction in signal strength as the signal travels
through the propagation medium. It occurs due to factors such as distance,
absorption, scattering, and diffraction. Path loss increases with distance, and higher frequencies experience more significant path loss due to increased absorption and scattering.
2. Free Space Path Loss (FSPL): In free space, where there are no obstacles or
reflections, signal strength decreases with the square of the distance from the
transmitter.
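The standard free-space path-loss formula can be evaluated directly; the distance and frequency below are just example values:

import math

def fspl_db(distance_m, freq_hz, c=3e8):
    # FSPL(dB) = 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c)
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / c))

print(round(fspl_db(100, 2.4e9), 1))   # roughly 80 dB at 100 m and 2.4 GHz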
3. Multipath Propagation: In real-world environments, signals often reach the receiver
through multiple paths due to reflections, diffraction, and scattering off objects in the
environment. This phenomenon is known as multipath propagation and can lead to
constructive or destructive interference at the receiver, affecting signal quality.
4. Fading: Fading refers to the variation in signal strength experienced by a receiver
over time or distance due to multipath propagation. Fading can be classified into two
main types:
Slow Fading: Occurs over large distances or time scales and is typically
caused by changes in the environment, such as terrain or atmospheric
conditions.
Fast Fading: Occurs over short distances or time scales and is caused by rapid
changes in the signal path due to movement of objects or the
transmitter/receiver.
5. Shadowing: Shadowing occurs when obstacles such as buildings, hills, or vegetation
block or attenuate the signal, creating regions of reduced signal strength known as
shadow zones. Shadowing effects are particularly significant at higher frequencies,
where signals are more susceptible to obstruction.
6. Doppler Effect: The Doppler Effect occurs when there is relative motion between the
transmitter, receiver, or both. It causes a shift in the frequency of the received signal,
which can affect the performance of communication systems, especially in mobile
applications such as cellular networks and satellite communication.
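The maximum Doppler shift follows from the standard approximation f_d = v·f/c (the speed and carrier frequency below are example values):

def doppler_shift_hz(speed_mps, carrier_hz, c=3e8):
    return speed_mps * carrier_hz / c

# A receiver moving at 30 m/s (about 108 km/h) on a 900 MHz carrier:
print(round(doppler_shift_hz(30, 900e6), 1))   # about 90 Hz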
7. Channel Models: To model and predict signal propagation in various environments
accurately, researchers have developed empirical and theoretical channel models.
These models consider factors such as path loss, multipath propagation, fading, and
shadowing to simulate real-world communication scenarios.
Multiplexing
FDM's most common applications are traditional radio and television broadcasting, mobile and satellite stations, and cable television.
For example: in cable TV, only one cable reaches the customer's locality, but the service provider can send multiple television channels or signals simultaneously over that cable to all customers without any interference. The customers have to tune to the appropriate frequency (channel) to access the required signal.
In FDM, several frequency bands can work simultaneously without any time constraint.
Advantages of FDM
o The concept of frequency division multiplexing (FDM) applies to both analog signals
and digital signals.
o It facilitates you to send multiple signals simultaneously within a single connection.
Disadvantages of FDM
o It is less flexible.
o In FDM, the bandwidth wastage may be high.
Usage
It is used in Radio and television broadcasting stations, Cable TV etc.
In Time Division Multiplexing (TDM), time is divided into frames of equal intervals so that each signal can use the entire frequency spectrum during its own time frame.
Advantages of TDM
Disadvantages of TDM
Usage
Advantages of CDM
o It is highly efficient.
o It faces fewer interferences.
Disadvantages of CDM
Usage
Advantages of SDM
Disadvantages of SDM
Usage
Modulation
Modulation is a process of mixing signals with a sinusoid to produce a new form of
signals. The newly produced signal has certain benefits over an un-modulated signal.
Mixing of low-frequency signal with a high-frequency carrier signal is called Modulation.
o In other words, you can say that "Modulation is the process of converting one
form of signals into another form of signals." For example, Analog signals to
Digital signals or Digital signals to Analog signals.
o Modulation is also called signal modulation.
o Example: Let's understand the concept of signal modulation by a simple example.
Suppose an Analog transmission medium is available to transmit signals, but you
have a digital signal that needs to be transmitted through this Analog medium. So, to
complete this task, you have to convert the digital signal into an analog signal. This
process of conversion of signals from one form to another form is called Modulation.
Advantages of Modulation
Following is the list of some advantages of implementing Modulation in the
communication systems:
Types of Modulation
Primarily Modulation can be classified into two types:
o Digital Modulation
o Analog Modulation
Digital Modulation
Digital Modulation is a technique in which digital signals/data (for example, baseband signals) are converted into analog signals.
Digital Modulation can further be classified into four types:
Cellular Networks
A Cellular Network is formed of some cells. The cell covers a geographical region and
has a base station analogous to 802.11 AP which helps mobile users attach to the
network and there is an air interface of physical and data link layer protocol between
mobile and base station. All these base stations are connected to the Mobile
Switching Center which connects cells to a wide-area net, manages call setup, and
handles mobility.
There is a certain radio spectrum that is allocated to the base station and to a
particular region and that now needs to be shared. There are two techniques for
sharing mobile-to-base station radio spectrum:
Combined FDMA/TDMA: It divides the spectrum into frequency channels and
divides each channel into time slots.
Code Division Multiple Access (CDMA): It allows the reuse of the same spectrum over all cells, giving a net capacity improvement. Two frequency bands are used: one for the forward channel (cell-site to subscriber) and one for the reverse channel (subscriber to cell-site).
Cell Fundamentals
In theory a cell would be circular, because a base station radiates the same power in all directions and has the same sensitivity on all sides. In practice, however, putting two or three circles together leaves interleaving gaps or makes them intersect each other, so to solve this problem we can use an equilateral triangle, a square, or a regular hexagon; the hexagonal cell is closest to a circle and is used for system design. The co-channel reuse ratio is given by:
DL / RL = √(3N)
Where,
DL = Distance between co-channel cells
RL = Cell Radius
N = Cluster Size
The number of cells in cluster N determines the amount of co-channel interference
and also the number of frequency channels available per cell.
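A quick worked example of the formula above for a few common cluster sizes:

import math

def reuse_ratio(N):
    # Co-channel reuse ratio D/R = sqrt(3N)
    return math.sqrt(3 * N)

for N in (3, 4, 7, 12):
    print(f"N = {N}: D/R = {reuse_ratio(N):.2f}")
# A larger cluster pushes co-channel cells further apart (less interference),
# but leaves fewer frequency channels per cell.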
Cell Splitting
When the number of subscribers in a given area increases, more channels must be made available in that area; this is done by cell splitting. A single small cell is introduced midway between two co-channel cells.
Satellite Broadcasting
A satellite receiver decodes the incoming signals and presents them to the user
through standard television or satellite radio. In the case of satellite television, the
signals coming in are encoded and digitally compressed so as to minimize the size
and so that the provider can bundle more channels into the signal. The user can then
select which channel to decode and view. The compression used for satellite digital
TV is often MPEG compression so that quality can be retained.
GSM (or Global System for Mobile Communications) is defined as a set
of mobile communications standards and protocols governing second-
generation or 2G networks, first developed and deployed in Europe.
GSM is a digital cellular communication standard that is universally accepted. The European
Telecommunications Standards Institute created the GSM standard to define the procedures for second-
generation digital mobile networks that are used by devices such as mobile phones. It is a wide-area
communications technology program that utilizes digital radio channeling to bring forth audio,
information, and multimedia communication systems.
GSM is a mobile network and not a computer network – this implies that devices interact with it by looking for nearby cells. GSM, along with other technological advances, has influenced the evolution of mobile wireless telecommunication services. A GSM system manages communication between mobile stations, base stations, and switching systems.
Every GSM radio channel is 200 kHz wide and is additionally divided into frames of 8 time slots. The global system for mobile communication (GSM) was first known as Groupe Special Mobile, which is the reason for the acronym. The GSM system comprises mobile stations, base stations, and interconnected switching systems.
GSM enables 8 to 16 voice users to share every radio channel, and every radio transmission location may have multiple radio channels. Because of its simplicity, affordability, and accessibility, GSM is presently the most commonly used mobile network technology.
However, this is likely to change in the coming years. Various programs have been designed without the
advantage of standardized provisions all through the transformation of mobile telecommunication services.
This significantly created many issues tied directly to consistency as digital radio technology advanced.
The global system for mobile communication is designed to address these issues. GSM accounts for about
70% of the world's digital cellular services. GSM digitizes and compresses the information before transmitting it through a channel together with other streams of user data, each in its own time slot. For
the vast majority of the world, it is also the leading 2G digital cell phone standard. It governs how cell
phones interact with the land-based tower system.
In Europe, GSM operates in the 900MHz and 1.8GHz bands, while in the United States, it functions in the
1.9GHz PCS band. GSM describes the overall mobile network, not just the Time division multiple access
air interface, as it is centered on a circuit-switched structure that splits every 200 kHz channel into eight 25
kHz time frames. It is a rapidly expanding transmission technique, with over 250 million GSM users by the
early 2000s. The one billionth GSM consumer was linked by mid-2004.
While using the 900 MHz bandwidth was one of the initial plans for the Global System of the mobile
communication path, it is no longer mandatory. GSM systems since then have grown and can now operate
in a variety of frequency bands.
The GSM frequency bandwidths are generally separated into two paths: 900/1800 MHz and 850/1900
MHz. Most of Europe, Asia, Africa, the Middle East, and Australia use the 900 MHz / 1800 MHz band.
North and South America, as well as the United States, Canada, Mexico, and other countries, use the 850
MHz / 1900 MHz band. In the Global system for mobile communication, 900 MHz bandwidth spans 880
to 960 MHz, while the 1800 MHz band spans 1710 to 1880 MHz.
The 850 MHz frequency band, on the other hand, spans 824 to 894 MHz, while the 1900 MHz band spans
1850 to 1990 MHz. GSM-based cellular systems use a series of numbers or unique codes to recognize
cellular subscribers and deliver the appropriate assistance to them. IMSI (International Mobile Subscriber
Identity) is a unique serial code for every SIM card. To conceal the permanent identity, the phone network
can create a short-term code called Temporary Mobile Subscriber Identity for each IMSI.
The Mobile Station International Subscriber Directory Number is the complete phone number for a
particular SIM, including all prefixes. Lastly, MSRN is an abbreviation for Mobile Subscriber Roaming
Number, and it is a short-term cellphone number given to a cellular station if it is not on the local network
(roaming). Therefore, any calls or communication systems can be tied to it.
Numerous GSM network carriers have roaming agreements with foreign corporations, allowing people to
use their phones when traveling internationally. SIM cards with household network access designs can be
changed to those with a metered local connection, lowering roaming costs while maintaining service. The
global system for mobile communication organizes the geographical area into hexagonal cells, the size of
which is controlled by the transmitter’s power and the number of end-users. The middle of the cell has a
base station consisting of a transceiver (which combines the transmitter and reception) and an antenna.
Frequency Division Multiple Access (FDMA) and Time Division Multiple Access (TDMA) are the two
critical approaches used by GSM:
FDMA is the technique of subdividing a frequency band into many narrower bands, each of which is allocated to specific users. In GSM, FDMA separates the 25 MHz bandwidth into 124 carrier frequencies spaced 200 kHz apart. Every base station has one or more carrier frequencies assigned to it.
Time Division Multiple Access (TDMA) is the practice of allocating the same frequency to multiple
users by separating the bandwidths into various time slots. Every subscriber is assigned a
timeslot, allowing different stations to split the same transmission area.
TDMA is used to divide each carrier frequency into different time slots for GSM. Each time-division multiple access frame has 8 time slots and lasts about 4.615 milliseconds (ms), which means that every time slot, or physical channel, in this structure lasts about 577 microseconds; information is transferred in bursts during that time. A GSM system uses several cell sizes, including macro, micro, pico, and umbrella cells, and the coverage of each cell differs depending on the deployment environment.
The time division multiple access (TDMA) method works by giving every client a different time slot on the same frequency. This can easily be adapted to sending and receiving data and voice communication, and it can handle bandwidths ranging from 64 kbps to 120 Mbps.
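The figures quoted above can be checked with a little arithmetic (the slot duration used below is the nominal ~577 µs value):

bandwidth_hz = 25e6               # 25 MHz per direction
carrier_spacing_hz = 200e3        # one GSM radio channel
print(bandwidth_hz / carrier_spacing_hz)   # 125 raw channels; 124 usable carriers

slot_us = 576.9                   # one time slot (nominally ~577 microseconds)
frame_ms = 8 * slot_us / 1000     # 8 slots per TDMA frame
print(round(frame_ms, 3))         # 4.615 ms per frame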
The GSM architecture is made up of three central systems. The following are the primary components of
the GSM architecture:
NSS is a GSM element that provides flow management and call processing for mobile devices moving
between base stations. The switching system consists of the functional units listed below.
Mobile Services Switching Center (MSC): Mobile Switching Center is integral to the GSM
network architecture’s central network space. The MSC supports call switching across cellular
phones and other fixed or mobile network users. It also monitors cellular services, including
registration, location updates, and call forwarding to a roaming user.
Home Location Register (HLR): It is a set of data items used for storing and managing
subscriptions. It provides data for each consumer as well as their last known position. The HLR is
regarded as the most significant database because it preserves enduring records about users.
When a person purchases a membership from one of the operators, they are enlisted in that
operator’s HLR.
Visitor Location Register (VLR): VLR is a database that provides subscriber information necessary
for the MSC to service passengers. This includes a short-term version of most of the data stored
in the HLR. The visitor location register can also be run as a standalone program, but it is usually
implemented as a component of the MSC.
Equipment Identity Register (EIR): It is the component that determines if one can use particular
mobile equipment on the system. This consists of a list of every functioning mobile device on the
system, with each mobile device recognized by its own International Mobile Equipment Identity
(IMEI) number.
Authentication Center (AuC): The AUC is a unit that offers verification and encryption factors to
ensure the user’s identity and the privacy of every call. The verification center is a secure file that
contains the user’s private key in the SIM card. The AUC shields network operators from various
types of fraud prevalent in the modern-day cellular world.
The mobile station is a cell phone with a display, digital signal processor, and radio transceiver regulated
by a SIM card that functions on a system. Hardware and the SIM card are the two most essential elements
of the MS. The MS (Mobile stations) is most widely recognized by cell phones, which are components of a
GSM mobile communications network that the operator monitors and works.
Currently, their size has shrunk dramatically while their capabilities have skyrocketed. Additionally, the
time between charges has been significantly improved.
The Base Station Subsystem (BSS) serves as a connection between the network subsystem and the mobile station. It consists of two parts:
The Base Transceiver Station (BTS): The BTS is responsible for radio connection protocols with
the MS and contains the cell’s radio transceivers. Companies may implement a significant
number of BTSs in a big metropolitan area. Each network cell has transceivers and antennas that
make up the BTS. Based on the cell’s consumer density, every BTS includes anywhere from one
to sixteen transceivers.
The Base Station Controller (BSC): The BSC is responsible for managing the radio resources of one or more BTS(s). It manages radio channel configuration and handovers, and it serves as the link between the mobile (via the BSS) and the MSC. It allocates and releases frequency bands and time slots for the MS. Additionally, the BSC is responsible for intercell handover and controls the transmit power of the BSS and MS within its jurisdiction.
The operation support system (OSS) is a part of the overall GSM network design. This is linked to the NSS
and BSC components. The OSS primarily manages the GSM network and BSS traffic load. As the number
of BS increases due to customer population scaling, a few maintenance duties are shifted to the base
transceiver stations, lowering the system's financial burden. The essential purpose of the OSS is to provide a network overview and to assist the various service and maintenance organizations with their routine maintenance tasks.
Applications of GSM
The ability to send and receive text messages to and from mobile phones is known as the Short Message
Service (SMS). SMS provides services related to two-way paging, except with more features incorporated
into the cell device or port. Text messaging allows a cell phone user to receive a quick short message on
their cell phone. Similarly, that user can compose a brief message to send to other users.
SMS delivers short text messages of up to 140 octets over the GSM platform’s control system air interface.
The Short Message Service Center (SMSC) stores and transmits short messages from mobile users to their
intended recipients. One may use it to send and receive brief messages, saving time due to the rapid
transmission of communications. Furthermore, there is no need to go online; as long as the mobile device has a signal, it can send and receive short messages.
Data security is the most crucial factor for usage operators. Specific aspects are now implemented in GSM
to improve security. There is currently an indication for ME and MS in this framework. The system
proposes two subsystems. The appliance control subsystem allows users to remotely control household
appliances, while the security alert subsystem provides fully automated security monitoring.
This same system can instruct users via SMS from a particular phone number on how to change the
condition of the home appliance based on the person’s needs and preferences. The client is configured via
SIM, allowing the system to observe Mobile subscribers on the database. GSM also includes features for
signal encryption.
The second element of GSM security is security alert, which would be accomplished in such a way that
upon identification of an invasion, the system would allow for the automatic creation of SMS, thereby
notifying the user of a potential threat.
GSM technology enables communication with anyone, anywhere, at any time. GSM's functional architecture employs intelligent networking principles, and it represents the first step towards a true personal communication network with sufficient homogeneity to guarantee compatibility.
The handover procedure is critical in any mobile system. It is a necessary process, but if done improperly it can lead to dropped calls. Dropped calls are especially aggravating to subscribers, and as the percentage of dropped calls grows, so does user dissatisfaction, making subscribers more likely to switch to another network.
As a result, handover was given special consideration when the GSM standard was created. Whenever a subscriber moves between cells, the radio link is transferred from the old cell to the new one. Although the GSM network is complex, the flexibility of its handover procedure gives subscribers better performance than many other systems. In a GSM network, there are four basic types of handover (a small classification sketch follows the list):
Intra-cell handover: This type of handover is used to balance traffic within the cell or to improve connection quality by changing the carrier frequency or channel.
Inter-cell handover: Also known as intra-BSC handover. In this case, the mobile changes cells while remaining under the same BSC, and the BSC controls the handover procedure.
Inter-BSC handover: Also known as intra-MSC handover. Because a BSC can handle only a limited number of cells, the mobile may have to be moved from one BSC to another; here, the handover is managed by the MSC.
Inter-MSC handover: This occurs when the mobile moves from the area of one MSC to another. Each MSC covers a wide area.
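The following minimal sketch (not taken from the GSM specification; cell descriptors are simplified to (cell, BSC, MSC) tuples) shows how the four handover types above can be distinguished:

    def handover_type(old_cell, new_cell):
        """Classify a GSM handover from (cell_id, bsc_id, msc_id) tuples."""
        if old_cell == new_cell:
            return "intra-cell handover (change channel within the same cell)"
        if old_cell[1] == new_cell[1]:
            return "inter-cell / intra-BSC handover (controlled by the BSC)"
        if old_cell[2] == new_cell[2]:
            return "inter-BSC / intra-MSC handover (controlled by the MSC)"
        return "inter-MSC handover"

    # Cells are written as (cell_id, bsc_id, msc_id).
    print(handover_type(("c1", "bsc1", "msc1"), ("c2", "bsc1", "msc1")))  # intra-BSC
    print(handover_type(("c1", "bsc1", "msc1"), ("c3", "bsc2", "msc1")))  # intra-MSC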
GSM - Protocol Stack
GSM architecture is a layered model designed to allow communication between two different systems. The lower layers provide services to the upper-layer protocols. Each layer passes suitable notifications to ensure the transmitted data has been formatted, transmitted, and received correctly.
MS Protocols
The RR (Radio Resource) layer is the lower layer that manages the link, both radio and fixed, between the MS and the MSC. The main components involved are the MS, BSS, and MSC. The RR layer is responsible for managing the RR session (the time when a mobile is in dedicated mode) and the radio channels, including the allocation of dedicated channels.
The CM (Connection Management) layer is the topmost layer of the GSM protocol stack. It is responsible for Call Control (CC), Supplementary Service Management, and Short Message Service Management, each of which is treated as an individual sublayer within the CM layer. Other functions of the CC sublayer include call establishment, selection of the type of service (including alternating between services during a call), and call release.
BSC Protocols
The BSC uses a different set of protocols after receiving the data from the BTS. The Abis interface is used between the BTS and the BSC. At this level, the radio resources at the lower portion of Layer 3 are changed from RR to Base Transceiver Station Management (BTSM). The BTSM layer acts as a relay function between the BTS and the BSC.
MSC Protocols
One of the main features of the GSM system is the automatic, worldwide localisation of its users. The GSM system always knows where a user is currently located, and the same phone number is valid worldwide. To provide this, the GSM system performs periodic location updates even if the user does not use the MS, provided that the MS is still logged on to the GSM network and is not completely switched off. The HLR contains information about the current location, and the VLR currently responsible for the MS informs the HLR when the location of the MS changes. Changing VLRs with uninterrupted availability of all services is called roaming. Roaming can take place within the network of one GSM provider, between two providers in one country (which does not normally happen), and also between different providers in different countries, which is known as international roaming.
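As a minimal sketch of the HLR/VLR interaction just described (the IMSI value and VLR names are invented, and the real MAP signalling between network elements is omitted):

    # The HLR maps each subscriber (IMSI) to the VLR currently responsible for it.
    hlr = {"262011234567890": {"current_vlr": "VLR-A"}}
    vlrs = {"VLR-A": {"262011234567890"}, "VLR-B": set()}

    def location_update(imsi, new_vlr):
        """When the MS enters a new VLR area, the new VLR informs the HLR."""
        old_vlr = hlr[imsi]["current_vlr"]
        vlrs[old_vlr].discard(imsi)          # old VLR drops the subscriber record
        vlrs[new_vlr].add(imsi)              # new VLR stores the subscriber
        hlr[imsi]["current_vlr"] = new_vlr   # HLR now points to the new VLR

    location_update("262011234567890", "VLR-B")
    print(hlr["262011234567890"]["current_vlr"])   # -> VLR-B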
To locate an MS and to address the MS, several numbers are needed:
MSISDN (Mobile Station International ISDN Number): The only number important to the user of GSM is the phone number. The phone number is associated with the SIM rather than with a particular MS. The MSISDN follows the E.164 standard, which is also used in fixed ISDN networks.
IMSI (International Mobile Subscriber Identity): GSM uses the IMSI for the internal, unique identification of a subscriber.
TMSI (Temporary Mobile Subscriber Identity): To hide the IMSI, which would reveal the exact identity of the user signalling over the radio air interface, GSM uses the 4-byte TMSI for local subscriber identification. The TMSI is selected by the VLR and is valid only temporarily, within the location area of that VLR. In addition, the VLR changes the TMSI periodically.
MSRN (Mobile Station [Subscriber] Roaming Number): This is another temporary address that hides the identity and location of the subscriber. The VLR generates this address on request from the MSC, and the address is also stored in the HLR. The MSRN consists of the current VCC (Visitor Country Code), the VNDC (Visitor National Destination Code), and the identification of the current MSC, together with the subscriber number. The MSRN helps the HLR find a subscriber for an incoming call. (The sketch after this list collects these addresses in one place.)
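The sketch below simply gathers these addresses in one illustrative data structure; all field values are invented examples, not real subscriber data:

    from dataclasses import dataclass

    @dataclass
    class GsmSubscriberIdentity:
        msisdn: str   # phone number known to the user (E.164), tied to the SIM
        imsi: str     # permanent internal identity, stored on the SIM and in the HLR
        tmsi: str     # temporary identity assigned by the current VLR
        msrn: str     # temporary roaming number generated by the VLR for call routing

    subscriber = GsmSubscriberIdentity(
        msisdn="+491701234567",     # hypothetical example values
        imsi="262011234567890",
        tmsi="0x4F2A91C3",
        msrn="+491779876543",
    )
    print(subscriber.msisdn)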
All of the numbers described above are needed to find a user within the GSM system and to maintain the connection with a mobile station. The following scenarios illustrate a Mobile Terminated Call (MTC) and a Mobile Originated Call (MOC).
System Architecture
DECT systems have several uses, and depending on the usage there can be multiple physical implementations of the system. However, all implementations are based on a single logical reference model of the system architecture.
The figure given below shows the reference model of DECT system
architecture:
Protocol Architecture
ETRA (Enhanced Third Generation Radio Access): ETRA was a concept developed
by Nortel Networks in the late 1990s as part of their vision for enhancing third-
generation (3G) mobile telecommunications technology. It aimed to provide higher
data rates, improved coverage, and better spectral efficiency compared to existing
3G technologies.
1. UMTS (Universal Mobile Telecommunications System): UMTS is a third-
generation (3G) mobile telecommunications technology that was standardized by the
3rd Generation Partnership Project (3GPP). It was developed to succeed GSM (Global
System for Mobile Communications) and provide higher data rates and multimedia
services. UMTS supports both circuit-switched and packet-switched data services and
serves as the foundation for the High-Speed Packet Access (HSPA) family of
technologies, including HSDPA (High-Speed Downlink Packet Access) and HSUPA
(High-Speed Uplink Packet Access).
2. IMT-2000 (International Mobile Telecommunications-2000): IMT-2000 is a global
standard for third-generation (3G) mobile telecommunications systems developed by
the International Telecommunication Union (ITU). It encompasses various
technologies, including UMTS, CDMA2000, and TD-SCDMA, among others. IMT-2000
aims to provide high-speed data transmission, multimedia services, and global
roaming capabilities. It sets the framework for the evolution of mobile networks
beyond the second-generation (2G) systems like GSM.
Satellite systems use several classes of orbits:
o GEO (Geostationary Earth Orbit) at about 36,000 km above the earth's surface.
o LEO (Low Earth Orbit) at about 500-1500km above the earth's surface.
o MEO (Medium Earth Orbit) or ICO (Intermediate Circular Orbit) at about 6000-
20,000 km above the earth's surface.
o HEO (Highly Elliptical Orbit)
Advantages of MEO
o Using orbits around 10,000 km, the system requires only about a dozen satellites, which is more than a GEO system but far fewer than a LEO system.
o These satellites move more slowly relative to the earth's rotation allowing a simpler
system design (satellite periods are about six hours).
o Depending on the inclination, a MEO satellite can cover a larger population, so fewer handovers are required.
o A MEO satellite's longer duration of visibility and wider footprint means fewer
satellites are needed in a MEO network than a LEO network.
Disadvantages of MEO
o Again due to the larger distance to the earth, delay increases to about 70-80 ms.
o The satellites need higher transmit power and special antennas for smaller footprints.
o A MEO satellite's greater distance gives it a longer time delay and a weaker signal than a LEO satellite.
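A back-of-the-envelope check of these delay figures: one-way propagation delay is roughly distance divided by the speed of light, and a ground-satellite-ground hop doubles it. The altitudes below are the nominal values from the orbit list; slant (non-vertical) paths add a little more.

    C_KM_PER_S = 300_000  # approximate speed of light in km/s

    def hop_delay_ms(altitude_km):
        """Approximate ground-satellite-ground propagation delay in ms (satellite overhead)."""
        return 2 * altitude_km / C_KM_PER_S * 1000

    for name, altitude in [("LEO", 1000), ("MEO", 10000), ("GEO", 36000)]:
        print(f"{name}: ~{hop_delay_ms(altitude):.0f} ms")
    # LEO: ~7 ms, MEO: ~67 ms, GEO: ~240 ms; once slant paths are taken into
    # account, this is consistent with the ~70-80 ms quoted above for MEO.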
1. Routing:
Routing in satellite systems involves determining the path that data packets
should take from the source to the destination through the satellite network.
This can be challenging due to the long propagation delay inherent in satellite
communication, as well as the dynamic nature of the satellite constellation
and varying link conditions.
Different routing protocols may be used, such as Distance Vector Routing
(DVR), Link State Routing (LSR), or newer protocols designed specifically for
satellite networks.
Efficient routing algorithms take into account factors such as link quality,
traffic load, and latency to ensure optimal data transmission.
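As a minimal sketch of delay-aware routing (the topology below is an invented static snapshot; real constellations recompute routes as satellites move and link conditions change), shortest-path routing over per-link delays can be illustrated with Dijkstra's algorithm:

    import heapq

    def shortest_delay_path(links, source, destination):
        """Dijkstra over a dict {node: [(neighbor, delay_ms), ...]}."""
        queue = [(0, source, [source])]
        visited = set()
        while queue:
            delay, node, path = heapq.heappop(queue)
            if node in visited:
                continue
            visited.add(node)
            if node == destination:
                return delay, path
            for neighbor, link_delay in links.get(node, []):
                if neighbor not in visited:
                    heapq.heappush(queue, (delay + link_delay, neighbor, path + [neighbor]))
        return None

    links = {
        "ground-A": [("sat-1", 15)],
        "sat-1": [("sat-2", 10), ("sat-3", 25)],
        "sat-2": [("sat-3", 10)],
        "sat-3": [("ground-B", 15)],
    }
    print(shortest_delay_path(links, "ground-A", "ground-B"))
    # -> (50, ['ground-A', 'sat-1', 'sat-2', 'sat-3', 'ground-B'])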
2. Localization:
Localization refers to determining the geographical location of ground
stations or mobile terminals in satellite communication networks.
Accurate localization is essential for satellite systems, especially for mobile
terminals and IoT devices that may be moving.
Various techniques are used for localization, including GPS (Global Positioning
System), which relies on satellite signals for accurate positioning.
In cases where GPS signals are not available or insufficient, alternative
methods such as triangulation based on signal strength from multiple
satellites or ground-based reference stations may be employed.
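The sketch below illustrates one of the localization ideas mentioned above: estimating a 2-D position from distance estimates to three reference points (anchors), using a simple linearised least-squares step. The anchor coordinates and ranges are invented; real systems such as GPS work in 3-D and must also solve for the receiver clock offset.

    import numpy as np

    def trilaterate(anchors, ranges):
        """Estimate (x, y) from anchor positions and measured distances."""
        (x1, y1), d1 = anchors[0], ranges[0]
        a_rows, b_rows = [], []
        for (xi, yi), di in zip(anchors[1:], ranges[1:]):
            # Subtracting the first range equation from the others linearizes the problem.
            a_rows.append([2 * (xi - x1), 2 * (yi - y1)])
            b_rows.append(d1**2 - di**2 + xi**2 - x1**2 + yi**2 - y1**2)
        solution, *_ = np.linalg.lstsq(np.array(a_rows), np.array(b_rows), rcond=None)
        return solution

    anchors = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
    true_position = np.array([30.0, 40.0])
    ranges = [np.hypot(*(true_position - np.array(a))) for a in anchors]
    print(trilaterate(anchors, ranges))   # approximately [30. 40.]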
3. Handover:
Handover, also known as handoff, is the process of transferring an ongoing
communication session from one satellite or beam to another as the user
moves.
Handover is critical for maintaining continuous connectivity, particularly for
mobile satellite communication systems such as those used in maritime,
aviation, or land-based mobile applications.
Handover decisions are typically based on factors such as signal strength, link
quality, Doppler shift, and beam coverage.
Seamless handover mechanisms are essential to minimize service disruption
during transitions between satellites or beams.
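A minimal sketch of a handover decision, assuming the decision is driven mainly by received signal strength with a hysteresis margin (the other factors listed above, such as Doppler shift and beam coverage, are left out for brevity; the 3 dB margin is an assumed value):

    HYSTERESIS_DB = 3.0   # assumed margin to avoid ping-pong handovers

    def should_hand_over(serving_rssi_dbm, candidate_rssi_dbm, hysteresis_db=HYSTERESIS_DB):
        """Hand over only if the candidate satellite/beam is clearly stronger."""
        return candidate_rssi_dbm > serving_rssi_dbm + hysteresis_db

    print(should_hand_over(-95.0, -93.0))   # False: improvement below the margin
    print(should_hand_over(-95.0, -90.0))   # True: candidate is 5 dB stronger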