Unit IV

Dissemination Protocols for Large Sensor Networks

Dissemination protocols in a large sensor network typically follow a data-centric paradigm, in which the communication primitives are organized around the sensed data rather than around the network nodes. In a large-scale sensor network, data flows from potentially multiple sources to potentially multiple static or mobile sinks. We define a data source as a sensor node that detects a stimulus, which is a target or an event of interest, and generates data to report the event.

1. Drip
Drip is the simplest of the dissemination protocols. It is based on the Trickle algorithm and provides a standard message reception interface in a WSN. Each node that wishes to use Drip registers with a specific identifier, which represents a dissemination channel; all messages received on that channel are delivered directly to the node. Drip achieves high efficiency by suppressing redundant transmissions when the same information has already been received by the nodes in the neighbourhood.
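The Trickle suppression and back-off behaviour that Drip builds on can be sketched as follows; the interval bounds and the redundancy constant k are illustrative values, not parameters from the text:

```python
class TrickleTimer:
    """Minimal sketch of the Trickle logic underlying Drip."""

    def __init__(self, tau_min=1.0, tau_max=64.0, k=1):
        self.tau_min, self.tau_max, self.k = tau_min, tau_max, k
        self.tau = tau_min      # current interval length
        self.counter = 0        # consistent messages overheard this interval

    def hear_consistent(self):
        # A neighbour already sent the same data: count it for suppression.
        self.counter += 1

    def hear_inconsistent(self):
        # New or conflicting data: reset to the smallest interval.
        self.tau = self.tau_min
        self.counter = 0

    def interval_expired(self):
        # Transmit only if fewer than k redundant messages were overheard,
        # then back off exponentially up to tau_max.
        should_send = self.counter < self.k
        self.tau = min(2 * self.tau, self.tau_max)
        self.counter = 0
        return should_send
```

A node that overhears enough identical messages stays silent, which is how redundant transmissions are avoided in dense neighbourhoods.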

2. CodeDrip
CodeDrip is a data dissemination protocol proposed by Nildo et al. for wireless sensor networks, intended mainly for disseminating small values. It applies network coding, a mechanism that combines packets in the network, thus increasing throughput and decreasing the number of messages transmitted. CodeDrip uses the Trickle algorithm for dissemination and is similar to Drip, except that messages are sometimes combined before being sent.
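A common way to realize the packet combining in network coding is a bitwise XOR: a node broadcasts A⊕B, and any neighbour that already holds one of the two packets recovers the other. This is a sketch of that idea, not CodeDrip's exact packet format:

```python
def xor_combine(a: bytes, b: bytes) -> bytes:
    """Combine two equal-length payloads, as in XOR-based network coding."""
    return bytes(x ^ y for x, y in zip(a, b))

def decode(coded: bytes, known: bytes) -> bytes:
    """A node holding `known` recovers the other original from the coded packet."""
    return xor_combine(coded, known)
```

One coded transmission can thus deliver two packets' worth of information to two different neighbours.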

3. DIP
DIP (Dissemination Protocol) is a data detection and dissemination protocol proposed by Lin et al. [4], also based on the Trickle algorithm. It works in two parts: detecting whether the data on two nodes differ, and identifying which data item is different. It uses a version number and a key for each data item.

4. DHV (Difference detection, Horizontal search and Vertical search)
DHV is a code consistency maintenance protocol given by Dang et al. It tries to keep the code on different nodes in a WSN consistent and up to date. It is based on the observation that when two versions differ, they often differ only in a few least significant bits of their version numbers rather than in all their bits. Hence, it is not always necessary to transmit and compare whole version numbers in the network.
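DHV's observation can be sketched with a simple bit-level comparison: XOR the two version numbers, and only the low-order bits up to the highest differing bit need to be exchanged.

```python
def differing_suffix_bits(v1: int, v2: int) -> int:
    """Number of low-order bits that must be exchanged to reconcile two
    version numbers (a sketch of DHV's bit-level observation)."""
    diff = v1 ^ v2
    # All set bits of the XOR lie below this position.
    return diff.bit_length()
```

For versions 8 and 9 only one bit differs, so comparing a single bit suffices instead of the full version number.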

5. Deluge
Deluge is a reliable data dissemination protocol for propagating large data objects from one source node to other nodes over a multi-hop wireless sensor network. Deluge achieves reliability in unpredictable wireless environments and remains robust when node densities vary by factors of a thousand or more. This protocol is also based on the Trickle algorithm.

6. MNP
MNP (Multihop Network re-programming Protocol) provides a reliable service to propagate new program code to all sensor nodes in the network. The main aims of this dissemination protocol are reliability, low memory usage, and fast data dissemination. It is based on a sender selection protocol in which source nodes compete with each other based on the number of distinct requests they have received.
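The sender selection step can be sketched as follows; the data layout and the id-based tie-break are assumptions of this sketch:

```python
def select_sender(advertisers: dict) -> str:
    """Pick the advertiser with the most *distinct* requests, as in MNP's
    sender selection. `advertisers` maps a node id to the set of requester
    ids it has heard; ties break on node id (an assumed rule)."""
    return max(advertisers, key=lambda n: (len(advertisers[n]), n))
```

Letting the node with the most distinct requesters transmit means one broadcast serves the largest audience, which reduces contention among competing sources.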

Data Dissemination
Data dissemination is the process by which queries or data are routed in the sensor
network. The data collected by sensor nodes has to be communicated to the BS or to any
other node interested in the data. The node that generates data is called a source and the
information to be reported is called an event. A node which is interested in an event and seeks
information about it is called a sink. Traffic models have been developed for sensor networks
such as the data collection and data dissemination (diffusion) models. In the data collection
model, the source sends the data it collects to a collection entity such as the BS.

Flooding: In flooding, each node which receives a packet broadcasts it if the maximum hop-
count of the packet is not reached and the node itself is not the destination of the packet.
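The flooding forwarding rule can be sketched as follows; the packet fields and the neighbour map are assumptions of this sketch:

```python
def flood(node, packet, network):
    """Rebroadcast unless the hop limit is reached or this node is the
    destination. `network` maps each node to its neighbour list."""
    if packet["dest"] == node or packet["hops"] >= packet["max_hops"]:
        return []                          # consume at destination or drop
    fwd = dict(packet, hops=packet["hops"] + 1)
    return [(nbr, fwd) for nbr in network[node]]
```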

Gossiping: Gossiping is a modified version of flooding, where the nodes do not broadcast a
packet, but send it to a randomly selected neighbour. This avoids the problem of implosion,
but it takes a long time for a message to propagate throughout the network.
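Gossiping changes only the forwarding step: one randomly chosen neighbour instead of all of them. A sketch (the seeded RNG is only for reproducibility):

```python
import random

def gossip(node, packet, network, rng=random.Random(0)):
    """Forward the packet to a single randomly selected neighbour."""
    if packet["dest"] == node or not network[node]:
        return []
    return [(rng.choice(network[node]), packet)]
```

Each hop now costs one transmission rather than a broadcast, which avoids implosion at the price of slow propagation.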

Rumor Routing: Rumor routing is an agent-based path creation algorithm. Agents, or "ants,"
are long-lived entities created at random by nodes. These are basically packets which are
circulated in the network to establish shortest paths to events that they encounter. They can
also perform path optimizations at nodes that they visit. When an agent finds a node whose
path to an event is longer than its own, it updates the node's routing table.
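The table update performed at each visited node can be sketched as a per-event minimum; the table layout is an assumption, and a full implementation would also age agents and increment hop counts as the agent moves:

```python
def agent_visit(agent_table, node_table):
    """Rumor-routing agent update: for each known event, both the agent and
    the visited node keep the shorter of the two known hop counts."""
    for event in set(agent_table) | set(node_table):
        best = min(agent_table.get(event, float("inf")),
                   node_table.get(event, float("inf")))
        agent_table[event] = node_table[event] = best
    return node_table
```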

Sequential Assignment Routing: The sequential assignment routing (SAR) algorithm creates
multiple trees, where the root of each tree is a one-hop neighbour of the sink. Each tree grows
outward from the sink and avoids nodes with low throughput or high delay. At the end of the
procedure, most nodes belong to multiple trees.
Directed Diffusion: In directed diffusion, sensor nodes themselves generate requests/queries for data sensed by other nodes, instead of all queries arising only from a BS. Hence, the sink for a query could be either a BS or a sensor node. The directed diffusion routing protocol improves on data diffusion using interest gradients. The diffusion model allows nodes to cache or locally transform data, which increases the scalability of communication and reduces the number of message transmissions required.

Sensor Protocols for Information via Negotiation: Sensor protocols for information via negotiation (SPIN) is a family of protocols that uses negotiation and resource adaptation to address the deficiencies of flooding. Negotiation reduces overlap and implosion, and a threshold-based resource-aware operation is used to prolong network lifetime.

Cost-Field Approach: The cost-field approach considers the problem of setting up paths to a
sink. It is a two-phase process, the first phase being to set up the cost field, based on metrics
such as delay, at all sensor nodes, and the second being data dissemination using the costs.
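The two phases can be sketched as follows. Phase one is computed centrally here with a Dijkstra-style relaxation; in a real WSN the same per-node costs emerge from nodes flooding and relaxing advertisement messages. The link encoding is an assumption:

```python
import heapq

def setup_cost_field(links, sink):
    """Phase one: minimum cost from every node to the sink.
    `links` maps a node to a list of (neighbour, link_cost) pairs."""
    cost = {sink: 0.0}
    pq = [(0.0, sink)]
    while pq:
        c, u = heapq.heappop(pq)
        if c > cost.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in links.get(u, []):
            if c + w < cost.get(v, float("inf")):
                cost[v] = c + w
                heapq.heappush(pq, (cost[v], v))
    return cost

def next_hop(cost, links, u):
    """Phase two: forward to the neighbour minimizing link cost plus
    the neighbour's remaining cost to the sink."""
    return min(links[u], key=lambda vw: vw[1] + cost[vw[0]])[0]
```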

Geographic Hash Table: Geographic hash table (GHT) is a system based on data-centric storage, inspired by Internet-scale distributed hash table (DHT) systems such as Chord and Tapestry. The routing protocol used is greedy perimeter stateless routing (GPSR), which again uses geographical information to route data and queries. GHT is most effective in large sensor networks where many events are detected but not all are queried.
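The data-centric storage idea can be sketched by hashing an event name to a point in the deployment area; data and queries for the same event then both route (via GPSR) toward the same location. The hash-to-coordinate mapping below is illustrative, not GHT's actual function:

```python
import hashlib

def ght_location(event_name: str, width: float, height: float):
    """Deterministically map an event name to a point in a width x height
    deployment area (an illustrative stand-in for GHT's geographic hash)."""
    h = hashlib.sha1(event_name.encode()).digest()
    x = int.from_bytes(h[:4], "big") / 2**32 * width
    y = int.from_bytes(h[4:8], "big") / 2**32 * height
    return x, y
```

Because producers and consumers hash identically, a query needs no flooding: it goes straight to the home location of the event's data.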

Small Minimum Energy Communication Network: Small minimum energy communication network (SMECN) is a protocol proposed to construct a sub-network from a given communication network. If the entire sensor network is represented by a graph G, the subgraph G' is constructed such that the energy usage of the network is minimized. The power required to transmit data between two nodes u and v is modelled as

p(u, v) = t · d(u, v)^n

where t is a constant, n is the path loss exponent indicating the loss of power with distance from the transmitter, and d(u, v) is the distance between u and v. Let the power needed to receive the data be c. Since the transmission power grows as the nth power of the distance, it is often more economical to transmit data over several smaller hops. Suppose the path between u (i.e., u0) and v (i.e., uk) is represented by r = (u0, u1, ..., uk), such that each (ui, ui+1) is an edge in the subgraph G'. The total power consumed for the transmission is then

C(r) = Σ_{i=0}^{k-1} [ t · d(ui, ui+1)^n + c ]
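This per-hop cost model shows numerically why relaying can beat a single long hop. A sketch, with t, n and c as illustrative constants rather than values from the text:

```python
def hop_cost(d, t=1.0, n=2, c=0.5):
    """Power for one hop of length d: transmit cost t*d**n plus receive cost c."""
    return t * d ** n + c

def path_cost(distances, t=1.0, n=2, c=0.5):
    """Total power C(r) for a path, given its per-hop distances."""
    return sum(hop_cost(d, t, n, c) for d in distances)
```

With these constants, one hop of length 10 costs 100.5 while two hops of length 5 cost 51.0 in total, so the relayed path uses roughly half the power despite the extra reception cost c.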
Data Gathering

The objective of the data-gathering problem is to transmit the sensed data from each sensor node to a BS. One round is defined as the BS collecting data from all the sensor nodes once. The simplest scheme, in which each node transmits its data directly to the BS, performs poorly with respect to the energy × delay metric.

Power-Efficient Gathering for Sensor Information Systems: Power-efficient gathering for sensor information systems (PEGASIS) is a data-gathering protocol based on the assumption that all sensor nodes know the location of every other node, that is, the topology information is available to all nodes. The goals of PEGASIS are as follows:
• Minimize the distance over which each node transmits
• Minimize the broadcasting overhead
• Minimize the number of messages that need to be sent to the BS
• Distribute the energy consumption equally across all nodes

Binary Scheme: This is also a chain-based scheme like PEGASIS, which classifies nodes into different levels. All nodes which receive messages at one level rise to the next, so the number of nodes is halved from one level to the next. For instance, consider a network with eight nodes labelled s0 to s7. This scheme requires that nodes communicate using CDMA, so that the transmissions of each level can take place simultaneously.
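For the eight-node example, the level structure can be sketched as follows (the convention that the even-indexed node of each pair receives and rises is an assumption):

```python
def binary_schedule(nodes):
    """Per-level (sender -> receiver) pairs of the binary aggregation scheme;
    with CDMA, every pair within a level transmits simultaneously."""
    levels = []
    active = list(nodes)
    while len(active) > 1:
        levels.append([(active[i + 1], active[i])
                       for i in range(0, len(active) - 1, 2)])
        active = active[::2]          # receivers rise to the next level
    return levels, active[0]          # the last survivor reports to the BS
```

For s0 to s7 this yields three levels (4, 2, and 1 simultaneous transmissions), i.e. log2(8) rounds before one node sends the aggregate to the BS.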
Chain-Based Three-Level Scheme: For non-CDMA sensor nodes, the binary scheme is not applicable. The chain-based three-level scheme addresses this situation; again, a chain is constructed as in PEGASIS. The chain is divided into a number of groups to space out simultaneous transmissions and so minimize interference. One node from each group aggregates the data of all group members and rises to the next level. The index of this leader node is decided a priori. In the second level, all nodes are divided into two groups, and the third level consists of a message exchange between one node from each group of the second level.

Quality of a Sensor Network


The purpose of a sensor network is to monitor and report events or phenomena taking place
in a particular area. Hence, the main parameters which define how well the network observes
a given area are "coverage" and "exposure."

Coverage: Coverage is a measure of how well the network can observe or cover an event. It depends upon the range and sensitivity of the sensing nodes, and on the location and density of the sensing nodes in the given region. The worst-case coverage defines areas of breach, that is, where coverage is poorest; this can be used to determine whether additional sensors need to be deployed to improve the network. The best-case coverage, on the other hand, defines the areas of best coverage; a path along the areas of best coverage is called a maximum support path or maximum exposure path. A mathematical technique used to solve the coverage problem is the Voronoi diagram: it can be proved that the breach path PB is composed of line segments that belong to the Voronoi diagram of the sensor graph.

In two dimensions, the Voronoi diagram of a set of sites is a partitioning of the plane into a set of convex polygons such that all points inside a polygon are closest to the site enclosed by the polygon, and the polygons have edges equidistant from the nearby sites.

The algorithm to find the breach path PB is:

• Generate the Voronoi diagram, with the set of vertices V and the set of edges E. This is
done by drawing the perpendicular bisectors of every line segment joining two sites, and
using their points of intersection as the vertices of the convex polygons.

• Create a weighted graph with vertices from V and edges from E, such that the weight of
each edge in the graph is the minimum distance from all sensors in S. The edge weights
represent the distance from the nearest sensor. Smaller edge weights imply better coverage
along the edge.

• Determine the maximum-cost path from the initial point I to the final point F, using breadth-first search. The maximum cost implies the least coverage; hence, the required breach path PB lies along this maximum-cost path in the Voronoi diagram. The breach path shows the region of maximum vulnerability in a sensor network, where the coverage provided by the sensors is weakest.
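The maximum-cost path step above can be sketched as a threshold search with BFS: the breach of the best path is the largest weight w such that I and F remain connected using only Voronoi edges of weight at least w. The edge-list encoding is an assumption of this sketch:

```python
from collections import deque

def max_breach(edges, start, goal):
    """Largest w such that `start` and `goal` stay connected using only
    edges of weight >= w, i.e. the breach of the maximal breach path.
    `edges` is a list of (u, v, weight) triples from the Voronoi diagram."""
    for w in sorted({wt for _, _, wt in edges}, reverse=True):
        adj = {}
        for u, v, wt in edges:
            if wt >= w:                    # keep only edges at this threshold
                adj.setdefault(u, []).append(v)
                adj.setdefault(v, []).append(u)
        seen, dq = {start}, deque([start])
        while dq:                          # plain BFS reachability check
            x = dq.popleft()
            for y in adj.get(x, []):
                if y not in seen:
                    seen.add(y)
                    dq.append(y)
        if goal in seen:
            return w                       # first (largest) feasible threshold
    return None                            # start and goal are disconnected
```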

A related problem is that of finding the best-case coverage. The problem is formally stated as finding the path which offers the maximum coverage, that is, the maximum
support path PS in S, from I to F. The solution is obtained by a mathematical technique called
Delaunay triangulation. This is obtained from the Voronoi diagram by connecting the sites
whose polygons share a common edge. The best path PS will be a set of line segments from
the Delaunay triangulation, connecting some of the sensor nodes. The algorithm is again
similar to that used to find the maximum breach path, replacing the Voronoi diagram by the
Delaunay triangulation, and defining the edge costs proportional to the line segment lengths.
The maximum support path is hence formed by a set of line segments connecting some of the
sensor nodes.

Exposure: Exposure is defined as the expected ability of observing a target in the sensor field. It is formally defined as the integral of the sensing function along a path from a source point Ps to a destination point Pd. The sensing power of a node s at a point p is usually modelled as

S(s, p) = λ / d(s, p)^k

where λ and k are constants, and d(s, p) is the distance of p from s. Consider a network with sensors s1, s2, ..., sn. The total intensity at a point p, called the all-sensor field intensity, is given by

I_A(F, p) = Σ_{i=1}^{n} S(si, p)

The closest-sensor field intensity at p is I_C(F, p) = S(smin, p), where smin is the sensor closest to p.
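The two intensity measures can be sketched directly from the definitions; λ and k below are illustrative constants:

```python
def sensing_power(d, lam=1.0, k=2):
    """S(s, p) = lambda / d(s, p)**k for one sensor at distance d from p."""
    return lam / d ** k

def all_sensor_intensity(distances):
    """I_A(F, p): the sum of S(s, p) over every sensor's distance to p."""
    return sum(sensing_power(d) for d in distances)

def closest_sensor_intensity(distances):
    """I_C(F, p): S(s, p) for the sensor nearest to p only."""
    return sensing_power(min(distances))
```

With sensors at distances 1 and 2 from p, the all-sensor intensity is 1 + 0.25 = 1.25, while the closest-sensor intensity keeps only the dominant term, 1.0.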