Sensor Networks:
Directed Diffusion and other proposals
ECE 256
Sensor Networking: Why?
Data collection: a basic need
Will the volcano erupt? Need temperature/gas signatures
Are the poles melting due to global warming? Need ocean-current data
How many enemy tanks crossed through the jungle?
Did anyone enter my house while I was away?
Is human monitoring possible/feasible?
Not always
Why not just a sensor + an RF link? Why is a processor needed?
Too much data
In-network data distillation is necessary
Sensor Networking -- Vision
San Francisco's Moscone Center equipped with a sensor network
Sensor Hardware
(Glimpse)
Sensor Nodes
Motivating factors for emergence
Applications
Moore's Law in chips, MEMS
Advances in wireless technology
Challenges
Battery technology lagging
A canonical sensor node contains:
1. Sensor(s) to convert an energy form to an electrical impulse
e.g., to measure temperature
2. Microprocessor
3. Communications link e.g., wireless
4. Power source e.g., battery
Example: Berkeley Motes or Smart Dust
[Figure: anatomy of a 1-2 mm Smart Dust mote -- laser diode with passive CCR optical communication (III-V process), MEMS sensors (bulk/surface micromachining), analog I/O / DSP / control in COTS CMOS, power capacitor (multi-layer ceramic), solar cell (CMOS or III-V), and thick-film battery (sol-gel V2O5)]
Example Hardware
Size
Golem Dust: 11.7 cu. mm
MICA motes: Few inches
Everything on one chip: micro-everything
processor, transceiver, battery, sensors, memory, bus
MICA: 4 MHz, 40 Kbps, 4 KB SRAM / 512 KB serial flash; lasts 7 days at full blast on 2 x AA batteries
Examples
Spec, 3/03
4 KB RAM
4 MHz clock
19.2 Kbps, 40 feet
Supposedly $0.30
MICA: more recent (from Crossbow)
Similar i-motes from Intel
Types of Sensors
Micro-sensors (MEMS, Materials, Circuits)
acceleration, vibration, gyroscope, tilt, magnetic, heat, motion, pressure, temp,
light, moisture, humidity, barometric, sound
Chemical
CO, CO2, radon
Biological
pathogen detectors
Actuators too (mirrors, motors, smart surfaces, micro-robots)
Berkeley Family of Motes
Sensor Software
(TinyOS Glimpse)
Programming TinyOS
Use a variant of C called NesC
NesC defines components
A component is either
A module
e.g., a hardware abstraction such as a Clock or LED
or a user-defined software module
Or a configuration
a set of other components wired together
specifying how unimplemented method invocations map to implementations
A complete NesC application is one top-level configuration
Steps in writing/installing your NesC app
(applies to MICA Mote)
On your PC
Write NesC program
Compile to an executable for the mote
Plug the mote into the parallel port through a connector board
Install the program
On the mote
Turn the mote on, and it's already running your application
TinyOS component model
Component specifies:
Internal tasks and internal state
Commands and events
Component invocation is event driven
arising from hardware events
Static allocation avoids run-time overhead
Scheduling: dynamic
Explicit interfaces accommodate different applications
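As a purely conceptual illustration of the command/event/task split (real TinyOS components are written in NesC; the Python names below are made up):

    from collections import deque

    task_queue = deque()                  # FIFO, run-to-completion task scheduler

    def post(task):                       # tasks defer longer work out of the event path
        task_queue.append(task)

    class TimerComponent:
        def __init__(self, app):
            self.app = app
        def start(self, interval_ms):     # command: called downward by the application
            print("timer started:", interval_ms, "ms")
        def fired(self):                  # event: signaled upward (e.g., by a clock interrupt)
            self.app.timer_fired()

    class SenseApp:
        def __init__(self):
            self.timer = TimerComponent(self)
        def boot(self):
            self.timer.start(1000)
        def timer_fired(self):
            post(self.process_sample)     # keep the event handler short; do work in a task
        def process_sample(self):
            print("processing sample")

    app = SenseApp()
    app.boot()
    app.timer.fired()                     # simulate a hardware event
    while task_queue:
        task_queue.popleft()()            # scheduler drains tasks to completion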
A Complete TinyOS Application
[Component stack, top to bottom:
application: sensing application
routing: Routing Layer
messaging: Messaging Layer
packet: Radio Packet
byte: Radio Byte / Temp / Photo (SW)
bit: RFM / ADC / i2c / clocks (HW)]
Energy: a critical resource
Component Rate Startup time Current consumption
CPU Active 4 MHz N/A 4.6 mA
CPU Idle 4 MHz 1 µs 2.4 mA
CPU Suspend 32 kHz 4 ms 10 µA
Radio Transmit 40 kHz 30 ms 12 mA
Radio Receive 40 kHz 30 ms 3.6 mA
Photo 2000 Hz 10 ms 1.235 mA
I2C Temp 2 Hz 500 ms 0.150 mA
Pressure 10 Hz 500 ms 0.010 mA
Press Temp 10 Hz 500 ms 0.010 mA
Humidity 500 Hz 500 ms 0.775 mA
Thermopile 2000 Hz 200 ms 0.170 mA
Thermistor 2000 Hz 10 ms 0.126 mA
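A back-of-the-envelope use of these numbers (a sketch only; the 1% radio duty cycle, 10% CPU-idle fraction, and 2200 mAh battery capacity are illustrative assumptions, not values from the slide):

    # Rough average-current estimate for a duty-cycled node, using the table above.
    I_rx, I_idle, I_susp = 3.6, 2.4, 0.010      # mA: radio receive, CPU idle, CPU suspend
    d_rx, d_idle = 0.01, 0.10                   # assumed duty cycles (illustrative)
    avg_mA = d_rx * I_rx + d_idle * I_idle + (1 - d_rx - d_idle) * I_susp
    capacity_mAh = 2200                         # assumed 2 x AA capacity
    print(f"~{avg_mA:.3f} mA average, ~{capacity_mAh / avg_mA / 24:.0f} days lifetime")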
Sensor-node Operating System
Size of code and run-time memory footprint
Conventional embedded-system OSs are inapplicable
They need hundreds of KB of ROM
Workload characteristics
Continuous ? Bursty ?
Application diversity - Need to reuse sensor nodes
Energy consumption - Primary concern
Computation, Communication must be energy-aware
TinyOS: Summary
Matches both
Hardware requirements
power conservation, size
Application requirements
diversity (through modularity), event-driven, real time
AdHoc and Sensors
Ad hoc networks lack a killer application
Difficult to force co-operation among HUMAN users
Mobility/connectivity unreliable for a business model
Difficult to bootstrap critical mass required
Sensor networks more realizable
More defined applications
Single owner/administration easier to implement
Sensing is already an established process; just add networking to it
However
Ad hoc and sensor networks are both multi-hop wireless architectures
They therefore share several technical issues and challenges
Solutions in one domain are often applicable to the other
However, key differences exist
Energy constraint in sensor networks
Traffic models and characteristics
Other issues like coverage, fault-tolerance, etc.
This Talk
Directed Diffusion
Focusing on the shift from the ad hoc paradigm
The attention to energy conservation
Other routing proposals
SPIN, LEACH, Rumor Routing, etc.
SMAC
Energy-Aware Medium Access Control
Directed Diffusion
The Problem
A region requires event monitoring (harmful gas, vehicle motion, seismic vibration, temperature, etc.)
Deploy sensors forming a distributed network
On an event, sensed and/or processed information is delivered to the inquiring destination
[Figure: a sensor field with sensor sources around the event, directed diffusion of data, and a sensor sink]
The Proposal
Proposes an application-aware paradigm to facilitate efficient aggregation and delivery of sensed data to the inquiring destination
Challenges:
Scalability
Energy efficiency
Robustness / Fault tolerance in outdoor areas
Efficient routing (multiple source destination pairs)
IP or not to IP
IP is the pivot of wired/wireless networks
All networking protocol over and below IP
Should we stick to this model?
Comments ?
Directed Diffusion
Typical IP based networks
Requires unique host ID addressing
Application is end-to-end, routers unaware
Directed diffusion uses publish/subscribe
Inquirer expresses an interest, I, using attribute values
Sensor sources that can service I reply with data
Data Naming
Expressing an Interest
Using attribute-value pairs
E.g., Type = Wheeled vehicle // detect vehicle location
Interval = 20 ms // send events every 20ms
Duration = 10 s // Send for next 10 s
Field = [x1, y1, x2, y2] // from sensors in this area
Other interest-expressing schemes possible
E.g., hierarchical (different problem)
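A small sketch of an interest as attribute-value pairs, with a match check against a sensed event (field names are hypothetical, following the slide's example):

    # A hypothetical interest and sensed event, both as attribute-value pairs.
    interest = {"type": "wheeled vehicle", "interval_ms": 20,
                "duration_s": 10, "rect": (0, 0, 400, 400)}
    event = {"type": "wheeled vehicle", "location": (120, 80), "intensity": 0.6}

    def matches(event, interest):
        """Does a sensed event satisfy an interest's attributes?"""
        x, y = event["location"]
        x1, y1, x2, y2 = interest["rect"]
        return event["type"] == interest["type"] and x1 <= x <= x2 and y1 <= y <= y2

    print(matches(event, interest))   # True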
Gradient Set Up
Inquirer (sink) broadcasts exploratory interest, i1
Intended to discover routes between source and sink
Neighbors update their interest cache and forward i1
A gradient for i1 is set up toward the upstream neighbor
No source routes
A gradient is a weighted reverse link
Low gradient: few packets per unit time needed
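A minimal sketch of what a node might do on receiving an exploratory interest; the data structures and function names are assumptions, not the paper's API:

    # Interest cache: interest_id -> {upstream neighbor: requested data rate (gradient)}
    interest_cache = {}

    def on_interest(interest_id, from_neighbor, rate, neighbors, forward):
        entry = interest_cache.setdefault(interest_id, {})
        seen_before = bool(entry)
        entry[from_neighbor] = max(entry.get(from_neighbor, 0), rate)  # gradient toward upstream
        if not seen_before:
            for nbr in neighbors:          # re-flood only the first time this interest is seen
                if nbr != from_neighbor:
                    forward(nbr, interest_id, rate)

    on_interest("i1", "sink", 1, ["n1", "n2", "sink"],
                lambda nbr, iid, r: print("forward", iid, "to", nbr))
    print(interest_cache)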
Exploratory Gradient
[Figure: an exploratory request flooded from the sink sets up low-rate gradients pointing back from the event]
Bidirectional gradients established on all links through flooding
Event-data propagation
Event e1 occurs, matches i1 in sensor cache
e1 identified based on waveform pattern matching
Interest reply diffused down gradient (unicast)
Diffusion initially exploratory (low packet-rate)
Cache filters suppress previously seen data
Problem of bidirectional gradient avoided
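A companion sketch for the data path: suppress duplicates, then send matching data down every gradient (again, illustrative names only):

    interest_cache = {"i1": {"toward-sink": 1}}    # gradients, as in the previous sketch
    data_cache = set()                             # (interest_id, event_id) pairs already seen

    def on_data(interest_id, event_id, payload, send):
        if (interest_id, event_id) in data_cache:
            return                                 # duplicate: suppress (breaks loops)
        data_cache.add((interest_id, event_id))
        for neighbor, rate in interest_cache.get(interest_id, {}).items():
            if rate > 0:
                send(neighbor, interest_id, event_id, payload)   # unicast down each gradient

    on_data("i1", "e42", "reading", lambda nbr, *rest: print("sent to", nbr))
    on_data("i1", "e42", "reading", lambda nbr, *rest: print("sent to", nbr))  # suppressed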
Reinforcement
[Figure: a sensor field with the event, the sink, and the reinforced gradient path between them]
From the exploratory gradients, reinforce the optimal path for high-rate data download (unicast)
By requesting a higher-rate i1 on the optimal path
Exploratory gradients still exist; useful under faults
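One plausible local rule for picking the neighbor to reinforce is "the neighbor that delivered a new event first"; a sketch with an assumed interface:

    # Positive reinforcement: re-send the interest at a higher rate to the neighbor
    # that delivered previously unseen data first (interface names are assumptions).
    def choose_and_reinforce(deliveries, interest_id, high_rate, send_interest):
        # deliveries: list of (timestamp, neighbor) for the latest new event
        _, best = min(deliveries)
        send_interest(best, interest_id, rate=high_rate)
        return best

    print(choose_and_reinforce([(12.4, "B"), (12.1, "M")], "i1", 100,
                               lambda *a, **k: None))   # reinforces "M"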
Path Failure / Recovery
Link failure detected through a reduced rate or data loss
Choose the next-best link (i.e., compare links based on the infrequent exploratory downloads)
Negatively reinforce the lossy link
Either send i1 with the base (exploratory) data rate
Or allow the neighbor's cache to expire over time
[Figure: Src, intermediate nodes D, M, A, B, C, and the Sink; link A-M is lossy, so A reinforces B and B reinforces C, while A negatively reinforces M and M negatively reinforces D; D need not act further]
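And a sketch of the negative-reinforcement side, with an assumed loss threshold:

    def check_link(neighbor, interest_id, observed_rate, requested_rate, base_rate,
                   send_interest, loss_factor=0.5):
        # If a reinforced neighbor's observed event rate falls well below what was
        # requested, demote it back to the exploratory (base) rate.
        if observed_rate < loss_factor * requested_rate:
            send_interest(neighbor, interest_id, rate=base_rate)  # or let its cache expire
            return "negatively reinforced"
        return "ok"

    print(check_link("M", "i1", observed_rate=2, requested_rate=100, base_rate=1,
                     send_interest=lambda *a, **k: None))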
Loop Elimination
[Figure: nodes D, M, A with upstream nodes P and Q forming a loop]
M gets the same data from both D and P, but P always delivers late due to looping
M negatively reinforces (nr) P, P nr Q, Q nr M
Loop {M, Q, P} eliminated
Conservative nr is useful for fault resilience
Simulation Setup & Metrics
ns-2, 50 nodes in a 160 m x 160 m field, range 40 m
Node density maintained, 802.11 MAC
5 random sources in a 70 m x 70 m area, 5 random sinks
Average Dissipated Energy
Energy dissipated per node / # distinct events seen by sinks
Average Delay
Latency from event transmission at the source to reception at a sink
Distinct-Event Delivery Ratio
Ratio of # distinct events received by sinks to # events sent
Average Dissipated Energy
[Plot: average dissipated energy vs. network size for flooding, omniscient multicast, and diffusion]
In-network aggregation reduces redundancy in DD
Flooding performs poorly because of the multiple paths from source to sink
Delay
[Plot: average delay vs. network size for flooding, omniscient multicast, and diffusion]
DD finds least-delay paths, just as OM does; this is encouraging
Flooding incurs latency due to high MAC contention and collisions
Event Delivery Ratio under Node Failures
[Plot: delivery ratio with 0%, 10%, and 20% node failures]
Delivery ratio degrades with a higher % of node failures
Graceful degradation indicates effective negative reinforcement
Conclusion
Directed diffusion, a paradigm proposed for
event monitoring sensor networks
Energy efficiency achievable
Diffusion mechanism is resilient to failures (fault tolerant)
Conservative negative reinforcements prove useful
A careful MAC protocol, designed for such
specifics, can yield further performance gains
Questions?
An Energy-Efficient MAC Protocol for
Wireless Sensor Networks (S-MAC)
Wei Ye, John Heidemann, Deborah Estrin
Major sources of energy waste
Collision
Overhearing
Control Overhead
Idle Listening
Listening for possible traffic that is never sent
Costs 50%-100% of the energy of receiving
Avenues to Reduce Energy Consumption
(1) Periodic listen and sleep
(2) Collision avoidance
(3) Overhearing avoidance
(4) Message passing
(1) Periodic Listen and Sleep
The main idea
Put nodes to sleep periodically
This is called duty cycling
However, ensure that neighbors' sleep/wake-up schedules are synchronized
Listen/Sleep Schedule Assignment
Choosing a Schedule (1)
Synchronizer (node A)
Listens for a SYNC for an amount of time, then goes to sleep after time t
If it hears no SYNC, it selects its own schedule
Broadcasts its SYNC immediately
Follower (node B)
Listens for an amount of time and hears A's SYNC, so it follows A's schedule
Goes to sleep after time t - td
Rebroadcasts the SYNC after a random delay td
[Figure: listen/sleep timelines of A and B, with B's SYNC broadcast offset by td]
Listen/Sleep Schedule Assignment
Choosing a Schedule (2)
1. B receives A's schedule and rebroadcasts it
2. B then hears a different SYNC from C
3. B adopts both schedules
B only needs to broadcast its SYNC once
Nodes only rarely adopt multiple schedules
[Figure: listen/sleep timelines of A, B, and C; B's listen periods cover both A's and C's schedules]
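A condensed sketch of the schedule-selection rule from the last two slides (timing constants and message formats are made up):

    import random

    def choose_schedule(heard_syncs, listen_period=0.3, now=0.0):
        """heard_syncs: list of (neighbor, next_sleep_time) from SYNC packets heard
        during the initial listen period. Returns (schedules, action)."""
        if not heard_syncs:                                  # synchronizer
            own = {"owner": "self", "next_sleep": now + listen_period}
            return [own], "broadcast own SYNC immediately"
        schedules = []                                       # follower
        for _, next_sleep in heard_syncs:
            sched = {"owner": "neighbor", "next_sleep": next_sleep}
            if sched not in schedules:                       # keep each distinct schedule heard
                schedules.append(sched)
        td = random.uniform(0.0, 0.1)                        # random delay before rebroadcast
        return schedules, f"rebroadcast SYNC after td = {td:.3f} s"

    print(choose_schedule([]))                # node A: becomes a synchronizer
    print(choose_schedule([("A", 5.0)]))      # node B: follows A's schedule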
Keeping Clocks in SYNC
SYNC packets must not collide
Reserve separate time window for SYNC transmission
(2) Collision Avoidance
Identical to 802.11
RTS/CTS
Virtual carrier sense (NAV)
Physical carrier sense
(3) Overhearing Avoidance
Neighbors go to sleep on overhearing an RTS/CTS
A is talking to B
D receives the CTS from B -> sleep
D's transmission would collide with B's reception
C receives the RTS from A -> sleep
C could not correctly receive a CTS/DATA from E anyway while A transmits
All immediate neighbors of the transmitting and receiving nodes sleep
How long should they sleep?
C and D update their NAV
They keep sleeping until the NAV counts down to zero
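A sketch of the overhearing-avoidance rule; the NAV bookkeeping mirrors 802.11, and the frame fields are assumptions:

    # On overhearing an RTS or CTS meant for someone else, set the NAV to the
    # announced duration and sleep until it expires.
    class OverhearingNode:
        def __init__(self, node_id):
            self.id = node_id
            self.nav_until = 0.0        # network allocation vector, as an absolute time
            self.asleep = False

        def on_control_frame(self, frame, now):
            if frame["dst"] == self.id:
                return                   # the exchange involves us: stay awake
            self.nav_until = max(self.nav_until, now + frame["duration"])
            self.asleep = True           # power down the radio; no point listening or sending

        def tick(self, now):
            if self.asleep and now >= self.nav_until:
                self.asleep = False      # NAV counted down to zero: wake up

    c = OverhearingNode("C")
    c.on_control_frame({"src": "A", "dst": "B", "type": "RTS", "duration": 0.05}, now=1.0)
    c.tick(1.06)
    print(c.asleep)    # False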
(4) Message Passing
How to transmit a long message?
Transmitting it as one long packet is inefficient: a single bit error forces retransmitting the whole message
Sending many small packets needs RTS/CTS/ACK for each
S-MAC: divide the message into fragments and transmit them in a burst
One RTS/CTS reserves the medium for the entire sequence
Fragment errors are recovered with per-fragment ACKs
No extra RTS/CTS per fragment
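A sketch of message passing: one RTS/CTS reservation, then per-fragment ACKs with retransmission (the lossy channel and fragment size are illustrative):

    import random

    def send_long_message(data, frag_size=32, loss_prob=0.1):
        """Fragment a long message; reserve the medium once, ACK every fragment."""
        frags = [data[i:i + frag_size] for i in range(0, len(data), frag_size)]
        log = [f"RTS/CTS reserves medium for all {len(frags)} fragments"]
        for k in range(len(frags)):
            while True:
                if random.random() > loss_prob:              # simulated lossy channel
                    log.append(f"frag {k} -> ACK {k}")
                    break
                log.append(f"frag {k} lost; extend reservation and retransmit")
        return log

    for line in send_long_message(b"x" * 100):
        print(line)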
Acknowledgment to Prof. Jun Yang
Neighbors can sleep for the whole message
Message Passing
Advantages:
Energy saving:
Neighbors go to sleep when they sense the transmission
Reduces control overhead (one RTS/CTS for many fragments), even though multiple ACKs are sent
Disadvantage:
Node-to-node fairness is reduced
However, message-level latency is reduced
Experiment
Listen time: 300 ms
Sleep time: 1 s
SYNC: every 13 s (10 listen/sleep periods)
A, B, C use the same schedule
[Plot: measured energy consumption of 802.11, message passing with overhearing avoidance (OA), and full S-MAC, under heavy and light traffic]
Energy savings come from periodic sleep and from avoiding overhearing via message passing
OA alone: under light traffic, nodes still keep listening for quite a long time
SYNC adds some overhead
Overhearing avoidance still benefits
[Plot: heavy traffic vs. light traffic]
Questions?
Energy Efficient Routing in Ad Hoc Disaster
Recovery Networks:
An Application Perspective
Motivation
Disaster recovery is an emerging application for ad hoc/sensor networks
During the Sep 11 attacks, survivors were detected through mobile-phone signals
People are often buried under debris after earthquakes
New RFID or smart badge technologies
Each person wears a badge that is a transceiver
Sends out very low rate signals about human location
Information is collected at central stations at the periphery
Problem
Given some pkt generation rate at each badge
Design routing strategy that maximizes network
lifetime
Problem formulated as an LP (linear program)
Maximize the minimum lifetime
subject to flow-conservation constraints at each node
and capacity constraints on the links
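One common way to write such a maximum-lifetime LP (a sketch of the generic formulation, not necessarily this paper's exact notation): let f_ij be the traffic node i forwards to j over the lifetime T, g_i the badge's generation rate, E_i its battery energy, e_tx and e_rx the per-packet costs, and c_ij the link capacity.

    \begin{aligned}
    \max\;& T\\
    \text{s.t.}\;& \textstyle\sum_j f_{ij} - \sum_j f_{ji} = T\,g_i
        && \text{flow conservation at every non-sink node } i\\
    & \textstyle\sum_j e_{tx} f_{ij} + \sum_j e_{rx} f_{ji} \le E_i
        && \text{energy budget of node } i\\
    & 0 \le f_{ij} \le c_{ij}\,T
        && \text{link capacity}
    \end{aligned}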
Approach
Existing simplex techniques can be used to
solve the problem
Computationally intensive due to the many iterations needed for convergence
Paper proposes binary search on network
lifetime
In plain words, a network lifetime (T) is chosen and
applied to see if there exists a feasible flow assignment
If not, (T/2) is tried, else (2T) until convergence
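A sketch of the doubling-plus-bisection search over the lifetime T; feasible(T) stands for the flow-feasibility check (an LP or max-flow oracle) and is a hypothetical placeholder:

    def max_lifetime(feasible, t0=1.0, tol=1e-3):
        """Find (approximately) the largest T for which a feasible flow assignment exists."""
        lo, hi = 0.0, t0
        while feasible(hi):          # grow: keep doubling while still feasible
            lo, hi = hi, 2 * hi
        while hi - lo > tol:         # shrink: bisect on the feasibility boundary
            mid = (lo + hi) / 2
            if feasible(mid):
                lo = mid
            else:
                hi = mid
        return lo

    # Toy oracle: pretend the network supports any lifetime up to 42.0 time units.
    print(round(max_lifetime(lambda T: T <= 42.0), 2))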
Summary
Complexity of O(n^3 log T)
n^3 for finding a feasible assignment of flows
log T for the binary search
However, a distributed version of this protocol
is only available for a single origin node
Extending it to multiple badges is future work
Other Research Challenges in Sensors
Coverage
The union of all sensing ranges needs to cover the entire region
Time synchronization
Data Aggregation
Calculating functions over a spatial distribution of sensors
Data Dissemination
Rumor routing, ant colonies, swarm intelligence
Motion tracking, object guiding
Sensors + actuators -> mobile robots!
Questions?
Message Complexity
Grid topology: N = 25 nodes, n = 5 sources, m = 3 sinks
Nodes talk with adjacent or diagonal nodes
[Figure: 5 x 5 grid with the sources and sinks marked]
Flooding: unrestricted broadcast
Each interest is broadcast by each node -> nN messages
A message is received twice over each link -> total # receptions = 2n x (# of links)
Total msg. cost = nN + 4n(√N - 1)(2√N - 1) = O(nN)
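Plugging the slide's numbers in as a sanity check (the link count is for a √N x √N grid with diagonal neighbors):

    import math

    N, n = 25, 5                              # grid nodes and sources, as on the slide
    s = math.isqrt(N)                         # 5 x 5 grid
    links = 2 * (s - 1) * (2 * s - 1)         # horizontal + vertical + diagonal links = 72
    transmissions = n * N                     # each node rebroadcasts each interest once
    receptions = 2 * n * links                # each interest is received twice per link
    print(links, transmissions + receptions)  # 72 845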
Message Complexity II
Omniscient Multicast (OM): multicast trees rooted at each source
(Cost of tree establishment not counted.)
Overhead of 2 receptions on each link of a tree Tj
Total msg. cost = 2 x |{distinct links l : l ∈ T1 ∪ ... ∪ Tn}|
Expressing all trees in terms of a common tree T1:
Message complexity = O(nN) asymptotically, for m ≤ N
Directed Diffusion: similar approach using rooted trees
Message complexity = O(nN) asymptotically, for m ≤ N
But the cost is lower than OM's, because DD can perform duplicate suppression on common links; more gain with more sources
TinyOS design point
Bursty dataflow-driven computations
Multiple data streams => concurrency-intensive
Real-time computations (hard and soft)
Power conservation
TinyOS:
Event-driven execution (reactive mote)
Size
Accommodate diverse set of applications (plug n play)
Modular structure (components) and clean interfaces
TinyOS Facts
Software Footprint 3.4 KB
Power Consumption on Rene Platform
Transmission cost: 1 µJ/bit
Inactive state: 5 µA
Peak load: 20 mA
Concurrency support:
At peak load the CPU is asleep 50% of the time
Events propagate through the stack in < 40 µs
TinyOS: More Performance Numbers
Byte copy: 8 cycles (2 µs)
Post event: 10 cycles
Context switch: 51 cycles
Interrupt: h/w 9 cycles, s/w 71 cycles
TinyOS: Size
Code size for an ad hoc networking application
[Bar chart of per-component code size: interrupts, message dispatch, initialization, C runtime, light sensor, clock, scheduler, LED control, messaging layer, packet layer, radio interface, routing application, radio byte encoder]
Scheduler: 144 bytes of code
Totals: 3430 bytes of code, 226 bytes of data
Contribution
Network addressing is data centric
Probably the right approach for sensor-type applications
Application-awareness a beneficial tradeoff
Data aggregation can improve energy efficiency
Better bandwidth utilization
Notion of gradient (exploratory and reinforced)
Fault tolerance
Implementation on Berkeley motes
Network API, Filter API
Critique
Choice of path does not maximize aggregation
The least-delay path is not necessarily the best path for aggregation
Exploratory paths improve fault tolerance
But at the cost of additional msg./energy overhead
Overhead analysis omits the exploratory paths
Data overlap can be suppressed
Two sources reporting overlapping data could be combined
Simulations assume idle energy = 10% of receive, 5% of transmit
This explains the poor energy performance of flooding
Not realistic numbers; an optimistic assumption
Some Other Proposals for Sensor Routing
Rumor Routing
LEACH
SPIN
Rumor Routing
LEACH
Proposes clustering of sensors + cluster leaders
Can aggregate data in single (local) cluster
Rotating cluster head balances energy consumption
Cluster formation distributed and energy efficient
[Figure: sensors grouped into clusters, each with a cluster head]
The cluster head is always awake
Member nodes can sleep when not transmitting
LEACH The Protocol
Time is divided into rounds
A node elects itself as the cluster head
Higher residual energy, higher probability to be head
Close-by sensors join this cluster-head
Cluster head does TDMA scheduling and gathers data
Gathered data compressed based on spatial correlation
Transmits data to Base Station (@ higher power)
In the next round, another cluster head elected
Probabilistic load balancing
Network lifetime can increase manifold
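A sketch of the per-round self-election; the threshold is the classic LEACH rule with desired head fraction P, and the residual-energy weighting mentioned above is shown only as a simple, assumed bias:

    import random

    def elect_cluster_heads(nodes, r, P=0.05):
        """nodes: id -> {'last_head_round': int or None, 'energy': float in (0, 1]}.
        Returns the ids that self-elect as cluster heads in round r."""
        heads, period = [], round(1 / P)
        for nid, st in nodes.items():
            if st['last_head_round'] is not None and r - st['last_head_round'] < period:
                continue                                   # skip: was a head earlier this epoch
            threshold = P / (1 - P * (r % period))         # classic LEACH threshold T(n)
            threshold *= st['energy']                      # assumed residual-energy bias
            if random.random() < threshold:
                heads.append(nid)
                st['last_head_round'] = r
        return heads

    nodes = {i: {'last_head_round': None, 'energy': random.uniform(0.5, 1.0)}
             for i in range(100)}
    print(len(elect_cluster_heads(nodes, r=0)))            # a handful of heads per round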
SPIN: Sensor Protocols for Information via Negotiation
Flooding: many sensors transmit the same data
Redundant
Make sensors disseminate spatially/temporally
disjoint data sets
Name data with meta-data that defines its space/time properties
Sensors compare overheard data with self-sensed data
Combine data to minimize overlap
Make sensors resource-adaptive
When the battery is low, perform minimal activity
The SPIN 3-Step Protocol
[Figure sequence: ADV, REQ, and DATA exchanges spreading data hop by hop; note the data packets sent by node B]
SPIN is effective when DATA sizes are large: the REQ/ADV overhead gets amortized
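A sketch of the ADV/REQ/DATA negotiation driven by meta-data (the meta-data naming and node interface are assumptions):

    # SPIN-style 3-step negotiation: advertise meta-data, request only unseen data, send it.
    class SpinNode:
        def __init__(self, name):
            self.name = name
            self.store = {}                                  # meta-data -> data actually held

        def advertise(self, neighbors):
            for nbr in neighbors:
                nbr.on_adv(self, set(self.store))            # ADV carries meta-data only

        def on_adv(self, sender, meta):
            missing = meta - set(self.store)
            if missing:
                sender.on_req(self, missing)                 # REQ only what we don't have

        def on_req(self, requester, meta):
            for m in meta:
                requester.on_data(m, self.store[m])          # DATA: the (large) payload

        def on_data(self, meta, data):
            self.store[meta] = data                          # could re-advertise onward

    a, b = SpinNode("A"), SpinNode("B")
    a.store["temp@region1"] = "big sensor reading ..."
    a.advertise([b])
    print(sorted(b.store))    # ['temp@region1']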