Edge, Fog Connectivity and Protocols
An edge device is any piece of hardware that controls data flow at the boundary between two
networks. Edge devices fulfill a variety of roles, depending on what type of device they are, but
they essentially serve as network entry -- or exit -- points. Some common functions of edge devices
are the transmission, routing, processing, monitoring, filtering, translation and storage of data
passing between networks. Edge devices are used by enterprises and service providers.
Cloud computing and the internet of things (IoT) have elevated the role of edge devices, ushering
in the need for more intelligence, computing power and advanced services at the network edge.
This concept, where processes are decentralized and occur in a more logical physical location, is
referred to as edge computing.
One of the most common types of edge devices is an edge router. Usually deployed to connect a
campus network to the internet or a WAN, edge routers chiefly function as gateways between
networks. A similar type of edge device, known as a routing switch, can also be used for this
purpose, although routing switches typically offer less-comprehensive features than full-fledged
routers.
Firewalls can also be classified as edge devices, as they sit on the periphery of one network and
filter data moving between internal and external networks.
Championed by Cisco, IBM, and Dell, edge computing processes client data at the periphery of the
network, as close to the originating source as possible. To enable quicker responses, intelligence is
pushed from the cloud to the edge, localizing certain kinds of analysis and decision-making. The
sheer number of networked devices in the IoT, the growth of mobile computing, and the decreasing
cost of computer components are all driving forces behind the move toward an edge computing
architecture.
Since transmitting massive amounts of raw data over a network puts tremendous load on network
resources, it is much more efficient to process data near its source and send only the data that has
value over the network to a remote data center. Time sensitive data will be processed in an edge
computing architecture at the point of origin or sent to an intermediary server located in close
geographical proximity to the client. Less time sensitive data is sent to the cloud for historical
analysis or long-term storage. Edge computing offers several advantages, such as improving time
to action and reducing response time to milliseconds, while also conserving network resources.
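The routing rule described above can be sketched in a few lines. This is a minimal illustration, not a production gateway; the data types and the set of latency-sensitive categories are assumptions made for the example.

```python
# Hypothetical classification: which reading types are time sensitive
# is an assumption for illustration only.
LATENCY_SENSITIVE_TYPES = {"alarm", "actuator_feedback"}

def route_reading(reading):
    """Decide where a reading is handled, per the edge model above:
    time-sensitive data is processed at the point of origin, while
    less time-sensitive data goes to the cloud for historical
    analysis or long-term storage."""
    if reading["type"] in LATENCY_SENSITIVE_TYPES:
        return "edge"   # process locally for millisecond response
    return "cloud"      # forward for batch analytics / storage

readings = [
    {"type": "alarm", "value": 98.7},
    {"type": "temperature_log", "value": 21.3},
]
decisions = [route_reading(r) for r in readings]
print(decisions)  # ['edge', 'cloud']
```

Only the cloud-bound readings traverse the wide-area network, which is how the bandwidth saving described above is realized.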
Benefits of Edge Computing/Connectivity
Edge for the IoT brings potential benefits for many IoT deployments, including decreased response
time along with increased communications efficiency, compared to using the cloud to process and
store data. For example, many IoT processes can have a high level of automation at the edge,
resulting in low latency for rapid data processing. Only the most important information then needs
to be sent to the cloud for further action or investigation.
Benefits of IoT edge computing that have been identified include:
low latency: By its nature, the edge is closer to the IoT device than the core or cloud. This
means a shorter round-trip for communications to reach local processing power,
significantly speeding up data communications and processing.
longer battery life for IoT devices: Being able to open communication channels for shorter
periods of time, thanks to improved latency, means that the battery life of battery-powered
IoT devices could be extended.
Distributed ledger support: A distributed ledger, or a hybrid open source ledger
implementation such as BigchainDB, could be used to obtain the advantages of a distributed
ledger along with features of the NoSQL database MongoDB, on which it is based.
More efficient data management: Processing data at the edge makes simple data quality
management, such as filtering and prioritization, more efficient. Completing this data
administration at the edge means that cleaner data sets can be presented to cloud-based
processing for further analytics.
Access to data analytics and AI: Edge processing power and data storage could all be
combined to enable analytics and AI, which require very fast response times or involve the
processing of large ‘real-time’ data sets that are impractical to send to centralized systems.
Resilience: The edge offers more possible communication paths than a centralized model.
This distribution means that resilience of data communications is more assured. If there is
a failure at the edge, other resources are available to provide continuous operation.
Scalability: As processing is decentralized with the edge model, less load should ultimately
be placed on the network. This means that scaling IoT devices should have less resource
impact on the network, especially if application and control planes are located at the edge
alongside the data.
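The low-latency benefit above is largely a matter of distance: a shorter round trip to local processing power. A back-of-the-envelope model makes this concrete. The distances and processing delay below are illustrative assumptions, and the propagation speed is the common rule of thumb of roughly two-thirds the speed of light in fibre.

```python
SPEED_IN_FIBER_KM_S = 200_000  # ~2/3 c, a common rule-of-thumb figure

def round_trip_ms(distance_km, processing_ms):
    """Round-trip time = propagation out and back plus processing delay."""
    propagation_ms = 2 * distance_km / SPEED_IN_FIBER_KM_S * 1000
    return propagation_ms + processing_ms

# Assumed distances: an edge node tens of km away vs. a distant cloud region.
edge_rtt = round_trip_ms(distance_km=50, processing_ms=1)
cloud_rtt = round_trip_ms(distance_km=2000, processing_ms=1)
print(f"edge: {edge_rtt:.1f} ms, cloud: {cloud_rtt:.1f} ms")
```

Even before queuing and per-hop routing delays (which favour the edge further), propagation alone puts the edge round trip an order of magnitude below the cloud round trip.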
FOG
Cloud systems are generally located on the Internet, a large network of unknown devices of
varying speeds, technologies, and topologies that is under no direct control. As a result,
traffic can be routed over the network but with no quality of service measures applied, as
QoS has to be defined at every hop of the journey. There is also the issue of security: as
data traverses many autonomous system routers along the way, the risk of confidentiality
and integrity being compromised increases the farther the destination is from the data
source.
IoT data is very latency sensitive and requires mobility support in addition to location
awareness. However, the IoT also benefits from the cloud model, which handles data storage,
compute, and network requirements dynamically, in addition to providing cloud-based Big
Data analysis and real-time streaming analytics. So how can we get the two requirements to
coexist?
The answer is to use the fog.
The fog is a term first coined by Cisco to describe cloud infrastructure located close to
the network edge. The fog in effect extends the cloud to the edge devices and, like the
cloud, delivers services such as compute, storage, network, and application delivery. The
fog differs from the cloud by being situated close to the proximity network border,
typically connecting to a service provider's edge router, thereby reducing latency and
improving QoS.
Fog deployments have several advantages over cloud deployments, such as low latency,
very low jitter, client and server only one hop away, definable QoS and security, and
support for mobility, location awareness, and wireless access. In addition, the fog does not
work in a centralized cloud location, but is distributed around the network edge, reducing
latency and bandwidth requirements as data is not aggregated over a single cloud channel
but distributed to many edge nodes. Similarly, the fog avoids slow response times and
delays by distributing workloads across several edge node servers rather than a few
centralized cloud servers.
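The workload-distribution idea above can be sketched as a simple placement policy: send each job to the nearest fog node that still has spare capacity, rather than to one central cloud. The node names, latencies, and load cap below are hypothetical values for illustration.

```python
# Hypothetical fog nodes, each with a one-hop latency and current load.
fog_nodes = [
    {"name": "fog-a", "latency_ms": 2, "load": 0.9},
    {"name": "fog-b", "latency_ms": 3, "load": 0.2},
    {"name": "fog-c", "latency_ms": 8, "load": 0.1},
]

def pick_node(nodes, max_load=0.8):
    """Pick the lowest-latency fog node with spare capacity, spreading
    work across edge nodes instead of aggregating it in one cloud."""
    candidates = [n for n in nodes if n["load"] < max_load]
    return min(candidates, key=lambda n: n["latency_ms"])

print(pick_node(fog_nodes)["name"])  # fog-b: nearest node under the load cap
```

Here fog-a is nearest but over the assumed load cap, so the job lands on fog-b; this is how fog avoids both long round trips and the slow response times of a few overloaded central servers.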
Some examples of fog computing in an IoT context are:
The fog network is ideally suited to the IoT connected-vehicles use case, as
connected cars have a variety of wireless connection methods, such as car-to-car
and car-to-access-point, which can use Wi-Fi or 3G/4G communications but require
low-latency responses. Along with SDN network concepts, fog can address
outstanding issues with vehicular networks, such as long latency, irregular
connections, and high packet loss, by supplementing vehicle-to-vehicle
communications with vehicle-to-infrastructure communication and, ultimately,
unified control.
Fog computing addresses many of the severe problems cloud computing has
with network latency and congestion over the Internet; however, it cannot
completely replace cloud computing, which will always have a place due to its
ability to store Big Data and perform analytics on massive quantities of data.
As Big Data analytics is a major part of the IoT, cloud computing will also
remain highly relevant to the overall architecture.
Three Layer Edge-Fog-Cloud architecture
Two Layer Cloud-Edge and Cloud-Fog architectures
Protocols
There are many industry protocols that facilitate different styles of device communication. An
edge solution should support the most common of these protocols; examples include Z-Wave,
ZigBee, and Bluetooth LE.
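One common way an edge solution supports multiple protocols is an adapter layer that normalizes frames from each radio into a common representation. The sketch below is purely illustrative: the adapter classes and their decoded formats are hypothetical, and real Z-Wave, ZigBee, and BLE stacks have far richer framing than this.

```python
# Hypothetical protocol adapters for an edge gateway (illustrative only).
class ZWaveAdapter:
    def decode(self, frame: bytes) -> dict:
        return {"protocol": "z-wave", "payload": frame.hex()}

class BleAdapter:
    def decode(self, frame: bytes) -> dict:
        return {"protocol": "ble", "payload": frame.hex()}

ADAPTERS = {"z-wave": ZWaveAdapter(), "ble": BleAdapter()}

def ingest(protocol: str, frame: bytes) -> dict:
    """Dispatch a raw frame to the adapter for its protocol,
    yielding one normalized record regardless of the radio used."""
    adapter = ADAPTERS.get(protocol)
    if adapter is None:
        raise ValueError(f"unsupported protocol: {protocol}")
    return adapter.decode(frame)

print(ingest("ble", b"\x01\x02"))  # {'protocol': 'ble', 'payload': '0102'}
```

Adding support for a new protocol then means registering one more adapter, without touching the rest of the edge pipeline.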
DDS/RTPS as Edge/Fog computing protocols
The main Edge/Fog computing goal is to minimize latency and save bandwidth, adding some
intelligence at the access point to optimize typical IoT scenarios. Examples of Edge/Fog
computing scenarios include low-power devices, real-time requirements, and wireless networks.
Taking into account these requirements, a good Edge/Fog computing protocol should be
lightweight, transport-agnostic, and customizable. Heavyweight protocols such as Web Services
or REST are discouraged because they consume a lot of resources in terms of bandwidth and
CPU use. Remember that IoT deployments include small sensors, embedded devices, and
low-bandwidth data links.
The Data Distribution Service (DDS) middleware standard is widely known for its use in
demanding real-time distributed systems in aerospace and defense, and the main features of its
underlying protocol, RTPS (Real-Time Publish-Subscribe Protocol), make it very suitable for
the goals of Edge/Fog computing:
Designed for real-time distributed systems: low latency communications and low
bandwidth use.
Highly customizable: It allows different Quality of Service parameters to be set up for
different kinds of data or different data links.
Wireless enabled: It can be deployed over any transport or data link, including
disconnected and intermittent links such as low-bandwidth wireless networks. RTPS
supports multicast, a highly desirable feature in such networks, and it can also
be deployed over non-IP-based data links.
Lightweight: It requires very few resources and can be implemented on low-power
devices.
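The publish-subscribe model underlying DDS, with per-topic QoS, can be illustrated with a toy in-process sketch. To be clear: this is a conceptual illustration only, not the DDS API or the RTPS wire protocol; the only QoS modelled is a KEEP_LAST-style history depth, which lets late-joining subscribers receive retained samples.

```python
from collections import deque

class Topic:
    """Toy in-process topic with a DDS-like HISTORY QoS (keep-last depth).
    Conceptual sketch only -- not the real DDS/RTPS API."""
    def __init__(self, depth=1):
        self.history = deque(maxlen=depth)  # retain the last `depth` samples
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)
        for sample in self.history:         # late joiners get retained samples
            callback(sample)

    def publish(self, sample):
        self.history.append(sample)
        for cb in self.subscribers:
            cb(sample)

temps = Topic(depth=2)
received = []
temps.publish({"sensor": "t1", "c": 21.0})
temps.subscribe(received.append)            # joins late, still sees last sample
temps.publish({"sensor": "t1", "c": 21.5})
print(len(received))  # 2
```

Real DDS decouples publishers and subscribers the same way, by topic rather than by address, and negotiates many more QoS policies (reliability, deadline, durability) per data link, which is what makes it customizable per kind of data as described above.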