Fog, Edge and Cloud in IoT
Fog Computing in IoT
Fog computing is a
decentralized computing infrastructure in which data, compute, storage and
applications are located somewhere between the data source and the cloud. Like
edge computing, fog computing brings the advantages and power of the cloud
closer to where data is created and acted upon. Many people use the terms fog computing and edge computing interchangeably because both involve bringing
intelligence and processing closer to where the data is created. This is often
done to improve efficiency, though it might also be done for security and
compliance reasons.
The fog metaphor comes from meteorology: fog is a cloud that forms close to the ground, just as fog computing concentrates compute resources at the edge of the network. The term is often associated with
Cisco; the company's product line manager, Ginny Nichols, is believed to have
coined the term. Cisco Fog Computing is a registered name; fog computing is
open to the community at large.
How does fog computing work?
Fog networking complements
-- doesn't replace -- cloud computing; fogging enables short-term analytics at
the edge, while the cloud performs resource-intensive, longer-term analytics.
Although edge devices and sensors are where data is generated and collected,
they sometimes don't have the compute and storage resources to perform advanced
analytics and machine learning tasks. Though cloud servers have the power to do
this, they are often too far away to process the data and respond in a timely
manner.
In addition, having all
endpoints connecting to and sending raw data to the cloud over the internet can
have privacy, security and legal implications, especially when dealing with
sensitive data subject to regulations in different countries. Popular fog computing
applications include smart grids, smart cities, smart buildings, vehicle
networks and software-defined networks.
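To make this division of labor concrete, here is a minimal sketch of a fog node in Python. Everything in it -- the class name, thresholds, window size and the stand-in print statements -- is an illustrative assumption, not a real product API: recent readings are analyzed locally for near-real-time alerting, while raw data is batched for occasional upload to the cloud, where the longer-term analytics would run.

from collections import deque
from statistics import mean

class FogNode:
    """Hypothetical fog node: short-term analytics run locally,
    while raw data is batched for longer-term analysis in the cloud."""

    def __init__(self, window=10, alert_threshold=80.0, batch_size=100):
        self.recent = deque(maxlen=window)  # rolling window for local analytics
        self.batch = []                     # raw readings queued for the cloud
        self.alert_threshold = alert_threshold
        self.batch_size = batch_size

    def ingest(self, reading: float):
        self.recent.append(reading)
        self.batch.append(reading)
        # Short-term analytics at the edge: react locally, in near-real time.
        if mean(self.recent) > self.alert_threshold:
            print("ALERT: rolling average above threshold -- acting at the edge")
        # Resource-intensive, longer-term analytics belong to the cloud, so
        # raw data travels there only occasionally, in batches.
        if len(self.batch) >= self.batch_size:
            print(f"Uploading {len(self.batch)} readings to the cloud")
            self.batch.clear()

node = FogNode(window=3, alert_threshold=85.0, batch_size=5)
for value in [70.0, 75.0, 90.0, 95.0, 88.0, 92.0]:
    node.ingest(value)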
How and why is fog computing used?
There are any number of
potential use cases for fog computing. One increasingly common use case for fog
computing is traffic control. Because sensors -- such as those used to detect
traffic -- are often connected to cellular networks, cities sometimes deploy
computing resources near the cell tower. These computing capabilities enable
real-time analytics of traffic data, thereby enabling traffic signals to
respond in real time to changing conditions.
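As a toy illustration of that idea, the sketch below splits one signal cycle among intersection approaches in proportion to real-time vehicle counts. The function name, cycle length and counts are all invented for the example; a real deployment would be far more sophisticated.

def green_times(counts, cycle_s=60, min_green_s=10):
    """Split one signal cycle among approaches in proportion to
    real-time vehicle counts (hypothetical fog-level logic)."""
    total = sum(counts.values())
    if total == 0:
        return {a: cycle_s / len(counts) for a in counts}  # no demand: even split
    spare = cycle_s - min_green_s * len(counts)  # time left after minimum greens
    return {a: min_green_s + spare * c / total for a, c in counts.items()}

# Counts as they might arrive from roadside sensors near a cell tower.
print(green_times({"north-south": 42, "east-west": 14}))
# {'north-south': 40.0, 'east-west': 20.0}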
This basic concept is also
being extended to autonomous vehicles. Autonomous vehicles essentially function
as edge devices because of their vast onboard computing power. These vehicles
must be able to ingest data from a huge number of sensors, perform real-time
data analytics and then respond accordingly.
Because an autonomous
vehicle is designed to function without the need for cloud connectivity, it's
tempting to think of autonomous vehicles as not being connected devices. Even
though an autonomous vehicle must be able to drive safely in the total absence
of cloud connectivity, it's still possible to use connectivity when available.
Some cities are considering how an autonomous vehicle might operate with the
same computing resources used to control traffic lights. Such a vehicle might,
for example, function as an edge device and use its own computing capabilities
to relay real-time data to the system that ingests traffic data from other
sources. The underlying computing platform can then use this data to operate
traffic signals more effectively.
What are the benefits of fog computing?
Like any other technology,
fog computing has its pros and cons. Some of the advantages of fog computing include the following:
Bandwidth conservation. Fog computing reduces the volume of data that is sent to the cloud, thereby reducing bandwidth consumption and related costs. (A back-of-the-envelope estimate follows this list.)
Improved response time.
Because the initial data processing occurs near the data, latency is reduced,
and overall responsiveness is improved. The goal is to provide
millisecond-level responsiveness, enabling data to be processed in near-real
time.
Network-agnostic. Although
fog computing generally places compute resources at the LAN level -- as opposed
to the device level, which is the case with edge computing -- the network could
be considered part of the fog computing architecture. At the same time, though,
fog computing is network-agnostic in the sense that the network can be wired,
Wi-Fi or even 5G.
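To put the bandwidth-conservation point above into numbers, here is a back-of-the-envelope estimate. The sensor count, message size and aggregation interval are assumptions chosen purely for illustration:

# 500 sensors each send a 200-byte reading once per second; a fog node
# instead forwards one 200-byte aggregate per sensor per minute.
sensors, reading_bytes = 500, 200

raw_bps = sensors * reading_bytes * 8   # every raw reading sent to the cloud
fog_bps = raw_bps / 60                  # one aggregate per minute instead

print(f"raw to cloud: {raw_bps / 1e6:.2f} Mbit/s")   # 0.80 Mbit/s
print(f"via fog node: {fog_bps / 1e3:.2f} kbit/s")   # 13.33 kbit/s
print(f"reduction:    {raw_bps / fog_bps:.0f}x")     # 60x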
What are the disadvantages of fog computing?
Of course, fog computing
also has its disadvantages, some of which include the following:
Physical location. Because
fog computing is tied to a physical location, it undermines some of the
"anytime/anywhere" benefits associated with cloud computing.
Potential security issues.
Under the right circumstances, fog computing can be subject to security issues,
such as Internet Protocol (IP) address spoofing or man-in-the-middle (MitM)
attacks.
Startup costs. Fog
computing is a solution that utilizes both edge and cloud resources, which
means that there are associated hardware costs.
Ambiguous concept. Even though fog computing has been around for several years, there is still some ambiguity around its definition, with various vendors defining fog computing differently.
Edge Computing in IoT
Edge computing is
a distributed information technology (IT) architecture in which client data is
processed at the periphery of the network, as close to the originating source
as possible.
Data is the
lifeblood of modern business, providing valuable business insight and
supporting real-time control over critical business processes and operations.
Today's businesses are awash in an ocean of data, and huge amounts of data can
be routinely collected from sensors and IoT devices operating in real time from
remote locations and inhospitable operating environments almost anywhere in the
world.
But this virtual
flood of data is also changing the way businesses handle computing. The
traditional computing paradigm built on a centralized data center and everyday
internet isn't well suited to moving endlessly growing rivers of real-world
data. Bandwidth limitations, latency issues and unpredictable network
disruptions can all conspire to impair such efforts. Businesses are responding
to these data challenges through the use of edge computing architecture.
In simplest
terms, edge computing moves some portion of storage and compute resources out
of the central data center and closer to the source of the data itself. Rather
than transmitting raw data to a central data center for processing and
analysis, that work is instead performed where the data is actually generated
-- whether that's a retail store, a factory floor, a sprawling utility or
across a smart city. Only the result of that computing work at the edge, such
as real-time business insights, equipment maintenance predictions or other
actionable answers, is sent back to the main data center for review and other
human interactions.
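Here is a minimal sketch of that pattern, with wholly hypothetical names and thresholds: a raw vibration trace is analyzed where it is produced, and only a compact, actionable result is sent back to the data center.

import math

def edge_summary(samples, rms_limit=2.5):
    """Reduce a raw sensor trace to one actionable maintenance result
    (hypothetical threshold; illustrative only)."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return {"rms": round(rms, 2),
            "maintenance_needed": rms > rms_limit,
            "samples_analyzed": len(samples)}

# Ten thousand raw samples stay at the edge; only this small dict travels on.
raw_trace = [0.1 * (i % 50) for i in range(10_000)]
print(edge_summary(raw_trace))
# {'rms': 2.84, 'maintenance_needed': True, 'samples_analyzed': 10000}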
Thus, edge
computing is reshaping IT and business computing. Take a comprehensive look at
what edge computing is, how it works, the influence of the cloud, edge use
cases, tradeoffs and implementation considerations.
Figure: Edge computing brings data processing closer to the data source.
Source: https://www.spiceworks.com/tech/edge-computing/articles/what-is-edge-computing/
Edge
computing is all a matter of location. In traditional enterprise computing,
data is produced at a client endpoint, such as a user's computer. That data is
moved across a WAN such as the internet, through the corporate LAN, where the
data is stored and worked upon by an enterprise application. Results of that
work are then conveyed back to the client endpoint. This remains a proven and
time-tested approach to client-server computing for most typical business
applications.
But the
number of devices connected to the internet, and the volume of data being
produced by those devices and used by businesses, is growing far too quickly
for traditional data center infrastructures to accommodate. Gartner predicted
that by 2025, 75% of enterprise-generated data will be
created outside of centralized data centers. The prospect of moving so much
data in situations that can often be time- or disruption-sensitive puts
incredible strain on the global internet, which itself is often subject to
congestion and disruption.
So IT
architects have shifted focus from the central data center to the logical edge of the infrastructure -- taking
storage and computing resources from the data center and moving those resources
to the point where the data is generated. The principle is straightforward: If
you can't get the data closer to the data center, get the data center closer to
the data. The concept of edge computing isn't new, and it is rooted in
decades-old ideas of remote computing -- such as remote offices and branch
offices -- where it was more reliable and efficient to place computing
resources at the desired location rather than rely on a single central
location.
Edge computing use cases and examples
In principle, edge computing techniques are used to
collect, filter, process and analyze data "in-place" at or near the
network edge. It's a powerful means of using data that can't be first moved to
a centralized location -- usually because the sheer volume of data makes such
moves cost-prohibitive, technologically impractical or might otherwise violate
compliance obligations, such as data sovereignty. This definition has spawned
myriad real-world examples and use cases:
● Manufacturing. An industrial
manufacturer deployed edge computing to monitor manufacturing, enabling
real-time analytics and machine learning at the edge to find production errors
and improve product manufacturing quality. Edge computing supported the
addition of environmental sensors throughout the manufacturing plant, providing
insight into how each product component is assembled and stored -- and how long
the components remain in stock. The manufacturer can now make faster and more
accurate business decisions regarding the factory facility and manufacturing
operations.
● Farming. Consider a business that grows crops indoors without sunlight, soil or pesticides. The process reduces grow times by more than 60%. Sensors enable the business to track water use and nutrient density and to determine the optimal harvest time. Data is collected and analyzed to find the effects of environmental factors, continually improve the crop-growing algorithms and ensure that crops are harvested in peak condition.
● Network optimization. Edge computing can
help optimize network performance by measuring performance for users across the
internet and then employing analytics to determine the most reliable,
low-latency network path for each user's traffic. In effect, edge computing is
used to "steer" traffic across the network for optimal time-sensitive
traffic performance.
● Workplace safety. Edge computing can
combine and analyze data from on-site cameras, employee safety devices and
various other sensors to help businesses oversee workplace conditions or ensure
that employees follow established safety protocols -- especially when the workplace
is remote or unusually dangerous, such as construction sites or oil rigs.
● Improved healthcare. The healthcare industry has dramatically expanded the amount of patient data collected from devices, sensors and other medical equipment. That enormous data volume requires edge computing to apply automation and machine learning to access the data, ignore "normal" data and identify problem data so that clinicians can take immediate action to help patients avoid health incidents in real time. (A filtering sketch of this idea follows this list.)
● Transportation. Autonomous vehicles require and produce anywhere from 5 TB to 20 TB of data per day, gathering information about location, speed, vehicle condition, road conditions, traffic conditions and other vehicles. And the data must be aggregated and analyzed in real time, while the vehicle is in motion; for perspective, 20 TB per day works out to roughly 1.9 Gbit/s of sustained throughput. This requires significant onboard computing -- each autonomous vehicle becomes an "edge." In addition, the data can help authorities and businesses manage vehicle fleets based on actual conditions on the ground.
● Retail. Retail businesses can also produce
enormous data volumes from surveillance, stock tracking, sales data and other
real-time business details. Edge computing can help analyze this diverse data
and identify business opportunities, such as an effective endcap or campaign,
predict sales and optimize vendor ordering, and so on. Since retail businesses
can vary dramatically in local environments, edge computing can be an effective
solution for local processing at each store.
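As promised in the healthcare item above, here is a sketch of edge-side filtering. The vital-sign names and "normal" ranges are invented for the example; the point is only that readings inside the normal range are dropped locally, so clinicians and the cloud see just the problem data.

# Hypothetical "normal" ranges for a few vital signs (illustrative only).
NORMAL = {"heart_rate": (50, 110), "spo2": (92, 100), "temp_c": (36.0, 38.0)}

def problem_readings(stream):
    """Yield only readings outside their normal range; 'normal' data
    is discarded at the edge instead of being shipped onward."""
    for reading in stream:
        lo, hi = NORMAL[reading["vital"]]
        if not lo <= reading["value"] <= hi:
            yield reading  # actionable: surface to clinicians immediately

vitals = [{"vital": "heart_rate", "value": 72},
          {"vital": "spo2", "value": 88},      # low oxygen saturation
          {"vital": "temp_c", "value": 39.2}]  # fever
for alert in problem_readings(vitals):
    print("ALERT:", alert)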
What are the benefits of edge computing?
Edge computing addresses vital infrastructure challenges
-- such as bandwidth limitations, excess latency and network congestion -- but
there are several potential additional benefits to edge computing that can make
the approach appealing in other situations.
● Autonomy. Edge computing is useful where connectivity is unreliable or bandwidth is restricted because of the site's environmental characteristics. Examples include oil rigs, ships at sea, remote farms or other remote locations, such as a rainforest or desert. Edge computing does the compute work on site -- sometimes on the edge device itself -- such as water quality sensors on water purifiers in remote villages, and can save data to transmit to a central point only when connectivity is available. By processing data locally, the amount of data to be sent can be vastly reduced, requiring far less bandwidth or connectivity time than might otherwise be necessary. (A store-and-forward sketch of this pattern follows this list.)
Figure: Edge devices encompass a broad range of device types, including sensors, actuators and other endpoints, as well as IoT gateways.
Source: https://www.mdpi.com/sensors/sensors-21-07276/article_deploy/html/images/sensors-21-07276-g002.png
● Data sovereignty. Moving huge amounts of
data isn't just a technical problem. Data's journey across national and
regional boundaries can pose additional problems for data security, privacy and
other legal issues. Edge computing can be used to keep data close to its source
and within the bounds of prevailing data sovereignty laws, such as the European
Union's GDPR, which defines how data should be stored, processed and exposed.
This can allow raw data to be processed locally, obscuring or securing any
sensitive data before sending anything to the cloud or primary data center,
which can be in other jurisdictions.
● Edge security. Finally, edge
computing offers an additional opportunity to implement and ensure data
security. Although cloud providers have IoT services and specialize in complex
analysis, enterprises remain concerned about the safety and security of data
once it leaves the edge and travels back to the cloud or data center. By
implementing computing at the edge, any data traversing the network back to the
cloud or data center can be secured through encryption, and the edge deployment
itself can be hardened against hackers and other malicious activities -- even
when security on IoT devices remains limited.
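The autonomy item above boils down to a store-and-forward pattern: process locally, buffer results, and transmit only when a link exists. A minimal sketch, with a randomly simulated uplink standing in for a real connectivity check:

import random

class StoreAndForward:
    """Buffer locally produced summaries and flush them only when an
    uplink is available (hypothetical sketch of the autonomy pattern)."""

    def __init__(self):
        self.pending = []

    def record(self, summary):
        self.pending.append(summary)
        if self.link_available():
            self.flush()

    def link_available(self):
        # Stand-in for a real check, e.g. a satellite connectivity window.
        return random.random() < 0.2

    def flush(self):
        print(f"Uplink available: sending {len(self.pending)} summaries")
        self.pending.clear()

node = StoreAndForward()
for day in range(10):
    node.record({"day": day, "water_quality": "ok"})
print(f"{len(node.pending)} summaries still buffered")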
Figure: Challenges of edge computing
Source: https://www.sketchbubble.com/en/presentation-edge-computing-challenges.html
Edge vs. cloud vs. fog computing
Edge computing is closely associated with the concepts of
cloud computing and fog computing. Although there is some overlap between these
concepts, they aren't the same thing, and generally shouldn't be used
interchangeably. It's helpful to compare the concepts and understand their
differences.
One of the easiest ways to understand the differences
between edge, cloud and fog computing is to highlight their common theme: All
three concepts relate to distributed computing and focus on the physical
deployment of compute and storage resources in relation to the data that is
being produced. The difference is a matter of where those resources are
located.
Edge:
Edge computing is
the deployment of computing and storage resources at the location where data is
produced. This ideally puts compute and storage at the same point as the data
source at the network edge. For example, a small enclosure with several servers
and some storage might be installed atop a wind turbine to collect and process
data produced by sensors within the turbine itself. As another example, a
railway station might place a modest amount of compute and storage within the
station to collect and process myriad track and rail traffic sensor data. The
results of any such processing can then be sent back to another data center for
human review, archiving and to be merged with other data results for broader
analytics.
Cloud:
Cloud computing is a huge, highly scalable deployment of
compute and storage resources at one of several distributed global locations
(regions). Cloud providers also incorporate an assortment of pre-packaged
services for IoT operations, making the cloud a preferred centralized platform
for IoT deployments. But even though cloud computing offers far more than
enough resources and services to tackle complex analytics, the closest regional
cloud facility can still be hundreds of miles from the point where data is
collected, and connections rely on the same temperamental internet connectivity
that supports traditional data centers. In practice, cloud computing is an
alternative -- or sometimes a complement -- to traditional data centers. The
cloud can get centralized computing much closer to a data source, but not at
the network edge.
Figure: Unlike cloud computing, edge
computing allows data to exist closer to the data sources through a network of
edge devices.
Fog:
But the choice of compute and storage deployment isn't
limited to the cloud or the edge. A cloud data center might be too far away,
but the edge deployment might simply be too resource-limited, or physically
scattered or distributed, to make strict edge computing practical. In this
case, the notion of fog computing can help. Fog computing typically takes a
step back and puts compute and storage resources "within" the data,
but not necessarily "at" the data.
Fog computing environments
can produce bewildering amounts of sensor or IoT data generated across
expansive physical areas that are just too large to define an edge. Examples
include smart buildings, smart cities or even smart utility grids. Consider a
smart city where data is used to track, analyze and optimize the public transit system, municipal utilities and city services, and to guide long-term urban planning. A single edge deployment simply isn't enough to handle such a load,
so fog computing can operate a series of fog node deployments within the scope
of the environment to collect, process and analyze data.
Note: It's important to repeat that fog computing and edge computing share
an almost identical definition and architecture, and the terms are sometimes
used interchangeably even among technology experts.
Edge and fog computing address three principal network limitations: bandwidth, latency and congestion or reliability.
● Bandwidth. Bandwidth is the amount of data that a network can carry over time, usually expressed in bits per second. All networks have a limited bandwidth, and the limits are more severe for wireless communication. This means that there is a finite limit to the amount of data -- or the number of devices -- that can communicate data across the network. Although it's possible to increase network bandwidth to accommodate more devices and data, the cost can be significant, there are still (higher) finite limits and it doesn't solve other problems.
● Latency. Latency is the time needed to send data between two points on a network. Although communication ideally takes place at the speed of light, large physical distances coupled with network congestion or outages can delay data movement across the network. This delays any analytics and decision-making processes and reduces the ability of a system to respond in real time; in the autonomous vehicle example, it can even cost lives. (A propagation-delay estimate follows this list.)
● Congestion. The internet is basically a global
"network of networks." Although it has evolved to offer good
general-purpose data exchanges for most everyday computing tasks -- such as
file exchanges or basic streaming -- the volume of data involved with tens of
billions of devices can overwhelm the internet, causing high levels of
congestion and forcing time-consuming data retransmissions. In other cases,
network outages can exacerbate congestion and even sever communication to some
internet users entirely -- making internet of things devices useless during outages.
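To put the latency item above in perspective, propagation delay alone can be estimated from distance. The sketch below assumes light travels at roughly 200,000 km/s in optical fiber (a common rule of thumb) and ignores congestion and processing time entirely:

FIBER_KM_PER_S = 200_000  # roughly two-thirds of c; rule of thumb for fiber

def round_trip_ms(distance_km):
    """Best-case round-trip propagation delay over fiber, in milliseconds."""
    return 2 * distance_km / FIBER_KM_PER_S * 1000

for label, km in [("edge site, 1 km", 1),
                  ("regional cloud, 500 km", 500),
                  ("distant cloud, 5,000 km", 5000)]:
    print(f"{label}: {round_trip_ms(km):.2f} ms minimum")
# edge site, 1 km: 0.01 ms minimum
# regional cloud, 500 km: 5.00 ms minimum
# distant cloud, 5,000 km: 50.00 ms minimum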
By deploying servers and storage where the data is
generated, edge computing can operate many devices over a much smaller and more
efficient LAN where ample bandwidth is used exclusively by local
data-generating devices, making latency and congestion virtually nonexistent.
Local storage collects and protects the raw data, while local servers can
perform essential edge analytics -- or at least pre-process and reduce the data
-- to make decisions in real time before sending results, or just essential
data, to the cloud or central data center.