
Evolution of Cloud Computing

Cloud computing is all about renting computing services. The idea first emerged in the 1950s. Five technologies played a vital role in making cloud computing what it is today: distributed systems (and their peripherals), virtualization, Web 2.0, service orientation, and utility computing.

• Distributed Systems:

A distributed system is a composition of multiple independent systems that appear to users as a single entity. Its purpose is to share resources and to use them effectively and efficiently. Distributed systems possess characteristics such as scalability, concurrency, continuous availability, heterogeneity, and independence of failures. The main problem with these systems, however, was that all of them had to be present at the same geographical location. To solve this problem, distributed computing led to three further types of computing: mainframe computing, cluster computing, and grid computing.
• Mainframe computing:

Mainframes, which first came into existence in 1951, are highly powerful and reliable computing machines. They are responsible for handling large volumes of data and massive input-output operations. Even today they are used for bulk-processing tasks such as online transactions. These systems have almost no downtime and high fault tolerance. After distributed computing, mainframes increased the processing capability of systems, but they were very expensive. To reduce this cost, cluster computing emerged as an alternative to mainframe technology.

• Cluster computing:

In the 1980s, cluster computing emerged as an alternative to mainframe computing. Each machine in a cluster was connected to the others by a high-bandwidth network. Clusters were far cheaper than mainframe systems yet equally capable of heavy computation, and new nodes could easily be added to a cluster when required. The problem of cost was thus solved to some extent, but the problem of geographical restriction remained. To solve it, the concept of grid computing was introduced.

• Grid computing:

In the 1990s, the concept of grid computing was introduced: systems placed at entirely different geographical locations, all connected via the internet. These systems belonged to different organizations, so the grid consisted of heterogeneous nodes. Grid computing solved some problems, but new ones emerged as the distance between nodes increased, chiefly the low availability of high-bandwidth connectivity and other associated network issues. Cloud computing is therefore often referred to as the "successor of grid computing".

• Virtualization:

Virtualization was introduced nearly 40 years ago. It refers to the process of creating a virtual layer over the hardware that allows the user to run multiple instances simultaneously on the same hardware. It is a key technology used in cloud computing, and it is the base on which major cloud computing services such as Amazon EC2 and VMware vCloud work. Hardware virtualization is still one of the most common types of virtualization.
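
To make renting a virtualized machine concrete, here is a minimal sketch, assuming the boto3 AWS SDK is installed and credentials are configured, of requesting a virtual machine from Amazon EC2. The AMI ID is a placeholder, not a real image:

```python
import boto3  # AWS SDK for Python (assumed installed and configured)

ec2 = boto3.client("ec2", region_name="us-east-1")

# Request one small virtual machine. The ImageId below is a placeholder;
# substitute an AMI that actually exists in your region.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```

Virtualization is what lets many such instances share one physical server.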

• Web 2.0:

Web 2.0 is the interface through which cloud computing services interact with clients. It is because of Web 2.0 that we have interactive and dynamic web pages, and it also increases flexibility among web pages. Popular examples of Web 2.0 include Google Maps, Facebook, and Twitter. Needless to say, social media is possible only because of this technology. It gained major popularity in 2004.

• Service orientation:

Service orientation acts as a reference model for cloud computing. It supports low-cost, flexible, and evolvable applications. Two important concepts were introduced in this computing model: Quality of Service (QoS), which includes the SLA (Service Level Agreement), and Software as a Service (SaaS).
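
As a worked example of what a QoS guarantee in an SLA implies, the following sketch converts an availability target into the downtime it permits per month. The 99.9% figure is illustrative, not taken from the text:

```python
# Illustrative only: turn an SLA availability target into
# the maximum downtime it allows over a 30-day month.
availability = 0.999                      # hypothetical "three nines" SLA
minutes_per_month = 30 * 24 * 60          # 43,200 minutes
allowed_downtime = (1 - availability) * minutes_per_month
print(f"{allowed_downtime:.1f} minutes of downtime allowed")  # ~43.2
```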

• Utility computing:
Utility computing is a model that defines service-provisioning techniques for services such as compute, storage, and infrastructure, which are provisioned on a pay-per-use basis.
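
A minimal sketch of the pay-per-use idea; the rates and usage figures are invented for illustration:

```python
# Hypothetical metered billing: charge only for what was actually used.
rates = {"compute_hours": 0.05, "storage_gb_month": 0.02}  # illustrative prices
usage = {"compute_hours": 120, "storage_gb_month": 50}

bill = sum(rates[item] * amount for item, amount in usage.items())
print(f"Monthly bill: ${bill:.2f}")  # 120*0.05 + 50*0.02 = $7.00
```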
Thus, the above technologies contributed to the making of cloud computing.
Difference between Cloud Computing and Grid Computing
Cloud Computing:
Cloud computing uses a client-server computing architecture. Resources are used in a centralized pattern, and the service is highly accessible. It is a pay-and-use business model: users pay for what they use.

Grid Computing:
Grid computing uses a distributed computing architecture. Resources are used in a collaborative pattern, and users do not pay for use.
The differences between cloud and grid computing are given below:

1. Cloud computing is a client-server computing architecture, while grid computing is a distributed computing architecture.
2. Cloud computing is managed centrally, while grid computing is managed in a decentralized way.
3. In cloud computing, resources are used in a centralized pattern, while in grid computing resources are used in a collaborative pattern.
4. Cloud computing is more flexible than grid computing.
5. In cloud computing, users pay for what they use, while in grid computing users do not pay for use.
6. Cloud computing is a highly accessible service, while grid computing is a less accessible service.
7. Cloud computing is highly scalable, while grid computing has low scalability in comparison.
8. Cloud computing can be accessed through standard web protocols, while grid computing is accessed through grid middleware.

Grid Computing
Grid computing can be defined as a network of computers working together to perform a task that would be difficult for a single machine. All machines on that network work under the same protocol to act as a virtual supercomputer. The tasks they work on may include analyzing huge datasets or simulating situations that require high computing power. Computers on the network contribute resources like processing power and storage capacity to the network.
Grid computing is a subset of distributed computing, where a virtual supercomputer comprises machines on a network connected by some bus, mostly Ethernet or sometimes the internet. It can also be seen as a form of parallel computing where, instead of many CPU cores on a single machine, multiple cores are spread across various locations. The concept of grid computing isn't new, but it is not yet perfected, as there are no standard rules and protocols established and accepted by everyone.
Working:
A grid computing network mainly consists of three types of machines:
1. Control node: a computer, usually a server or group of servers, that administrates the whole network and keeps account of the resources in the network pool.
2. Provider: a computer that contributes its resources to the network resource pool.
3. User: a computer that uses the resources on the network.

When a computer requests resources from the control node, the control node gives the user access to the resources available on the network. When a computer is not using the network, it should ideally contribute its resources to it; hence a normal computer on the network can alternate between being a user and a provider based on its needs. The nodes may consist of machines with similar platforms running the same OS (a homogeneous network) or machines with different platforms running various operating systems (a heterogeneous network). This is what distinguishes grid computing from other distributed computing architectures.
For controlling the network and its resources, a software/networking protocol generally known as middleware is used. The middleware is responsible for administrating the network, and the control nodes are merely its executors. Because a grid computing system should use only the unused resources of a computer, it is the job of the control node to ensure that no provider is overloaded with tasks.
Another job of the middleware is to authorize any process executed on the network. In a grid computing system, a provider gives the user permission to run arbitrary code on its computer, which is a huge security threat for the network, so the middleware must ensure that no unwanted task is executed on it.
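
Here is a minimal sketch of the control-node idea described above. The in-memory model is invented for illustration; no real grid middleware is involved:

```python
# Toy model of a grid control node (invented for illustration only).
class ControlNode:
    def __init__(self):
        self.providers = {}  # provider name -> tasks currently assigned

    def register(self, name):
        self.providers[name] = 0

    def assign(self, task, max_load=2):
        # Pick the least-loaded provider and refuse overloaded ones,
        # mirroring the control node's duty not to overload providers.
        name = min(self.providers, key=self.providers.get)
        if self.providers[name] >= max_load:
            raise RuntimeError("all providers are busy")
        self.providers[name] += 1
        print(f"running {task!r} on {name}")

grid = ControlNode()
grid.register("provider-a")
grid.register("provider-b")
grid.assign("analyze dataset shard 1")
grid.assign("analyze dataset shard 2")
```

A real middleware would also authenticate tasks before scheduling them, as noted above.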
The meaning of the term grid computing has changed over the years. According to "The Grid: Blueprint for a New Computing Infrastructure" by Ian Foster and Carl Kesselman, published in 1999, the idea was to consume computing power the way electricity is consumed from a power grid. That idea is similar to the current concept of cloud computing, whereas grid computing is now viewed as a distributed collaborative network. Currently, grid computing is used in various institutions to solve mathematical, analytical, and physics problems.

Advantages of Grid Computing:
1. It is not centralized: no servers are required except the control node, which is used only for controlling, not for processing.
2. Multiple heterogeneous machines, i.e. machines with different operating systems, can use a single grid computing network.
3. Tasks can be performed in parallel across various physical locations, and users don't have to pay for them (with money).
Disadvantages of Grid Computing:
1. Grid software is still in an evolving stage.
2. A super-fast interconnect between computer resources is the need of the hour.
3. Licensing across many servers may make it prohibitive for some applications.
4. Many groups are reluctant to share resources.

Difference between Grid Computing and Cluster Computing
Cluster Computing:
A computer cluster is a local network of two or more homogeneous computers. A computation process on such a computer network, i.e. a cluster, is called cluster computing.

Grid Computing:
Grid computing can be defined as a network of homogeneous or heterogeneous computers working together over a long distance to perform a task that would be difficult for a single machine.
Difference between Cluster and Grid Computing:
- Nodes: In cluster computing, nodes must be homogeneous, i.e. have the same type of hardware and operating system; in grid computing, nodes may have different operating systems and hardware, and machines can be homogeneous or heterogeneous.
- Dedication: Computers in a cluster are dedicated to the same work and perform no other task; computers in a grid contribute their unused processing resources to the grid computing network.
- Location: Cluster computers are located close to each other; grid computers may be located at a huge distance from one another.
- Network: Cluster computers are connected by a high-speed local area network bus; grid computers are connected by a low-speed bus or the internet.
- Topology: Clusters are connected in a centralized network topology; grids are connected in a distributed or decentralized network topology.
- Scheduling: In a cluster, scheduling is controlled by a central server; a grid may have servers, but mostly each node behaves independently.
- Resource management: The whole cluster has a centralized resource manager; in a grid, every node manages its resources independently.
- Autonomy: The whole cluster functions as a single system; in a grid, every node is autonomous, and anyone can opt out at any time.
Difference between Cloud Computing and Cluster Computing
1. Cloud Computing:

Cloud computing refers to the on-demand delivery of IT resources, especially computing power and data storage, through the internet with pay-per-use pricing. It generally refers to the data centers available to users over the internet. Cloud computing is a virtualized pool of resources. It allows us to create, configure, and customize our applications online. The user can access any resource at any time and anywhere without worrying about the management and maintenance of the actual resources. Cloud computing delivers a combination of hardware- and software-based computing resources over the network.

[Figure: A simple architecture of cloud computing]

2. Cluster Computing :

Cluster computing refers to the process of sharing a computation task among the multiple computers of a cluster. A number of computers are connected on a network and perform a single task together by forming a cluster of computers; this process of computing is called cluster computing.
Cluster computing is a high-performance computing framework that helps solve complex operations more efficiently, with faster processing speed and better data integrity. It is a networking technology that performs its operations based on the principles of distributed systems.
[Figure: A simple architecture of cluster computing]

Difference between Cloud Computing and Cluster Computing:
1. Goal: Cloud computing provides on-demand IT resources and services; cluster computing performs a complex task in a modular approach.
2. Resource sharing: Specifically assigned resources are not shareable in either model.
3. Resource type: Cloud computing has a heterogeneous resource type; cluster computing has a homogeneous resource type.
4. Virtualization: Cloud computing virtualizes hardware and software resources; cluster computing has no virtualized resources.
5. Security: In cloud computing, security is achieved through isolation; in cluster computing, it is achieved through node credentials.
6. Initial cost: The initial capital cost of a cloud setup is very low; that of a cluster setup is very high.
7. Security requirement: Very low for cloud computing; very high for cluster computing.
8. Maintenance: Cloud computing requires low maintenance; cluster computing requires a little more.
9. Hardware: Cloud computing requires no physical hardware; cluster computing requires more physical hardware.
10. Node OS: In cloud computing, multiple operating systems run in virtual machines; cluster nodes run Windows or Linux.
11. User management: In cloud computing, user management is centralized or decentralized to the vendor/third party; in cluster computing, it is centralized.
12. Scalability: Allowed in cloud computing; limited in cluster computing.
13. Architecture: Cloud computing uses a user-chosen architecture; cluster computing uses a cluster-oriented architecture.
14. Characteristic: Cloud computing offers a dynamic computing infrastructure, resources, and services; cluster computing consists of tightly coupled systems and resources.
15. Software: Cloud computing uses application-domain-independent software; cluster computing uses application-domain-dependent software.
16. Example: Dropbox and Gmail (cloud); Sony PlayStation clusters (cluster).

Difference between Cloud Computing and Traditional Computing
1. Cloud Computing:

Cloud computing, as the name suggests, is a collective combination of configurable system resources and advanced services that can be delivered quickly over the internet. It provides lower power expenses, no capital costs, no redundancy, lower employee costs, increased collaboration, and more. It makes us more efficient and more secure, and it provides greater flexibility.

2. Traditional Computing:

Traditional computing, as the name suggests, is the process of using physical data centers for storing digital assets and running a complete networking system for daily operations. Access to data, software, or storage is limited to the device or official network users are connected with; a user can access data only on the system in which the data is stored.

Difference between Cloud Computing and Traditional Computing:
- Delivery: Cloud computing delivers services such as data and programs through the internet on remote servers; traditional computing delivers services on a local server.
- Hosting: Cloud computing takes place on third-party servers hosted by third-party hosting companies; traditional computing takes place on physical hard drives and website servers.
- Access: Cloud computing offers the ability to access data anywhere at any time; in traditional computing, a user can access data only on the system in which it is stored.
- Cost: Cloud computing is more cost-effective because the operation and maintenance of servers is shared among several parties, reducing the cost of public services; traditional computing is less cost-effective because one has to buy expensive equipment to operate and maintain the server.
- Usability: Cloud computing is more user-friendly because data can be accessed anytime, anywhere over the internet; traditional computing is less user-friendly because data cannot be accessed anywhere, and moving it to another system requires an external storage medium.
- Connectivity: Cloud computing requires a fast, reliable, and stable internet connection to access information anywhere at any time; traditional computing requires no internet connection to access data.
- Capacity: Cloud computing provides more storage space, servers, and computing power, so applications and software run faster and more effectively; traditional computing provides less storage.
- Scalability: Cloud computing provides scalability and elasticity, i.e. storage capacity and server resources can be increased or decreased according to business needs; traditional computing provides no such scalability or elasticity.
- Maintenance: Cloud services are served by the provider's support team; traditional computing requires one's own team to maintain and monitor the system, which takes a lot of time and effort.
- Software: In cloud computing, software is offered as an on-demand service (SaaS) accessed through a subscription; in traditional computing, software is purchased individually for every user and must be updated periodically.
Difference between Grid Computing and Utility Computing

1. Grid Computing:

Grid computing, as the name suggests, is a type of computing that combines resources from various administrative domains to achieve a common goal. Its main goal is to virtualize resources to solve problems, applying the resources of several networked computers to a single technical or scientific problem at the same time.

2. Utility Computing:

Utility computing, as the name suggests, is a type of computing that provides services and computing resources to customers. It is basically a facility provided to users on demand, with charges for specific usage. It is similar to cloud computing and therefore requires cloud-like infrastructure.

Difference between Grid Computing and Utility Computing:
- Definition: Grid computing is a process architecture that combines computing resources from multiple locations to achieve a desired, common goal; utility computing is a process architecture that provides on-demand computing resources and infrastructure on a pay-per-use basis.
- Operation: Grid computing distributes workload across multiple systems and lets computers contribute their individual resources to a common goal; utility computing lets an organization allocate and segregate computing resources and infrastructure for various users on the basis of their requirements.
- Benefits: Grid computing makes better use of existing resources, addresses rapid fluctuations in customer demand, improves computational capability, and provides flexibility; utility computing reduces IT costs, is easier to manage, and provides greater flexibility, compatibility, and convenience.
- Focus: Grid computing mainly focuses on sharing computing resources; utility computing mainly focuses on acquiring them.
- Types: Grid computing is of three types: computational grid, data grid, and collaborative grid; utility computing is of two types: internal and external utility.
- Usage: Grid computing is used in ATMs, back-end infrastructures, marketing research, etc.; utility computing is used in large organizations such as Amazon and Google, which establish their own utility services for computing, storage, and applications.
- Purpose: Grid computing's main purpose is to integrate the use of computer resources from cooperating partners in the form of VOs (Virtual Organizations); utility computing's main purpose is to make computing resources and infrastructure management available to customers as needed and to charge for specific usage rather than a flat rate.
- Characteristics: Grid computing's characteristics include resource coordination, transparent access, and dependable access; utility computing's include scalability, demand pricing, standardized utility computing services, and automation.

Difference between Parallel Computing and Distributed Computing
Parallel Computing:

In parallel computing, multiple processors perform multiple assigned tasks simultaneously. Memory in parallel systems can be either shared or distributed. Parallel computing provides concurrency and saves time and money.

Distributed Computing:

In distributed computing, multiple autonomous computers appear to the user as a single system. There is no shared memory in distributed systems; computers communicate with each other through message passing. In distributed computing, a single task is divided among different computers.
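
As an illustration of the parallel model just described, here is a minimal sketch using only the Python standard library, in which several worker processes handle pieces of one job simultaneously on a single machine:

```python
from multiprocessing import Pool

def square(n):
    # Each worker process handles part of the job at the same time.
    return n * n

if __name__ == "__main__":
    with Pool(processes=4) as pool:          # four processors, one machine
        print(pool.map(square, range(10)))   # [0, 1, 4, 9, 16, ...]
```

A distributed version of the same job would instead run on separate machines that exchange inputs and results over the network by message passing.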
Difference between Parallel Computing and Distributed Computing:

1. In parallel computing, many operations are performed simultaneously; in distributed computing, system components are located at different locations.
2. Parallel computing requires a single computer; distributed computing uses multiple computers.
3. In parallel computing, multiple processors perform multiple operations; in distributed computing, multiple computers perform multiple operations.
4. Parallel computing may have shared or distributed memory; distributed computing has only distributed memory.
5. In parallel computing, processors communicate with each other through a bus; in distributed computing, computers communicate with each other through message passing.
6. Parallel computing improves system performance; distributed computing improves system scalability, fault tolerance, and resource-sharing capabilities.

Difference between Cloud Computing and Distributed Computing
1. Cloud Computing:

Cloud computing refers to providing on-demand IT resources and services, such as servers, storage, databases, networking, analytics, and software, over the internet. It is a computing technique that delivers hosted services over the internet to its users/customers. Cloud computing provides services such as hardware, software, and networking resources through the internet. Some characteristics of cloud computing are a shared pool of configurable computing resources, on-demand service, pay-per-use, and provisioning by service providers.
It is classified into four different types:
• Public Cloud
• Private Cloud
• Community Cloud
• Hybrid Cloud
2. Distributed Computing:

Distributed computing refers to solving a problem over distributed autonomous computers that communicate with each other over a network. It is a computing technique that allows multiple computers to communicate and work together to solve a single problem. Distributed computing helps complete computational tasks faster than a single computer could, which would take a lot of time. Some characteristics of distributed computing are distributing a single task among computers so the work progresses at the same time, and using Remote Procedure Calls and Remote Method Invocation for distributed computations.
It is classified into three different types:
• Distributed Computing Systems
• Distributed Information Systems
• Distributed Pervasive Systems

Difference between Cloud Computing and Distributed Computing:
1. Cloud computing refers to providing on-demand IT resources and services, such as servers, storage, databases, networking, analytics, and software, over the internet; distributed computing refers to solving a problem over distributed autonomous computers that communicate over a network.
2. Simply put, cloud computing is a computing technique that delivers hosted services over the internet to its users/customers; distributed computing is a computing technique that allows multiple computers to communicate and work together to solve a single problem.
3. Cloud computing is classified into four types: public cloud, private cloud, community cloud, and hybrid cloud; distributed computing is classified into three types: distributed computing systems, distributed information systems, and distributed pervasive systems.
4. Benefits of cloud computing include cost-effectiveness, elasticity and reliability, economies of scale, and access to the global market; benefits of distributed computing include flexibility, reliability, and improved performance.
5. Cloud computing provides services such as hardware, software, and networking resources through the internet; distributed computing helps achieve computational tasks faster than a single computer could.
6. The goal of cloud computing is to provide on-demand computing services over the internet on a pay-per-use model; the goal of distributed computing is to distribute a single task among multiple computers and solve it quickly while maintaining coordination between them.
7. Characteristics of cloud computing include a shared pool of configurable computing resources, on-demand service, pay-per-use, and provisioning by service providers; characteristics of distributed computing include distributing a single task among computers so the work progresses at the same time, with Remote Procedure Calls and Remote Method Invocation for distributed computations.
8. Disadvantages of cloud computing include less control (especially with public clouds), possible restrictions on available services, and cloud security concerns; disadvantages of distributed computing include the chance of node failure and slow networks creating communication problems.

Difference Between Cloud Computing and Fog Computing
Cloud Computing:

The delivery of on-demand computing services is known as cloud computing. We can use applications, storage, and processing power over the internet on a pay-as-you-go basis. Without owning any computing infrastructure or data centers, anyone can rent access to anything from applications to storage from a cloud service provider.
We can avoid the complexity of owning and maintaining infrastructure by using cloud computing services and paying only for what we use.
In turn, cloud computing service providers can benefit from significant economies of scale by delivering the same services to a wide range of customers.
Fog Computing:

Fog computing is a decentralized computing infrastructure or process in which computing resources are located between the data source and the cloud or any other data center. It is a paradigm that serves user requests at the edge of the network. The devices at the fog layer usually perform networking operations; they include routers, gateways, bridges, and hubs. Researchers envision these devices being capable of performing both computational and networking operations simultaneously. Although they are resource-constrained compared to cloud servers, their geographic spread and decentralized nature help in offering reliable services with coverage over a wide area. Fog computing is defined by the physical location of the devices, which are much closer to users than the cloud servers are.

Below is a table of differences between Cloud Computing and Fog Computing:

- Latency: Cloud computing has high latency compared to fog computing; fog computing has low latency.
- Capacity: Cloud computing does not reduce data while sending or transforming it; fog computing reduces the amount of data sent to the cloud.
- Responsiveness: The response time of a cloud system is low; the response time of a fog system is high.
- Security: Cloud computing has less security compared to fog computing; fog computing has high security.
- Speed: In cloud computing, access speed is high, depending on VM connectivity; in fog computing, it is even higher.
- Data integration: Cloud computing can integrate multiple data sources; fog computing can integrate multiple data sources and devices.
- Mobility: In cloud computing, mobility is limited; in fog computing, mobility is supported.
- Location awareness: Partially supported in cloud computing; supported in fog computing.
- Number of server nodes: Cloud computing has a few server nodes; fog computing has a large number of server nodes.
- Geographical distribution: Cloud computing is centralized; fog computing is decentralized and distributed.
- Location of service: Cloud services are provided within the internet; fog services are provided at the edge of the local network.
- Working environment: Cloud computing runs in dedicated data-center buildings with air-conditioning systems; fog computing runs outdoors (streets, base stations, etc.) or indoors (houses, cafes, etc.).
- Communication mode: Cloud computing uses the IP network; fog computing uses wireless communication (WLAN, WiFi, 3G, 4G, ZigBee, etc.) or wired communication (part of the IP networks).
- Dependence on the quality of the core network: Cloud computing requires a strong network core; fog computing can also work with a weak network core.

Difference Between Edge Computing and Fog Computing
Cloud computing refers to the on-demand delivery of IT services/resources over
the internet. On-demand computing service over the internet is nothing but cloud
computing. By using cloud computing users can access the services from
anywhere whenever they need.
Nowadays, a massive amount of data is generated every second around the globe. Businesses collect and process that data and derive analytics to scale their business. When lots of organizations access their data simultaneously on remote servers in data centers, data traffic can occur, causing delays in accessing the data, lower bandwidth, and so on. Cloud computing technology alone is not effective enough to store and process massive amounts of data and respond quickly.
For example, in a Tesla self-driving car, sensors constantly monitor certain regions around the car. If they detect an obstacle or a pedestrian in the car's way, the car must stop or steer around it without a collision. The data from the sensors must be processed quickly enough to help the car react before impact; even a small delay in detection could be a major issue. To overcome such challenges, edge computing and fog computing were introduced.

[Figure: Edge and fog computing]

Edge Computing
Computation that takes place at the edge of a device's network is known as edge computing: a computer connected to the device's network processes the data and sends it to the cloud in real time. That computer is known as an "edge computer" or "edge node".
With this technology, data is processed and transmitted to the devices instantly. Yet edge nodes transmit all the data captured or generated by the device, regardless of its importance. This is where fog computing becomes an ideal solution.
Fog Computing
Fog computing is an extension of cloud computing: a layer between the edge and the cloud. When edge computers send huge amounts of data to the cloud, fog nodes receive it and analyze what's important. The fog nodes then transfer the important data to the cloud for storage and delete the unimportant data, or keep it with themselves for further analysis. In this way, fog computing saves a lot of space in the cloud and transfers important data quickly.
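
A minimal sketch of the filtering role just described, with invented readings and an invented threshold, in which a fog node forwards only important data to the cloud:

```python
# Toy fog-node filter: data values and threshold are invented for illustration.
edge_readings = [20.1, 20.2, 98.7, 20.0, 20.3]   # e.g. temperature samples

def is_important(reading, threshold=90.0):
    # Routine readings stay at the fog layer; anomalies go to the cloud.
    return reading >= threshold

to_cloud = [r for r in edge_readings if is_important(r)]
print("forwarded to cloud:", to_cloud)   # [98.7]
```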

Difference Between Edge Computing and Fog Computing
1. Edge computing is less scalable than fog computing; fog computing is highly scalable compared to edge computing.
2. In edge computing, billions of nodes are present; in fog computing, millions of nodes are present.
3. Edge nodes are installed far away from the cloud; fog nodes are installed closer to the cloud (the remote database where data is stored).
4. Edge computing is a subdivision of fog computing; fog computing is a subdivision of cloud computing.
5. In edge computing, the bandwidth requirement is very low, because data comes from the edge nodes themselves; in fog computing, the bandwidth requirement is high, because data originating from the edge nodes is transferred to the cloud.
6. Edge computing's operational cost is higher; fog computing's operational cost is comparatively lower.
7. Edge computing offers high privacy, and attacks on data are very rare; in fog computing, the probability of data attacks is higher.
8. Edge devices are part of the IoT devices or the client's network; fog is an extended layer of the cloud.
9. The power consumption of edge nodes is low; the power consumption of fog nodes is high, since they filter important information from the massive amount of data collected from devices.
10. Edge computing helps devices get faster results by processing the data as it is received; fog computing helps filter important information from the massive amount of data collected from the device and saves it in the cloud by sending the filtered data.

Difference between Cloud Computing and Green Computing
1. Cloud Computing :
Cloud computing, as the name suggests, is basically a service-oriented architecture that involves delivering hosted services over the internet. It delivers faster and more accurate retrieval of applications and data. It is more efficient, better at promoting strong workflows, and more cost-effective than traditional computing solutions.
2. Green Computing :
Green computing, as the name suggests, is basically the study of designing, manufacturing, using, and disposing of computing devices in a way that reduces their hazardous impact on the environment. It is mostly used to promote energy efficiency in different applications such as washers, dryers, laptops, and refrigerators.
Difference between Cloud Computing and Green Computing:
- Cloud computing is all about delivering computing services, including servers, storage, databases, networking, etc., over the internet; green computing is all about utilizing energy to perform operations in the most efficient way possible.
- Cloud computing offers utility-oriented IT services to users worldwide; green computing helps use the least amount of computing resources to do the most work.
- Cloud computing's main goal is an order-of-magnitude improvement in cost-effective, dynamic provisioning of IT services; green computing's main goal is to attain economic viability and improve the way computing devices are used.
- Cloud computing reduces energy consumption, waste, and carbon emissions, and shrinks the carbon footprint; green computing reduces the use of hazardous materials, increases energy efficiency during a product's lifetime, manages power and energy efficiency, and creates sustainable business processes.
- Cloud computing increases the revenue of business organizations and helps them achieve business goals through faster communication, secure network collaboration, and efficient utilization of existing resources; green computing reduces the carbon footprint of a business, provides a reputation boost, helps it use energy responsibly, and keeps it running on an energy-lean diet.
- Cloud computing is an internet service that provides computing needs to computer users; green computing asks how much responsibility computers and technology bear for environmental change.
- Cloud computing allows a company to diversify its network and server infrastructure; green computing allows companies to improve disposal and recycling procedures.
- Cloud computing lowers IT costs, maintains business continuity, provides scalability, and allows automatic software integration; green computing lowers energy bills and overall power usage and is cost-effective due to lower energy and cooling requirements.
- Cloud computing is less cost-effective compared to green computing; green computing is more cost-effective compared to cloud computing.

Difference between Super Computers and Embedded Computers
1. Super Computers:
Supercomputers, as the name suggests, are specialized computer systems built to perform difficult calculations very quickly, and they are therefore considered the most important tool for research. They are the better technological tool for complex, large-scale computing tasks demanding a high level of performance.
Examples: Tianhe-1, Kraken, Jugene, etc.
2. Embedded Computers:
Embedded computers, as the name suggests, are specialized computer systems implemented as part of a larger device, intelligent system, or installation. Each is meant for only one purpose, because these are purpose-built computing platforms designed for a specific task. Their types include small-scale, medium-scale, and sophisticated embedded systems.
Examples: digital cameras, elevators, vending machines, etc.
Difference between Super Computers and Embedded Computers:
- Supercomputers are standalone, high-performance computers not incorporated into other devices; embedded computers are incorporated into other devices rather than being standalone computers.
- Supercomputers are specially designed to solve complex scientific and industrial problems and challenges; embedded computers are designed to perform specific, software-controlled tasks.
- A supercomputer's main aim is to apply maximum computing power to solve a single large problem in the shortest amount of time; an embedded computer's main purpose is to control a device and allow the user to interact with it.
- Supercomputers allow us to understand things that are very difficult to see or measure in real life; embedded computers enable designs and optimizations that make it possible for us to enjoy the advantages of technology.
- Supercomputers are used for scientific and engineering applications that must handle and control large databases and do great amounts of computation; embedded computers are used to reduce the size and cost of products and to increase reliability and performance.
- Supercomputers are larger and very expensive compared to embedded computers; embedded computers are small and cost-effective compared to supercomputers.
- Supercomputers are primarily useful for mathematically intensive scientific applications; embedded computers are primarily useful for safety-critical and other important everyday systems such as cars and medical equipment.
- A supercomputer can be used for many things; an embedded computer can be used for only one purpose.

Difference Between IoT Devices and Computers
In this article, we will discuss an overview of the Internet of Things and of computers, focusing mainly on the difference between IoT devices and computers. Let's discuss them one by one.
Internet of Things (IoT):
The Internet of Things (IoT) is the network of physical objects/devices like
vehicles, buildings, cars, and other items embedded with electronics, software,
sensors, and network connectivity that enables these objects to collect and
exchange data. IoT devices have made human life easier; devices like smart homes and smart cars have made people's lives very comfortable, and they are now part of our day-to-day lives.
Computers:
A computer is a hardware device with software embedded in it. The computer does most kinds of work: calculations, gaming, web browsing, word processing, e-mail, etc. The main function of a computer is to compute functions and run programs. It takes input, computes/processes it, and generates the output.

[Figure: Function of a computer]

Overview of IoT vs Computers:
One big difference between IoT devices and computers is that the main function of IoT devices is not to compute (not to be a computer), while the main function of a computer is to compute functions and run programs. For an IoT device, computing is not the main point; the device has some other primary function, and the embedded computer merely supports it. For example, the main function of a car, from the user's point of view, is not to compute anti-lock braking or fuel injection but to be driven, moving you from place to place; the embedded computer just helps that function, for instance by enabling fuel-limit detection.

Difference between IoT devices and Computers:
- IoT devices are special-purpose devices; computers are general-purpose devices.
- An IoT device can do only the particular task it is designed for; computers can do many tasks.
- The hardware and software built into an IoT device are streamlined for its particular task; the hardware and software built into a computer are streamlined to do many tasks (such as calculation, gaming, music playback, etc.).
- An IoT device can be cheaper and faster at its particular task than a computer, since it is made for that task; a computer can be more expensive and slower at that particular task than an IoT device.
- Examples of IoT devices: music players (iPod), Alexa, smart cars, etc.; examples of computers: desktop computers, laptops, etc.

Difference between IoT and M2M


1. Internet of Things:
IoT stands for the Internet of Things, where the "things" are communicating devices that can interact with each other using a communication medium. Almost every day, new devices that rely on IoT are integrated into daily use. These devices use various sensors and actuators to send and receive data over the internet. IoT is an ecosystem in which devices share data through a communication medium known as the internet.
2. Machine to Machine:
M2M is commonly known as machine-to-machine communication. It is a concept in which two or more machines communicate with each other without human interaction, using a wired or wireless mechanism. M2M is a technology that helps devices connect to each other without using the internet. M2M communications offer several applications such as security, tracking and tracing, manufacturing, and facility management.

Difference between IoT and M2M:
- Abbreviation: IoT stands for Internet of Things; M2M stands for Machine to Machine.
- Intelligence: In IoT, devices have objects responsible for decision making; in M2M, some degree of intelligence is observed.
- Connection type: In IoT, the connection is via a network using various communication types; in M2M, the connection is point to point.
- Communication protocol: IoT uses internet protocols such as HTTP, FTP, and Telnet; M2M uses traditional protocols and communication-technology techniques.
- Data sharing: In IoT, data is shared with other applications to improve the end-user experience; in M2M, data is shared only with the communicating parties.
- Internet: IoT requires an internet connection for communication; M2M devices are not dependent on the internet.
- Scope: IoT involves a large number of devices, and its scope is large; M2M has limited scope for devices.
- Business type: IoT serves Business-to-Business (B2B) and Business-to-Consumer (B2C); M2M serves Business-to-Business (B2B).
- Open API support: IoT supports Open API integrations; M2M has no support for Open APIs.
- Examples: IoT: smart wearables, big data and cloud, etc.; M2M: sensors, data and information, etc.


Most Common Threats to Security and Privacy of IoT Devices
Nowadays, the internet is growing at a very fast rate with advances in technologies and techniques. Some years ago, we did not necessarily require advanced security systems for our networked devices, because the internet was not that advanced in that era. According to a 2017 survey, 51% of big companies did not even think about securing their devices, because they felt their devices were unlikely to be attacked by hackers; now roughly 96% of companies expect a huge increase in attacks on IoT devices in the coming years.
As technology advances, attacks on internet devices are increasing rapidly and becoming more and more common. Security and privacy have become very important aspects of any IoT device. In this article, we discuss some of the most common threats to the security and privacy of IoT devices.
1. Weak Credentials
Generally, large manufacturers ship their products with the username "admin" and the password "0000" or "1234", and the consumers of these devices do not change them until forced to by a security executive. Such practices open a path for hackers to violate consumers' privacy and take control of their devices. The 2016 Mirai botnet attack was a result of the use of weak credentials.
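A minimal sketch of checking a device login against known factory defaults. The credential pairs are the ones mentioned above; the function itself is hypothetical, not part of any real library:

```python
# Hypothetical check against the factory-default credentials cited above.
DEFAULT_CREDENTIALS = {("admin", "0000"), ("admin", "1234")}

def uses_default_credentials(username, password):
    # Flag devices still running with out-of-the-box credentials.
    return (username, password) in DEFAULT_CREDENTIALS

print(uses_default_credentials("admin", "1234"))    # True  -> change it!
print(uses_default_credentials("admin", "s3cret"))  # False
```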
2. Complex Structure of IoT Devices
IoT devices have a very complex structure that makes it difficult to find faults in them. Even if a device is hacked, its owner may be unaware of the fact. Hackers can force the device to join a malicious botnet, or the device may get infected by a virus. Because of this complexity, we cannot directly tell that a device has been hacked. A few years ago, a security agency found that a smart refrigerator had sent more than a thousand spam emails; the interesting fact was that the refrigerator's owner did not even know about it.
3. Outdated Software and Hardware
IoT devices are generally secure when they are shipped, but issues arise when they do not get regular updates. When a company manufactures a device, it secures the device against all the threats of that time, but as discussed earlier, the internet and its technologies grow very fast. After a year or two, it becomes very easy for hackers to find the weaknesses of old devices with modern techniques. That is why security updates are the most important.
4. Rapid Increase in Ransomware
With the advancement of the internet, hackers are also getting more advanced. In the past few years, there has been a rapid increase in malicious software and ransomware, which poses a big challenge for IoT device manufacturers trying to secure their devices.
5. Small-Scale Attacks
IoT devices are often attacked on a very small scale. Manufacturing companies try to secure their devices against large-scale attacks, but few pay attention to small ones. Hackers launch small attacks on IoT devices such as baby monitors or open wireless connections and then force the devices to join botnets.
6. Insecure Data Transfer
With billions of IoT-enabled devices, it is very difficult to transmit such large amounts of data securely. There is always a risk of data being leaked, infected, or corrupted.
7. Smart Objects
Smart objects are the main building blocks of any IoT device. These smart objects should be able to communicate securely with other objects, devices, or sensors in any infrastructure, even when the devices are not aware of each other's network status. This too is an important issue: hackers can hack these devices over open wireless networks.
History of Cloud Computing
In this section, we cover a basic overview of cloud computing, focusing mainly on its history, including the history of client-server computing, distributed computing, and cloud computing. Let's discuss them one by one.
Cloud Computing:
Cloud computing refers to accessing and storing data and providing computing-related services over the internet. Simply put, remote services on the internet manage and access data online, rather than on local drives. The data can be anything: images, videos, audio, documents, files, etc.

Cloud Computing Service Providers:
Cloud computing is in huge demand, so big organizations provide the service; Amazon AWS, Microsoft Azure, Google Cloud, and Alibaba Cloud are some cloud computing service providers.
History of Cloud Computing:
Here we trace the history of cloud computing, covering along the way client-server computing and distributed computing.
• Before cloud computing came into existence, client-server architecture was used, in which all the data and control resided on the server side. If a user wanted to access some data, they first had to connect to the server and then obtain the appropriate access. This had many disadvantages, so after client-server computing, distributed computing came into existence, in which all computers are networked together and users can share resources when needed. Distributed computing also has certain limitations, so cloud computing emerged to remove them.

• In 1961, John McCarthy delivered a speech at MIT in which he said that "computing can be sold as a utility, like water and electricity." It was a brilliant idea, but people at that time were not ready to adopt the technology; they thought what they were using was efficient enough for them. So this concept of computing was not much appreciated, and little research went into it. But as time passed, the idea caught on, and it was eventually implemented by Salesforce.com in 1999.

• This company started delivering enterprise applications over the internet, and in this way the boom of cloud computing began.

• In 2002, Amazon started Amazon Web Services (AWS), providing storage and computation over the internet. In 2006, Amazon launched the Elastic Compute Cloud as a commercial service open to everybody.
• In 2008, Google launched Google App Engine, providing cloud computing enterprise applications; as other companies saw the emergence of cloud computing, they also started providing their own cloud services. Microsoft announced Microsoft Azure in 2008 (commercially available from 2010), and after that other companies like Alibaba, IBM, Oracle, and HP also introduced their cloud services. Today, cloud computing has become a very popular and important skill.
Advantages :
• It is easier to get backups in the cloud.
• It allows easy and quick access to stored information anywhere and anytime.
• It allows us to access data via mobile.
• It reduces both hardware and software costs, and it is easily maintainable.
• One of the biggest advantages of cloud computing is database security.
Disadvantages :
• It requires a good internet connection.
• Users have limited control over the data.
Conventional Computing vs Quantum Computing
We have been using computers since the mid-20th century. We are currently in the fourth generation of computers, with microprocessors coming after vacuum tubes, transistors, and integrated circuits. These generations were all based on conventional computing, which relies on the classical phenomenon of electrical circuits being in a single state at a given time, either on or off.
The fifth generation of computers is currently under development, with quantum computing being the most popular approach. Quantum computers are totally different from conventional computers in how they work. They are based on the phenomena of quantum mechanics, where it is possible to be in more than one state at a time.
Difference between conventional computing and quantum computing:
Conventional Computing | Quantum Computing
Based on the classical phenomenon of electrical circuits being in a single state at a given time, either on or off. | Based on the phenomena of quantum mechanics, such as superposition and entanglement, where it is possible to be in more than one state at a time.
Information storage and manipulation is based on the "bit", which is based on voltage or charge; low is 0 and high is 1. | Information storage and manipulation is based on the quantum bit or "qubit", which is based on the spin of an electron or the polarization of a single photon.
The circuit behavior is governed by classical physics. | The circuit behavior is governed by quantum physics or quantum mechanics.
Uses binary codes, i.e. bits 0 or 1, to represent information. | Uses qubits, i.e. 0, 1, and superposition states of both 0 and 1, to represent information.
CMOS transistors are the basic building blocks of conventional computers. | Superconducting Quantum Interference Devices (SQUIDs) or quantum transistors are the basic building blocks of quantum computers.
Data processing is done in the Central Processing Unit (CPU), which consists of an Arithmetic and Logic Unit (ALU), processor registers and a control unit. | Data processing is done in the Quantum Processing Unit (QPU), which consists of a number of interconnected qubits.
Introduction to quantum computing
Computers are getting smaller and faster day by day because electronic
components are getting smaller and smaller. But this process is about to meet its
physical limit.
Electricity is the flow of electrons. As the size of transistors shrinks to the size of a few atoms, transistors can no longer be used as reliable switches, because electrons may cross to the other side of a blocked passage by a process called quantum tunneling.
Quantum mechanics is the branch of physics that explores the physical world at the most fundamental level. At this level, particles behave differently from the classical world, taking more than one state at the same time and interacting with other particles that are very far away. Phenomena like superposition and entanglement take place.
• Superposition –
In classical computing, bits have two possible states, zero or one. In quantum computing, a qubit (short for “quantum bit”) is a unit of quantum information, the quantum analogue of a classical bit. Qubits have special properties that help them solve complex problems much faster than classical bits. One of these properties is superposition: instead of holding one binary value (“0” or “1”) like a classical bit, a qubit can hold a combination of “0” and “1” simultaneously. A qubit still has two possible measurement outcomes, zero or one, but before measurement it can be in any proportion of both states; only when we measure its value does it have to settle on zero or one. Superposition is therefore the ability of a quantum system to be in multiple states at the same time. In classical computing, for example, 4 bits can represent 2^4 = 16 values in total, but only one value at a given instant. In a combination of 4 qubits, all 16 combinations are possible at once (see the sketch after this list).
• Entanglement –
Entanglement is an extremely strong correlation that exists between quantum particles: so strong, in fact, that two or more quantum particles can be linked in perfect unison even if separated by great distances. Two qubits can be entangled, for example, through the action of a laser. Once entangled, they are in an indeterminate state. The qubits can then be separated by any distance and will remain linked: when one of the qubits is manipulated, the manipulation happens instantly to its entangled twin as well.
What can quantum computers do?
1. Quantum computers can easily crack the encryption algorithms used today in very little time, whereas the best supercomputers available today would take billions of years. Even though quantum computers would be able to crack many of today's encryption techniques, the prediction is that they would also enable hack-proof replacements.
2. Quantum computers are great for solving optimization problems.
Introduction to Parallel Computing
Before diving into parallel computing, let's first take a look at how computer software was traditionally executed and why that model fails for the modern era.
Computer software was conventionally written for serial computing. This meant that to solve a problem, an algorithm divides the problem into smaller instructions. These discrete instructions are then executed on the Central Processing Unit of a computer one by one. Only after one instruction finishes does the next one start.
A real-life example of this would be people standing in a queue waiting for a movie ticket with only one cashier. The cashier gives tickets one by one to each person. The complexity of this situation increases when there are 2 queues and only one cashier.
So, in short, serial computing works as follows:
1. A problem statement is broken into discrete instructions.
2. The instructions are executed one by one.
3. Only one instruction is executed at any moment of time.
Look at point 3. This was causing a huge problem in the computing industry, as only one instruction was getting executed at any moment of time. It was a huge waste of hardware resources, since only one part of the hardware was active for a particular instruction at any given time. As problem statements grew heavier and bulkier, so did their execution time. Examples of processors from this era are the Pentium 3 and Pentium 4.
Now let’s come back to our real-life problem. We could definitely say that
complexity will decrease when there are 2 queues and 2 cashiers giving tickets to
2 persons simultaneously. This is an example of Parallel Computing.
Parallel Computing :
It is the use of multiple processing elements simultaneously for solving any problem. Problems are broken down into instructions and solved concurrently, with each resource applied to the work operating at the same time.
Advantages of Parallel Computing over Serial Computing are as follows:
1. It saves time and money as many resources working together will
reduce the time and cut potential costs.
2. It can be impractical to solve larger problems on Serial Computing.
3. It can take advantage of non-local resources when the local resources
are finite.
4. Serial computing ‘wastes’ potential computing power; parallel computing makes better use of the hardware.
Types of Parallelism:
1. Bit-level parallelism –
It is the form of parallel computing based on increasing the processor's word size. It reduces the number of instructions that the system must execute in order to perform a task on large-sized data.
Example: Consider a scenario where an 8-bit processor must compute
the sum of two 16-bit integers. It must first sum up the 8 lower-order
bits, then add the 8 higher-order bits, thus requiring two instructions to
perform the operation. A 16-bit processor can perform the operation
with just one instruction.
2. Instruction-level parallelism –
A processor can issue more than one instruction per clock cycle. Instructions can be re-ordered and grouped, and then executed concurrently without affecting the result of the program. This is called instruction-level parallelism.
3. Task Parallelism –
Task parallelism employs the decomposition of a task into subtasks and then allocates each subtask for execution. The processors perform the execution of the subtasks concurrently. (A minimal sketch follows this list.)
4. Data-level parallelism (DLP) –
Instructions from a single stream operate concurrently on several data items. It is limited by non-regular data manipulation patterns and by memory bandwidth.
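To make task parallelism concrete, here is a minimal Python sketch (our own illustration; the subtask sizes are made up). A CPU-bound job is decomposed into independent subtasks that run concurrently on separate processes:

from multiprocessing import Pool

def count_primes(limit):
    # Subtask: count primes below `limit` (deliberately CPU-bound).
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    # One subtask per worker; each runs on a separate process.
    subtasks = [20000, 30000, 40000, 50000]
    with Pool(processes=4) as pool:
        results = pool.map(count_primes, subtasks)
    print(results)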
Why parallel computing?
• The real world is dynamic in nature, i.e. many things happen at the same time but at different places concurrently. This data is extremely huge to manage.
• Real-world data needs more dynamic simulation and modeling, and for
achieving the same, parallel computing is the key.
• Parallel computing provides concurrency and saves time and money.
• Complex, large datasets and their management can be organized only by using parallel computing's approach.
• It ensures the effective utilization of resources. The hardware is guaranteed to be used effectively, whereas in serial computation only some part of the hardware is used and the rest rendered idle.
• Also, it is impractical to implement real-time systems using serial
computing.
Applications of Parallel Computing:
• Databases and Data mining.
• Real-time simulation of systems.
• Science and Engineering.
• Advanced graphics, augmented reality, and virtual reality.
Limitations of Parallel Computing:
• It introduces challenges such as communication and synchronization between multiple sub-tasks and processes, which are difficult to achieve.
• The algorithms must be structured in such a way that they can be handled in a parallel mechanism.
• The algorithms or programs must have low coupling and high cohesion, but it is difficult to create such programs.
• Coding a parallelism-based program well requires more technically skilled and expert programmers.
Future of Parallel Computing: The computational landscape has undergone a great transition from serial computing to parallel computing. Tech giants such as Intel have already taken a step towards parallel computing by employing multicore processors. Parallel computation will revolutionize the way computers work in the future, for the better. With all the world connecting to one another even more than before, parallel computing plays a bigger role in helping us stay that way. With faster networks, distributed systems, and multi-processor computers, it becomes even more necessary.
Introduction of Optical Computing
Optical computing (also known as optoelectronic computing or photonic computing) is a computation paradigm that uses photons (small packets of light energy) produced by lasers or diodes for digital computation. Photons have been shown to provide higher bandwidth than the electrons used in conventional computer systems. Optical computers would therefore give us higher performance, and hence be faster, than electronic ones.
The speed of computation depends on two factors: how fast the information can be transferred and how fast that information can be processed, i.e. data computation. Photons use wave propagation and the interference pattern of waves to determine outputs. This allows computation to happen without inducing latency: data is processed while it is propagating, and there is no need to stop the data movement and flow for processing. This speed factor could transform the computer industry.
The building block of any conventional electronic computer is a transistor. For
optical computing, we achieve an equivalent optical transistor by making use of
materials with non-linear refractive indices. Such materials can be used for
making optical logic gates, which go into the CPU. An optical logic gate is simply a
switch that controls one light beam by another. It is “ON” when light is being
transmitted, and it is “OFF” when it blocks the light.
Photons are massless, hence we need very little energy to excite them. Also, instead of operating in a serial fashion like most classical computers, optical computing operates in a parallel way, which helps it tackle complex problems using light reflection, as well as offering increased bandwidth compared to electron-based systems. Coming to security, since optical computing processes data while it is in motion, very little data is exposed, which leads to increased security compared to conventional systems.
Advantages:
• Low heating
• Can tackle complex computations very quickly
• Can be scaled to larger networks efficiently.
• Increased computation speed
• Higher bandwidth with very low data loss transmission.
• Free from electrical short circuits.
Disadvantages:
• Components of optical computers would be very costly.
• Components are bulky in size.
• Integrating optical gates is complex.
• Interference can be caused by dust or any imperfections.
Issues in Cloud Computing
Cloud computing is a new name for an old concept: the delivery of computing services from a remote location. Cloud computing is internet-based computing, where shared resources, software, and information are provided to computers and other devices on demand.
These are major issues in Cloud Computing:
1. Privacy: The user data can be accessed by the host company with or without
permission. The service provider may access the data that is on the cloud at any
point in time. They could accidentally or deliberately alter or even delete
information.
2. Compliance: There are many regulations in place related to data and hosting. To comply with regulations (the Federal Information Security Management Act, the Health Insurance Portability and Accountability Act, etc.), the user may have to adopt deployment modes that are expensive.
3. Security: Cloud-based services involve third parties for storage and security. Can one assume that a cloud-based company will protect and secure one's data if one is using their services at a very low price or for free? They may share users' information with others. Security presents a real threat to the cloud.
4. Sustainability: This issue refers to minimizing the effect of cloud computing on the environment. Citing the environmental effects of running servers, countries with favorable conditions, such as Finland, Sweden, and Switzerland, where the climate favors natural cooling and renewable electricity is readily available, are trying to attract cloud computing data centers. But beyond nature's favors, do these countries have enough technical infrastructure to sustain high-end clouds?
5. Abuse: While providing cloud services, it should be ascertained that the client is not purchasing the services of cloud computing for a nefarious purpose. In 2009, a banking Trojan illegally used the popular Amazon service as a command and control channel that issued software updates and malicious instructions to PCs that were infected by the malware. So the hosting companies and the servers should have proper measures to address these issues.
6. Higher Cost: If you want to use cloud services uninterruptedly, then you need a powerful network with higher bandwidth than ordinary internet networks; also, if your organization is broad and large, an ordinary cloud service subscription won't suit it, and you might face hassles utilizing an ordinary cloud service while working on complex projects and applications. This is a major problem for small organizations, restricting them from diving into cloud technology for their business.
7. Recovery of lost data in contingency: Before subscribing to any cloud service provider, go through all norms and documentation and check whether their services match your requirements and whether they have a sufficient, well-maintained resource infrastructure with proper upkeep. Once you subscribe to the service, you almost hand your data over to a third party. If you choose a proper cloud service, then in the future you don't need to worry about the recovery of lost data in any contingency.
8. Upkeep (management) of cloud: Maintaining a cloud is a herculean task, because a cloud architecture contains a large resource infrastructure along with other challenges and risks, user satisfaction, etc. As users usually pay for how much they have consumed, it sometimes becomes hard to decide how much should be charged when a user wants scalability and extended services.
9. Lack of resources/skilled expertise: One of the major issues that companies and enterprises are going through today is the lack of resources and skilled employees. Every second organization seems interested in, or has already moved to, cloud services. As the workload in the cloud increases, the cloud service hosting companies need continuous rapid advancement. Due to these factors, organizations are having a tough time keeping up to date with the tools. As new tools and technologies emerge every day, more skilled/trained employees need to grow. These challenges can only be minimized through additional training of IT and development staff.
10. Pay-per-use service charges: Cloud computing services are on-demand services: a user can extend or compress the volume of resources as needed, and pays for how much has been consumed. It is difficult to define a certain pre-defined cost for a particular quantity of services. Such ups and downs and price variations make the implementation of cloud computing very difficult and intricate. It is not easy for a firm's owner to predict consistent demand and fluctuations with the seasons and various events, so it is hard to build a budget for a service that could consume several months of the budget in a few days of heavy use.
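As a toy illustration of why pay-per-use budgets are hard to pin down, here is a small Python sketch with entirely made-up rates (real providers publish their own pricing): the same formula produces very different bills in a steady month and a burst month.

# Hypothetical rates, for illustration only.
RATE_PER_VCPU_HOUR = 0.04  # currency units per vCPU-hour (assumed)
RATE_PER_GB_MONTH = 0.02   # currency units per GB-month of storage (assumed)

def monthly_cost(vcpus, hours, storage_gb):
    # A month's bill is driven entirely by metered consumption.
    return vcpus * hours * RATE_PER_VCPU_HOUR + storage_gb * RATE_PER_GB_MONTH

print(monthly_cost(vcpus=4, hours=720, storage_gb=500))   # steady month: 125.2
print(monthly_cost(vcpus=32, hours=720, storage_gb=500))  # burst month: 931.6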
Characteristics of Cloud Computing
There are basically 5 essential characteristics of cloud computing.
1. On-demand self-service:
Cloud computing services do not require any human administrators; users themselves are able to provision, monitor and manage computing resources as needed.
2. Broad network access:
The computing services are generally provided over standard networks and to heterogeneous devices.
3. Rapid elasticity:
The computing services should have IT resources that are able to scale out and in quickly, on an as-needed basis. Whenever the user requires services, they are provided, and they are scaled back in as soon as the requirement is over.
4. Resource pooling:
The IT resources (e.g., networks, servers, storage, applications, and services) present are shared across multiple applications and tenants in an uncommitted manner. Multiple clients are provided service from the same physical resource.
5. Measured service:
Resource utilization is tracked for each application and tenant; this provides both the user and the resource provider with an account of what has been used. This is done for various reasons, such as monitoring, billing and effective use of resources.
Cloud Management in Cloud Computing
Prerequisite : Cloud Computing
Cloud computing management is maintaining and controlling the cloud services and resources, be they public, private or hybrid. Some of its aspects include load balancing, performance, storage, backups, capacity, deployment, etc. To do so, cloud management personnel need full access to all the functionality of resources in the cloud. Different software products and technologies are combined to provide a cohesive cloud management strategy and process.
As we know, a private cloud infrastructure is operated only for a single organization and can be managed by the organization or by a third party. Public cloud services are delivered over a network that is open and available for public use; in this model, the IT infrastructure is owned by a private company, and members of the public can purchase or lease data storage or computing capacity as needed. Hybrid cloud environments are a combination of public and private cloud services from different providers. Most organizations store sensitive data on private cloud servers for privacy reasons, while leveraging public cloud applications at a lower price point for less sensitive information.
Need of Cloud Management :
The cloud is nowadays preferred by huge organizations as their primary data storage. A small downtime or an error can cause a great deal of loss and inconvenience for the organizations. So, to design, handle and maintain a cloud computing service, specific members are responsible for making sure things work as supposed to and that all arising issues are addressed.
Cloud Management Platform :
A cloud management platform is a software solution that has a robust and
extensive set of APIs that allow it to pull data from every corner of the IT
infrastructure. A CMP allows an IT organization to establish a structured
approach to security and IT governance that can be implemented across the
organization’s entire cloud environment.
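As a sketch of the CMP idea, the snippet below uses an entirely hypothetical adapter interface (real platforms expose their own vendor-specific APIs) to pull metrics from several environments into one view:

class CloudAdapter:
    # Hypothetical per-provider adapter that a CMP would implement.
    def __init__(self, name, fetch_metrics):
        self.name = name
        self.fetch_metrics = fetch_metrics  # callable returning a metrics dict

def collect_inventory(adapters):
    # Pull data from every corner of the IT infrastructure into one view.
    return {a.name: a.fetch_metrics() for a in adapters}

adapters = [
    CloudAdapter("private-dc", lambda: {"vms": 40, "cpu_util": 0.61}),
    CloudAdapter("public-cloud", lambda: {"vms": 250, "cpu_util": 0.48}),
]
print(collect_inventory(adapters))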
Cloud Management Tasks :
The following are the different cloud management tasks:
• Auditing System Backups –
It is required to audit the backups from time to time to ensure restoration of randomly selected files of different users. This might be done by the organization or by the cloud provider.
• Flow of data in the system –
The managers are responsible for designing a data flow diagram that
shows how the data is supposed to flow throughout the organization.
• Vendor Lock-In –
The managers should know how to move their data from one provider's servers to another's in case the organization decides to switch providers.
• Knowing provider’s security procedures –
The managers should know the security plans of the provider,
especially Multitenant use, E-commerce processing, Employee
screening and Encryption policy.
• Monitoring the Capacity, Planning and Scaling abilities –
The manager should know if their current cloud provider is going to
meet their organization’s demand in the future and also their scaling
capabilities.
• Monitoring audit log –
In order to identify errors in the system, logs are audited by the
managers on a regular basis.
• Solution Testing and Validation –
It is necessary to test the cloud services and verify the results to ensure error-free solutions.
Difference Between Cloud Computing and Fog Computing
Cloud Computing: The delivery of on-demand computing services is known as cloud computing. We can use applications, storage, and processing power over the internet on a pay-as-you-go basis. Without owning any computing infrastructure or data centers, anyone can rent access to anything from applications to storage from a cloud service provider.
We can avoid the complexity of owning and maintaining infrastructure by using cloud computing services and paying only for what we use.
In turn, cloud computing service providers can benefit from significant economies of scale by delivering the same services to a wide range of customers.
Fog Computing: Fog computing is a decentralized computing infrastructure or process in which computing resources are located between the data source and the cloud or any other data center. Fog computing is a paradigm that serves user requests at the edge of the network. The devices at the fog layer are typically those that also perform networking operations, such as routers, gateways, bridges, and hubs. Researchers envision these devices being capable of performing both computational and networking operations simultaneously. Although these devices are resource-constrained compared to cloud servers, their geographical spread and decentralized nature help in offering reliable services with coverage over a wide area. In fog computing, the devices are physically located much closer to the users than the cloud servers are.
Below is a table of differences between Cloud Computing and Fog Computing:
Feature | Cloud Computing | Fog Computing
Latency | High latency compared to fog computing. | Low latency.
Capacity | Does not provide any reduction in data while sending or transforming data. | Reduces the amount of data sent to cloud computing.
Responsiveness | Response time of the system is low. | Response time of the system is high.
Security | Less secure compared to fog computing. | High security.
Speed | Access speed is high, depending on the VM connectivity. | Even higher compared to cloud computing.
Data Integration | Multiple data sources can be integrated. | Multiple data sources and devices can be integrated.
Mobility | Mobility is limited. | Mobility is supported.
Location Awareness | Partially supported. | Supported.
Number of Server Nodes | Few server nodes. | Large number of server nodes.
Geographical Distribution | Centralized. | Decentralized and distributed.
Location of service | Services provided within the internet. | Services provided at the edge of the local network.
Working environment | Specific data center buildings with air conditioning systems. | Outdoor (streets, base stations, etc.) or indoor (houses, cafes, etc.).
Communication mode | IP network. | Wireless communication: WLAN, WiFi, 3G, 4G, ZigBee, etc., or wired communication (part of the IP networks).
Dependence on the quality of core network | Requires a strong network core. | Can also work with a weak network core.
Difference Between Edge Computing and Fog Computing
Cloud computing refers to the on-demand delivery of IT services/resources over the internet. By using cloud computing, users can access services from anywhere, whenever they need them.
Nowadays, a massive amount of data is generated every second around the globe. Businesses collect and process that data and derive analytics to scale their business. When lots of organizations access their data simultaneously on remote servers in data centers, data traffic might occur, causing delays in accessing the data, lower bandwidth, etc. Cloud computing technology alone is not effective enough to store and process massive amounts of data and respond quickly.
For example, in a Tesla self-driving car, sensors constantly monitor certain regions around the car. If they detect an obstacle or pedestrian in its path, the car must stop or maneuver around it without hitting anything. When an obstacle is in the way, the sensor data must be processed quickly to help the car detect it before a collision; a little delay in detection could be a major issue. To overcome such challenges, edge computing and fog computing were introduced.
[Figure: Edge and Fog Computing]
Edge Computing
Computation that takes place at the edge of a device's network is known as edge computing. That means a computer is connected with the network of the device and processes the data, sending it to the cloud in real time. That computer is known as an “edge computer” or “edge node”.
With this technology, data is processed and transmitted to the devices instantly. Yet edge nodes transmit all the data captured or generated by the device, regardless of the importance of the data. This is where fog computing was introduced, and it has become an ideal solution.
Fog Computing
Fog computing is an extension of cloud computing. It is a layer between the edge and the cloud. When edge computers send huge amounts of data to the cloud, fog nodes receive the data and analyze what's important. The fog nodes then transfer the important data to the cloud to be stored and delete the unimportant data, or keep it with themselves for further analysis. In this way, fog computing saves a lot of space in the cloud and transfers important data quickly.
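A toy Python sketch of the fog-node filtering idea described above (the threshold and readings are made up; real deployments use far richer rules): only "important" readings are forwarded to the cloud.

THRESHOLD = 50.0  # assumed importance cutoff for a sensor reading

def fog_filter(readings):
    # Keep only important readings; everything else is dropped (or kept locally).
    important = [r for r in readings if r["value"] > THRESHOLD]
    return important, len(readings) - len(important)

edge_readings = [{"sensor": i, "value": v}
                 for i, v in enumerate([12.0, 75.3, 48.9, 90.1, 5.2])]
to_cloud, dropped = fog_filter(edge_readings)
print(f"forwarding {len(to_cloud)} readings to the cloud, dropped {dropped}")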
Difference Between Edge Computing and Fog Computing

S.NO. | Edge Computing | Fog Computing
01. | Less scalable than fog computing. | Highly scalable compared to edge computing.
02. | Billions of nodes are present. | Millions of nodes are present.
03. | Nodes are installed far away from the cloud. | Nodes are installed closer to the cloud (the remote database where data is stored).
04. | Edge computing is a subdivision of fog computing. | Fog computing is a subdivision of cloud computing.
05. | The bandwidth requirement is very low, because data comes from the edge nodes themselves. | The bandwidth requirement is high, as data originating from edge nodes is transferred to the cloud.
06. | Operational cost is higher. | Operational cost is comparatively lower.
07. | High privacy; attacks on data are very low. | The probability of data attacks is higher.
08. | Edge devices are the inclusion of the IoT devices or the client's network. | Fog is an extended layer of the cloud.
09. | The power consumption of nodes is low. | The power consumption of nodes is high.
10. | Edge computing helps devices get faster results by processing the data as it is received from the devices. | Fog computing helps in filtering important information from the massive amount of data collected from the device and saves it in the cloud by sending the filtered data.
Difference between Grid computing and Cluster computing
Cluster Computing:
A computer cluster is a local network of two or more homogeneous computers. A computation process on such a computer network, i.e. a cluster, is called cluster computing.
Grid Computing:
Grid computing can be defined as a network of homogeneous or heterogeneous computers working together over a long distance to perform a task that would be difficult for a single machine.
Difference between Cluster and Grid Computing:
Cluster Computing | Grid Computing
Nodes must be homogeneous, i.e. they should have the same type of hardware and operating system. | Nodes may have different operating systems and hardware. Machines can be homogeneous or heterogeneous.
Computers in a cluster are dedicated to the same work and perform no other task. | Computers in a grid contribute their unused processing resources to the grid computing network.
Computers are located close to each other. | Computers may be located at a huge distance from one another.
Computers are connected by a high speed local area network bus. | Computers are connected using a low speed bus or the internet.
Computers are connected in a centralized network topology. | Computers are connected in a distributed or de-centralized network topology.
Scheduling is controlled by a central server. | It may have servers, but mostly each node behaves independently.
The whole system has a centralized resource manager. | Every node manages its resources independently.
The whole system functions as a single system. | Every node is autonomous, and anyone can opt out anytime.
Difference between Cloud Computing and Grid Computing
Cloud Computing:
Cloud computing is a client-server computing architecture. In cloud computing, resources are used in a centralized pattern, and cloud computing is a highly accessible service. It is a pay-and-use business model: in cloud computing, the users pay for what they use.
Grid Computing:
Grid computing is a distributed computing architecture. In grid computing, resources are used in a collaborative pattern, and the users do not pay for use.
Let's see the differences between cloud and grid computing, which are given below:
S.NO | Cloud Computing | Grid Computing
1. | Cloud computing is a client-server computing architecture. | Grid computing is a distributed computing architecture.
2. | Cloud computing is a centralized executive. | Grid computing is a decentralized executive.
3. | In cloud computing, resources are used in a centralized pattern. | In grid computing, resources are used in a collaborative pattern.
4. | It is more flexible than grid computing. | It is less flexible than cloud computing.
5. | In cloud computing, the users pay for what they use. | In grid computing, the users do not pay for use.
6. | Cloud computing is a highly accessible service. | Grid computing is a less accessible service.
7. | It is highly scalable compared to grid computing. | Grid computing is less scalable in comparison to cloud computing.
8. | It can be accessed through standard web protocols. | It is accessible through grid middleware.
Grid Computing
Grid Computing can be defined as a network of computers working together to
perform a task that would rather be difficult for a single machine. All machines
on that network work under the same protocol to act as a virtual supercomputer.
The task that they work on may include analyzing huge datasets or simulating
situations that require high computing power. Computers on the network
contribute resources like processing power and storage capacity to the network.
Grid Computing is a subset of distributed computing, where a virtual
supercomputer comprises machines on a network connected by some bus,
mostly Ethernet or sometimes the Internet. It can also be seen as a form
of Parallel Computing where instead of many CPU cores on a single machine, it
contains multiple cores spread across various locations. The concept of grid
computing isn’t new, but it is not yet perfected as there are no standard rules and
protocols established and accepted by people.
Working:
A Grid computing network mainly consists of these three types of machines
1. Control Node:
A computer, usually a server or a group of servers which administrates
the whole network and keeps the account of the resources in the
network pool.
2. Provider:
A computer that contributes its resources to the network resource pool.
3. User:
A computer that uses the resources on the network.
When a computer makes a request for resources to the control node, the control node gives the user access to the resources available on the network. When it is not in use, it should ideally contribute its resources to the network. Hence a normal computer on the network can swing between being a user and a provider based on its needs. The nodes may consist of machines with similar platforms using the same OS, called homogeneous networks, or machines with different platforms running various different OSs, called heterogeneous networks. This is what distinguishes grid computing from other distributed computing architectures.
For controlling the network and its resources, a software/networking protocol generally known as middleware is used. It is responsible for administrating the network, and the control nodes are merely its executors. As a grid computing system should use only the unused resources of a computer, it is the job of the control node to ensure that no provider is overloaded with tasks.
Another job of the middleware is to authorize any process being executed on the network. Since a provider gives users permission to run things on its computer, this is a potential security threat for the network, so the middleware should ensure that no unwanted task is executed on the network.
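The toy Python sketch below mirrors the control-node role just described (names and capacities are made up): tasks are dispatched only to providers that currently have spare capacity, so no provider is overloaded.

providers = {"node-a": 2, "node-b": 1}  # provider name -> free slots (assumed)
tasks = ["simulate", "analyze", "render", "index"]

assignments, waiting = {}, []
for task in tasks:
    free = [p for p, slots in providers.items() if slots > 0]
    if free:
        chosen = free[0]
        providers[chosen] -= 1      # the chosen provider is now busier
        assignments[task] = chosen
    else:
        waiting.append(task)        # queue the task instead of overloading

print(assignments)  # three tasks placed across node-a and node-b
print(waiting)      # ['index'] waits until capacity frees up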
The meaning of the term grid computing has changed over the years. According to “The Grid: Blueprint for a New Computing Infrastructure” by Ian Foster and Carl Kesselman, published in 1999, the idea was to consume computing power like electricity is consumed from a power grid. This idea is similar to the current concept of cloud computing, whereas grid computing is now viewed as a distributed collaborative network. Currently, grid computing is used in various institutions to solve many mathematical, analytical, and physics problems.
Advantages of Grid Computing:
1. It is not centralized, as there are no servers required, except the control
node which is just used for controlling and not for processing.
2. Multiple heterogeneous machines i.e. machines with different
Operating Systems can use a single grid computing network.
3. Tasks can be performed parallelly across various physical locations and
the users don’t have to pay for them (with money).
Disadvantages of Grid Computing:
1. The software of the grid is still in the evolution stage.
2. A super-fast interconnect between computing resources is the need of the hour.
3. Licensing across many servers may make it prohibitive for some applications.
4. Many groups are reluctant to share resources.
Difference between Parallel Computing and Distributed Computing
Parallel Computing:
In parallel computing, multiple processors perform multiple tasks assigned to them simultaneously. Memory in parallel systems can be either shared or distributed. Parallel computing provides concurrency and saves time and money.
Distributed Computing:
In distributed computing, we have multiple autonomous computers which appear to the user as a single system. In distributed systems there is no shared memory, and computers communicate with each other through message passing. In distributed computing, a single task is divided among different computers.
Difference between Parallel Computing and Distributed Computing:
S.NO | Parallel Computing | Distributed Computing
1. | Many operations are performed simultaneously. | System components are located at different locations.
2. | A single computer is required. | Uses multiple computers.
3. | Multiple processors perform multiple operations. | Multiple computers perform multiple operations.
4. | It may have shared or distributed memory. | It has only distributed memory.
5. | Processors communicate with each other through a bus. | Computers communicate with each other through message passing.
6. | Improves the system performance. | Improves scalability, fault tolerance and resource sharing capabilities.
Difference between Soft Computing and Hard Computing
Soft computing is a computing model evolved to solve non-linear problems that involve uncertain, imprecise and approximate solutions. These sorts of problems are considered real-life problems, where human-like intelligence is required to solve them.
Hard computing is the traditional approach used in computing, which needs an accurately stated analytical model. The outcome of the hard computing approach is a guaranteed, deterministic, accurate result, and it defines definite control actions using a mathematical model or algorithm. It deals with binary and crisp logic that requires exact input data sequentially. Hard computing is not capable of solving real-world problems whose behavior is imprecise or uncertain.
Difference between Soft Computing and Hard Computing:
S.NO | Soft Computing | Hard Computing
1. | Soft computing is tolerant of inexactness, uncertainty, partial truth and approximation. | Hard computing needs an exactly stated analytic model.
2. | Soft computing relies on fuzzy logic and probabilistic reasoning. | Hard computing relies on binary logic and crisp systems.
3. | Soft computing has the features of approximation and dispositionality. | Hard computing has the features of exactitude (precision) and categoricity.
4. | Soft computing is stochastic in nature. | Hard computing is deterministic in nature.
5. | Soft computing works on ambiguous and noisy data. | Hard computing works on exact data.
6. | Soft computing can perform parallel computations. | Hard computing performs sequential computations.
7. | Soft computing produces approximate results. | Hard computing produces precise results.
8. | Soft computing can evolve its own programs. | Hard computing requires programs to be written.
9. | Soft computing incorporates randomness. | Hard computing is settled (deterministic).
10. | Soft computing uses multivalued logic. | Hard computing uses two-valued logic.
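As a minimal illustration of the multivalued logic in row 10 (our own sketch; the membership shape is an arbitrary assumption), a fuzzy membership function maps a crisp input to a degree of truth in [0, 1] rather than a hard 0 or 1:

def warm(temp_c, low=15.0, high=30.0):
    # Degree to which temp_c is "warm": a simple linear ramp (assumed shape).
    if temp_c <= low:
        return 0.0
    if temp_c >= high:
        return 1.0
    return (temp_c - low) / (high - low)

for t in (10, 18, 24, 32):
    print(t, round(warm(t), 2))  # 0.0, 0.2, 0.6, 1.0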
Difference between AI and Soft Computing
Artificial Intelligence:
AI deals with the broader issue of automating a system. This automation can be done by utilizing any field, such as image processing, cognitive science, neural systems, machine learning, etc. AI deals with making machines, frameworks and other devices smart by enabling them to think and perform tasks as humans generally do.
Soft Computing:
Soft computing is a computing model evolved to solve non-linear problems that involve uncertain, imprecise and approximate solutions. These sorts of problems are considered real-life problems, where human-like intelligence is required to solve them.
Difference between AI and Soft Computing:
S.NO. | A.I. | Soft Computing
1 | Artificial intelligence is the art and science of developing intelligent machines. | Soft computing aims to exploit tolerance for uncertainty, imprecision, and partial truth.
2 | AI plays a fundamental role in finding missing pieces between interesting real-world problems. | Soft computing comprises techniques which are inspired by human reasoning and have the potential for handling imprecision, uncertainty and partial truth.
3 | Branches of AI: 1. Reasoning, 2. Perception, 3. Natural language processing. | Branches of soft computing: 1. Fuzzy systems, 2. Evolutionary computation, 3. Artificial neural computing.
4 | AI has countless applications in healthcare and is widely used in analyzing complicated medical data. | Soft computing techniques are used in science and engineering disciplines such as data mining, electronics, automotive, etc.
5 | The goal is to simulate human-level intelligence in machines. | It aims at accommodation with the pervasive imprecision of the real world.
6 | They require programs to be written. | They do not require all programs to be written; they can evolve their own programs.
7 | They require exact input samples. | They can deal with ambiguous and noisy data.
Single Layered Neural Networks in R Programming
Neural networks, also known as neural nets, are a type of algorithm in machine learning and artificial intelligence that works similarly to the way the human brain operates. The artificial neurons in a neural network mimic the behavior of neurons in the human brain. Neural networks are used in risk analysis of business, forecasting sales, and much more. Neural networks are adaptable to changing inputs, so there is no need to redesign the algorithm based on the inputs. In this article, we'll discuss the single-layered neural network, with the syntax and implementation of the neuralnet() function in R programming. The following function requires the neuralnet package.
Types of Neural Networks
Neural Networks can be classified into multiple types based on their depth
activation filters, Structure, Neurons used, Neuron density, data flow, and so on.
The types of Neural Networks are as follows:
1. Perceptron
2. Feed Forward Neural Networks
3. Convolutional Neural Networks
4. Radial Basis Function Neural Networks
5. Recurrent Neural Networks
6. Sequence to Sequence Model
7. Modular Neural Network
Depending upon the number of layers, there are two types of neural networks:
1. Single Layered Neural Network: A single-layer neural network contains an input layer and an output layer. The input layer receives the input signals and the output layer generates the output signals accordingly.
2. Multilayer Neural Network: A multilayer neural network contains an input layer, an output layer and one or more hidden layers. The hidden layers perform intermediate computations before directing the input to the output layer.
Single Layered Neural Network
A single-layered neural network, often called a perceptron, is a type of feed-forward neural network made up of input and output layers. Inputs provided are multi-dimensional. Perceptrons are acyclic in nature. The sum of the products of the weights and the inputs is calculated at each node. The input layer transmits the signals to the output layer, and the output layer performs the computations. A perceptron can learn only linear functions and requires relatively little training. The output can be represented as one of two values (0 or 1).
Implementation in R
R provides the neuralnet() function, available in the neuralnet package, to build a single-layered neural network.
Syntax:
neuralnet(formula, data, hidden)
Parameters:
formula: represents formula on which model has to be fitted
data: represents dataframe
hidden: represents number of neurons in hidden layers
To know about more optional parameters of the function, use below command in
console: help(“neuralnet”)
Example 1:
In this example, let us create a single-layered neural network, or perceptron, that distinguishes the iris species setosa and versicolor based on sepal length and sepal width.
Step 1: Install the required package

# Install the required package

install.packages("neuralnet")

Step 2: Load the package


# Load the package

library(neuralnet)

Step 3: Load the dataset

# Load dataset

df <- iris[1:100, ]

Step 4: Fitting neural network

nn = neuralnet(Species ~ Sepal.Length

+ Sepal.Width, data = df,

hidden = 0, linear.output = TRUE)

Step 5: Plot neural network

# Output to be present as PNG file

png(file = "neuralNetworkGFG.png")

# Plot

plot(nn)

# Saving the file

dev.off()
Output: the plot of the fitted network is saved as neuralNetworkGFG.png (image omitted).

Example 2:
In this example, let us create a more reliable neural network using a multi-layer neural network and make predictions based on the dataset.
Step 1: Install the required package

# Install the required package

install.packages("neuralnet")

Step 2: Load the package

# Load the package

library(neuralnet)

Step 3: Load the dataset


# Load dataset

df <- mtcars

Step 4: Fitting neural network

nn <- neuralnet(am ~ vs + cyl + disp + hp + gear

+ carb + wt + drat, data = df,

hidden = 3, linear.output = TRUE)

Step 5: Plot neural network

# Output to be present as PNG file

png(file = "neuralNetwork2GFG.png")

# Plot

plot(nn)

# Saving the file

dev.off()

Step 6: Create test dataset

# Create test dataset

vs = c(0, 1, 1)
cyl =c(6, 8, 8)

disp = c(170, 250, 350)

hp = c(120, 240, 300)

gear = c(4, 5, 4)

carb = c(4, 3, 3)

wt = c(2.780, 3.210, 3.425)

drat = c(3.05, 4.02, 3.95)

test <- data.frame(vs, cyl, disp, hp,

gear, carb, wt, drat)

Step 7: Make prediction of test dataset

Predict <- compute(nn, test)

cat("Predicted values:\n")

print(Predict$net.result)

Step 8: Convert prediction into binary values

probability <- Predict$net.result

pred <- ifelse(probability > 0.5, 1, 0)

cat("Result in binary values:\n")

print(pred)
Output:

Predicted values:
[,1]
[1,] 0.3681382
[2,] 0.9909768
[3,] 0.9909768

Result in binary values:


[,1]
[1,] 0
[2,] 1
[3,] 1
Explanation:
In the above output, the “am” value for each row of the test dataset is predicted using the multi-layer neural network. As per the network created by the function, predicted values greater than 0.5 make the car's “am” value 1.
Advantages of Single-layered Neural Network
• Single-layer neural networks are easy to set up and train because there are no hidden layers.
• It has explicit links to statistical models.
Disadvantages of Single-layered Neural Network
• It can only work well for linearly separable data.
• A single-layer neural network has lower accuracy compared to a multi-layer neural network.
Multi Layered Neural Networks in R Programming
A series or set of algorithms that try to recognize the underlying relationships in a data set through a definite process that mimics the operation of the human brain is known as a neural network. The term can refer to networks of either artificial or organic (biological) neurons. A neural network can easily adapt to changing input to achieve or generate the best possible result without needing to redesign the output criteria.
Types of Neural Network
Neural networks can be classified into multiple types based on their layers and depth, activation filters, structure, neurons used, neuron density, data flow, and so on. The types of neural networks are as follows:
• Perceptron
• Multi-Layer Perceptron or Multi-Layer Neural Network
• Feed Forward Neural Networks
• Convolutional Neural Networks
• Radial Basis Function Neural Networks
• Recurrent Neural Networks
• Sequence to Sequence Model
• Modular Neural Network
Multi-Layer Neural Network
To be accurate, a fully connected multi-layered neural network is known as a Multi-Layer Perceptron. A multi-layered neural network consists of multiple layers of artificial neurons or nodes. Unlike single-layer neural networks, in recent times most networks are multi-layered. The following diagram (omitted) visualizes a multi-layer neural network.
Explanation:
Here the nodes marked as “1” are known as bias units. The leftmost layer, or Layer 1, is the input layer; the middle layer, or Layer 2, is the hidden layer; and the rightmost layer, or Layer 3, is the output layer. We can say that the diagram has 3 input units (leaving the bias unit aside), 1 output unit, and 3 hidden units.
A multi-layered neural network is the typical example of a feed-forward neural network. The number of neurons and the number of layers make up the hyperparameters of the network, which need tuning. In order to find ideal values for the hyperparameters, one must use some cross-validation technique. Weight adjustment training is carried out using the back-propagation technique.
Formula for Multi-Layered Neural Network
Suppose we have n inputs (x1, x2, …, xn) and a bias unit, and let the weights applied be w1, w2, …, wn. The weighted sum r is found by performing a dot product between the inputs and weights and adding the bias:

r = w1*x1 + w2*x2 + … + wn*xn + bias

On feeding r into the activation function F, we find the output for the hidden layer. For the first neuron h1 of the first hidden layer:

h1 = F(r)

For all the other hidden layers, repeat the same procedure. Keep repeating the process until you reach the last weight set.
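For a small made-up example, take two inputs x1 = 1 and x2 = 2 with weights w1 = 0.5 and w2 = -0.25 and bias = 0.1. Then r = (0.5)(1) + (-0.25)(2) + 0.1 = 0.1, and with a sigmoid activation the neuron outputs h1 = F(0.1) = 1 / (1 + e^(-0.1)) ≈ 0.525.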
Implementing Multi-Layered Neural Network in R
In the R language, install the neuralnet package to work with neural networks. The neuralnet package demands an all-numeric matrix or data frame. Control the hidden layers by passing a value for the hidden parameter of the neuralnet() function, which can be a vector for multiple hidden layers. Use the set.seed() function every time to generate reproducible random numbers.
Example:
Use the neuralnet package to fit a model. Let us see the steps to fit a multi-layered neural network in R.
• Step 1: The first step is to pick the dataset. Here in this example, let's work on the Boston dataset of the MASS package. This dataset deals with housing values in the suburbs of Boston. The goal is to find medv, the median value of owner-occupied homes, using all the other available continuous variables. Use the set.seed() function to generate random numbers.

• r

set.seed(500)

library(MASS)

data <- Boston

• Step 2: Then check for missing values or data points in the dataset. If there are any, fix the missing data points.
• r

apply(data, 2, function(x) sum(is.na(x)))

• Output:
crim zn indus chas nox rm age dis rad
tax ptratio black lstat medv
0 0 0 0 0 0 0 0 0
0 0 0 0 0
• Step 3: Since no data points are missing, proceed to preparing the data set. Now randomly split the data into two sets, a train set and a test set. After preparing the data, fit a linear regression model on the train set and then test it on the test set.
• r

index <- sample(1 : nrow(data),

round(0.75 * nrow(data)))

train <- data[index, ]

test <- data[-index, ]

lm.fit <- glm(medv~., data = train)

summary(lm.fit)

pr.lm <- predict(lm.fit, test)

MSE.lm <- sum((pr.lm - test$medv)^2) / nrow(test)

• Output:
Deviance Residuals:
Min 1Q Median 3Q Max
-14.9143 -2.8607 -0.5244 1.5242 25.0004

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 43.469681 6.099347 7.127 5.50e-12 ***
crim -0.105439 0.057095 -1.847 0.065596 .
zn 0.044347 0.015974 2.776 0.005782 **
indus 0.024034 0.071107 0.338 0.735556
chas 2.596028 1.089369 2.383 0.017679 *
nox -22.336623 4.572254 -4.885 1.55e-06 ***
rm 3.538957 0.472374 7.492 5.15e-13 ***
age 0.016976 0.015088 1.125 0.261291
dis -1.570970 0.235280 -6.677 9.07e-11 ***
rad 0.400502 0.085475 4.686 3.94e-06 ***
tax -0.015165 0.004599 -3.297 0.001072 **
ptratio -1.147046 0.155702 -7.367 1.17e-12 ***
black 0.010338 0.003077 3.360 0.000862 ***
lstat -0.524957 0.056899 -9.226 < 2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for gaussian family taken to be 23.26491)

Null deviance: 33642 on 379 degrees of freedom


Residual deviance: 8515 on 366 degrees of freedom
AIC: 2290

Number of Fisher Scoring iterations: 2


• Step 4: Now normalize the data set before training a neural network; hence the scaling and splitting of the data. The scale() function returns a matrix, which needs to be coerced into a data frame.
• r

maxs <- apply(data, 2, max)
mins <- apply(data, 2, min)
scaled <- as.data.frame(scale(data, center = mins, scale = maxs - mins))
train_ <- scaled[index, ]
test_ <- scaled[-index, ]


• Step 5: Now fit the data to a Neural Network using
the neuralnet package.
• r

library(neuralnet)
n <- names(train_)
f <- as.formula(paste("medv ~", paste(n[!n %in% "medv"], collapse = " + ")))
nn <- neuralnet(f, data = train_, hidden = c(4, 2), linear.output = TRUE)

Our model is now fitted to the Multi-Layered Neural Network. Combine all
the steps and plot the neural network to visualize the output, using
the plot() function.
• r

# R program to illustrate
# Multi-Layered Neural Networks

# Use the set.seed() function so the
# random numbers are reproducible
set.seed(500)

# Import the required library
library(MASS)

# Working on the Boston dataset
data <- Boston
apply(data, 2, function(x) sum(is.na(x)))

# Train/test split
index <- sample(1:nrow(data), round(0.75 * nrow(data)))
train <- data[index, ]
test <- data[-index, ]

# Linear model baseline
lm.fit <- glm(medv ~ ., data = train)
summary(lm.fit)
pr.lm <- predict(lm.fit, test)
MSE.lm <- sum((pr.lm - test$medv)^2) / nrow(test)

# Min-max scaling
maxs <- apply(data, 2, max)
mins <- apply(data, 2, min)
scaled <- as.data.frame(scale(data, center = mins, scale = maxs - mins))
train_ <- scaled[index, ]
test_ <- scaled[-index, ]

# Applying Neural network concepts
library(neuralnet)
n <- names(train_)
f <- as.formula(paste("medv ~", paste(n[!n %in% "medv"], collapse = " + ")))
nn <- neuralnet(f, data = train_, hidden = c(4, 2), linear.output = TRUE)

# Plotting the network
plot(nn)

Output: a plot of the fitted network, showing the 13 input features, the two
hidden layers of 4 and 2 neurons, and the learned weights.
Numpy Gradient – Descent Optimizer
of Neural Networks
In differential calculus, the derivative of a function tells us how much the output
changes with a small nudge in the input variable. This idea can be extended to
multivariable functions as well. This article shows an implementation of the
Gradient Descent algorithm using NumPy. The idea is simple: start at an
arbitrary point, repeatedly move in the direction of the negative gradient,
and return a point that is as close as possible to the minimum.
GD() is a user-defined function employed for this purpose. It takes the following
parameters:
• f is the array of sampled values of the function we are trying to
minimize; GD() estimates its gradient with np.gradient. (More generally
this could be a Python callable returning the gradient.)
• start is the arbitrary starting point we give to the function; it is a
single value for one independent variable, and can be a list or NumPy
array for the multivariable case.
• lr, the learning rate, controls the magnitude by which the vector gets updated.
• n_iter is the number of iterations the operation should run.
• tol is the tolerance level that specifies the minimum movement required in each
iteration for the loop to continue.
Given below is an implementation that produces the required functionality.
Example:
• Python3

import numpy as np

def GD(f, start, lr, n_iter=50, tol=1e-05):
    res = start
    for _ in range(n_iter):
        # the gradient is estimated using the np.gradient function
        new_val = -lr * np.gradient(f)
        if np.all(np.abs(new_val) <= tol):
            break
        res += new_val
    # a vector is returned because the gradient can be of a
    # multivariable function; with one dependent variable the
    # result is effectively a scalar value
    return res

# Example 1
f = np.array([1, 2, 4, 7, 11, 16], dtype=float)
print(f"The vector notation of global minima:{GD(f, 10, 0.01)}")

# Example 2
f = np.array([2, 4], dtype=float)
print(f'The vector notation of global minima: {GD(f, 10, 0.1)}')

Output:
The vector notation of global minima:[9.5 9.25 8.75 8.25 7.75 7.5 ]
The vector notation of global minima: [2.0539126e-15 2.0539126e-15]
Let's look at the relevant concepts used in this function in detail.
Tolerance Level Application
The line of code below enables GD() to terminate early and return before n_iter
iterations are completed, if the update is less than or equal to the tolerance
level. This particularly speeds things up near a local minimum or a saddle point,
where the incremental movement is very small due to the very low gradient;
stopping early therefore improves the convergence rate.

• Python3
if np.all(np.abs(new_val) <= tol):
    break

Learning Rate Usage (Hyper-parameter)

• The learning rate is a very crucial hyper-parameter, as it affects the
behavior of the gradient descent algorithm. For example, if we change
the learning rate from 0.2 to 0.7 we get another solution that is very
close to 0, but because of the high learning rate there is a large change
in x at each step, i.e. the algorithm passes the minimum multiple times
and hence oscillates before settling near zero. This oscillation increases
the convergence time of the entire algorithm.
• A small learning rate can lead to slow convergence and, to make
matters worse, if the number of iterations is also small, the algorithm
might return before it finds the minimum.
Given below is an example showing how the learning rate affects our result.
Example:
• Python3

import numpy as np

def GD(f, start, lr, n_iter=50, tol=1e-05):
    res = start
    for _ in range(n_iter):
        # the gradient is estimated using the np.gradient function
        new_val = -lr * np.gradient(f)
        if np.all(np.abs(new_val) <= tol):
            break
        res += new_val
    # a vector is returned because the gradient can be of a
    # multivariable function; with one dependent variable the
    # result is effectively a scalar value
    return res

f = np.array([2, 4], dtype=float)

# a low learning rate doesn't allow convergence to the global minimum
print(f'The vector notation of global minima: {GD(f, 10, 0.001)}')

Output:
The vector notation of global minima: [9.9 9.9]
The value returned by the algorithm is not even close to 0, which indicates that
the algorithm returned before converging to the global minimum.

Types of Recurrent Neural Networks (RNN) in Tensorflow
A recurrent neural network (RNN) is a variant of artificial neural networks
(ANN) mostly employed in speech recognition and natural language
processing (NLP). RNNs are used in deep learning and in the construction of
models that mimic the activity of neurons in the human brain.
Text, genomes, handwriting, the spoken word, and numerical time series data
from sensors, stock markets, and government agencies are examples of data that
recurrent networks are meant to identify patterns in. A recurrent neural network
resembles a regular neural network with the addition of a memory state to the
neurons. A simple memory will be included in the computation.
Recurrent neural networks are a form of deep learning method that uses a
sequential approach: each input and output is assumed to depend on the
elements that came before it in the sequence. Recurrent neural networks are
so named because they perform their mathematical computations in consecutive order.
Types of RNN:
1. One-to-One RNN:

One-to-One RNN

The above diagram represents the structure of a vanilla neural network. It
is used to solve general machine learning problems that have only one input and
one output.
Example: classification of images.
2. One-to-Many RNN:

One-to-Many RNN

A single input and several outputs describe a one-to-many Recurrent Neural
Network. The above diagram is an example of this.
Example: The image is sent into Image Captioning, which generates a sentence of
words.
3. Many-to-One RNN:
Many-to-One RNN

This RNN creates a single output from the given series of inputs.
Example: Sentiment analysis is one of the examples of this type of network, in which
a text is identified as expressing positive or negative feelings.
4. Many-to-Many RNN:

Many-to-Many RNN

This RNN receives a set of inputs and produces a set of outputs.


Example: Machine Translation, in which the RNN scans any English text and then
converts it to French.
Advantages of RNN:
1. RNN may represent a set of data in such a way that each sample is
assumed to be reliant on the previous one.
2. To extend the active pixel neighbourhood, a Recurrent Neural Network
is combined with convolutional layers.
Disadvantages of RNN:
1. RNN training is a difficult process.
2. If it uses an activation function such as tanh or ReLU, it cannot
handle very lengthy sequences.
3. The Vanishing or Exploding Gradient problem in RNN
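
To make the many-to-one case concrete, here is a minimal TensorFlow/Keras sketch of a sentiment-style classifier. The vocabulary size, sequence length, and layer widths are illustrative assumptions, not values from the text:

• Python3

import tensorflow as tf
from tensorflow.keras import layers

vocab_size = 10000   # assumed number of distinct tokens
seq_len = 100        # assumed padded sequence length

# Many-to-one RNN: a sequence of word indices in, one sentiment score out
model = tf.keras.Sequential([
    layers.Embedding(input_dim=vocab_size, output_dim=32),
    layers.SimpleRNN(16),                   # keeps only the final hidden state
    layers.Dense(1, activation='sigmoid')   # positive vs negative
])

model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])
model.build(input_shape=(None, seq_len))
model.summary()

Swapping layers.SimpleRNN for layers.LSTM or layers.GRU is the usual remedy for the vanishing/exploding gradient problem mentioned above.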

Optimization techniques for Gradient Descent
Gradient Descent is an iterative optimization algorithm used to find the
minimum value of a function. The general idea is to initialize the parameters to
random values, and then take small steps in the direction of the "slope" at each
iteration. Gradient descent is widely used in supervised learning to minimize the
error function and find the optimal values for the parameters.
Various extensions have been designed for gradient descent algorithm. Some of
them are discussed below:
• Momentum method: This method is used to accelerate the gradient
descent algorithm by taking into consideration the exponentially
weighted average of the gradients. Using averages makes the algorithm
converge towards the minima faster, as the gradients in the uncommon
directions are canceled out. The pseudocode for the momentum method
is given below.

  V = 0
  for each iteration i:
      compute dW
      V = β V + (1 - β) dW
      W = W - α V

dW and V are analogous to acceleration and velocity respectively. α is
the learning rate, and β is normally kept at 0.9.
• RMSprop: RMSprop was proposed by the University of Toronto's Geoffrey
Hinton. The intuition is to apply an exponentially weighted average
to the second moment of the gradients (dW²). The pseudocode
for this is as follows:

  S = 0
  for each iteration i:
      compute dW
      S = β S + (1 - β) dW²
      W = W - α · dW / (√S + ε)
• Adam Optimization: The Adam optimization algorithm incorporates the
momentum method and RMSprop, along with bias correction. The
pseudocode for this approach is as follows (a runnable NumPy sketch of
these updates follows this list):

  V = 0
  S = 0
  for each iteration i:
      compute dW
      V = β1 V + (1 - β1) dW
      S = β2 S + (1 - β2) dW²
      Vc = V / (1 - β1^i)     (bias-corrected first moment)
      Sc = S / (1 - β2^i)     (bias-corrected second moment)
      W = W - α · Vc / (√Sc + ε)

Kingma and Ba, the proposers of Adam, recommended the following
values for the hyperparameters:
α = 0.001
β1 = 0.9
β2 = 0.999
ε = 10⁻⁸
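
As a concrete sketch of how these pieces fit together, below is a minimal NumPy implementation of the Adam updates on a toy one-dimensional objective f(w) = w² (the momentum and RMSprop rules are exactly the two accumulator lines taken on their own). The objective, the helper name grad_fn, and the larger α = 0.01 are assumptions chosen so the toy run converges quickly; they are not from the text:

• Python3

import numpy as np

def grad_fn(w):
    # gradient of the toy objective f(w) = w^2, for illustration only
    return 2 * w

def adam(w, alpha=0.01, beta1=0.9, beta2=0.999, eps=1e-8, n_iter=2000):
    V, S = 0.0, 0.0
    for i in range(1, n_iter + 1):
        dW = grad_fn(w)
        V = beta1 * V + (1 - beta1) * dW        # momentum (first moment)
        S = beta2 * S + (1 - beta2) * dW ** 2   # RMSprop (second moment)
        V_c = V / (1 - beta1 ** i)              # bias correction
        S_c = S / (1 - beta2 ** i)
        w = w - alpha * V_c / (np.sqrt(S_c) + eps)
    return w

# starting far from the minimum at w = 0
print(adam(5.0))   # approaches 0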

Gradient Descent in Linear Regression


In linear regression, the model aims to find the best-fit regression line to predict
the value of y based on the given input value (x). While training the model, the
model computes a cost function which measures the mean squared error
between the predicted value (pred) and the true value (y). The model aims to
minimize this cost function.
To minimize the cost function, the model needs the best values of θ1 and
θ2. Initially the model selects θ1 and θ2 at random and then iteratively updates
these values in order to reduce the cost function until it reaches the minimum.
By the time the model achieves the minimum cost function, it will have the best
θ1 and θ2 values. Using these final values of θ1 and θ2 in the hypothesis
equation, the model predicts the value of y in the best manner
it can.
Therefore, the question arises – How do θ1 and θ2 values get updated?
Linear Regression Cost Function:

J(θ) = (1 / 2m) · Σ (i = 1 to m) (hθ(xi) - yi)²

Gradient Descent Algorithm For Linear Regression (repeat until convergence):

θj = θj - α · (1 / m) · Σ (i = 1 to m) (hθ(xi) - yi) · xij

-> θj : weights of the hypothesis.
-> hθ(xi) : predicted y value for the ith input.
-> j : feature index number (can be 0, 1, 2, ..., n).
-> α : learning rate of the gradient descent.
We graph the cost function as a function of the parameter estimates, i.e. over the
parameter range of our hypothesis function and the cost resulting from selecting
each particular set of parameters. We move downward towards the pits in the
graph to find the minimum value. The way to do this is by taking the derivative
of the cost function, as in the update rule above. Gradient Descent steps down
the cost function in the direction of steepest descent, and the size of each step
is determined by the parameter α, known as the learning rate.
In the Gradient Descent algorithm, one can infer two points:

• If the slope is +ve : θj = θj – (+ve value). Hence the value of θj decreases.
• If the slope is -ve : θj = θj – (-ve value). Hence the value of θj increases.

The choice of a correct learning rate is very important, as it ensures that Gradient
Descent converges in a reasonable time:

• If we choose α to be very large, Gradient Descent can overshoot the
minimum. It may fail to converge, or even diverge.
• If we choose α to be very small, Gradient Descent will take very small steps
towards the minimum and hence a much longer time to reach it.

For linear regression, the cost function graph is always convex (bowl-shaped),
so there is a single global minimum.
• Python3

# Implementation of gradient descent in linear regression
import numpy as np
import matplotlib.pyplot as plt

class Linear_Regression:
    def __init__(self, X, Y):
        self.X = X
        self.Y = Y
        self.b = [0, 0]

    def update_coeffs(self, learning_rate):
        # one gradient-descent update for the intercept b[0] and slope b[1]
        Y_pred = self.predict()
        Y = self.Y
        m = len(Y)
        self.b[0] = self.b[0] - (learning_rate * (1/m) * np.sum(Y_pred - Y))
        self.b[1] = self.b[1] - (learning_rate * (1/m) *
                                 np.sum((Y_pred - Y) * self.X))

    def predict(self, X=[]):
        Y_pred = np.array([])
        if not X:
            X = self.X
        b = self.b
        for x in X:
            Y_pred = np.append(Y_pred, b[0] + (b[1] * x))
        return Y_pred

    def get_current_accuracy(self, Y_pred):
        p, e = Y_pred, self.Y
        n = len(Y_pred)
        return 1 - sum([abs(p[i] - e[i]) / e[i]
                        for i in range(n) if e[i] != 0]) / n

    def compute_cost(self, Y_pred):
        # cost J = (1 / 2m) * sum((pred - y)^2)
        m = len(self.Y)
        J = (1 / (2 * m)) * np.sum((Y_pred - self.Y) ** 2)
        return J

    def plot_best_fit(self, Y_pred, fig):
        f = plt.figure(fig)
        plt.scatter(self.X, self.Y, color='b')
        plt.plot(self.X, Y_pred, color='g')
        f.show()

def main():
    X = np.array([i for i in range(11)])
    Y = np.array([2*i for i in range(11)])
    regressor = Linear_Regression(X, Y)

    iterations = 0
    steps = 100
    learning_rate = 0.01
    costs = []

    # original best-fit line
    Y_pred = regressor.predict()
    regressor.plot_best_fit(Y_pred, 'Initial Best Fit Line')

    while 1:
        Y_pred = regressor.predict()
        cost = regressor.compute_cost(Y_pred)
        costs.append(cost)
        regressor.update_coeffs(learning_rate)

        iterations += 1
        if iterations % steps == 0:
            print(iterations, "epochs elapsed")
            print("Current accuracy is :",
                  regressor.get_current_accuracy(Y_pred))

            stop = input("Do you want to stop (y/*)??")
            if stop == "y":
                break

    # final best-fit line
    regressor.plot_best_fit(Y_pred, 'Final Best Fit Line')

    # plot to verify the cost function decreases
    h = plt.figure('Verification')
    plt.plot(range(iterations), costs, color='b')
    h.show()

    # if the user wants to predict using the regressor:
    regressor.predict([i for i in range(10)])

if __name__ == '__main__':
    main()
Output: the script prints the epoch count and current accuracy every 100
iterations, and produces three plots: the initial best-fit line, the final best-fit
line, and the cost decreasing over the iterations.

Mathematical explanation for Linear Regression working
Suppose we are given a dataset:

Given a Work Experience vs Salary dataset of a company, the task is to predict the
salary of an employee based on his or her work experience.
This article aims to explain how Linear Regression actually works mathematically
when we use a pre-defined function to perform the prediction task.
Let us explore what happens when the Linear Regression algorithm gets
trained.
Iteration 1 – In the beginning, the θ0 and θ1 values are chosen randomly. Let us suppose
θ0 = 0 and θ1 = 0.
• Predicted values after iteration 1, using the Linear Regression
hypothesis.
• Cost Function – error computed on those predictions.

• Gradient Descent – updating the θ0 value.

Here, j = 0

• Gradient Descent – updating the θ1 value.

Here, j = 1
Iteration 2 – θ0 = 0.005 and θ1 = 0.02657
• Predicted values after iteration 2, using the Linear Regression
hypothesis.

Now, as in iteration no. 1 performed above, we again calculate the cost
function and update the θj values using Gradient Descent.
We keep on iterating until the cost function cannot be reduced further. At that point,
the model has achieved the best θ values. Using these θ values in the model hypothesis will
give the best prediction results.
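
To see this walkthrough in runnable form, here is a small NumPy sketch performing the first two iterations of these updates on a made-up Experience-vs-Salary dataset. The data values and learning rate are illustrative assumptions, so the resulting θ values will differ from the ones quoted above:

• Python3

import numpy as np

# toy Experience (years) vs Salary data (illustrative values)
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([1.5, 2.0, 3.2, 3.9])
m = len(x)

theta0, theta1 = 0.0, 0.0   # iteration 1 starts from zeros, as above
alpha = 0.01                # learning rate

for i in range(1, 3):       # carry out the first two iterations
    pred = theta0 + theta1 * x                       # hypothesis h(x)
    cost = (1 / (2 * m)) * np.sum((pred - y) ** 2)   # cost function J
    # simultaneous gradient descent updates for j = 0 and j = 1
    grad0 = (1 / m) * np.sum(pred - y)
    grad1 = (1 / m) * np.sum((pred - y) * x)
    theta0 -= alpha * grad0
    theta1 -= alpha * grad1
    print(f"iteration {i}: cost = {cost:.4f}, "
          f"theta0 = {theta0:.5f}, theta1 = {theta1:.5f}")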
