CC Unit 1

This document outlines the objectives and course outcomes of a cloud computing course. The objectives are to understand key cloud concepts, technologies, issues, players and how cloud is the next generation computing paradigm. The course outcomes are for students to understand cloud concepts, technologies, architectures, issues like resource management and security, and be able to install and use cloud technologies by evaluating appropriate solutions. The document then lists the syllabus covering topics like introduction to cloud, enabling technologies, architectures, resource management, security, technologies and advancements.

VELAMMAL ENGINEERING COLLEGE

❑ OBJECTIVES:
*To understand the concept of cloud computing.
*To appreciate the evolution of cloud from the
existing technologies.
*To have knowledge on the various issues in cloud
computing.
*To be familiar with the lead players in cloud.
*To appreciate the emergence of cloud as the next
generation computing paradigm.
❑ COURSE OUTCOMES:
On Completion of the course, the students should be able to:
CO1: Articulate the main concepts, key technologies,
strengths and limitations of cloud computing.
CO2: Learn the key and enabling technologies that help in
the development of cloud.
CO3: Develop the ability to understand and use the
architecture of compute and storage cloud, service and
delivery models.
CO4: Explain the core issues of cloud computing such as
resource management and security.
CO5: Be able to install and use current cloud technologies.
CO6: Evaluate and choose the appropriate technologies,
algorithms and approaches for implementation and use of
cloud.
Syllabus
UNIT I - INTRODUCTION 9
Introduction to Cloud Computing – Definition of Cloud – Evolution of Cloud Computing – Underlying Principles of Parallel and Distributed Computing – Cloud Characteristics – Elasticity in Cloud – On-demand Provisioning.

UNIT II - CLOUD ENABLING TECHNOLOGIES 10


Service Oriented Architecture – REST and Systems of Systems – Web
Services – Publish- Subscribe Model – Basics of Virtualization – Types of
Virtualization – Implementation Levels of Virtualization – Virtualization
Structures – Tools and Mechanisms – Virtualization of CPU – Memory – I/O
Devices –Virtualization Support and Disaster Recovery.
Syllabus
UNIT III CLOUD ARCHITECTURE, SERVICES AND STORAGE 8
Layered Cloud Architecture Design – NIST Cloud Computing Reference
Architecture – Public, Private and Hybrid Clouds – IaaS – PaaS – SaaS –
Architectural Design Challenges – Cloud Storage – Storage-as-a-Service –
Advantages of Cloud Storage – Cloud Storage Providers – S3.

UNIT IV - RESOURCE MANAGEMENT AND SECURITY IN CLOUD 10
Inter Cloud Resource Management – Resource Provisioning and Resource
Provisioning Methods – Global Exchange of Cloud Resources – Security
Overview – Cloud Security Challenges – Software-as-a-Service Security –
Security Governance – Virtual Machine Security – IAM – Security Standards.
Syllabus
UNIT V CLOUD TECHNOLOGIES AND ADVANCEMENTS 8
Hadoop – MapReduce – Virtual Box – Google App Engine – Programming
Environment for Google App Engine – OpenStack – Federation in the Cloud
– Four Levels of Federation – Federated Services and Applications – Future
of Federation.

TEXT BOOKS:
1. Kai Hwang, Geoffrey C. Fox, Jack G. Dongarra, "Distributed and Cloud
Computing, From Parallel Processing to the Internet of Things", Morgan
Kaufmann Publishers, 2012.

2. Rittinghouse, John W., and James F. Ransome, "Cloud Computing: Implementation, Management and Security", CRC Press, 2017.
What is Cloud Computing
Basic Concepts
Deployment Models
Service Models
SaaS – Software as a Service
PaaS – Platform as a Service
IaaS – Infrastructure as a Service
Virtualization
Opportunities & Challenges
Benefits
Disadvantages
Features
What is Cloud Computing?
Cloud Computing is a general term used to describe a new
class of network-based computing that takes place over
the Internet,
– basically a step on from Utility Computing
– a collection/group of integrated and networked hardware,
software and Internet infrastructure (called a platform).
– Using the Internet for communication and transport, it provides
hardware, software and networking services to clients.
These platforms hide the complexity and details of the
underlying infrastructure from users and applications by
providing a very simple graphical interface or API
(Application Programming Interface).
What is Cloud Computing?
In addition, the platform provides on-demand services that
are always on – anywhere, anytime and any place.
• Pay for use and as needed,
– elastic scale up and down in capacity and
functionality (see the sketch below)
• The hardware and software services are available to
– the general public, enterprises, corporations and
business markets
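To make the elasticity idea concrete, here is a minimal auto-scaling sketch in Python. The functions current_load, add_instance and remove_instance are hypothetical placeholders for whatever monitoring and provisioning API a given provider exposes; the thresholds are arbitrary.

# Minimal elasticity sketch: scale capacity up and down with demand.
# current_load(), add_instance() and remove_instance() are hypothetical
# placeholders for a provider's monitoring and provisioning APIs.
import time

SCALE_UP_THRESHOLD = 0.80    # add capacity above 80% average utilization
SCALE_DOWN_THRESHOLD = 0.30  # release capacity below 30% average utilization

def autoscale(current_load, add_instance, remove_instance, interval=60):
    while True:
        load = current_load()        # e.g. average CPU utilization in [0.0, 1.0]
        if load > SCALE_UP_THRESHOLD:
            add_instance()           # scale up: provision (and pay for) more capacity
        elif load < SCALE_DOWN_THRESHOLD:
            remove_instance()        # scale down: stop paying for idle capacity
        time.sleep(interval)         # re-evaluate on-demand provisioning periodically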
What is Cloud Computing?
Cloud computing is an umbrella term used to refer to
Internet-based development and services.

A number of characteristics define cloud data, applications,
services and infrastructure:
– Remotely hosted: Services or data are hosted on remote
infrastructure.
– Ubiquitous: Services or data are available from anywhere.
– Commoditized: The result is a utility computing model
similar to that of traditional utilities, like gas
and electricity – you pay for what you use.
What is Cloud Computing?
Many companies are delivering services from the
cloud. Some notable examples include the following:
– Google — Has a private cloud that it uses for delivering
Google Docs and many other services to its users,
including email access, document applications, text
translations, maps, web analytics, and much more.
– Microsoft — Has the Microsoft Office 365 online service,
which allows content and business intelligence tools to
be moved into the cloud, and Microsoft currently makes its
office applications available in the cloud.
– [Link] — Runs its application set for its customers
in a cloud, and its [Link] and [Link] products
provide developers with platforms to build customized
cloud services.
5 Real-World Examples of Cloud
Computing
• Ex: Dropbox, Gmail, Facebook.
• Ex: Maropost for Marketing, Hubspot, Adobe Marketing Cloud.
• Ex: SlideRocket, Ratatype, Amazon Web Services.
• Ex: ClearDATA, Dell's Secure Healthcare Cloud, IBM Cloud.
• Uses: IT consolidation, shared services, citizen services.
Basic Concepts
There are certain services and models working behind the
scenes that make cloud computing feasible and accessible to
end users.
The following are the working models for cloud computing:
1. Deployment Models
2. Service Models
Deployment Models
Deployment models define the type of access to
the cloud, i.e., how the cloud is located.
A cloud can have any of four types of access:
public, private, hybrid and community.
Deployment Models
PUBLIC CLOUD: The public cloud allows systems and services
to be easily accessible to the general public. A public cloud may
be less secure because of its openness, e.g., e-mail.
PRIVATE CLOUD: The private cloud allows systems and
services to be accessible within an organization. It offers
increased security because of its private nature.
COMMUNITY CLOUD: The community cloud allows systems
and services to be accessible by a group of organizations.
HYBRID CLOUD: The hybrid cloud is a mixture of public and
private cloud, in which critical activities are performed
using the private cloud while non-critical activities are
performed using the public cloud.
Service Models
Service models are the reference models on
which cloud computing is based. These
can be categorized into three basic service
models, as listed below:
1. Infrastructure as a Service (IaaS)
2. Platform as a Service (PaaS)
3. Software as a Service (SaaS)
Service Models
Service Models - IaaS
Infrastructure as a Service (IaaS) is the
delivery of technology infrastructure as an on-demand,
scalable service.
IaaS provides access to fundamental resources
such as physical machines, virtual machines,
virtual storage, etc.
➢ Usually billed based on usage
➢ Usually a multi-tenant virtualized environment
➢ Can be coupled with managed services for OS and application
support
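As an illustration of delivering infrastructure "as an on-demand scalable service", the sketch below provisions a virtual machine programmatically. It assumes the AWS EC2 API through the boto3 library; the region, AMI ID and instance type are placeholder values, not something prescribed by these slides.

# Illustrative IaaS provisioning sketch (assumes boto3 is installed and AWS
# credentials are configured; the AMI ID below is a placeholder).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder machine image
    InstanceType="t3.micro",           # small, usage-billed instance size
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("Provisioned virtual machine:", instance_id)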
Service Models - IaaS
Service Models - PaaS
Platform as a Service (PaaS) provides the runtime environment
for applications, along with development and
deployment tools. PaaS provides all of the facilities
required to support the complete life cycle of building
and delivering web applications and services entirely
from the Internet.

Typically, applications must be developed with a
particular platform in mind.
• Multi-tenant environments
• Highly scalable multi-tier architecture
Service Models - PaaS
Service Models - SaaS
The Software as a Service (SaaS) model delivers software
applications as a service to end users. SaaS is a
software delivery methodology that provides licensed,
multi-tenant access to software and its functions remotely as a
Web-based service.
– Usually billed based on usage
– Usually a multi-tenant environment
– Highly scalable architecture
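Because SaaS is consumed "remotely as a Web-based service", a client usually just calls the vendor's web API over HTTPS. The sketch below uses the requests library against a hypothetical endpoint; the URL, token and response fields are illustrative placeholders, not a real product's API.

# Consuming a hypothetical SaaS application over HTTPS (requires the
# 'requests' package; endpoint, token and fields are placeholders).
import requests

API_TOKEN = "replace-with-a-real-token"
resp = requests.get(
    "https://api.example-saas.com/v1/invoices",        # hypothetical multi-tenant endpoint
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
for invoice in resp.json():     # the provider returns JSON; nothing is installed locally
    print(invoice["id"], invoice["total"])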
Service Models - SaaS
Virtualization
Virtual workspaces:
• An abstraction of an execution environment that can be made
dynamically available to authorized clients by using well-defined
protocols,
• Resource quota (e.g., CPU, memory share),
• Software configuration (e.g., O/S, provided services).
Implemented on virtual machines (VMs):
• Abstraction of a physical host machine,
• A hypervisor intercepts and emulates instructions from VMs and
allows management of VMs (e.g., VMware, Xen).
Provide infrastructure API:
• Plug-ins to hardware/support structures
Virtualization
Virtualization in General
Advantages of virtual machines:
• Run operating systems where the physical hardware is
unavailable,
• Easier to create new machines, backup machines, etc.,
• Software testing using "clean" installs of operating systems
and software,
• Emulate more machines than are physically available,
• Timeshare lightly loaded systems on one host,
• Debug problems (suspend and resume the problem machine),
• Easy migration of virtual machines (shutdown needed or not).
• Run legacy systems!
What is the Purpose and Benefits ?
Cloud computing enables companies and applications, which are system
infrastructure dependent, to be infrastructure-less.
By using the cloud infrastructure on a "pay as used and on demand" basis,
organizations can save on capital and operational investment.
Clients can:
Put their data on the platform instead of on their own desktop PCs and/or
on their own servers.
Put their applications on the cloud and use the servers within
the cloud to do processing, data manipulation, etc.
Cloud - Sourcing
Why is it becoming a Big Deal:
– Using high-scale/low-cost providers,
– Any time/place access via web browser,
– Rapid scalability; incremental cost and load sharing,
– Can forget the need to focus on local IT.
Concerns:
– Performance, Reliability, and SLAs,
– Control of data, and service parameters,
– Application features and choices,
– Interaction between Cloud providers,
– No standard API - mix of SOAP and REST!
– Privacy, security, compliance, trust
The use of the cloud provides a number of opportunities:
 It enables services to be used without any understanding of their
infrastructure.
 Cloud computing works using economies of scale:
 It potentially lowers the outlay expense for start up companies, as they
would no longer need to buy their own software or servers.
 Costs follow on-demand pricing.
 Vendors and service providers recover their costs by establishing an ongoing
revenue stream.
 Data and services are stored remotely but accessible from
"anywhere".
In parallel there has been backlash against cloud computing:
▪ Use of cloud computing means dependence on others, and that could
possibly limit flexibility and innovation:
▪ The "others" are likely to be the bigger Internet companies, like
Google and IBM, who may monopolize the market.
▪ Some argue that this use of supercomputers is a return to the time
of mainframe computing that the PC was a reaction against.
▪ Security could prove to be a big issue:
▪ It is still unclear how safe out-sourced data is, and when using these
services, ownership of data is not always clear.
There are also issues relating to policy and access:
❑ If your data is stored abroad, whose policy do you adhere to?
❑ What happens if the remote server goes down?
❑ How will you then access files?
❑ There have been cases of users being locked out of accounts and
losing access to data.
❖ Cost Savings - Companies can reduce their capital expenditures and use operational
expenditures for increasing their computing capabilities. This is a lower barrier to entry
and also requires fewer in-house IT resources to provide system support.
❖ Scalability/Flexibility — Companies can start with a small deployment and grow to a
large deployment fairly rapidly, and then scale back if necessary. Also, the flexibility of
cloud computing allows companies to use extra resources at peak times, enabling them
to satisfy consumer demands.
❖ Reliability — Services using multiple redundant sites can support business continuity
and disaster recovery.
❖ Maintenance — Cloud service providers do the system maintenance, and access is
through APIs that do not require application installations onto PCs, thus further
reducing maintenance requirements.
❖ Mobile Accessible — Mobile workers have increased productivity because systems are
accessible from an infrastructure available anywhere.
❖ Requires a constant Internet connection:
❖ Cloud computing is impossible if you cannot connect to the
Internet.
❖ Since you use the Internet to connect to both your applications and
documents, if you do not have an Internet connection you cannot
access anything, even your own documents.
❖ A dead Internet connection means no work, and in areas where
Internet connections are few or inherently unreliable, this could be
a deal-breaker.
Stored data might not be secure:
➢ With cloud computing, all your data is stored on the cloud.
➢ The question is: how secure is the cloud?
➢ Can unauthorized users gain access to your confidential data?
Stored data can be lost:
➢ Theoretically, data stored in the cloud is safe, replicated across
multiple machines.
➢ But on the off chance that your data goes missing, you have no
physical or local backup.
➢ Put simply, relying on the cloud puts you at risk if the cloud
lets you down.
➢ Many of the activities loosely grouped together under
cloud computing have already been happening, and
centralized computing activity is not a new phenomenon.
➢ Grid computing was the last research-led centralized
approach.
➢ However, there are concerns that the mainstream
adoption of cloud computing could cause many
problems for users.
➢ Many new open-source systems are appearing that you can
install and run on your local cluster.
➢ You should be able to run a variety of applications on
these systems.
Definition Of Cloud
The term cloud has been used historically as a metaphor
for the Internet. This usage was originally derived from
its common description in network diagrams as an
outline of a cloud, used to represent the transport of data
across carrier backbones (which owned the cloud) to an
endpoint location on the other side of the cloud.
Cloud Computing is the use of commodity hardware and software
computing resources to deliver an infinitely elastic online public utility.

A simple definition of cloud computing involves delivering different types of
services over the Internet. From software and analytics to secure and safe data
storage and networking resources, everything can be delivered via the cloud.
The Emergence of Cloud
Computing
Utility computing can be defined as the provision of
computational and storage resources as a metered
service, similar to those provided by a traditional
public utility company.
This, of course, is not a new idea. This form of
computing is growing in popularity, however, as
companies have begun to extend the model to a
cloud computing paradigm providing virtual servers
that IT departments and users can access on
demand.
The Global Nature of the Cloud
The cloud sees no borders and thus has made the
world a much smaller place. The Internet is global in
scope but respects only established communication
paths. People from everywhere now have access to
other people from anywhere else.
Globalization of computing assets may be the
biggest contribution the cloud has made to date. For
this reason, the cloud is the subject of many
complex geopolitical issues.
Grid Computing or Cloud Computing?
Grid computing is often confused with cloud computing. Grid
computing is a form of distributed computing that
implements a virtual supercomputer made up of a cluster of
networked or internetworked computers acting in unison to
perform very large tasks.

Many cloud computing deployments today are powered by
grid computing implementations and are billed like utilities,
but cloud computing can and should be seen as an evolved
next step away from the grid utility model.
Is the Cloud Model Reliable?
The majority of today’s cloud computing
infrastructure consists of time-tested and highly
reliable services built on servers with varying levels
of virtualized technologies, which are delivered via
large data centers operating under service-level
agreements that require 99.99% or better uptime.
Commercial offerings have evolved to meet the
quality-of-service requirements of customers and
typically offer such service-level agreements to their
customers
What About Legal Issues When
Using Cloud Models?
1. Notify individuals about the purposes for which information is
collected and used.
2. Give individuals the choice of whether their information can
be disclosed to a third party.
3. Ensure that if it transfers personal information to a third
party, that third party also provides the same level of privacy
protection.
4. Allow individuals access to their personal information.
5. Take reasonable security precautions to protect collected data
from loss, misuse, or disclosure.
6. Take reasonable steps to ensure the integrity of the data
collected.
7. Have in place an adequate enforcement mechanism.
What Are the Key Characteristics
of Cloud Computing?
• Centralization of infrastructure and lower costs
• Increased peak-load capacity
• Efficiency improvements for systems that are often underutilized
• Dynamic allocation of CPU, storage, and network bandwidth
• Consistent performance that is monitored by the provider of the service
The Evolution of Cloud
Computing
It is important to understand the evolution of computing in
order to get an appreciation of how we got into the cloud
environment. Looking at the evolution of the computing
hardware itself, from the first generation to the current
(fourth) generation of computers, shows how we got from
there to here.
The hardware, however, was only part of the evolutionary
process. As hardware evolved, so did software. As networking
evolved, so did the rules for how computers communicate.
The development of such rules, or protocols, also helped
drive the evolution of Internet software.
Hardware Evolution –
First-Generation Computers

The Harvard Mark I, designed and developed at Harvard University in 1943,
was a general-purpose electromechanical programmable computer.
The Harvard Mark I computer.
Hardware Evolution

The British-developed Colossus computer.
Hardware Evolution –
Second-Generation Computers

Another general-purpose computer of this era was ENIAC (Electronic
Numerical Integrator and Computer), which was built in 1946.
The ENIAC computer.
Hardware Evolution –
Third-Generation Computers

Even though the first integrated circuit was produced in September
1958, microchips were not used in computers until 1963.
The Intel 4004 processor.
Hardware Evolution –
Fourth-Generation Computers

The fourth-generation computers that were being
developed at this time utilized a microprocessor that
put the computer's processing capabilities on a single
integrated circuit chip. Combined with random access
memory (RAM), developed by Intel, fourth-generation
computers were faster than ever before
and had much smaller footprints. The PC era had
begun in earnest by the mid-1980s.
Internet Software Evolution

The conceptual foundation for the creation of the Internet was significantly
developed by three individuals.
The first, Vannevar Bush, introduced the concept of the MEMEX
in the 1930s as a microfilm-based "device in which an individual stores all
his books, records, and communications."

Vannevar Bush's MEMEX.
Internet Software Evolution
The second individual to have a profound effect in shaping the Internet
was Norbert Wiener. Wiener was an early pioneer in the study of
stochastic and noise processes. His work in stochastic and noise processes
was relevant to electronic engineering, communication, and control
systems. He also founded the field of cybernetics.

Marshall McLuhan put forth the idea of a global village that was
interconnected by an electronic nervous system as part of our popular
culture.

In 1957, the Soviet Union launched the first satellite, Sputnik I,
prompting U.S. President Dwight Eisenhower to create the Advanced
Research Projects Agency (ARPA) to regain the technological
lead in the arms race.
Internet Software Evolution

ARPA (renamed DARPA, the Defense Advanced Research Projects Agency, in 1972)
appointed J. C. R. Licklider to head the new Information Processing Techniques Office
(IPTO). Licklider was given a mandate to further the research of the SAGE system. The
SAGE system (see Figure) was a continental air-defense network commissioned by the U.S.
military and designed to help protect the United States against a space based nuclear
attack. SAGE stood for Semi-Automatic Ground Environment.

The SAGE system.
Internet Software Evolution
Licklider worked for several years at ARPA, where he set the stage for the
creation of the ARPANET. He also worked at Bolt Beranek and Newman
(BBN), the company that supplied the first computers connected on the
ARPANET.
After he had left ARPA, Licklider succeeded in convincing his replacement
to hire a man named Lawrence Roberts, believing that Roberts was just
the person to implement Licklider’s vision of the future network
computing environment.
So, as it turned out, the first networking protocol that was used on the
ARPANET was the Network Control Program (NCP). The NCP provided
the middle layers of a protocol stack running on an ARPANET-connected
host computer.

An application layer, built on top of the NCP, provided services such as
email and file transfer. These applications used the NCP to handle
connections to other host computers.
Internet Software Evolution

A minicomputer was created specifically to realize the design of the Interface
Message Processor (IMP). This approach provided a system-independent
interface to the ARPANET that could be used by any computer system.

An Interface Message Processor.
IMP Architecture

Overview of the IMP architecture.


Internet Software Evolution - Establishing a
Common Protocol for the Internet

Since the lower-level protocol layers were
provided by the IMP host interface, the NCP
essentially provided a transport layer
consisting of the ARPANET Host-to-Host
Protocol (AHHP) and the Initial Connection
Protocol (ICP). The AHHP specified how to
transmit a unidirectional, flow-controlled data
stream between two hosts.
Internet Software Evolution –
Evolution of IPv6

The amazing growth of the Internet throughout
the 1990s caused a vast reduction in the number of
free IP addresses available under IPv4. IPv4 was
never designed to scale to global levels. To increase
the available address space, it had to process data packets
that were larger (i.e., that contained more bits of
data). This resulted in a longer IP address, and that
caused problems for existing hardware and software.
Internet Software Evolution –
Building a Common Interface to the Internet
While Marc Andreessen and the NCSA team were
working on their browsers, Robert Cailliau at CERN
independently proposed a project to develop a
hypertext system. He joined forces with Berners-Lee
to get the web initiative into high gear. Cailliau
rewrote his original proposal and lobbied CERN
management for funding for programmers. He and
Berners-Lee worked on papers and presentations in
collaboration, and Cailliau helped run the very first
WWW conference.
Internet Software Evolution –
Building a Common Interface to the Internet

The first web browser, created by Tim Berners-Lee.
Internet Software Evolution –
Building a Common Interface to the Internet

The original NCSA Mosaic browser.


Server Virtualization
Virtualization is a method of running multiple
independent virtual operating systems on a single
physical computer. This approach maximizes the
return on investment for the computer.

The term was coined in the 1960s in reference to a
virtual machine (sometimes called a pseudo-machine).
The creation and management of virtual
machines has often been called platform virtualization.
Underlying Principles of Parallel and Distributed Computing
The terms parallel computing and distributed
computing are often used interchangeably, even
though they mean slightly different things. The term
parallel implies a tightly coupled system, whereas
distributed refers to a wider class of system,
including those that are tightly coupled.
Underlying Principles of Parallel and Distributed Computing

Eras of computing, 1940s–2030s.


Underlying Principles of Parallel and Distributed Computing
More precisely, the term parallel computing refers to a
model in which the computation is divided among
several processors sharing the same memory. The
architecture of a parallel computing system is often
characterized by the homogeneity of components: each
processor is of the same type and it has the same
capability as the others. The shared memory has a single
address space, which is accessible to all the processors.
Parallel programs are then broken down into several
units of execution that can be allocated to different
processors and can communicate with each other by
means of the shared memory.
Elements of parallel computing

The first steps in this direction led to the
development of parallel computing, which
encompasses techniques, architectures, and systems
for performing multiple activities in parallel. As we
already discussed, the term parallel computing has
blurred its edges with the term distributed
computing.
What is parallel processing?

Processing of multiple tasks simultaneously on
multiple processors is called parallel processing.
A parallel program consists of multiple active
processes (tasks) simultaneously solving a given
problem. A given task is divided into multiple
subtasks using a divide-and-conquer technique, and
each subtask is processed on a different central
processing unit (CPU). Programming on a
multiprocessor system using the divide-and-conquer
technique is called parallel programming.
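A minimal sketch of this divide-and-conquer idea using Python's standard multiprocessing module: a large summation is split into subtasks, and each subtask runs in its own process (and, where available, on a different CPU core). The data and chunking scheme are illustrative.

# Divide-and-conquer parallel processing sketch with the standard library.
from multiprocessing import Process, Queue

def subtask(chunk, results):
    results.put(sum(chunk))              # each subtask solves one piece of the problem

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_cpus = 4
    size = len(data) // n_cpus
    results = Queue()
    workers = [Process(target=subtask, args=(data[i * size:(i + 1) * size], results))
               for i in range(n_cpus)]
    for w in workers:
        w.start()                        # subtasks run simultaneously on multiple CPUs
    partials = [results.get() for _ in workers]
    for w in workers:
        w.join()
    print("total =", sum(partials))      # combine the partial results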
What is parallel processing?
The development of parallel processing is being influenced by
many factors. The prominent among them include the following:
– Computational requirements are ever increasing in the areas of both
scientific and business computing.
– Sequential architectures are reaching physical limitations as they are
constrained by the speed of light and thermodynamics laws.
– Hardware improvements in pipelining, superscalar, and the like are
nonscalable and require sophisticated compiler technology.
– Vector processing works well for certain kinds of problems. It is suitable
mostly for scientific problems (involving lots of matrix operations) and
graphical processing.
– The technology of parallel processing is mature and can be exploited
commercially; there is already significant R&D work on development
tools and environments.
– Significant development in networking technology is paving the way for heterogeneous
computing.
Hardware architectures for parallel
processing
The core elements of parallel processing are CPUs. Based on
the number of instruction and data streams that can be
processed simultaneously, computing systems are classified
into the following four categories:
• Single-instruction, single-data (SISD) systems
• Single-instruction, multiple-data (SIMD) systems
• Multiple-instruction, single-data (MISD) systems
• Multiple-instruction, multiple-data (MIMD) systems
Single-instruction, single-data
(SISD) systems
An SISD computing system is a uniprocessor machine capable
of executing a single instruction, which operates on a single
data stream. In SISD, machine instructions are processed
sequentially; hence computers adopting this model are
popularly called sequential computers. Most conventional
computers are built using the SISD model.

Single-instruction, single-data (SISD) architecture.


Single-instruction, multiple-data
(SIMD) systems
An SIMD computing system is a multiprocessor machine capable of
executing the same instruction on all the CPUs but operating on different
data streams

Single-instruction, multiple-data (SIMD) architecture.


Multiple-instruction, single-data
(MISD) systems
An MISD computing system is a multiprocessor machine capable of
executing different instructions on different PEs (processing elements), with all
of them operating on the same data set.

Multiple-instruction, Single-data (MISD) architecture.


Multiple-instruction, multiple-data
(MIMD) systems
An MIMD computing system is a multiprocessor machine capable of
executing multiple instructions on multiple data sets. Each PE in the
MIMD model has separate instruction and data streams; hence machines
built using this model are well suited to any kind of application. Unlike
SIMD and MISD machines, PEs in MIMD machines work
asynchronously.

Multiple-instruction, Multiple-data (MIMD) architecture.


Shared memory MIMD machines
In the shared memory MIMD model, all the PEs are connected to a
single global memory and they all have access to it. Systems based on this
model are also called tightly coupled multiprocessor systems.

Shared (left) and distributed (right) memory MIMD architecture.
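A small sketch of the shared-memory programming model using Python threads: several workers run asynchronously but update a single shared counter, so access is coordinated with a lock. (This illustrates the model only; CPython threads share one interpreter rather than independent PEs.)

# Shared-memory sketch: asynchronous workers updating one shared variable.
import threading

counter = 0                              # a single, globally shared memory location
lock = threading.Lock()

def worker(n_increments):
    global counter
    for _ in range(n_increments):
        with lock:                       # coordinate access to the shared memory
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                           # 40000: every worker saw the same address space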


Approaches to parallel programming
A sequential program is one that runs on a single processor and has a
single line of control. To make many processors collectively work on a
single program, the program must be divided into smaller independent
chunks so that each processor can work on separate chunks of the
problem.
A wide variety of parallel programming approaches are available. The
most prominent among them are the following:
• Data parallelism
• Process parallelism
• Farmer-and-worker model
These three models are all suitable for task-level parallelism. In the case
of data parallelism, the divide-and-conquer technique is used to split data
into multiple sets, and each data set is processed on different PEs using
the same instruction.
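A minimal sketch of data parallelism: the data are split into independent sets and the same function (the same "instruction") is applied to each set on a different worker. It uses Python's multiprocessing.Pool; the data and function are illustrative.

# Data parallelism sketch: same operation, different data sets, separate workers.
from multiprocessing import Pool

def normalize(values):
    top = max(values)                    # the same instruction applied to every data set
    return [v / top for v in values]

if __name__ == "__main__":
    # divide-and-conquer: split the data into independent sets
    data_sets = [[3, 9, 6], [10, 2, 5], [7, 7, 1], [4, 8, 8]]
    with Pool(processes=4) as pool:
        results = pool.map(normalize, data_sets)   # each set processed by a different PE
    print(results)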
Levels of parallelism
Levels of parallelism are decided based on the lumps of code (grain size)
that can be a potential candidate for parallelism. These approaches have a
common goal: to boost processor efficiency by hiding latency.
Elements of Distributed computing
We extend these concepts and explore how multiple
activities can be performed by leveraging systems
composed of multiple heterogeneous machines and
systems.
General concepts and definitions
of Distributed Computing
A distributed system is a collection of independent
computers that appears to its users as a single coherent
system.

A distributed system is one in which components located
at networked computers communicate and coordinate
their actions only by passing messages.

As specified in this definition, the components of a
distributed system communicate with some sort of
message passing. This is a term that encompasses several
communication models.
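The "coordinate their actions only by passing messages" idea can be sketched with two independent processes exchanging messages over a local connection. The example uses Python's multiprocessing.connection as a stand-in for a real network transport; the address and payload are illustrative.

# Message-passing sketch: two processes coordinate only by exchanging messages.
import time
from multiprocessing import Process
from multiprocessing.connection import Listener, Client

ADDRESS = ("localhost", 6000)            # illustrative local endpoint

def server():
    with Listener(ADDRESS) as listener:
        with listener.accept() as conn:
            request = conn.recv()        # receive a message from the other component
            conn.send({"echo": request}) # reply with another message

def client():
    with Client(ADDRESS) as conn:
        conn.send("hello, distributed system")
        print(conn.recv())               # {'echo': 'hello, distributed system'}

if __name__ == "__main__":
    p = Process(target=server)
    p.start()
    time.sleep(0.5)                      # crude wait for the listener to start (sketch only)
    client()
    p.join()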
Components of a distributed system
A distributed system is the result of the interaction
of several components that traverse the entire
computing stack from hardware to software. It
emerges from the collaboration of several elements
that—by working together—give users the illusion
of a single coherent system
Components of a distributed system

A layered view of a distributed system.


Components of a distributed system

A cloud computing distributed system.


Architectural styles for distributed
computing
Architectural styles are mainly used to determine
the vocabulary of components and connectors that
are used as instances of the style together with a set
of constraints on how they can be combined.
We organize the architectural styles into two major
classes:
• Software architectural styles
• System architectural styles
