Cloud Computing Past Papers 2017
Short Questions
Q1: What is SLA? Also write the contents of this contract?
Ans: A service-level agreement (SLA) is a commitment between a service provider
and a client. Particular aspects of the service, such as quality, availability,
and responsibilities, are agreed between the service provider and the service
user.
Q2: Describe federated identity management?
Ans: A federated identity in information technology is the means of linking a
person's electronic identity and attributes, stored across multiple distinct identity
management systems. Federated identity is related to single sign-on (SSO), in
which a user's single authentication ticket, or token, is trusted across multiple IT
systems or even organizations.
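For illustration, here is a minimal Python sketch of the mechanism behind such a trusted token: one identity provider signs a claim, and any federated system holding the shared key can verify it without re-authenticating the user. The names and the shared-secret scheme are simplified assumptions; real federations use standards such as SAML or OpenID Connect.

```python
import base64
import hashlib
import hmac
import json
import time

SHARED_KEY = b"demo-shared-secret"  # hypothetical key shared across the federation

def issue_token(user_id: str) -> str:
    """Identity provider signs the user's claims once."""
    claims = json.dumps({"sub": user_id, "exp": time.time() + 3600}).encode()
    sig = hmac.new(SHARED_KEY, claims, hashlib.sha256).digest()
    return (base64.urlsafe_b64encode(claims).decode() + "."
            + base64.urlsafe_b64encode(sig).decode())

def verify_token(token: str):
    """Any federated system with the key can trust the token (SSO)."""
    claims_b64, sig_b64 = token.split(".")
    claims = base64.urlsafe_b64decode(claims_b64)
    expected = hmac.new(SHARED_KEY, claims, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, base64.urlsafe_b64decode(sig_b64)):
        return None  # signature mismatch: reject the identity
    payload = json.loads(claims)
    return payload if payload["exp"] > time.time() else None

token = issue_token("alice")   # authenticated once by the identity provider
print(verify_token(token))     # accepted by a second system via the shared key
```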
Q3: What is the fundamental difference between the hosted and hypervisor
virtual machine?
Ans: A hypervisor or virtual machine manager (VMM) is
computer software, firmware or hardware that creates and runs virtual machines.
A computer on which a hypervisor runs one or more virtual machines is called
a host machine, and each virtual machine is called a guest machine. The
hypervisor presents the guest operating systems with a virtual operating
platform and manages the execution of the guest operating systems. The
fundamental difference is that a hosted (type-2) hypervisor runs on top of a
conventional host operating system, whereas a native or bare-metal (type-1)
hypervisor runs directly on the host's hardware.
Q4: Define MSP model?
Ans: In the managed service provider (MSP) model, a company remotely manages a
customer's IT infrastructure and/or end-user systems, typically on a proactive
basis and under a subscription model.
Q5: What are the characteristics of fault tolerance system?
Ans: The basic characteristics of fault tolerance require:
No single point of failure
No single point of repair
Fault isolation to the failing component
Fault containment to prevent propagation of the failure
Availability
In addition, fault-tolerant systems are characterized in terms of both planned
and unplanned service outages, with availability usually expressed as a
percentage: a five-nines system, for example, statistically provides 99.999%
availability. These figures are usually measured at the application level and
not just at the hardware level.
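As a quick check on the five-nines figure, the implied downtime budget is simple arithmetic:

```python
# Downtime budget implied by an availability target (here, "five nines").
availability = 0.99999                      # 99.999%
minutes_per_year = 365.25 * 24 * 60         # about 525,960 minutes
downtime = (1 - availability) * minutes_per_year
print(f"Allowed downtime: {downtime:.2f} minutes/year")  # about 5.26 min/year
```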
Q6: How distributed management task force enables more effective
management of IT systems worldwide?
Ans: The aim of the DMTF is the exchange of management information in a platform-
independent and technology-neutral way, streamlining integration and reducing
costs by enabling end-to-end multi-vendor interoperability in management
systems. The DMTF creates open manageability standards spanning cloud,
virtualization, network, servers and storage.
Q7: Define open virtualization format?
Ans: Open Virtualization Format (OVF) is an open standard for packaging and
distributing virtual appliances or, more generally, software to be run in virtual
machines. The standard describes an "open, secure, portable, efficient and
extensible format for the packaging and distribution of software to be run
in virtual machines".
Q8: Differentiate between Zimbra and Zoho?
Ans: Zimbra is a software platform that allows you to share files and folders
securely and communicate with team members all over the world. Zoho Calendar
allows you to schedule and manage your meetings and events across popular
services such as Microsoft Outlook or Google Calendar.
Q9: What is new NSSP?
Ans: The National Syndromic Surveillance Program (NSSP) promotes and advances
development of a syndromic surveillance system for the timely exchange of
syndromic data. These data are used to improve nationwide situational
awareness and enhance responsiveness to hazardous events and disease
outbreaks to protect America’s health, safety, and security.
Q10: How the cloud services are measured?
Ans: Cloud technology brings many benefits to organizations, and its services
are measured in several ways. Elasticity is one such measure: the cloud can
create more resources to enhance performance for a single user or for numerous
users at a single point in time.
Q11: Why should one prefer public cloud over private cloud?
Ans: The main reason to choose public cloud is that you aren’t responsible for any
of the management of a public cloud hosting solution. Your data is stored in the
provider’s data center and the provider is responsible for the management and
maintenance of the data center. This type of cloud environment is appealing to
many companies because it reduces lead times in testing and deploying new
products.
Q12: What is the difference between scalability and elasticity?
Ans: SCALABILITY is the ability of a system to handle an increased workload
on its current hardware resources (scale up).
ELASTICITY is the ability of a system to handle an increased workload on its
current and additional (dynamically added, on-demand) hardware resources
(scale out). Elasticity is strongly related to applications deployed on the
cloud.
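To make the distinction concrete, here is a hypothetical Python sketch of elastic scale-out: the cluster adds or removes nodes as demand fluctuates. The Cluster class and the load figures are invented for illustration.

```python
class Cluster:
    """Toy model of an elastic pool of identical servers."""
    def __init__(self, nodes: int, capacity_per_node: int):
        self.nodes = nodes
        self.capacity_per_node = capacity_per_node  # requests/sec per node

    def autoscale(self, demand: int) -> None:
        """Elasticity: track demand by adding/removing nodes (scale out/in)."""
        needed = -(-demand // self.capacity_per_node)   # ceiling division
        self.nodes = max(1, needed)

cluster = Cluster(nodes=2, capacity_per_node=100)
for demand in [150, 480, 90]:       # fluctuating load in requests/sec
    cluster.autoscale(demand)
    print(f"demand={demand:>3} -> nodes={cluster.nodes}")

# Scaling *up* would instead raise capacity_per_node on a fixed node
# count -- the hardware-bound half of the distinction.
```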
Q13: What are the different layers of cloud computing?
Ans: Cloud computing is composed of an assortment of layered components,
beginning at the most basic physical layer of server infrastructure and storage and
moving up through the network and application layers.
Q14: Define mobile platform virtualization?
Ans: Mobile virtualization uses a "hypervisor" as a central tool to run virtual
devices. Multiple operating systems can be installed on the same mobile device to
promote multifunctionality. With mobile virtualization, the user’s own personal
device can run on one operating system and a company-issued device can run on
another.
Q15: Write down the names of two SOAs?
Ans: A service can be implemented either in .NET or J2EE, and the application
consuming the service can be on a different platform or language. Creative
mash-ups like HousingMaps use two separate online services, Google Maps and
www.craigslist.com, to display craigslist housing listings on a map.
Q16: Name two collaboration applications used for mobile platforms?
Ans: Meebo IM, Box, Mango Suite Mobile, Yammer, Flowdock, GoToMeeting.
Q17: What is resilience?
Ans: Resilience is the ability to provide and maintain an acceptable level of
service in the face of faults and challenges to normal operation. Threats and
challenges for services can range from simple misconfiguration through
large-scale natural disasters to targeted attacks.
Q18: What is the meaning of the term “lack of global clock” in distributed
computing?
Ans: Lack of a global clock means:
i. There is no concept of global time.
ii. It is difficult to reason about the temporal ordering of events, e.g.:
1. Cooperation between processes (e.g., producer/consumer, client/server)
2. Arrival of requests to the OS (e.g., for resources)
3. Collecting up-to-date global state
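A standard workaround is Lamport's logical clock, which imposes a consistent ordering on events without any global time source; a minimal Python sketch follows (the Process class is illustrative):

```python
class Process:
    """Carries a Lamport logical clock instead of relying on wall time."""
    def __init__(self, name: str):
        self.name = name
        self.clock = 0

    def local_event(self) -> int:
        self.clock += 1                       # tick on every internal event
        return self.clock

    def send(self) -> int:
        self.clock += 1
        return self.clock                     # timestamp travels with the message

    def receive(self, msg_ts: int) -> int:
        self.clock = max(self.clock, msg_ts) + 1   # jump past the sender
        return self.clock

p, q = Process("P"), Process("Q")
ts = p.send()            # P sends at logical time 1
q.local_event()          # Q does unrelated work (its clock becomes 1)
print(q.receive(ts))     # prints 2: the receive is ordered after the send
```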
Q19: What is meant by FDI?
Ans: Foreign Direct Investment (FDI) is fund flow between countries by which one
can gain some benefit from their investment, while another can enhance the
productivity and find a better position through performance. The effectiveness
and efficiency depend upon the investor’s perception: if an investment is long
term,
then it contributes positively towards the economy. If it is short term for the
purpose of making profit, then its economic impact may be less significant.
Q20: Define risk awareness?
Ans: Risk awareness is the recognition of the potential for hazards, risks,
and incidents.
Q21: Define the term monolithic computing?
Ans: In monolithic computing, a monolithic application describes a single-
tiered software application in which the user interface and data access code are
combined into a single program from a single platform. A monolithic application is
self-contained, and independent from other computing applications. The design
philosophy is that the application is responsible not just for a particular task, but
can perform every step needed to complete a particular function.
Q22: Write down the two characteristics of distributed systems?
Ans: Concurrency: the components of a distributed computation may run at the
same time. Independent failure modes: the components of a distributed
computation, and the network connecting them, may fail independently of each
other. No global time.
Q23: Differentiate replication transparency from failure transparency?
Ans: A distributed system often employs data replication to ensure a fast
response from databases and to enable the system to be resilient to hardware
errors. Replication transparency is the term used to describe the fact that the user
should be unaware that data is replicated. Failure transparency refers to the
extent to which errors and subsequent recoveries of hosts and services within the
system are invisible to users and applications.
Q24: Define Simple Mail Transfer Protocol (SMTP)?
Ans: Simple Mail Transfer Protocol (SMTP) is an Internet
standard for email transmission. Mail servers and other mail transfer agents use
SMTP to send and receive mail messages on TCP port 25.
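A minimal sending example with Python's standard smtplib module; the host and addresses are placeholders, and although port 25 matches the definition above, many providers now expect port 587 with STARTTLS and authentication.

```python
import smtplib
from email.message import EmailMessage

# Build a simple RFC 5322 message.
msg = EmailMessage()
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.com"
msg["Subject"] = "SMTP demo"
msg.set_content("Hello over SMTP.")

# Hand the message to a mail transfer agent on TCP port 25.
with smtplib.SMTP("mail.example.com", 25) as server:
    server.send_message(msg)
```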
Q25: Define in brief how TLB helps MMU?
Ans: A translation lookaside buffer (TLB) is a memory cache that is used to
reduce the time taken to access a user memory location. It is a part of the
chip’s memory management unit (MMU). The TLB stores the recent translations
of virtual memory to physical memory and can be called an address-translation
cache.
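A toy Python model of the idea: translations already cached in the TLB skip the slower page-table walk. The page-table contents here are invented for illustration.

```python
PAGE_SIZE = 4096
page_table = {0: 7, 1: 3, 2: 9}   # virtual page -> physical frame (made up)
tlb = {}                          # small cache of recent translations

def translate(vaddr: int) -> int:
    vpage, offset = divmod(vaddr, PAGE_SIZE)
    if vpage in tlb:              # TLB hit: fast path, no table walk
        frame = tlb[vpage]
    else:                         # TLB miss: walk the page table, then cache
        frame = page_table[vpage]
        tlb[vpage] = frame
    return frame * PAGE_SIZE + offset

print(hex(translate(0x1234)))     # miss: walks the table, fills the TLB
print(hex(translate(0x1238)))     # hit: served from the cached translation
```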
Q26: What is Zimbra?
Ans: Zimbra is an enterprise-class email, calendar and collaboration solution built
for the cloud, both public and private. With a redesigned browser-based
interface, Zimbra offers the most innovative messaging experience available
today, connecting end users to the information and activity in their personal
clouds.
Q27: What is elastic IP addressing?
Ans: An Elastic IP address is a static IPv4 address designed for dynamic cloud
computing. An Elastic IP address is associated with your AWS account. With an
Elastic IP address, you can mask the failure of an instance or software by rapidly
remapping the address to another instance in your account.
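With the AWS SDK for Python (boto3), allocating an Elastic IP and remapping it to a standby instance looks roughly like the sketch below; the instance IDs are placeholders, and credentials/region are assumed to come from your AWS configuration.

```python
import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP address for use in a VPC.
alloc = ec2.allocate_address(Domain="vpc")

# Associate it with the primary instance.
ec2.associate_address(AllocationId=alloc["AllocationId"],
                      InstanceId="i-0123456789abcdef0")   # placeholder ID

# If that instance fails, mask the failure by remapping the same
# address to a standby instance.
ec2.associate_address(AllocationId=alloc["AllocationId"],
                      InstanceId="i-0fedcba9876543210",   # placeholder ID
                      AllowReassociation=True)
```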
Subjective Part (16*13)
Q.2: Technology trends continue to increase the processing capabilities of
mobile devices. The latest smartphones include up to 8 processing cores and
often have fairly powerful GPUs as well. In spite of this, it may make sense to
offload computation to the cloud or a cloudlet. Describe two circumstances
under which it might be beneficial to offload application functionality from such
a powerful smartphone?
Ans:
Q.3: Explain virtualization in detail and also discuss its various types along
with the pros and cons of each type?
Ans: Virtualization
In computing, virtualization refers to the act of creating a virtual (rather than
actual) version of something, including virtual computer
hardware platforms, storage devices, and computer network resources.
Virtualization began in the 1960s, as a method of logically dividing the system
resources provided by mainframe computers between different applications.
Since then, the meaning of the term has broadened. Virtualization lets you easily
outsource your hardware and eliminate any energy costs associated with its
operation. Although it may not work for everyone, the efficiency, security,
and cost advantages are considerable enough to be worth employing as part of
your operations. Whatever type of virtualization you need, look for service
providers that offer straightforward tools to manage your resources and
monitor usage, so that you do not have to spend a lot of time managing your
virtual servers.
What types of virtualization are there?
Virtualization can take many forms depending on the type of application use and
hardware utilization. The main types are listed below:
Hardware Virtualization
Hardware virtualization, also known as hardware-assisted virtualization or
server virtualization, runs on the concept that an individual independent
segment of hardware, or a physical server, may be made up of multiple smaller
hardware segments or servers, essentially consolidating multiple physical
servers into virtual servers that run on a single primary physical server.
Each small server
can host a virtual machine, but the entire cluster of servers is treated as a single
device by any process requesting the hardware. The hardware resource allotment
is done by the hypervisor. The main advantages include increased processing
power as a result of maximized hardware utilization and application uptime.
Subtypes:
Full Virtualization – Guest software does not require any modifications since the
underlying hardware is fully simulated.
Emulation Virtualization – The virtual machine simulates the hardware and
becomes independent of it. The guest operating system does not require any
modifications.
Paravirtualization – The hardware is not simulated; the guest software runs
in its own isolated domain.
Pros
Virtualization enables efficient hardware utilization. It decreases costs by
reducing the need for physical hardware systems, and you can allocate memory,
storage, and CPU in seconds, making you less dependent on hardware vendors.
Software Virtualization
Software virtualization involves the creation and operation of multiple
virtual environments on the host machine. It creates a complete virtual
computer system that lets a guest operating system run. For example, it lets
you run Android OS on a host machine natively running a Microsoft Windows OS,
utilizing the same hardware as the host machine does.
Subtypes:
Operating System Virtualization – hosting multiple OS on the native OS.
Application Virtualization – hosting individual applications in a virtual
environment separate from the native OS.
Service Virtualization – hosting specific processes and services related to a
particular application.
Memory Virtualization
Physical memory across different servers is aggregated into a single virtualized
memory pool. It provides the benefit of an enlarged contiguous working memory.
You may already be familiar with this, as some OS such as Microsoft Windows OS
allows a portion of your storage disk to serve as an extension of your RAM.
Subtypes:
Application-level control – Applications access the memory pool directly.
Operating system level control – Access to the memory pool is provided through
an operating system.
Storage Virtualization
Multiple physical storage devices are grouped together, which then appear as a
single storage device. This provides various advantages, such as
homogenization of storage across devices of varying capacities and speeds,
reduced downtime, load balancing, and better optimization of performance and
speed.
Partitioning your hard drive into multiple partitions is an example of this
virtualization.
Subtypes:
Block Virtualization – Multiple storage devices are consolidated into one.
File Virtualization – Storage system grants access to files that are stored over
multiple hosts.
Data Virtualization
It lets you easily manipulate data, as the data is presented as an abstract
layer completely independent of the underlying data structures and database
systems. It decreases data input and formatting errors.
Network Virtualization
In network virtualization, multiple sub-networks can be created on the same
physical network, and these may or may not be authorized to communicate with
each other. This enables restriction of file movement across networks and
enhances security, and it allows better monitoring and identification of data
usage, which lets network administrators scale up the network appropriately.
It also increases reliability, as a disruption in one network does not affect
the other networks, and diagnosis is easier.
Subtypes:
Internal network: Enables a single system to function like a network.
External network: Consolidation of multiple networks into a single one, or
segregation of a single network into multiple ones.
Desktop Virtualization
This is perhaps the most common form of virtualization for any regular IT
employee. The user’s desktop is stored on a remote server, allowing the user to
access his desktop from any device or location. Employees can work conveniently
from the comfort of their home. Since the data transfer takes place over secure
protocols, any risk of data theft is minimized.
Server virtualization
Server virtualization is actually not as new as some people might think; it
has been around for many years. The difference today is that server
virtualization has become increasingly cheap and easy to use, and the
companies that provide the technology are offering more services. Essentially,
server virtualization is in the name: making the server virtual. A better way
of understanding this is that it partitions
a physical server "into a number of small, virtual servers with the help of
virtualization software.” In these virtual servers, there is the possibility of running
multiple operating system instances at the same time, greatly reducing the cost of
buying individual servers.
Pros of server virtualization
Reduced costs
Automation
Backup and recovery
Cons of server virtualization
High upfront costs
Security
Time spent learning
Q.4: Discuss key characteristics of cloud computing with appropriate examples,
also elaborate main challenges for the cloud?
Ans: The NIST special publication on cloud computing (SP 800-145) lists the
five essential characteristics of cloud computing:
On-demand self-service: A consumer can unilaterally provision computing
capabilities, such as server time and network storage, as needed automatically
without requiring human interaction with each service provider.
Broad network access: Capabilities are available over the network and accessed
through standard mechanisms that promote use by heterogeneous thin or thick
client platforms (e.g., mobile phones, tablets, laptops and workstations).
Resource pooling: The provider's computing resources are pooled to serve
multiple consumers using a multi-tenant model, with different physical and virtual
resources dynamically assigned and reassigned according to consumer demand.
There is a sense of location independence in that the customer generally has no
control or knowledge over the exact location of the provided resources but may
be able to specify location at a higher level of abstraction (e.g., country, state or
datacenter). Examples of resources include storage, processing, memory and
network bandwidth.
Rapid elasticity: Capabilities can be elastically provisioned and released, in some
cases automatically, to scale rapidly outward and inward commensurate with
demand. To the consumer, the capabilities available for provisioning often appear
to be unlimited and can be appropriated in any quantity at any time.
Measured service: Cloud systems automatically control and optimize resource
use by leveraging a metering capability at some level of abstraction appropriate
to the type of service (e.g., storage, processing, bandwidth and active user
accounts). Resource usage can be monitored, controlled and reported, providing
transparency for the provider and consumer.
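As a simple illustration of measured service, a pay-per-use bill is just metered usage multiplied by per-unit rates; the rates below are invented for the example.

```python
# Hypothetical per-unit rates (not any provider's real pricing).
RATES = {"compute_hours": 0.05, "storage_gb_month": 0.02, "egress_gb": 0.09}

def bill(usage: dict) -> float:
    """Charge only for metered consumption."""
    return sum(RATES[resource] * amount for resource, amount in usage.items())

monthly_usage = {"compute_hours": 720, "storage_gb_month": 50, "egress_gb": 10}
print(f"${bill(monthly_usage):.2f}")   # 720*0.05 + 50*0.02 + 10*0.09 = $37.90
```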
Cloud Computing Challenges
1: Security
Since the advent of the public cloud, enterprises have worried about potential
security risks, and that hasn't changed. In the RightScale survey, it was the
number one challenge cited by respondents, with 77 percent saying that cloud
security is a challenge, including 29 percent who called it a significant challenge.
2: Managing Cloud Spending
As previously mentioned, the RightScale report found that for some
organizations managing cloud spending has overtaken security as the top cloud
computing challenge. By their own estimates, companies are wasting about 30
percent of the money they spend on the cloud. Organizations make a number of
mistakes that drive up their costs. Often, developers or other IT workers
spin up a cloud instance meant to be used for a short period of time and forget to
turn it back off. And many organizations find themselves stymied by the
inscrutable cloud pricing schemes that offer multiple opportunities for discounts
that organizations might not be utilizing. Multiple technological solutions can help
companies with cloud cost management challenges. For example, cloud cost
management solutions, automation, containers, serverless services, autoscaling
features and the many management tools offered by the cloud vendors may help
reduce the scope of the problem. Some organizations have also found success by
creating a central cloud team to manage usage and expenses.
3: Lack of Resources/Expertise
Lack of resources and expertise ranked just behind security and cost management
among the top cloud implementation challenges in the RightScale survey. Nearly
three-quarters (73 percent) of respondents listed it as a challenge, with 27 percent
saying it was a significant challenge.
While many IT workers have been taking steps to boost their cloud computing
expertise, employers continue to find it difficult to find workers with the skills
they need. And that trend seems likely to continue. The Robert Half Technology
2018 Salary Guide noted, "Technology workers with knowledge of the latest
developments in cloud, open source, mobile, big data, security and other
technologies will only become more valuable to businesses in the years ahead."
Many companies are hoping to overcome this challenge by hiring more workers
with cloud computing certifications or skills. Experts also recommend providing
training to existing staff to help get them up to speed with the technology.
4: Governance
Governance and control were fourth in the list of cloud computing challenges in
the RightScale survey, with 71 percent of respondents calling it a challenge,
including 25 percent who see it as a significant challenge. In this case, one of the
greatest benefits of cloud computing — the speed and ease of deploying new
computing resources — can become a potential downfall. Many organizations
lack visibility into the "shadow IT" used by their employees, and governance
becomes particularly challenging in hybrid cloud and multi-cloud environments.
Experts say organizations can alleviate some of these cloud computing
management issues by following best practices, including establishing and
enforcing standards and policies. And multiple vendors offer cloud management
software to simplify and automate the process.
5: Compliance
The recent flurry of activity surrounding the EU General Data Protection
Regulation (GDPR) has returned compliance to the forefront for many enterprise
IT teams. Among those surveyed by RightScale, 68 percent cited compliance as a
top cloud computing challenge, and 21 percent called it a significant challenge.
Interestingly, one aspect of the GDPR law may make compliance easier in the
future. The law requires many organizations to appoint a data protection officer
who oversees data privacy and security. Assuming these individuals are well-
versed in the compliance needs for the organizations where they work,
centralizing responsibility for compliance should help companies meet any legal
or statutory obligations.
6: Managing Multi-Cloud Environments
Most organizations aren’t using just one cloud. According to the RightScale
findings, 81 percent of enterprises are pursuing a multi-cloud strategy, and 51
percent have a hybrid cloud strategy (public and private clouds integrated
together). In fact, on average, companies are using 4.8 different public and
private clouds. Multi-cloud environments add to the complexity faced by the IT
team. To overcome this challenge, experts recommend best practices like doing
research, training employees, actively managing vendor relationships and re-
thinking processes and tooling.
7: Migration
While launching a new application in the cloud is a fairly straightforward process,
moving an existing application to a cloud computing environment is far more
difficult. A Dimensional Research study sponsored by Velostrata found that 62
percent of those surveyed said their cloud migration projects were more difficult
than expected. In addition, 64 percent of migration projects took longer than
expected, and 55 percent exceeded their budgets. More specifically, many of the
companies migrating applications to the cloud reported time-consuming trouble-
shooting (47 percent), difficulty configuring security (46 percent), slow data
migration (44 percent), trouble getting migration tools to work properly (40
percent), difficulty syncing data before cutover (38 percent) and downtime during
migration (37 percent). To overcome those challenges the IT leaders surveyed
said they wished they had performed more pre-migration testing (56 percent), set
a longer project timeline (50 percent), hired an in-house expert (45 percent) and
increased their budgets (42 percent).
8: Vendor Lock-In
Currently, a few vendors, namely Amazon Web Services, Microsoft Azure, Google
Cloud Platform and IBM Cloud, dominate the public cloud market. For both
analysts and enterprise IT leaders, this raises the specter of vendor lock-in. In
a Stratoscale Hybrid Cloud Survey, more than 80 percent of those surveyed
expressed moderate to high levels of concern about the problem. "The increasing
dominance of the hyperscale IaaS providers creates both enormous opportunities
and challenges for end users and other market participants," said Sid Nag,
research director at Gartner. "While it enables efficiencies and cost benefits,
organizations need to be cautious about IaaS providers potentially gaining
unchecked influence over customers and the market. In response to multi-cloud
adoption trends, organizations will increasingly demand a simpler way to move
workloads, applications and data across cloud providers' IaaS offerings without
penalties." Experts recommend that before organizations adopt a particular cloud
service they consider how easy it will be to move those workloads to another
cloud should future circumstances warrant.
9: Immature Technology
Many cloud computing services are on the cutting edge of technologies like
artificial intelligence, machine learning, augmented reality, virtual reality and
advanced big data analytics. The potential downside to access to this new and
exciting technology is that the services don't always live up to enterprise
expectations in terms of performance, usability and reliability. In the Teradata
survey, 83 percent of the large enterprises surveyed said that the cloud was the
best place to run analytics, but 91 percent said analytics workloads weren't
moving to the cloud as quickly as they should. Part of the problem, cited by 49
percent of respondents, was immature or low-performing technology. And
unfortunately, the only potential cures for the problem are to adjust expectations,
try to build your own solution or wait for the vendors to improve their offerings.
10: Integration
Lastly, many organizations, particularly those with hybrid cloud environments,
report challenges related to getting their public cloud and on-premise tools and
applications to work together. In the Teradata survey, 30 percent of respondents
said connecting legacy systems with cloud applications was a barrier to adoption.
Similarly, in a SoftwareONE report on cloud spending, 39 percent of those
surveyed said connecting legacy systems was one of their biggest concerns when
using the cloud. This challenge, like the others mentioned in this article, is unlikely
to disappear any time in the near future. Integrating legacy systems and new
cloud-based applications requires time, skill and resources. But many
organizations are finding that the benefits of cloud computing outweigh the
potential downside of the technology. Look for the trend toward cloud adoption
to continue, despite the potential cloud computing challenges.
Q.5: Differentiate among CaaS, SaaS, IaaS, and PaaS with suitable example or
case study?
Ans:
Q.6: Why it is necessary to have standards of application development and
security? Explain in light of common standards in cloud computing?
Ans:
Q.7: Operating systems use privileged instructions to manage hardware
resources like page tables and I/O devices. When executed with a VM on a
modern hypervisor providing full hardware virtualization, what happens when a
guest OS executes a privileged instruction of this sort? Explain your answer with
suitable example.
Ans:
Q.8: Explain privacy in cloud in detail moreover elaborate encrypted federation
vs trusted federation?
Ans:
Q.9: Write a brief note on web services delivered from the cloud with
appropriate examples.
Ans:
Q.10: What techniques are used for handling sensitive and privileged
instructions to virtualize the CPU on the x86 architecture?
Ans:
Q.11: Highlight main features of MSP model to cloud, Cloud datacenters and
role of open source software in datacenters?
Ans: