JNTUA Cloud Computing Notes - R19


www.android.universityupdates.in | www.universityupdates.in | https://telegram.me/jntua

JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY ANANTAPUR


B.Tech (CSE) – IV-I Sem    (L T P C: 3 0 0 3)

(19A05703a) CLOUD COMPUTING


(Professional Elective-III)

Course Objectives:

This course is designed to:


 Define cloud services and models.
 Demonstrate how to design the architecture for a new cloud application.
 Explain how to re-architect an existing application for the cloud.
Unit-I: Introduction to Cloud Computing, Characteristics of Cloud Computing, Cloud Models,
Cloud Services Examples, Cloud based services and Applications, Cloud Concepts and
Technologies, Virtualization, Load Balancing, Scalability and Elasticity, Deployment,
Replication, Monitoring, Software defined networking, Network function virtualization, Map
Reduce, Identity and Access Management, Service Level Agreements, Billing.

Learning Outcomes

At the end of the unit, students will be able to:


 Outline the cloud characteristics and models. (L2)
 Classify different models and technologies in the cloud. (L2)

Unit-II: Cloud Services and Platforms: Compute Services, Storage Services, Database Services,
Application Services, Content Delivery Services, Analytics Services, Deployment and
Management Services, Identity and Access Management Services, Open Source Private Cloud
Software, Apache Hadoop, Hadoop MapReduce Job Execution, Hadoop Schedulers, Hadoop
Cluster Setup.

Learning Outcomes:

At the end of the unit, students will be able to:


 Summarize the services and platforms of the cloud. (L2)
 Demonstrate Hadoop cluster setup. (L2)

Unit-III: Cloud Application Design: Design Considerations, Reference Architectures, Cloud Application Design Methodologies, Data Storage Approaches.

Multimedia Cloud: Introduction, Case Study: Live Video Streaming App, Streaming Protocols, Case Study: Video Transcoding App.


Learning Outcomes:

At the end of the unit, students will be able to:


 Design and build cloud applications. (L6)
 Describe the multimedia cloud. (L2)
Unit-IV: Python for Amazon Web Services, Python for Google Cloud Platform, Python for
Windows Azure, Python for MapReduce, Python Packages of Interest, Python Web Application
Framework – Django, Designing a RESTful Web API.

Learning Outcomes:

At the end of the unit, students will be able to:


 Select different cloud services from different vendors. (L2)
 Utilize the Python language to access cloud services. (L3)

Unit-V: Cloud Application Development in Python, Design Approaches, Image Processing APP,
Document Storage App, MapReduce App, Social Media Analytics App, Cloud Application
Benchmarking and Tuning, Cloud Security, Cloud Computing for Education.

Learning Outcomes:

At the end of the unit, students will be able to:


 Investigate different cloud applications. (L4)
 Design cloud applications using Python. (L6)

Course Outcomes:

Upon completion of the course, the students should be able to:

 Outline the procedure for cloud deployment. (L2)
 Distinguish different cloud service models and deployment models. (L4)
 Compare different cloud services. (L5)
 Design applications for an organization that uses a cloud environment. (L6)


Textbooks:

1. Arshdeep Bahga, Vijay Madisetti, "Cloud Computing: A Hands-On Approach", Universities Press, 2018.

References:

1. Chris Hay, Brian Prince, "Azure in Action", Manning Publications [ISBN: 9781935182481], 2010.
2. Henry Li, "Introducing Windows Azure", Apress, 1st edition [ISBN: 978-1-4302-2469-3], 2009.
3. Eugenio Pace, Dominic Betts, Scott Densmore, Ryan Dunn, Masashi Narumoto, Matias Woloski, "Developing Applications for the Cloud on the Microsoft Windows Azure Platform", Microsoft Press, 1st edition [ISBN: 9780735656062], 2010.
4. Eugene Ciurana, "Developing with Google App Engine", Apress, 1st edition [ISBN: 978-1430218319], 2009.
5. Charles Severance, "Using Google App Engine", O'Reilly Media, 1st edition [ISBN: 978-0596800697], 2009.


IV-I Sem Cloud Computing Notes, Unit I

1. What is Cloud? Explain the history of Cloud?


Cloud: Moving to the cloud. Running in the cloud. Stored in the cloud. Accessed from the cloud: these days it seems like everything is happening "in the cloud". But what exactly is this nebulous concept?
The short answer is that it's somewhere at the other end of your internet connection – a place
where you can access apps and services, and where your data can be stored securely. The cloud
is a big deal for three reasons:
 It doesn't need any effort on your part to maintain or manage it.
 It's effectively infinite in size, so you don't need to worry about it running out of capacity.
 You can access cloud-based applications and services from anywhere – all you need is a
device with an internet connection.
History of Cloud Computing:
Before cloud computing emerged, there was client/server computing: a basically centralized model in which all the software applications, all the data, and all the controls resided on the server side.
If a single user wanted to access specific data or run a program, he/she needed to connect to the server, gain appropriate access, and only then carry out the task.
After that, distributed computing came into the picture, in which all the computers are networked together and share their resources when needed.
On the basis of these models, the concepts of cloud computing emerged and were later implemented.
Around 1961, John McCarthy suggested in a speech at MIT that computing could be sold like a utility, just like water or electricity. It was a brilliant idea, but like all brilliant ideas it was ahead of its time: for the next few decades, despite interest in the model, the technology simply was not ready for it.
But of course time passed, the technology caught up with the idea, and a few years later:
In 1999, Salesforce.com started delivering applications to users through a simple website. The applications were delivered to enterprises over the Internet, and in this way the dream of computing sold as a utility came true.
In 2002, Amazon started Amazon Web Services, providing services like storage, computation, and even human intelligence. However, only with the launch of the Elastic Compute Cloud in 2006 did a truly commercial service open to everybody exist.
In 2009, Google Apps also started to provide cloud computing enterprise applications.
Of course, all the big players are present in the cloud computing evolution; some came earlier, some later. In 2009, Microsoft launched Windows Azure, and companies like


Oracle and HP have all joined the game. This proves that today, cloud computing has
become mainstream.

2. What is Cloud Computing Architecture?


Cloud Computing Architecture is the combination of components required for a cloud computing service. A cloud computing architecture consists of several components: a frontend platform, a backend platform or servers, a network or Internet service, and a cloud-based delivery service.
Let’s have a look into Cloud Computing and see what Cloud Computing is made of. Cloud
computing comprises two components, the front end, and the back end. The front end consists
of the client part of a cloud computing system. It comprises interfaces and applications that are
required to access the Cloud computing or Cloud programming platform.

[Figure: Cloud Computing Architecture]


The back end refers to the cloud itself; it comprises the resources required for cloud computing services. It consists of virtual machines, servers, data storage, security mechanisms, etc., and is under the provider's control.
Cloud computing uses a distributed file system that spreads data over multiple hard disks and machines. Data is never stored in one place, and if one unit fails, another takes over automatically. User disk space is allocated on the distributed file system, while another important component is the algorithm for resource allocation. Cloud computing is a strongly distributed environment, and it depends heavily on strong algorithms.
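The replication idea above can be sketched in a few lines of Python. The `ReplicatedStore` class and its methods are purely illustrative, not any real cloud storage API:

```python
# Sketch: replicated storage with automatic failover, illustrating how data
# spread across multiple units survives the failure of any one of them.
# All class and method names here are illustrative, not a real cloud API.

class ReplicatedStore:
    def __init__(self, num_replicas=3):
        # Each "disk" is modelled as a plain dict; a real system would
        # place replicas on separate machines.
        self.disks = [dict() for _ in range(num_replicas)]
        self.failed = set()

    def put(self, key, value):
        # Write the value to every replica.
        for disk in self.disks:
            disk[key] = value

    def fail(self, disk_index):
        # Simulate a unit going down.
        self.failed.add(disk_index)

    def get(self, key):
        # Read from the first healthy replica; failover is automatic.
        for i, disk in enumerate(self.disks):
            if i not in self.failed and key in disk:
                return disk[key]
        raise KeyError(key)

store = ReplicatedStore()
store.put("report.pdf", b"data")
store.fail(0)                      # first disk dies
print(store.get("report.pdf"))     # still served from a surviving replica
```

Real distributed file systems (e.g. HDFS, covered in Unit II) add replica placement, consistency, and re-replication on failure, but the user-visible effect is the same: a failed unit does not lose data.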
Cloud Computing Architecture: The Architecture of Cloud computing contains many
different components. It includes Client infrastructure, applications, services, runtime clouds,
storage spaces, management, and security. These are all the parts of a Cloud computing
architecture.
Front End: The client uses the front end, which contains a client-side interface and application.
Both of these components are important to access the Cloud computing platform. The front end
includes web browsers (Chrome, Firefox, Opera, etc.), clients, and mobile devices.


Back End: The backend part helps you manage all the resources needed to provide Cloud
computing services. This Cloud architecture part includes a security mechanism, a large amount
of data storage, servers, virtual machines, traffic control mechanisms, etc.

[Figure: Cloud Computing Architecture Diagram]

3. What are the Important Components of Cloud Computing Architecture? What are its
benefits?
Here are some important components of Cloud computing architecture:
1. Client Infrastructure: Client Infrastructure is a front-end component that provides a GUI. It
helps users to interact with the Cloud.
2. Application: The application can be any software or platform which a client wants to access.
3. Service: The service component manages which type of service you can access according to
the client’s requirements.
Three Cloud computing services are:
 Software as a Service (SaaS)
 Platform as a Service (PaaS)
 Infrastructure as a Service (IaaS)


4. Runtime Cloud: Runtime cloud offers the execution and runtime environment to the virtual
machines.
5. Storage: Storage is another important Cloud computing architecture component. It provides
a large amount of storage capacity in the Cloud to store and manage data.
6. Infrastructure: It offers services on the host level, network level, and application level.
Cloud infrastructure includes hardware and software components like servers, storage, network
devices, virtualization software, and various other storage resources that are needed to support
the cloud computing model.
7. Management: This component manages components like application, service, runtime
cloud, storage, infrastructure, and other security matters in the backend. It also establishes
coordination between them.
8. Security: Security in the backend refers to implementing different security mechanisms to secure cloud systems, resources, files, and infrastructure for the end user.
9. Internet: Internet connection acts as the bridge or medium between frontend and backend. It
allows you to establish the interaction and communication between the frontend and backend.
Benefits of Cloud Computing Architecture: Following are the cloud computing architecture
benefits:
 Makes the overall Cloud computing system simpler.
 Helps to enhance your data processing.
 Provides high security.
 It has better disaster recovery.
 Offers good user accessibility.
 Significantly reduces IT operating costs.
4. What are the Characteristics of Cloud Computing?
The following are the essential characteristics of Cloud Computing:

Flexibility: Cloud computing lets users access data or services using internet-enabled devices
(such as smartphones and laptops). Whatever you want is instantly available on the cloud, just a
click away. Sharing and working on data thus becomes easy and comfortable. Many
organizations these days prefer to store their work on cloud systems, as it makes collaboration
easy and saves them a lot of cost and resources. Its ever-increasing set of features and services
is also accelerating its growth.
Scalability: Scalability is the ability of a system to handle a growing amount of work by adding resources to the system. Continuous business expansion demands rapid expansion of cloud services, and one of the most versatile features of cloud computing is that it is scalable: it can expand the number of servers, or the infrastructure, according to demand.


Resource pooling: Computing resources (like networks, servers, and storage) that serve individual users can be securely pooled to look like one large infrastructure. This is done by implementing a multi-tenant model, much like a huge apartment building where each resident has his own flat but everyone shares the building. A cloud service provider can share resources among clients, providing each client with services as per their requirements.
Broad network access: One of the most interesting features of cloud computing is that it
knows no geographical boundaries. Cloud computing has a vast access area and is accessible
via the internet. You can access your files and documents or upload your files from anywhere in
the world, all you need is a good internet connection and a device, and you are set to go.
On-demand self-service: It is based on a self-service model where users can manage their
services like- allotted storage, functionalities, server uptime, etc., making users their own boss.
The users can monitor their consumption and can select and use the tools and resources they
require right away from the cloud portal itself. This helps users make better decisions and
makes them responsible for their consumption.
Cost-effective: Since users can monitor and control their usage, they can also control the cost
factor. Cloud service providers do not charge any upfront cost and most of the time they
provide some space for free. The billing is transparent and entirely based upon their usage of
resources. Cloud computing reduces the expenditure of an organization considerably.


Security: Data security in cloud computing is a major concern among users. Cloud service
providers store encrypted data of users and provide additional security features such as user
authentication and security against breaches and other threats. Authentication refers to
identifying and confirming the user as an authorized user. If the user is not authorized, the
access is denied. Cloud vendors provide several layers of abstraction to improve the security
and speed of accessing data.
Automation: Automation enables IT teams and developers to create, modify, and maintain cloud resources. Cloud infrastructure requires minimal human interaction. Everything, from
configuration to maintenance and monitoring, is most of the time automated. Automation is a
great characteristic of cloud computing and is very much responsible for the increase in demand
and rapid expansion of cloud services.
Maintenance: Maintenance of the cloud is an easy and automated process with minimum or no
extra cost requirements. With each upgrade in cloud infrastructure and software, maintenance is
becoming more easy and economical.
Measured services: Cloud resources and services such as storage, bandwidth, processing
power, networking capabilities, intelligence, software and services, development tools,
analytics, etc. used by the consumer are monitored and analyzed by the service providers. In
other words, the services you use are measured.
Resilience: Resilience in cloud computing means its ability to recover from any interruption. A
Cloud service provider has to be prepared against any disasters or unexpected circumstances
since a lot is at stake. Disaster management earlier used to pose problems for service providers
but now, due to a lot of investment and advancement in this field, clouds have become a lot more resilient. For example, cloud service providers arrange many backup nodes (servers).
Organizational demand for cloud computing has increased exponentially. Cloud service
providers, grasping the business opportunity, have continuously provided quality services to
their clients. Cloud technology has performed up to its potential and still, it has a huge prospect
for growth.
5. What is On-Demand Self-Service?
On-demand self-service means that a consumer can request and receive access to a service offering without an administrator or support staff having to fulfill the request manually. The request and fulfillment processes are fully automated. This offers advantages for both the provider and the consumer of the service.
Implementing user self-service allows customers to quickly procure and access the services they
want. This is a very attractive feature of the cloud. It makes getting the resources you need very
quick and easy. With traditional environments, requests often took days or weeks to be fulfilled,
causing delays in projects and initiatives. You don’t have to worry about that in cloud
environments.


User self-service also reduces the administrative burden on the provider. Administrators are
freed from the day-to-day activities around creating users and managing user requests. This
allows an organization’s IT staff to focus on other, hopefully more strategic, activities.
Self-service implementations can be difficult to build, but for cloud providers they are definitely
worth the time and money. User self-service is generally implemented via a user portal. There
are several out-of-the-box user portals that can be used to provide the required functionality, but
in some instances a custom portal will be needed. On the front end, users will be presented with
a template interface that allows them to enter the appropriate information. On the back end, the
portal will interface with management application programming interfaces (APIs) published by
the applications and services. It can present quite a challenge if the backend systems do not
have APIs or other methods that allow for easy automation.
When implementing user self-service, you need to be aware of potential compliance and
regulatory issues. Often, compliance programs like Sarbanes-Oxley (SOX) require controls be
in place to prevent a single user from being able to use certain services or perform certain
actions without approval. As a result, some processes cannot be completely automated. It is important to understand which processes can and cannot be automated when implementing self-service in your environment.
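A minimal sketch of such a fulfillment flow is shown below. The service catalogue and the approval gate for compliance-controlled services are hypothetical; a real portal would call the management APIs of the underlying platform rather than these stub functions:

```python
# Sketch: self-service request handling with a compliance gate.
# Catalogue entries and function names are illustrative assumptions.

AUTO_APPROVED = {"vm.small", "storage.10gb"}      # fully automated fulfillment
NEEDS_APPROVAL = {"vm.xlarge", "db.production"}   # e.g. SOX-controlled actions

def provision(user, service):
    # Stub for a call to a management API that creates the resource.
    print(f"provisioning {service} for {user}")

def queue_for_approval(user, service):
    # Stub: some requests must wait for a human approver.
    print(f"{service} for {user} queued for manual approval")

def handle_request(user, service):
    if service in AUTO_APPROVED:
        provision(user, service)                  # no human in the loop
        return "fulfilled"
    if service in NEEDS_APPROVAL:
        queue_for_approval(user, service)         # cannot be fully automated
        return "pending approval"
    return "unknown service"

print(handle_request("alice", "vm.small"))     # prints "fulfilled"
print(handle_request("bob", "db.production"))  # prints "pending approval"
```

The point of the sketch is the split: most requests flow straight through the automated path, while the compliance-controlled subset is deliberately routed to a manual queue.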

6. What Is Broad Network Access?


Broad Network Access: Broad network access is the ability of network infrastructure to
connect with a wide variety of devices, including thin and thick clients, such as mobile phones,
laptops, workstations, and tablets, to enable seamless access to computing resources across
these diverse platforms. It is a key characteristic of cloud technology.
The term broad network access can be traced back to the early days of cloud computing, when
accessing resources was a complex and costly affair. Resources were finite and, for the most
part, extremely limited as devices could only access networking and storage systems that were
hosted locally. The cloud introduced a radical shift by democratizing access to compute,
storage, and network resources. Broad network access is a defining characteristic of the cloud,
without which the private and public cloud services we know today would not exist.
Broad network access is what makes the cloud available to any device from any location. A
cloud provider must ensure that it provides its customers with broad network access
capabilities. Otherwise, one would be able to use the cloud service only from a limited set of
platforms.
Today, broad network access is available across every large-scale public cloud vendor, be
it Amazon Web Services (AWS), Google Cloud Platform (GCP), or Microsoft Azure. You can
configure these clouds to power any device, right from an employee’s smart watch to your
company’s largest storage system. Apart from this, broad network access is a key parameter to
check when setting up a private cloud.


While public clouds bring broad network access capabilities by default, private servers don't do the same. For instance, you could set up an on-premises server to connect only with local devices close to the enterprise core. HR systems are a good example: in a traditional office, employees can log into their attendance portal and clock in only from their designated workstations inside the office campus. The server where the attendance app is hosted does not have broad network access, and therefore cannot be accessed from a remote location or a mobile device.
However, broad network access is increasingly becoming a key demand for private cloud
solutions. This is due to three reasons:
 Remote work and the occasional WFH were common even before 2020. Now, in the
wake of the pandemic, it is vital to support cloud access and app-based workflows from
any device.
 Bring your own device (BYOD) allows employees to use a device of their choice, which
may be a personal device as well. Without broad network access, BYOD isn’t possible.
 Companies may opt for a private instead of a public cloud landscape for security and
compliance reasons. However, they wouldn’t want to sacrifice the flexibility and
convenience of device-agnostic network access.
Therefore, it is vital to weave broad network access into your cloud SLAs so that employees
and business processes can easily access the resources they need to perform at their optimum.

7. What Does Resource Pooling Mean?


Resource pooling is used in cloud computing environments to describe a situation in which
providers serve multiple clients, customers or "tenants" with provisional and scalable services.
These services can be adjusted to suit each client's needs without any changes being apparent to
the client or end user.
The idea behind resource pooling is that through modern scalable systems involved in cloud
computing and software as a service (SaaS), providers can create a sense of infinite or
immediately available resources by controlling resource adjustments at a meta level. This
allows customers to change their levels of service at will without being subject to any of the
limitations of physical or virtual resources.
The kinds of services that can apply to a resource pooling strategy include data storage services,
processing services and bandwidth provided services. Other related terms include rapid
elasticity, which also involves the dynamic provisioning of services, and on-demand self-service, where customers can change their levels of service without actually contacting a
service provider. All of this automated service provisioning is a lot like other kinds of business
process automation, which replaced more traditional, labor-intensive strategies with new
innovations that rely on increasingly powerful virtual networks and data handling resources. In
these cases, the goal is to separate the client experience from the actual administration of assets,


so that the process of delivery is opaque and the services seem to be automatically and infinitely
available.
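The multi-tenant pooling described above can be sketched as follows. `ResourcePool` and its unit-based accounting are illustrative assumptions, not any provider's actual mechanism; the key property is that tenants draw from one shared pool while each sees only its own allocation:

```python
# Sketch: one physical pool serving multiple tenants. The provider manages
# total capacity; each tenant only ever sees its own share.

class ResourcePool:
    def __init__(self, total_units):
        self.total = total_units
        self.allocations = {}          # tenant -> units currently held

    def allocate(self, tenant, units):
        used = sum(self.allocations.values())
        if used + units > self.total:
            raise RuntimeError("pool exhausted")   # provider-side limit
        self.allocations[tenant] = self.allocations.get(tenant, 0) + units

    def release(self, tenant, units):
        # Freed units immediately become available to other tenants.
        self.allocations[tenant] = max(0, self.allocations.get(tenant, 0) - units)

    def usage(self, tenant):
        # A tenant can see only its own consumption, not its neighbours'.
        return self.allocations.get(tenant, 0)

pool = ResourcePool(total_units=100)
pool.allocate("tenant-a", 30)
pool.allocate("tenant-b", 50)
print(pool.usage("tenant-a"))   # prints 30; tenant-a is unaware of tenant-b
```

As long as the pool is large relative to any one tenant's demand, each tenant experiences the "seemingly infinite" resources the text describes, even though capacity is finite at the meta level.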

8. What is Rapid Elasticity in Cloud Computing?


Rapid Elasticity: Elasticity is, in effect, a rename of scalability, a non-functional requirement known in IT architecture for many years. Scalability is the ability to add or remove capacity, mostly processing, memory, or both, from an IT environment. Rapid elasticity is the ability to dynamically scale the services provided to match customers' need for space and other services; it is one of the five fundamental aspects of cloud computing.
It is usually done in two ways:
o Horizontal Scalability: Adding or removing nodes, servers, or instances to or from a
pool, such as a cluster or a farm.
o Vertical Scalability: Adding or removing resources to an existing node, server, or
instance to increase the capacity of a node, server, or instance.
Most implementations of scalability are implemented using the horizontal method, as it is the
easiest to implement, especially in the current web-based world we live in. Vertical Scaling is
less dynamic because this requires reboots of systems, sometimes adding physical components
to servers.
A well-known example is adding a load balancer in front of a farm of web servers that
distributes the requests.
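The load-balancer example can be sketched as a simple round-robin rotation over a server farm; adding a server to the list is exactly horizontal scale-out, since no individual server needs to grow. Server names here are illustrative:

```python
# Sketch: round-robin load balancer in front of a horizontally scaled farm.
import itertools

class LoadBalancer:
    def __init__(self, servers):
        self.servers = list(servers)
        self._cycle = itertools.cycle(self.servers)

    def route(self, request):
        # Hand each incoming request to the next server in rotation.
        server = next(self._cycle)
        return f"{request} -> {server}"

    def add_server(self, server):
        # Horizontal scale-out: extend the pool and rebuild the rotation.
        self.servers.append(server)
        self._cycle = itertools.cycle(self.servers)

lb = LoadBalancer(["web-1", "web-2"])
print(lb.route("GET /index"))   # prints "GET /index -> web-1"
print(lb.route("GET /about"))   # prints "GET /about -> web-2"
lb.add_server("web-3")          # scale out when load grows
```

Vertical scaling, by contrast, would mean giving `web-1` more CPU or memory, which typically requires a reboot, which is why horizontal scaling dominates in web-based environments, as the text notes.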
Why call it Elasticity?
Traditional IT environments have scalability built into their architecture, but scaling up or down isn't done very often, because of the time, effort, and cost involved. Servers have to be purchased, operations staff need to screw them into server racks, install and configure them, and the test team needs to verify that everything functions; only after all that is done can the capacity actually be used. And you don't just buy a server for a few months: typically it is a three- to five-year, long-term investment.
Elasticity does the same, but more like a rubber band: you 'stretch' the capacity when you need it and 'release' it when you don't. This is possible because of some of the other features of cloud computing, such as "resource pooling" and "on-demand self-service". Combining these features with advanced image management capabilities allows you to scale more efficiently.
Three forms for scalability:
Manual Scaling: Manual scalability begins with forecasting the expected workload on a cluster
or farm of resources, then manually adding resources to add capacity. Ordering, installing, and
configuring physical resources takes a lot of time, so forecasting needs to be done weeks, if not


months, in advance. It is mostly done using physical servers, which are installed and configured
manually.
Semi-automated Scaling: Semi-automated scalability takes advantage of virtual servers, which
are provisioned (installed) using predefined images. A manual forecast or automated warning of
system monitoring tooling will trigger operations to expand or reduce the cluster or farm of
resources.
Using predefined, tested, and approved images, every new virtual server will be the same as the others (except for some minor configuration), which gives you repeatable results. It also reduces manual labor on the systems significantly, and it is a well-known fact that manual actions on systems cause around 70 to 80 percent of all errors. There is also a huge cost benefit to using virtual servers: after a virtual server is de-provisioned, the freed resources can be used directly for other purposes.
Elastic Scaling (fully automatic Scaling): Elasticity, or fully automatic scalability, takes
advantage of the same concepts that semi-automatic scalability does but removes any manual
labor required to increase or decrease capacity. Everything is controlled by a trigger from the
System Monitoring tooling, which gives you this "rubber band" effect. If more capacity is needed now, it is added then and there, in minutes. Likewise, depending on the system monitoring tooling, the capacity is reduced as soon as it is no longer needed.
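The trigger-driven behaviour can be sketched as a threshold rule fed by monitoring samples. The CPU thresholds and the one-server-at-a-time step are illustrative assumptions, not any specific vendor's policy:

```python
# Sketch: a monitoring-driven "rubber band". When average load crosses a
# threshold, capacity is added (or released) with no manual step.

def autoscale(current_servers, avg_cpu, scale_up_at=80, scale_down_at=20):
    """Return the new server count for an observed CPU load (percent)."""
    if avg_cpu > scale_up_at:
        return current_servers + 1          # trigger: stretch, add capacity
    if avg_cpu < scale_down_at and current_servers > 1:
        return current_servers - 1          # trigger: release capacity
    return current_servers                  # within band: do nothing

servers = 2
for cpu in [85, 90, 50, 10]:                # samples from monitoring tooling
    servers = autoscale(servers, cpu)
print(servers)   # prints 3: grew to 4 under load, shrank when load dropped
```

In the semi-automated form described above, the same monitoring alert would instead notify an operator, who provisions the image manually; full elasticity simply closes that loop.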

9. What Does Measured Service Mean?


Measured service is a term applied to cloud computing. It refers to services where the cloud provider measures or monitors the provision of services for various reasons, including billing, effective use of resources, and overall predictive planning.
The idea of measured service is one of five components of a definition of cloud computing
supported by the National Institute of Standards and Technology or NIST. These five principles
support a higher-level definition of cloud services and describe how they are typically designed.
Other aspects of this definition include the terms 'rapid elasticity’ and 'resource pooling,’ which
cover different kinds of resource allocation. There’s also 'on-demand self-service,’ which refers
to more automated service changes, and 'broad network access,’ which refers to the overall
footprint and capabilities of cloud systems.
The NIST talks about measured service as a setup where cloud systems may control a user or
tenant’s use of resources by leveraging a metering capability somewhere in the system. The
general idea is that in automated remote services, these measurement tools will provide both the
customer and the provider with an account of what has been used. In more traditional systems,
items like invoices and service change agreements would fill these same roles. Measured
service ensures that even when there is no specific interaction for a service change, that service
change is still noted so that it can be negotiated or dealt with at a later date, for instance in a
billing cycle.
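A minimal sketch of such a metering capability follows. The resource names and unit rates are illustrative assumptions; the point is that both provider and tenant can derive the same account of usage from the recorded data:

```python
from collections import defaultdict

class Meter:
    """Sketch of a metering capability: record resource usage per tenant,
    then produce a billing-cycle summary. Rates are illustrative."""

    RATES = {"vm_hours": 0.05, "gb_stored": 0.02}   # assumed unit prices

    def __init__(self):
        self.usage = defaultdict(lambda: defaultdict(float))

    def record(self, tenant, resource, amount):
        # Recorded automatically, with no user interaction needed.
        self.usage[tenant][resource] += amount

    def invoice(self, tenant):
        # The same usage record serves billing, capacity planning, etc.
        return sum(self.RATES[r] * amt
                   for r, amt in self.usage[tenant].items())

meter = Meter()
meter.record("tenant-a", "vm_hours", 100)
meter.record("tenant-a", "gb_stored", 50)
print(meter.invoice("tenant-a"))   # 100*0.05 + 50*0.02 = 6.0
```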

Cloud Application Design Considerations


When designing applications for the cloud, irrespective of the chosen platform, I
have often found it useful to consider four specific topics during my initial
discussions: scalability, availability, manageability and feasibility.

It is important to remember that the items presented under each topic within this
article are not an exhaustive list and are aimed only at presenting a starting point
for a series of long and detailed conversations with the stakeholders of your
project, always the most important part of the design of any application. The aim
of these conversations should be to produce an initial high-level design and
architecture. This is achieved by considering these four key elements holistically
within the domain of the customer's project requirements, always remembering to
consider the side-effects and trade-offs of any design decision (i.e. what we gain
vs. what we lose, or what we make more difficult).

Scalability

Conversations about scalability should focus on any requirement to add additional
capacity to the application and related services to handle increases in load and
demand. It is particularly important to consider each application tier when
designing for scalability, how they should scale individually and how we can avoid
contention issues and bottlenecks. Key areas to consider include:

Capacity

 Will we need to scale individual application layers and, if so, how can we achieve
this without affecting availability?
 How quickly will we need to scale individual services?
 How do we add additional capacity to the application or any part of it?
 Will the application need to run at scale 24x7, or can we scale-down outside
business hours or at weekends for example?

Platform / Data

 Can we work within the constraints of our chosen persistence services while
working at scale (database size, transaction throughput, etc.)?
 How can we partition our data to aid scalability within persistence platform
constraints (e.g. maximum database sizes, concurrent request limits, etc.)?

 How can we ensure we are making efficient and effective use of platform
resources? As a rule of thumb, I generally tend towards a design based on many
small instances, rather than fewer large ones.
 Can we collapse tiers to minimise internal network traffic and use of resources,
whilst maintaining efficient scalability and future code maintainability?

Load

 How can we improve the design to avoid contention issues and bottlenecks? For
example, can we use queues or a service bus between services in a co-operating
producer, competing consumer pattern?
 Which operations could be handled asynchronously to help balance load at peak
times?
 How could we use the platform features for rate-leveling (e.g. Azure Queues,
Service Bus, etc.)?
 How could we use the platform features for load-balancing (e.g. Azure Traffic
Manager, Load Balancer, etc.)?
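The co-operating producer, competing consumer pattern mentioned above can be sketched with an in-process queue standing in for a cloud queue or service bus. The doubling step is a placeholder for real work; bursts accumulate in the queue instead of overwhelming the downstream tier:

```python
import queue
import threading

work = queue.Queue()          # stand-in for Azure Queues / Service Bus
results = []
lock = threading.Lock()

def consumer():
    """Competing consumer: each worker races to claim the next item."""
    while True:
        item = work.get()
        if item is None:              # sentinel: this worker shuts down
            work.task_done()
            return
        with lock:
            results.append(item * 2)  # placeholder for real processing
        work.task_done()

workers = [threading.Thread(target=consumer) for _ in range(3)]
for w in workers:
    w.start()
for i in range(10):                   # co-operating producer side
    work.put(i)
for _ in workers:                     # one shutdown sentinel per consumer
    work.put(None)
work.join()
for w in workers:
    w.join()
print(sorted(results))                # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```

Adding capacity under load then amounts to starting more consumers; nothing on the producer side needs to change.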

Availability

Availability describes the ability of the solution to operate in a manner useful to
the consumer in spite of transient and enduring faults in the application and
underlying operating system, network and hardware dependencies. In reality, there
is often some crossover between items useful for availability and scalability.
Conversations should cover at least the following items:

Uptime Guarantees

 What Service Level Agreements (SLAs) are the products required to meet?
 Can these SLAs be met? Do the different cloud services we are planning to use all
conform to the levels required? Remember that SLAs are composite.
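The note that SLAs are composite can be made concrete with a little arithmetic: for serially dependent services, availabilities multiply, so the overall guarantee is always lower than the weakest individual SLA. A small illustration, using hypothetical 99.95% and 99.9% SLAs:

```python
# Composite SLAs: a solution is only as available as the product of the
# availabilities of the services it depends on in series.
def composite_sla(*availabilities):
    result = 1.0
    for a in availabilities:
        result *= a
    return result

# e.g. a hypothetical 99.95% compute SLA combined with a 99.9% database SLA
overall = composite_sla(0.9995, 0.999)
print(round(overall * 100, 4))   # 99.8501 - below both individual SLAs
```

This is why a 99.95% promise to your customer may be impossible to honour once two or three dependent services are chained together.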

Replication and failover

 Which parts of the application are most at risk from failure?


 In which parts of the system would a failure have the most impact?
 Which parts of the application could benefit from redundancy and failover options?
 Will data replication services be required?

 Are we restricted to specific geopolitical areas? If so, are all the services we are
planning to use available in those areas?
 How do we prevent corrupt data from being replicated?
 Will recovery from a failure put excess pressure on the system? Do we need to
implement retry policies and/or a circuit-breaker?
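The retry and circuit-breaker ideas raised above can be sketched as follows. This is a minimal illustration with assumed thresholds; a production breaker would also need a half-open/reset state so the circuit can close again after the downstream service recovers:

```python
import time

def retry(fn, attempts=3, base_delay=0.01):
    """Retry a transient-fault-prone call with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise                        # out of attempts: surface the fault
            time.sleep(base_delay * 2 ** attempt)

class CircuitBreaker:
    """After `threshold` consecutive failures the circuit 'opens' and calls
    fail fast, so recovery traffic cannot hammer a struggling service."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0

    def call(self, fn):
        if self.failures >= self.threshold:
            raise RuntimeError("circuit open: failing fast")
        try:
            result = fn()
        except Exception:
            self.failures += 1               # count consecutive failures
            raise
        self.failures = 0                    # any success resets the count
        return result
```

Together these answer the question above: retries absorb transient faults, while the breaker stops recovery traffic from putting excess pressure on an already failing dependency.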

Disaster recovery

 In the event of a catastrophic failure, how do we rebuild the system?


 How much data, if any, is it acceptable to lose in a disaster recovery scenario?
 How are we handling backups? Do we have a need for backups in addition to data-
replication?
 How do we handle “in-flight” messages and queues in the event of a failure?
 Are we idempotent? Can we replay messages?
 Where are we storing our VM images? Do we have a backup?
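The idempotency question above can be illustrated with a sketch of an idempotent message handler: remembering processed message IDs means replaying "in-flight" messages after a failure cannot double-apply them. In practice the set of seen IDs would live in durable storage, not in memory:

```python
# Idempotent message handling: a replayed message is a harmless no-op.
processed_ids = set()
balance = {"acct-1": 0}

def handle_payment(msg_id, account, amount):
    if msg_id in processed_ids:   # duplicate delivery or post-crash replay
        return
    balance[account] += amount
    processed_ids.add(msg_id)

handle_payment("m-42", "acct-1", 100)
handle_payment("m-42", "acct-1", 100)   # same message replayed after a failure
print(balance["acct-1"])                # 100, not 200
```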

Performance

 What are the acceptable levels of performance? How can we measure that? What
happens if we drop below this level?
 Can we make any parts of the system asynchronous as an aid to performance?
 Which parts of the system are the most highly contended, and therefore most
likely to cause performance issues?
 Are we likely to hit traffic spikes which may cause performance issues? Can we
auto-scale or use queue-centric design to cover for this?

Security

This is clearly a huge topic in itself, but a few interesting items to explore which
relate directly to cloud-computing include:

 What is the local law and jurisdiction where data is held? Remember to include the
countries where failover and metrics data are held too.
 Is there a requirement for federated security (e.g. ADFS with Azure Active
Directory)?
 Is this to be a hybrid-cloud application? How are we securing the link between our
corporate and cloud networks?

 How do we control access to the administration portal of the cloud provider?


 How do we restrict access to databases, etc. from other services (e.g. IP Address
white-lists, etc.)?
 How do we handle regular password changes?
 How does service-decoupling and multi-tenancy affect security?
 How will we deal with operating system and vendor security patches and updates?

Manageability

This topic of conversation covers our ability to understand the health and
performance of the live system and manage site operations. Some useful cloud
specific considerations include:

Monitoring

 How are we planning to monitor the application?


 Are we going to use off-the-shelf monitoring services or write our own?
 Where will the monitoring/metrics data be physically stored? Is this in line with
data protection policies?
 How much data will our plans for monitoring produce?
 How will we access metrics data and logs? Do we have a plan to make this data
useable as volumes increase?
 Is there a requirement for auditing as well as logging?
 Can we afford to lose some metrics/logging/audit data (i.e. can we use an
asynchronous design to “fire and forget” to help aid performance)?
 Will we need to alter the level of monitoring at runtime?
 Do we need automated exception reporting?
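The asynchronous "fire and forget" idea above can be sketched with a bounded in-process queue standing in for a metrics backend. The hot path never blocks, and by design it accepts dropping data when the queue is full:

```python
import queue
import threading

metrics = queue.Queue(maxsize=1000)   # bounded: protects the request path
stored = []

def emit(event):
    """Hot-path call: enqueue and return immediately, never block."""
    try:
        metrics.put_nowait(event)
    except queue.Full:
        pass            # acceptable loss, per the design decision above

def drain():
    """Background consumer: forwards events to the metrics backend."""
    while True:
        event = metrics.get()
        if event is None:          # sentinel: stop draining
            return
        stored.append(event)       # stand-in for a real metrics/log store

t = threading.Thread(target=drain, daemon=True)
t.start()
for i in range(5):
    emit({"latency_ms": 10 * i})
metrics.put(None)
t.join()
print(len(stored))   # 5
```

The trade-off is explicit: request latency is protected at the cost of occasionally losing a metric, which the questions above ask you to decide consciously.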

Deployment

 How do we automate the deployment?


 How do we patch and/or redeploy without disrupting the live system? Can we still
meet the SLAs?
 How do we check that a deployment was successful?
 How do we roll-back an unsuccessful deployment?

 How many environments will we need (e.g. development, test, staging, production)
and how will we deploy to each of them?
 Will each environment need separate data storage?
 Will each environment need to be available 24x7?

Feasibility

When discussing feasibility we consider the ability to deliver and maintain the
system, within budgetary and time constraints. Items worth investigating include:

 Can the SLAs ever be met (i.e. is there a cloud service provider that can give the
uptime guarantees that we need to provide to our customer)?
 Do we have the necessary skills and experience in-house to design and build cloud
applications?
 Can we build the application to the design we have within budgetary constraints
and a timeframe that makes sense to the business?
 How much will we need to spend on operational costs (cloud providers often have
very complex pricing structures)?
 What can we sensibly reduce (scope, SLAs, resilience)?
 What trade-offs are we willing to accept?

Conclusion

The consideration of these four topics (availability, scalability, manageability and
feasibility) will help you discover areas in your application that require some
cloud-specific thought, specifically in the early stages of a project. The items listed
under each are definitely not exhaustive, but should give you a good starting point
for discussion.

The Need For Cloud Reference Architecture (CRA)


Before digging into the definition of the Cloud Reference Architecture (CRA) and
its benefits, it is better to look at how things can go wrong without having one.
You will quickly realize that it is better to spend some time before migration to
plan your cloud migration journey with security and governance in mind. Doing
that will not only save you time and money but will help you meet your security
and governance needs. So let’s get started.

When organizations start planning their cloud migration, and like anything else
new, they start by trying and testing some capabilities. Perhaps they start hosting
their development environment in the cloud while keeping their production one on-
premises.

It is also common to see small and isolated applications being migrated first,
perhaps because of their size and low criticality, and to give the cloud a chance to
prove it is trustworthy. After all, migration to the cloud is a journey and doesn’t
happen overnight.

Then the benefits of cloud solutions became apparent and companies started to
migrate multiple large-scale workloads. As more and more workloads move to the
cloud, many organizations find themselves dealing with workload islands that are
managed separately with different security models and independent data flows.

Even worse, with the pressure to get new applications deployed in the cloud
quickly under strict deadlines, developers find themselves rushing to consume new
cloud services without reasonable consideration of the organization’s security and
governance needs.
The unfortunate result in most cases is to end up with a cloud infrastructure that is
hard to manage and maintain. Each application could end up deployed in a separate
island with its own connectivity infrastructure and with poor access management.

Managing the cost of running workloads in the cloud also becomes a challenge. There is
no clear governance and accountability model, which leads to a lot of management
overhead and security concerns.
Governance, automation, naming conventions and security models are
even harder to establish afterwards. In fact, it is a nightmare to look at a poorly
managed cloud infrastructure and then try to apply security and governance
afterwards, because these need to be planned ahead before even deploying any
cloud resources.

Even worse, data can be hosted in geographies that violate corporate compliance
requirements, which is a big concern for most organizations. I remember
asking my customers whether they knew where their cloud data was hosted, and
most of them simply didn’t know.

The Benefits of Cloud Reference Architecture (CRA)


Simply put, the Cloud Reference Architecture (CRA) helps organizations address
the need for detailed, modular and current architecture guidance for building
solutions in the cloud.
The Cloud Reference Architecture (CRA) serves as a collection of design
guidance and design patterns to support a structured approach to deploying services and
applications in the cloud. This means that every workload is deployed with
security, governance and compliance in mind from day one.

The ISO/IEC 17789 Cloud Computing Reference Architecture defines four
different views of the Cloud Reference Architecture (CRA):
 User View
 Functional View
 Implementation View
 Deployment View

We will be focusing on the Deployment View of the Cloud Reference
Architecture (CRA) for now.
The Cloud Reference Architecture (CRA) Deployment View provides a framework
to be used for all cloud deployment projects, which reduces the effort during
design and provides upfront guidance for a deployment aligned to architecture,
security and compliance.

You can think of the Cloud Reference Architecture (CRA) Deployment View as the
blueprint for all cloud projects. What you get from this blueprint, the end goal if
you are wondering, is to help you quickly develop and implement cloud-based
solutions, while reducing complexity and risk.
Therefore, having a foundation architecture not only helps you ensure security,
manageability and compliance but also consistency for deploying resources. It
includes network, security, management infrastructure, naming convention, hybrid
connectivity and more.
I know what you might be thinking right now: how does one blueprint fit the needs
of organizations of different sizes? Since not all organizations are the same, the
Cloud Reference Architecture (CRA) Deployment View does not outline a single
design that fits all sizes. Rather, it provides a framework for decisions based on
core cloud services, features and capabilities.
The Need for Enterprise Scaffold
One of the main concepts of the Cloud Reference Architecture (CRA) that I would
like to share with you today is the concept of an enterprise scaffold.
Let’s start from the beginning. When you decide to migrate to the cloud and take
advantage of all that the cloud has to offer, there are a couple of concerns that you
should address first. Things like:

 A way to manage and track cost effectively (how can you know what resources are
deployed so you can account for it and bill it back accurately).
 Establishing governance framework to address key issues like data sovereignty.

 Deploy with a security-first mindset (defining clear management roles, access
management, and security controls across all deployments).
 Building trust in the cloud (have peace of mind that cloud resources are managed
and protected from day one).
These concerns are top priority for every organization when migrating to the cloud
and should be addressed early in the cloud migration planning phase.
To address all these key concerns, you need to think of adopting a framework or
an enterprise scaffold that can help you move to the cloud with confidence. Think
about how engineers build a building. They start by creating the basis of the
structure (scaffold) that provides anchor points for more permanent systems to be
mounted.
The same applies when deploying workloads in the cloud. You need an enterprise
scaffold that provides structure to the cloud environment and anchors for services
built on top. It is the foundation that builders (IT teams) use to build services with
speed of delivery in mind. The enterprise scaffold ensures that workloads you
deploy in the cloud meet the minimum security and governance practices your
organization is adopting while giving developers the ability to deploy services and
applications quickly to meet their goals and deadlines, which is a win-win solution.

To accomplish this, we need to define the components of the cloud reference
architecture that we will use to build a secure, compliant and flexible framework
on top of which developers can build applications with agility and speed of
delivery in mind.
At the core of building an enterprise scaffold for cloud migration is the Enterprise
Structure Layer, which acts as the foundation on which all other layers are built.
Here you define a hierarchy that maps to your organization departments and cost
centers to govern spending and get visibility of cost across departments, line of
business applications or business units. On top, you define a Management
Hierarchy that gives you even more flexibility when assigning permissions and
applying policies to enforce your governance in the cloud.
With that carefully defined, you start adopting key best practices and patterns that
map to your organization’s maturity level. You can think of these as
the Deployment Essentials, which include establishing a proper naming
convention, deploying with automation, and using Infrastructure as Code instead of
the web interface to deploy resources; manual portal changes can cause a snowball
effect that in the future becomes hard to manage, track or even audit. The idea
here is to have a consistent way of deploying resources over and over again. Not
only does it give you the speed of delivery we all want, but also peace of
mind that what you verified as a compliant environment in code is the blueprint
used to deploy resources across your subscriptions.
Now it is time to start building the foundation infrastructure and this is the Core
Networking layer. At this layer, governance can be achieved using different
technologies that help you isolate workloads and deploy security controls to monitor and
inspect traffic across your cloud infrastructure. One of the best recommendations
here is to use a hub-and-spoke topology and adopt the shared services model, where
common resources are consumed by different LOB applications; this has many
benefits that we will discuss in greater detail later.
In this layer, you decide how to extend your on-premises data center to the cloud.
You also define how to design and implement isolation using virtual networks and
user-defined routes. This is also the time when you deploy Network Virtual
Appliances (NVAs) and firewalls to inspect data flow inside your cloud
infrastructure.

Another key feature of the cloud is Software Defined Networking (SDN), which
gives you the opportunity to do micro-segmentation by implementing Network
Security Groups and Application Security Groups to better control traffic even
within subnets, not only at the edge of the network. This is an evolution of how
we think about isolation and protection in such an elastic cloud computing
environment.
After you are done with the core networking layer, and just before deploying your
resources, you should consider how you are going to enforce Resource
Governance. This is important because the goal of the cloud reference architecture
is to give developers more control and freedom to deploy workloads quickly and
meet their deadlines, while adhering to corporate security and governance needs.
One way to achieve this balance is by applying resource tags, implementing cost
management controls, and also by translating your organizational governance rules
and policies into Azure policies that govern the usage of cloud resources.
Once all this foundation work is finished, you can start planning how to deploy
your line of business applications (LOB applications). Most likely you need to
define different application life-cycle environments like (Production, Dev, and
QA).
Here you can also establish a shared services workspace to host shared
infrastructure resources for your line of business applications to consume. If one of
your business applications requires connectivity to on-premises resources, it can
use, for example, the VPN gateway deployed in the shared services
workspace instead of implementing a gateway for each application’s workspace.
The shared services workspace is a key element when defining your CRA, as it
hosts shared services like domain controllers, DNS services, jumpbox devices and
security controls like firewalls.
But your job is far from finished, as security is a never-ending process, and this is
where the Security Layer comes into the picture. Here you define a proper identity
and access management model using Azure RBAC. Security practices like
patching, encryption and secure DevOps are key areas in this layer.
Furthermore, to gain the visibility and control you need in such a rapidly changing
environment, you need to think of a security-as-a-service model that natively
integrates with the cloud platform and services. Here you can use Azure Security
Center to assess your environment for vulnerabilities, but also as an enabler for your
incident response in the cloud, as you need to detect and remediate security
incidents.
You can also implement Just-in-Time Virtual Machine Access to lock down
management ports on your virtual machines. If you are in a highly regulated
environment, you can also look at VNet Service Endpoints to protect access to
PaaS services like Azure Storage, so that access to these services does not pass
through the public internet.
With all this in mind, you need to consider Business Continuity, high availability
and backup, and here I want to remind you of the shared responsibility model of the
cloud. You are responsible for many things, which might include planning how to
do backups, how to design for high availability and even for disaster recovery.
And finally, think about monitoring and auditing in the cloud. Is there a
performance bottleneck that you should address right away? Do you require that
changes to your cloud environment are audited? If so, where are you going to keep the
logs: are you going to integrate with your on-premises SIEM solution, or use a
cloud logging mechanism, and if the latter, does that solution retain the logs for the
duration you need?

CLOUD APPLICATION DESIGN METHODOLOGIES

The 12 Factor App is a design methodology introduced to manage cloud-based or
Software as a Service (SaaS) apps. Some of the key features or applications of this
design methodology are as follows:

 Use declarative formats for setup automation, to minimize time and cost for
new developers joining the project
 Have a clean contract with the underlying operating system, offering
maximum portability between execution environments
 Are suitable for deployment on modern cloud platforms (Google Cloud,
Heroku, AWS, etc.), obviating the need for servers and systems
administration
 Minimize divergence between development and production, enabling
continuous deployment for maximum agility; such apps can scale up without
significant changes to tooling, architecture, or development practices

If we simplify this further, the 12 Factor App design methodology is nothing but
a collection of 12 factors which act as building blocks for developing or deploying
an app in the cloud. Listed below are the 12 factors:

1. Codebase: A 12 Factor App is always tracked in a version control system
such as Git or Apache Subversion (SVN) in the form of a code repository.
This essentially helps you build your code on top of one codebase,
fully backed up, with many deployments and revision control. As there is a
one-to-one relationship between a 12 factor app and its codebase repository,
a system with multiple repositories must be considered a distributed system
consisting of multiple 12 factor apps.
2. Dependencies: As the app is standalone and needs to install dependencies, it
is important to explicitly declare and isolate dependencies. Moreover, it is
always recommended to keep your development, production and QA
environments identical. This helps you build applications that can scale to
web workloads with no room for error. As a solution, you can use a
dependency isolation tool, such as virtualenv for Python, uniformly across
both development and production environments, so that the explicit
dependency declaration remains the only source of dependencies.
3. Config: This factor manages the configuration information for the app. Here
you store your configuration files in the environment. This factor focuses on
how you store your data – the database Uniform Resource Identifier (URI)
will be different in development, QA and production.
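Factor 3 can be sketched as follows; the variable names and database URI below are illustrative, not tied to any particular platform. The same build reads its configuration from the environment, so development, QA and production differ only in environment variables, never in code:

```python
import os

# For the demo we provide a dev value; in QA/production the deployment
# platform would set DATABASE_URI instead. (Names are illustrative.)
os.environ.setdefault("DATABASE_URI", "postgres://localhost/dev_db")

def get_config():
    """Read all configuration from the environment (12-factor, factor 3)."""
    return {
        "database_uri": os.environ["DATABASE_URI"],        # required
        "log_level": os.environ.get("LOG_LEVEL", "INFO"),  # with a default
    }

print(get_config()["database_uri"])
```

Swapping the production database then means changing one environment variable at deploy time, with no rebuild.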

4. Backing Services: This covers the management of backing services (a
local database service or any third-party service) that the app consumes over a
network connection. In a 12 factor app, the interface used to connect to these
services should be defined in a standard way. You need to treat backing
services like attached resources, because you may want different databases
depending on which team you are working with. Sometimes developers will
want a lot of logs, while QA will want less. With this method, even each
developer can have their own config file.

5. Build, Release, Run: It is important to keep the build and run stages strictly
separate, making sure everything has the right libraries. For this, you can make
use of automation and tooling to generate build and release packages
with proper tags. This is further backed up by running the app in the
execution environment while using proper release management tools, such as
Capistrano, to ensure timely rollback.
5. Stateless Processes: This factor is about making sure the app executes in
the execution environment as one or more stateless processes. In other words,
you want to make sure that all your data is stored in a backing store, which
gives you the freedom to scale out and do whatever you need to do. With


stateless processes, you do not want to have a state that you need to pass
along as you scale up and out.
6. Port Binding: Twelve factor apps are self-contained and do not rely on
runtime injection of a web server into the execution environment to create a
web-facing service. With the help of port binding, you can directly access
your app via a port to know if it’s your app or any other point in the stack
that is not working properly.
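A minimal sketch of port binding using only Python's standard library: the app itself exports HTTP on a port taken from the environment, instead of relying on a web server injected at runtime (the `PORT` variable and the handler are illustrative):

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class PingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # answer on any path, so you can probe the app directly via its port
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"app is up\n")

def make_server(port=None):
    # The execution environment supplies the port through $PORT; the app
    # exports HTTP itself rather than depending on an external web server.
    if port is None:
        port = int(os.environ.get("PORT", "8000"))
    return HTTPServer(("0.0.0.0", port), PingHandler)

# make_server().serve_forever()  # start serving on $PORT
```

Hitting the bound port directly tells you immediately whether the app, rather than some other layer of the stack, is at fault.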
7. Concurrency: This factor looks into the best practices for scaling the app.
These practices are used to manage each process in the app independently
i.e. start/stop, clone to different machines, etc. The factor also deals with
breaking your app into much smaller pieces and then looking for services out
there that you either have to write or can consume.
8. Disposability: Your app might have multiple processes handling different
tasks, so this factor looks into the robustness of the app through fast
startup and graceful shutdown. Disposability is about making sure your app
can start up and shut down quickly and can handle a crash at any time. You
can use some high quality robust queuing backend (Beanstalk, RabbitMQ
etc.) that would help return unfinished jobs back to the queue in the case of a
failure.
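The idea can be sketched with Python's standard `signal` module; the in-memory list below stands in for a real queuing backend such as RabbitMQ or Beanstalk, and all names are illustrative:

```python
import signal

class Worker:
    """Disposability sketch: shut down gracefully on SIGTERM and return
    unfinished jobs to the queue instead of losing them. A real app would
    use a robust queuing backend (RabbitMQ, Beanstalk); a list stands in here.
    """

    def __init__(self, queue):
        self.queue = queue
        self.running = True
        signal.signal(signal.SIGTERM, self._shutdown)

    def _shutdown(self, signum, frame):
        self.running = False  # stop after the current job, never mid-job

    def run_once(self, handler):
        job = self.queue.pop(0)
        try:
            handler(job)
        except Exception:
            self.queue.append(job)  # requeue unfinished work on failure
```

Because work is requeued on failure, a crashed or terminated process costs no jobs, only a retry.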
9. Dev/Prod Parity: Development, staging and production should be as similar
as possible. In case of continuous deployment, you need to have continuous
integration based on matching environments to limit deviation and errors.
Some of the features of keeping the gap between development and
production small are as follows:
1. Make the time gap small: a developer may write code and have it
deployed hours or even just minutes later.
2. Make the personnel gap small: developers who wrote code are
closely involved in deploying it and watching its behavior in
production.


3. Make the tools gap small: keep development and production as


similar as possible.
10. Logs: Logging mechanisms are critical for debugging. Having proper
logging mechanisms allows you to output the log info as a continuous event
stream rather than managing an entire database of log files. Then, depending
on the configuration, you can decide where that log stream will be published.
11. Admin Processes: One-off admin processes help in collecting data from the
running application. In order to avoid any synchronization issues, you need
to ensure that all these processes are a part of all deploys.

DATA STORAGE APPROACHES

Cloud storage is a service that allows data to be saved on an offsite storage
system managed by a third party and made accessible through a web services API.

Storage Devices

Storage devices can be broadly classified into two categories:


• Block Storage Devices
• File Storage Devices
Block Storage Devices
The block storage devices offer raw storage to the clients. These raw storage are
partitioned to create volumes.
File Storage Devices
The file Storage Devices offer storage to clients in the form of files, maintaining
its own file system. This storage is in the form of Network Attached Storage
(NAS).

Cloud Storage Classes

Cloud storage can be broadly classified into two categories:


• Unmanaged Cloud Storage
• Managed Cloud Storage
Unmanaged Cloud Storage


Unmanaged cloud storage means the storage is preconfigured for the customer.
The customer can neither format the storage, install his or her own file
system, nor change drive properties.
Managed Cloud Storage
Managed cloud storage offers online storage space on-demand. The managed cloud
storage system appears to the user to be a raw disk that the user can partition and
format.

Creating Cloud Storage System

The cloud storage system stores multiple copies of data on multiple servers,
at multiple locations. If one system fails, only the pointer to the location
where the object is stored needs to be changed.
To aggregate storage assets into cloud storage systems, the cloud provider can
use storage virtualization software known as StorageGRID. It creates a
virtualization layer that pools storage from different storage devices into a
single management system. It can also manage data from CIFS and NFS file
systems over the Internet. The following diagram shows how StorageGRID
virtualizes the storage into storage clouds:

Virtual Storage Containers

The virtual storage containers offer high-performance cloud storage systems.
Logical Unit Numbers (LUNs) of devices, files and other objects are created in
virtual storage containers. The following diagram shows a virtual storage
container defining a cloud storage domain:

Challenges

Storing data in the cloud is not a simple task. Apart from its flexibility and
convenience, it presents several challenges to customers. The customers must
be able to:
• Provision additional storage on demand.
• Know and restrict the physical location of the stored data.
• Verify how data was erased.


• Have access to a documented process for disposing of data storage hardware.
• Have administrator access control over data.

Live streaming protocols:


Live streaming is a social media feature on platforms like Facebook and
Instagram that invites brands and users to share unedited, raw footage in real
time.
In live streaming there are six common protocols:
1) HTTP Live Streaming (HLS)
2) Real-Time Messaging Protocol (RTMP)
3) Secure Reliable Transport (SRT)
4) Microsoft Smooth Streaming (MSS)
5) Dynamic Adaptive Streaming over HTTP (MPEG-DASH)
6) WebRTC

Before going to know about streaming protocols let us


know about codecs:
A codec (the term is a mashup of the words code and decode) is a
computer program that uses compression to shrink a large movie
file or convert between analog and digital sound. You might see the
word used when talking about audio codecs or video codecs.
Common codecs:
Some common codecs are MP3, WMA, RealVideo, RealAudio, XviD, H.264 and H.265.


1> HTTP Live Streaming (HLS)


HLS stands for HTTP Live Streaming, and today it is the most widely used
streaming protocol on the internet. However, this was not always the case:
when Flash was still around, the top streaming protocol was RTMP.
HLS is an adaptive bitrate protocol and also uses HTTP servers. This
protocol is an evolving specification, as Apple continually adds
features and regularly improves HLS.
Video Codecs Supported:
H.264
H.265 / HEVC

Audio Codecs Supported:


AAC
MP3

Transport/Package Format:
MPEG-2 TS

2. Real-Time Messaging Protocol (RTMP):


RTMP was developed by Macromedia with the primary use case of
working with Adobe Flash player, but as you already know, Flash
player is now dead.
To understand the popularity of RTMP as a delivery protocol,
consider that at one point, Adobe Flash Player was installed in about
99% of desktops in the West. RTMP was heavily used for many years.


RTMP has limited playback support nowadays. Instead, RTMP is


now used for ingestion from the encoder to the online video platform.
Video Codecs Supported:
H.264
MP4
x264

Audio Codecs Supported:


AAC-LC
AAC

Transport/Package Format:
RTMP does not use a separate transport/package format.
3. Secure Reliable Transport (SRT): Secure Reliable Transport
(SRT) is a relatively new streaming protocol from Haivision,
a leading player in the online streaming space. SRT is an open-source
protocol that is likely the future of live streaming.
This video streaming protocol is known for its security, reliability, and
low latency streaming.
SRT is still somewhat ahead of its time, because there are still some
compatibility limitations with this protocol.

The protocol itself is open-source and highly compatible, but much streaming
hardware and software has yet to add support for it.


Video Codecs Supported:


SRT is media and content agnostic, so it supports all video codecs.

Audio Codecs Supported:


SRT is media and content agnostic, so it supports all audio codecs.

Transport/Package Format:
SRT is media and content agnostic, so it supports all transport and package
formats.

4. Microsoft Smooth Streaming (MSS)


Before we dive deep into Microsoft Smooth Streaming (MSS), you should
know that it's no longer in use as of 2022. But we believe it's still helpful
to talk about it, to show that no protocol is bulletproof, even when a big
name like Microsoft is behind it.
Microsoft Smooth Streaming is what allowed you to stream content on an Xbox
360, Silverlight, Windows Phone 7, and a few other connected TV platforms back
in the day. It was also used as the streaming protocol for NBC's online
platform during the 2008 Summer Olympics.

Despite the failure of MSS, Microsoft is still behind a few other protocols
like MPEG DASH. Although MSS was promising in its early days, tech
enthusiasts could see that Silverlight wasn’t going to last long and as a result,
MSS came crashing down with it.

Video Codecs Supported:


H.264
VC-1

Audio Codecs Supported:


AAC
WMA

Transport/Package Format:
MP4 fragments


5. Dynamic Adaptive Streaming over HTTP (MPEG-DASH):
The next protocol in our review is MPEG-DASH. This is one of the newest
streaming protocols, and it is beginning to see broader adoption.
MPEG-DASH is also an adaptive bitrate (ABR) protocol. This means it will
automatically detect changes in the internet connection speed of the viewer
and serve the best available quality video at any given time. ABR streaming
reduces buffering and enhances the viewers’ experience.
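The adaptive bitrate idea can be sketched in a few lines of Python; the bitrate ladder and the 80% safety headroom below are illustrative assumptions, not part of the DASH specification:

```python
# Hypothetical renditions of one stream, as (height, bitrate in kbps);
# a DASH manifest (MPD) advertises a ladder like this to the player.
LADDER = [(1080, 6000), (720, 3000), (480, 1500), (360, 800)]

def pick_rendition(measured_kbps, ladder=LADDER, headroom=0.8):
    """Return the best rendition whose bitrate fits within the measured
    bandwidth (with safety headroom), falling back to the lowest rung."""
    budget = measured_kbps * headroom
    for height, kbps in ladder:  # ladder is sorted from highest to lowest
        if kbps <= budget:
            return height, kbps
    return ladder[-1]  # connection too slow for everything: serve the lowest
```

The player re-measures bandwidth every few segments and calls logic like this again, which is what makes the stream "adaptive" and reduces buffering.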
Although most web browsers support MPEG-DASH, a big downside to consider when
learning about the protocol is that iOS and Safari do not yet support it
natively.
Video Codecs Supported:
H.264 (the most common codec)
H.265 / HEVC (the next-generation successor)
WebM
VP9/10
Any other codec (MPEG-DASH is codec agnostic)

Audio Codecs Supported:


AAC
MP3
Any other codec (MPEG-DASH is codec agnostic)

Transport/Package Format:
MP4 fragments
MPEG-2 TS


6. WebRTC:
Web Real-Time Communication (WebRTC) is relatively new compared to
the others on our list and technically not considered a streaming protocol, but
often talked about as though it is.
It is what’s largely responsible for your ability to participate in live video
conferences directly in your browser.
WebRTC supports adaptive bitrate streaming in the same way that HLS and
MPEG-DASH do.
Microsoft Teams, which exploded in popularity during the pandemic, uses
WebRTC for both audio and video communications.
Video Codecs Supported:
H.264
VP8 + VP9

Audio Codecs Supported:


PCMU
PCMA
G.711
G.722
Opus

Playback Support:
Native support on Android devices
As of 2020, iOS Safari 11 and newer versions support WebRTC
Works on Google Chrome, Mozilla Firefox, and Microsoft Edge
Supported by YouTube and Google


What is Transcoding?
Firstly, transcoding needs to be differentiated from two other easily confused digital video
processes: compression and transmuxing/rewrapping.
“Transcoding” as an umbrella term
Essentially, transcoding is a two-step process in which (encoded) data is
decoded to an intermediate format and then encoded into a target format.
Several tasks might fall under the larger umbrella when someone refers to
transcoding video content:
Transrating

Transrating is a more specific type of transcoding that is intended specifically


to change bitrate. So it’s the same video content, video format, and codec and
the alteration is the bitrate: you might want to bring an 8Mbps bitrate down to
3Mbps, making it possible for the media to fit into less storage space or be
broadcast over a lower bandwidth connection.
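The 8 Mbps to 3 Mbps example works out as follows; this is a rough Python sketch that ignores container overhead:

```python
def stream_size_gb(bitrate_mbps, duration_s):
    # size = bitrate (bits/s) x duration (s), converted to gigabytes
    return bitrate_mbps * 1_000_000 * duration_s / 8 / 1e9

hour = 3600
original = stream_size_gb(8, hour)    # the 8 Mbps source
transrated = stream_size_gb(3, hour)  # after transrating to 3 Mbps
# one hour at 8 Mbps is 3.6 GB; at 3 Mbps it is 1.35 GB
```

So transrating the same hour of content saves well over half the storage space, and the stream now fits through a 3 Mbps connection.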
Transsizing

This is another specific type of transcoding that is used to resize a video


frame (this may also be called “image scaling”), for example, bringing down
4K resolution to 1080p.
Examples of transcoding
• Video live streams: live streams allow numerous viewers to watch a live
video broadcast.
• IP streaming: IP (internet protocol) streaming uses satellite, cable, or
other television structures to present audio and video content.
• Audio and video uploading
• Edge computing
• Locally transcoded files


What Is Video Transcoding?


The transcoding process creates a copy of a video file in a format that’s
suitable for playback on your platform of choice at the quality and in the file
size you desire. In a nutshell, transcoding decodes encoded data into an
intermediate format and then encodes it into the target format.
How Do You Transcode Video?
Even though the video-transcoding process varies from case to case, a few
practices are in popular use. Here is an example:
Step 1: Inputs
Two inputs are required:
• Details of the input file or format, which you can determine by analyzing
the transcoded media.
• The complete set of parameters for the output, which you can access from the
rules within the media's supply-chain system. Alternatively, you can derive it
from the metadata according to the variables in the workflow engine.

Step 2: Transcoding pipeline


The pipeline involves processing all the components, such as video, ANC,
and audio, in the transcoded file. When transcoding starts, the metadata is
split as needed among the components. The transcoder analyzes the file and
instructions and then configures the pipe. To shorten the process,
concatenate the steps involved instead of performing them sequentially.
Here is an example of how transcoding works with one component—a video:
1. Demux the input: Extract all the compressed data from the package, the
wrapper, or both.


2. Decode the video: Decompress the file as close as possible to the original
uncompressed frames.
3. Process the video: Scale, interlace, deinterlace, or perform advanced image-
processing steps that change picture elements to improve the perceived result
of encoding.
4. Encode the video: Do that with the required destination codec.
5. Mux the video: Pack it into a wrapper or package, combining the video with
other components as required.
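Assuming the ffmpeg CLI is available, the five steps above collapse into a single command line; the file names, target codec and bitrate below are illustrative:

```python
def build_transcode_cmd(src, dst, codec="libx264", height=720, bitrate="3M"):
    """Assemble an ffmpeg invocation mirroring the pipeline above in one
    pass: ffmpeg demuxes and decodes src, the scale filter processes the
    frames, codec re-encodes them, and the output muxer packages dst.
    (Assumes the ffmpeg CLI is installed; names here are illustrative.)"""
    return [
        "ffmpeg", "-i", src,             # demux + decode the input
        "-vf", f"scale=-2:{height}",     # process: resize, keep aspect ratio
        "-c:v", codec, "-b:v", bitrate,  # encode with the destination codec
        "-c:a", "aac",                   # re-encode the audio component
        dst,                             # mux: container chosen by extension
    ]

cmd = build_transcode_cmd("master.mov", "out.mp4")
# run with: subprocess.run(cmd, check=True)
```

Concatenating the steps into one process like this, rather than writing intermediate files between each stage, is exactly the shortening the text describes.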

What Are the Types of Video Transcoding?


There are three main types:
Standard transcoding, which involves changing the video or audio to
transcode a video or stream. For example, streaming a digital conference
usually requires working with IP cameras set within a conference space. If
those cameras use the Real Time Streaming Protocol (RTSP) and cannot
create a video stream suitable for online playback, you can convert the
content into an adaptive bitrate stream by transcoding the video.

Transsizing (image scaling), which involves resizing a video frame. For
example, you can lower a 4K resolution to 1,080 pixels.

Transrating, which
changes the bitrate without modifying the video content,
format, or codec. For example, to ensure that the video can fit into less
storage space or can be broadcast over a lower bandwidth connection,
reduce an 8-Mbps bitrate to 3-Mbps.

Transcoding conversions comprise three types:


Lossless to lossless, which maintains the quality of the video across formats,
enabling you to take advantage of more effective hardware or compression
algorithms.
Lossless to lossy, which reduces the quality of the video but yields a smaller
and faster file or one that’s compatible with the requirements of a certain
platform, browser, or player.
Lossy to lossless, which ensures that the video quality does not deteriorate
further during the conversion process. Note that, by adopting this process, you
cannot regain the data and quality previously lost through compression.


Case study of the Bento app:

See the Bento Case Study (bento-video.github.io).


PYTHON FOR AMAZON WEB SERVICES


Usability
Amazon EMR is designed to simplify building and operating big data environments and applications.
Related EMR features include provisioning, managed scaling, and reconfiguring of clusters, and
EMR Studio for collaborative development.

Provision clusters in minutes

You can launch an EMR cluster in minutes. The service is designed to automate infrastructure
provisioning, cluster setup, configuration, and tuning. EMR takes care of these tasks allowing you to
focus your teams on developing differentiated big data applications.
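With Python's boto3 SDK, launching a cluster is one `run_job_flow` call. The sketch below only assembles the request parameters, which you would pass as `boto3.client("emr").run_job_flow(**params)`; the release label, instance types, counts and role names are illustrative defaults, not recommendations:

```python
def emr_cluster_params(name, release="emr-6.15.0", workers=2):
    # Request parameters for boto3's EMR run_job_flow call; values here
    # (release label, instance types, role names) are illustrative only.
    return {
        "Name": name,
        "ReleaseLabel": release,
        "Applications": [{"Name": "Spark"}, {"Name": "Hive"}],
        "Instances": {
            "InstanceGroups": [
                {"Name": "Master", "InstanceRole": "MASTER",
                 "InstanceType": "m5.xlarge", "InstanceCount": 1},
                {"Name": "Core", "InstanceRole": "CORE",
                 "InstanceType": "m5.xlarge", "InstanceCount": workers},
            ],
            "KeepJobFlowAliveWhenNoSteps": True,  # keep cluster up after steps
        },
        "JobFlowRole": "EMR_EC2_DefaultRole",
        "ServiceRole": "EMR_DefaultRole",
    }

params = emr_cluster_params("analytics-cluster")
```

EMR then handles provisioning, configuration and tuning of the instances described by these parameters.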

Scale resources to meet business needs

You can set scale out and scale in using EMR Managed Scaling policies and let your EMR cluster
automatically manage the compute resources to meet your usage and performance needs. This
improves cluster utilization.

EMR Studio

EMR Studio is an integrated development environment (IDE) that makes it easier for data scientists
and data engineers to develop, visualize, and debug data engineering and data science applications
written in R, Python, Scala, and PySpark. EMR Studio provides managed Jupyter Notebooks, and
tools like Spark UI and YARN Timeline Service to simplify debugging.

High availability

You can configure high availability for multi-master applications such as YARN, HDFS, Apache
Spark, Apache HBase, and Apache Hive. When you enable multi-master support in EMR, EMR is
designed to configure these applications for High Availability, and in the event of failures,
automatically fail-over to a standby master so that your cluster is not disrupted, and place your
master nodes in distinct racks to reduce risk of simultaneous failure. Hosts are monitored to detect
failures, and when issues are detected, new hosts are provisioned and added to the cluster
automatically.

EMR Managed Scaling

With EMR Managed Scaling you specify the minimum and maximum compute limits for your clusters
and Amazon EMR automatically resizes them for improved performance and resource utilization.
EMR Managed Scaling is designed to continuously sample key metrics associated with the
workloads running on clusters.
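A Managed Scaling policy states only the compute limits, and EMR resizes the cluster within them. This hedged Python sketch builds the structure you would pass to boto3's `put_managed_scaling_policy` call (cluster id and client setup omitted; the numbers are illustrative):

```python
def managed_scaling_policy(min_units, max_units, unit_type="Instances"):
    # Policy body for boto3's EMR put_managed_scaling_policy call: you only
    # declare the limits, and EMR resizes the cluster between them.
    if min_units > max_units:
        raise ValueError("minimum capacity cannot exceed maximum")
    return {
        "ComputeLimits": {
            "UnitType": unit_type,  # Instances, InstanceFleetUnits or VCPU
            "MinimumCapacityUnits": min_units,
            "MaximumCapacityUnits": max_units,
        }
    }

policy = managed_scaling_policy(2, 10)
```

Everything between the two limits is managed automatically from the sampled workload metrics, so no scaling rules need to be hand-written.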

Reconfigure running clusters

You can now modify the configuration of applications running on EMR clusters including Apache
Hadoop, Apache Spark, Apache Hive, and Hue without re-starting the cluster. EMR Application
Reconfiguration allows you to modify applications on the fly without needing to shut down or re-
create the cluster. Amazon EMR will apply your new configurations and gracefully restart the
reconfigured application. Configurations can be applied through the Console, SDK, or CLI.

Elastic


Amazon EMR enables you to provision capacity as you need it, and automatically or manually add
and remove capacity. This is useful if you have variable or unpredictable processing requirements.
For example, if the bulk of your processing occurs at night, you might need 100 instances during the
day and 500 instances at night. Alternatively, you might need a significant amount of capacity for a
short period of time. With Amazon EMR you can provision instances, automatically scale to match
compute requirements, and shut your cluster down when your job is complete.

There are two main options for adding or removing capacity:

• Deploy multiple clusters: If you need more capacity, you can launch a new cluster and terminate it
when you no longer need it. There is no limit to how many clusters you can have. You may want to
use multiple clusters if you have multiple users or applications. For example, you can store your
input data in Amazon S3 and launch one cluster for each application that needs to process the data.
One cluster might be optimized for CPU, a second cluster might be optimized for storage, etc.

• Resize a running cluster: EMR Managed Scaling is designed to automatically scale or manually
resize a running cluster. You may want to scale out a cluster to temporarily add more processing
power to the cluster, or scale in your cluster to save on costs when you have idle capacity. For
example, some customers add hundreds of instances to their clusters when their batch processing
occurs, and remove the extra instances when processing completes. When adding instances to your
cluster, EMR can now start utilizing provisioned capacity as soon it becomes available. When
scaling in, EMR will proactively choose idle nodes to reduce impact on running jobs.

Amazon EC2 Spot Integration

Amazon EMR enables use of Spot instances so you can save both time and money. Amazon EMR
clusters include 'core nodes' that run HDFS and ‘task nodes’ that do not; task nodes are ideal for
Spot because if the Spot price increases and you lose those instances you will not lose data stored
in HDFS. With the combination of instance fleets, allocation strategies for spot instances, EMR
Managed Scaling and more diversification options, you can now optimize EMR for resilience and
cost.

Amazon S3 Integration

The EMR File System (EMRFS) allows EMR clusters to use Amazon S3 as an object store for
Hadoop. You can store your data in Amazon S3 and use multiple Amazon EMR clusters to process
the same data set. Each cluster can be optimized for a particular workload, which can be more
efficient than a single cluster serving multiple workloads with different requirements. For example,
you might have one cluster that is optimized for I/O and another that is optimized for CPU, each
processing the same data set in Amazon S3. In addition, by storing your input and output data in
Amazon S3, you can shut down clusters when they are no longer needed.

EMRFS supports S3 server-side or S3 client-side encryption using AWS Key Management Service
(KMS) or customer-managed keys, and offers an optional consistent view which checks for list and
read-after-write consistency for objects tracked in its metadata. Also, Amazon EMR clusters can use
both EMRFS and HDFS, so you don’t have to choose between on-cluster storage and Amazon S3.

AWS Glue Data Catalog Integration

You can use the AWS Glue Data Catalog as a managed metadata repository to store external table
metadata for Apache Spark and Apache Hive. Additionally, it provides automatic schema discovery
and schema version history. This allows you to persist metadata for your external tables on Amazon
S3 outside of your cluster.


Flexible data stores


With Amazon EMR, you can leverage multiple data stores, including Amazon S3, the Hadoop
Distributed File System (HDFS), and Amazon DynamoDB.

Amazon S3

Amazon S3 is a highly durable, scalable, secure, fast, and inexpensive storage service. With the
EMR File System (EMRFS), Amazon EMR can efficiently and securely use Amazon S3 as an object
store for Hadoop. Amazon EMR has made numerous improvements to Hadoop, allowing you to
process large amounts of data stored in Amazon S3. Also, EMRFS can enable consistent view to
check for list and read-after-write consistency for objects in Amazon S3. EMRFS supports S3 server-
side or S3 client-side encryption to process encrypted Amazon S3 objects, and you can use the
AWS Key Management Service (KMS) or a custom key vendor.

When you launch your cluster, Amazon EMR streams the data from Amazon S3 to each instance in
your cluster and begins processing. One advantage of storing your data in Amazon S3 and
processing it with Amazon EMR is you can use multiple clusters to process the same data. For
example, you might have a Hive development cluster that is optimized for memory and a Pig
production cluster that is optimized for CPU both using the same input data set.

Hadoop Distributed File System (HDFS)

HDFS is the Hadoop file system. Amazon EMR’s current topology groups its instances into 3 logical
instance groups: Master Group, which runs the YARN Resource Manager and the HDFS Name
Node Service; Core Group, which runs the HDFS DataNode Daemon and the YARN Node Manager
service; and Task Group, which runs the YARN Node Manager service. Amazon EMR installs HDFS
on the storage associated with the instances in the Core Group.

Each EC2 instance comes with a fixed amount of storage, referenced as "instance store", attached
with the instance. You can also customize the storage on an instance by adding Amazon EBS
volumes to an instance. Amazon EMR allows you to add General Purpose (SSD), Provisioned (SSD)
and Magnetic volumes types. The EBS volumes added to an EMR cluster do not persist data after
the cluster is shutdown. EMR will automatically clean-up the volumes, once you terminate your
cluster.

You can also enable complete encryption for HDFS using an Amazon EMR security configuration, or
manually create HDFS encryption zones with the Hadoop Key Management Server. You can use a
security configuration option to encrypt EBS root device and storage volumes when you specify
AWS KMS as your key provider.

Amazon DynamoDB

Amazon DynamoDB is a managed NoSQL database service. Amazon EMR has direct integration
with Amazon DynamoDB so you can process data stored in Amazon DynamoDB and transfer data
between Amazon DynamoDB, Amazon S3, and HDFS in Amazon EMR.

Other AWS Data Stores

You can also use Amazon Relational Database Service (a web service designed to set up, operate,
and scale a relational database in the cloud), Amazon Glacier (a storage service that provides
secure and durable storage for data archiving and backup), and Amazon Redshift (a managed data
warehouse service). AWS Data Pipeline is a web service that helps you process and move data


between different AWS compute and storage services (including Amazon EMR) as well as on-
premises data sources at specified intervals.

Use your favorite open source applications


With versioned releases on Amazon EMR, you can select and use the latest open source projects
on your EMR cluster, including applications in the Apache Spark and Hadoop ecosystems. Software
is installed and configured by Amazon EMR, so you can spend more time on increasing the value of
your data without worrying about infrastructure and administrative tasks.

Big Data Tools

Amazon EMR supports Hadoop tools such as Apache Spark, Apache Hive, Presto, and Apache
HBase. Data scientists use EMR to run deep learning and machine learning tools such as
TensorFlow, Apache MXNet, and, using bootstrap actions, add use case-specific tools and libraries.
Data analysts use EMR Studio, Hue and EMR Notebooks for interactive development, authoring
Apache Spark jobs, and submitting SQL queries to Apache Hive and Presto. Data Engineers use
EMR for data pipeline development and data processing, and use Apache Hudi to simplify
incremental data management and data privacy use cases requiring record-level insert, updates,
and delete operations.

Data Processing & Machine Learning

Apache Spark is an engine in the Hadoop ecosystem for processing for large data sets. It uses in-
memory, fault-tolerant resilient distributed datasets (RDDs) and directed, acyclic graphs (DAGs) to
define data transformations. Spark also includes Spark SQL, Spark Streaming, MLlib, and GraphX.

Apache Flink is a streaming dataflow engine that enables you to run real-time stream processing on
high-throughput data sources. It supports event time semantics for out of order events, exactly-once
semantics, backpressure control, and APIs optimized for writing both streaming and batch
applications.

TensorFlow is an open source symbolic math library for machine intelligence and deep learning
applications. TensorFlow bundles together multiple machine learning and deep learning models and
algorithms and can train and run deep neural networks for many different use cases.

Record-Level Amazon S3 Data Management

Apache Hudi is an open-source data management framework used to simplify incremental data
processing and data pipeline development. Apache Hudi enables you to manage data at the record-
level in Amazon S3 to simplify Change Data Capture (CDC) and streaming data ingestion, and
provides a framework to handle data privacy use cases requiring record level updates and deletes.
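As a sketch of how these record-level keys are supplied in practice, Hudi's Spark datasource takes them as writer options. The option names below follow Apache Hudi's documented configuration, but treat the exact set as illustrative:

```python
def hudi_write_options(table, record_key, precombine_key, operation="upsert"):
    # Option names follow Apache Hudi's Spark datasource configuration;
    # record_key identifies a row, precombine_key breaks ties between
    # duplicate records, and operation selects upsert/insert/delete.
    return {
        "hoodie.table.name": table,
        "hoodie.datasource.write.recordkey.field": record_key,
        "hoodie.datasource.write.precombine.field": precombine_key,
        "hoodie.datasource.write.operation": operation,
    }

opts = hudi_write_options("orders", "order_id", "updated_at")
# In Spark these would be passed as:
# df.write.format("hudi").options(**opts).save(s3_path)
```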

SQL

Apache Hive is an open source data warehouse and analytics package that runs on top of Hadoop.
Hive is operated with HiveQL, a SQL-based language that allows users to structure, summarize,
and query data. HiveQL goes beyond standard SQL, adding first-class support for map/reduce
functions and complex extensible user-defined data types like JSON and Thrift. This capability
allows processing of complex and unstructured data sources such as text documents and log files.
Hive allows user extensions via user-defined functions written in Java. Amazon EMR has made
numerous improvements to Hive, including direct integration with Amazon DynamoDB and Amazon
S3. For example, with Amazon EMR you can load table partitions automatically from Amazon S3,

www.android.previousquestionpapers.com | www.previousquestionpapers.com | https://telegram.me/jntua


www.android.universityupdates.in | www.universityupdates.in | https://telegram.me/jntua

you can write data to tables in Amazon S3 without using temporary files, and you can access
resources in Amazon S3 such as scripts for custom map/reduce operations and additional libraries.
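On EMR, a Hive script stored in S3 is typically submitted as a cluster step. A hedged sketch of the step shape accepted by EMR's add_job_flow_steps API follows; the boto3 call itself is omitted, and the argument layout should be treated as illustrative:

```python
def hive_step(name, script_s3_uri, extra_args=()):
    # Builds one entry for the Steps list of EMR's add_job_flow_steps.
    # command-runner.jar dispatches the hive-script runner on the cluster.
    return {
        "Name": name,
        "ActionOnFailure": "CONTINUE",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["hive-script", "--run-hive-script",
                     "--args", "-f", script_s3_uri, *extra_args],
        },
    }

step = hive_step("nightly-report", "s3://my-bucket/queries/report.hql")
# Would be submitted as:
# emr.add_job_flow_steps(JobFlowId=cluster_id, Steps=[step])
```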

Presto is an open-source distributed SQL query engine optimized for low-latency, ad-hoc analysis of
data. It supports the ANSI SQL standard, including complex queries, aggregations, joins, and
window functions. Presto can process data from multiple data sources including the Hadoop
Distributed File System (HDFS) and Amazon S3.

Apache Phoenix enables low-latency SQL with ACID transaction capabilities over data stored in
Apache HBase. You can create secondary indexes for additional performance, and create different
views over the same underlying HBase table.

NoSQL

Apache HBase is an open source, non-relational, distributed database modeled after Google's
BigTable. It was developed as part of Apache Software Foundation's Hadoop project and runs on
top of the Hadoop Distributed File System (HDFS) to provide BigTable-like capabilities for Hadoop.
HBase provides a fault-tolerant, efficient way of storing large quantities of sparse data using
column-based compression and storage. In addition, HBase provides fast lookup of data because it
caches data in-memory. HBase is optimized for sequential write operations, and for batch inserts,
updates, and deletes. HBase works with Hadoop, sharing its file system and serving as a direct input
and output to Hadoop jobs. HBase also integrates with Apache Hive, enabling SQL-like queries over
HBase tables, joins with Hive-based tables, and support for Java Database Connectivity (JDBC).
With EMR, you can use S3 as a data store for HBase, enabling you to reduce operational
complexity. If you use HDFS as a data store, you can back up HBase to S3 and you can restore
from a previously created backup.

Interactive Analytics

EMR Studio is an integrated development environment (IDE) that enables data scientists and data
engineers to develop, visualize, and debug data engineering and data science applications written in
R, Python, Scala, and PySpark. EMR Studio provides managed Jupyter Notebooks, and tools like
Spark UI and YARN Timeline Service to simplify debugging.

Hue is an open source user interface for Hadoop that makes it easier to run and develop Hive
queries, manage files in HDFS, run and develop Pig scripts, and manage tables. Hue on EMR also
integrates with Amazon S3, so you can query directly against S3 and transfer files between HDFS
and Amazon S3.

Jupyter Notebook is an open-source web application that you can use to create and share
documents that contain live code, equations, visualizations, and narrative text. JupyterHub allows
you to host multiple instances of a single-user Jupyter notebook server. When you create an EMR
cluster with JupyterHub, EMR creates a Docker container on the cluster's master node. JupyterHub,
all the components required for Jupyter, and Sparkmagic run within the container.

Apache Zeppelin is an open source GUI which creates interactive and collaborative notebooks for
data exploration using Spark. You can use Scala, Python, SQL (using Spark SQL), or HiveQL to
manipulate data and visualize results. Zeppelin notebooks can be shared among several users, and
visualizations can be published to external dashboards.

Scheduling and workflow

Apache Oozie is a workflow scheduler for Hadoop, where you can create Directed Acyclic Graphs
(DAGs) of actions. Also, you can trigger your Hadoop workflows by actions or time. AWS Step


Functions allows you to add serverless workflow automation to your applications. The steps of your
workflow can run anywhere, including in AWS Lambda functions, on Amazon Elastic Compute Cloud
(EC2), or on-premises.

Other projects and tools

EMR also supports a variety of other popular applications and tools, such as R, Apache Pig (data
processing and ETL), Apache Tez (complex DAG execution), Apache MXNet (deep learning),
Ganglia (monitoring), Apache Sqoop (relational database connector), HCatalog (table and storage
management), and more. The Amazon EMR team maintains an open source repository of bootstrap
actions that can be used to install additional software, configure your cluster, or serve as examples
for writing your own bootstrap actions.

Data access control


By default, Amazon EMR application processes use the EC2 instance profile when they call other AWS
services. For multi-tenant clusters, Amazon EMR offers three options to manage user access to
Amazon S3 data.

Integration with AWS Lake Formation allows you to define and manage fine-grained authorization
policies in AWS Lake Formation to access databases, tables, and columns in AWS Glue Data
Catalog. You can enforce the authorization policies on jobs submitted through Amazon EMR
Notebooks and Apache Zeppelin for interactive EMR Spark workloads, and send auditing events to
AWS CloudTrail. By enabling this integration, you also enable federated Single Sign-On to EMR
Notebooks or Apache Zeppelin from enterprise identity systems compatible with Security Assertion
Markup Language (SAML) 2.0.

Native integration with Apache Ranger allows you to set up a new or an existing Apache Ranger
server to define and manage fine-grained authorization policies for users to access databases,
tables, and columns of Amazon S3 data via Hive Metastore. Apache Ranger is an open-source tool
to enable, monitor, and manage comprehensive data security across the Hadoop platform.

This native integration allows you to define three types of authorization policies on the Apache
Ranger Policy Admin server. You can set table, column, and row level authorization for Hive, table
and column level authorization for Spark, and prefix and object level authorization for Amazon S3.
Amazon EMR installs and configures the corresponding Apache Ranger plugins on the cluster.
These Ranger plugins sync up with the Policy Admin server for authorization policies, enforce data
access control, and send auditing events to Amazon CloudWatch Logs.

Additional features
Select the right instance for your cluster

You choose what types of EC2 instances to provision in your cluster (standard, high memory, high
CPU, high I/O, etc.) based on your application’s requirements. You have root access to every
instance and you can customize your cluster to suit your requirements.
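A hedged sketch of how instance choices are expressed when launching a cluster programmatically: the dictionary below mirrors the keyword shape of boto3's emr.run_job_flow, but the instance types, release label, and role names are illustrative defaults, not prescriptions:

```python
def cluster_params(name, master_type="m5.xlarge", core_type="m5.xlarge", core_count=2):
    # Keyword arguments for emr.run_job_flow(**params); choose instance
    # types (standard, high memory, high CPU, high I/O) per workload.
    return {
        "Name": name,
        "ReleaseLabel": "emr-6.9.0",
        "Applications": [{"Name": "Spark"}, {"Name": "Hive"}],
        "Instances": {
            "InstanceGroups": [
                {"InstanceRole": "MASTER", "InstanceType": master_type, "InstanceCount": 1},
                {"InstanceRole": "CORE", "InstanceType": core_type, "InstanceCount": core_count},
            ],
            "KeepJobFlowAliveWhenNoSteps": True,
        },
        "JobFlowRole": "EMR_EC2_DefaultRole",
        "ServiceRole": "EMR_DefaultRole",
    }

params = cluster_params("demo-cluster")
# Would be launched as: emr.run_job_flow(**params)
```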

Debug your applications

When you enable debugging on a cluster, Amazon EMR archives the log files to Amazon S3 and
then indexes those files. You can then use a graphical interface in the console to browse the logs
and view job history in an intuitive way.

Monitor your cluster


You can use Amazon CloudWatch to monitor custom Amazon EMR metrics, such as the average
number of running map and reduce tasks. You can also set alarms on these metrics.
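For example, an alarm on the cluster's IsIdle metric can flag clusters that sit idle. The parameter shape below mirrors CloudWatch's put_metric_alarm; the metric name and dimension come from the AWS/ElasticMapReduce namespace, but treat the thresholds as illustrative:

```python
def emr_idle_alarm_params(cluster_id):
    # Keyword arguments for cloudwatch.put_metric_alarm(**params):
    # fire when the cluster reports IsIdle for three 5-minute periods.
    return {
        "AlarmName": f"{cluster_id}-idle",
        "Namespace": "AWS/ElasticMapReduce",
        "MetricName": "IsIdle",
        "Dimensions": [{"Name": "JobFlowId", "Value": cluster_id}],
        "Statistic": "Average",
        "Period": 300,
        "EvaluationPeriods": 3,
        "Threshold": 0,
        "ComparisonOperator": "GreaterThanThreshold",
    }

alarm = emr_idle_alarm_params("j-EXAMPLE123")
# Would be created as: cloudwatch.put_metric_alarm(**alarm)
```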

Respond to events

You can use Amazon EMR event types in Amazon CloudWatch Events to respond to state changes
in your Amazon EMR clusters. Using simple rules that you set up, you can match events and route
them to Amazon SNS topics, AWS Lambda functions, Amazon SQS queues, and more.
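A minimal sketch of such a rule's event pattern: the source and detail-type strings follow AWS's documented EMR events, while the list of states to match is an illustrative choice:

```python
import json

def emr_state_change_pattern(states=("TERMINATED_WITH_ERRORS",)):
    # Event pattern JSON for a CloudWatch Events / EventBridge rule that
    # matches EMR cluster state-change events for the given states.
    return json.dumps({
        "source": ["aws.emr"],
        "detail-type": ["EMR Cluster State Change"],
        "detail": {"state": list(states)},
    })

pattern = emr_state_change_pattern()
# Would be attached to a rule via events.put_rule(..., EventPattern=pattern)
```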

Schedule recurring workflows

You can use AWS Data Pipeline to schedule recurring workflows involving Amazon EMR. AWS Data
Pipeline is a web service that helps you reliably process and move data between different AWS
compute and storage services as well as on-premises data sources at specified intervals.

Deep learning

Use popular deep learning frameworks like Apache MXNet to define, train, and deploy deep neural
networks. You can use these frameworks on Amazon EMR clusters with GPU instances.

Control network access to your cluster

You can launch your cluster in an Amazon Virtual Private Cloud (VPC), a logically isolated section of
the AWS cloud. You have control over your virtual networking environment, including selection of
your own IP address range, creation of subnets, and configuration of route tables and network
gateways.

Manage users, permissions and encryption

You can use AWS Identity and Access Management (IAM) tools such as IAM Users and Roles to
control access and permissions. For example, you could give certain users read but not write access
to your clusters. Also, you can use Amazon EMR security configurations to set various encryption at-
rest and in-transit options, including support for Amazon S3 encryption, and Kerberos
authentication.
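As a sketch of the encryption side, an EMR security configuration is a JSON document passed to the create_security_configuration API. The key names below follow EMR's security-configuration schema, but treat the exact structure as illustrative:

```python
import json

def security_configuration(kms_key_arn):
    # JSON body for emr.create_security_configuration: enable at-rest
    # encryption for S3 (EMRFS) data using SSE-KMS with the given key.
    return json.dumps({
        "EncryptionConfiguration": {
            "EnableAtRestEncryption": True,
            "EnableInTransitEncryption": False,
            "AtRestEncryptionConfiguration": {
                "S3EncryptionConfiguration": {
                    "EncryptionMode": "SSE-KMS",
                    "AwsKmsKey": kms_key_arn,  # KMS key for S3 at-rest encryption
                }
            },
        }
    })

cfg = security_configuration("arn:aws:kms:us-east-1:111122223333:key/example")
```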

Install additional software

You can use bootstrap actions or a custom Amazon Machine Image (AMI) running Amazon Linux to
install additional software on your cluster. Bootstrap actions are scripts that are run on the cluster
nodes when Amazon EMR launches the cluster. They run before Hadoop starts and before the node
begins processing data. You can also preload and use software on a custom Amazon Linux AMI.
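A bootstrap action is declared as a name plus a script location when the cluster is launched. The dictionary below sketches one entry for the BootstrapActions list of emr.run_job_flow; the script path and arguments are hypothetical:

```python
def bootstrap_action(name, script_s3_uri, *args):
    # One entry for the BootstrapActions list in emr.run_job_flow; the
    # script at script_s3_uri runs on every node before Hadoop starts.
    return {
        "Name": name,
        "ScriptBootstrapAction": {"Path": script_s3_uri, "Args": list(args)},
    }

action = bootstrap_action("install-libs", "s3://my-bucket/bootstrap/install.sh", "--with-gpu")
# Would be passed as: emr.run_job_flow(..., BootstrapActions=[action])
```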

Copy data

You can move data from Amazon S3 to HDFS, from HDFS to Amazon S3, and between Amazon S3
buckets using Amazon EMR’s S3DistCp, an extension of the open source tool Apache DistCp, which uses
MapReduce to move large amounts of data.
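An S3DistCp copy is usually submitted as a cluster step. The sketch below builds such a step; the --src/--dest flags follow the s3-dist-cp command line, but treat the exact shape as illustrative:

```python
def s3distcp_step(src, dest):
    # EMR step that runs S3DistCp via command-runner.jar, copying data
    # between HDFS and S3 (either direction) with a MapReduce job.
    return {
        "Name": "copy-data",
        "ActionOnFailure": "CONTINUE",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["s3-dist-cp", f"--src={src}", f"--dest={dest}"],
        },
    }

copy = s3distcp_step("hdfs:///output/", "s3://my-bucket/results/")
# Would be submitted as: emr.add_job_flow_steps(JobFlowId=cid, Steps=[copy])
```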

Custom JAR

Write a Java program, compile against the version of Hadoop you want to use, and upload to
Amazon S3. You can then submit Hadoop jobs to the cluster using the Hadoop JobClient interface.
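Besides the JobClient interface, such a JAR can be submitted as an EMR step pointing directly at the S3 object. A hedged sketch of that step shape (the bucket, JAR name, and arguments are hypothetical):

```python
def custom_jar_step(name, jar_s3_uri, main_args):
    # Custom JAR step for EMR's Steps list: Hadoop runs the JAR's main
    # class with main_args on the cluster.
    return {
        "Name": name,
        "ActionOnFailure": "TERMINATE_CLUSTER",
        "HadoopJarStep": {"Jar": jar_s3_uri, "Args": list(main_args)},
    }

step = custom_jar_step("wordcount", "s3://my-bucket/jars/app.jar",
                       ["s3://my-bucket/in/", "s3://my-bucket/out/"])
```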

Amazon EMR Studio


EMR Studio is an integrated development environment (IDE) that enables data scientists and data
engineers to develop, visualize, and debug data engineering and data science applications written in
R, Python, Scala, and PySpark.

EMR Studio provides managed Jupyter Notebooks, and tools like Spark UI and YARN Timeline
Service to simplify debugging. EMR Studio uses AWS Single Sign-On and allows you to log in
directly with your corporate credentials without logging into the AWS console. Data scientists and
analysts can install custom kernels and libraries, collaborate with peers using code repositories such
as GitHub and BitBucket, or execute parameterized notebooks as part of scheduled workflows using
orchestration services like Apache Airflow or Amazon Managed Workflows for Apache Airflow.

EMR Studio kernels and applications run on EMR clusters, so you get the benefit of distributed data
processing using the performance optimized Amazon EMR runtime for Apache
Spark. Administrators can set up EMR Studio such that analysts can run their applications on
existing EMR clusters or create new clusters using predefined AWS CloudFormation templates for
EMR.

Benefits:

Simple to use

EMR Studio is designed to make it simple to interact with applications on an EMR cluster. You can
access EMR Studio with your corporate credentials using AWS Single Sign-On, without logging into
the AWS console or the cluster. You can interactively explore, process and visualize data using
notebooks, build and schedule pipelines, and debug applications without logging into EMR clusters.

Managed Jupyter Notebooks

With EMR Studio, you can start developing analytics and data science applications in R, Python,
Scala, and PySpark with managed Jupyter Notebooks. You can attach notebooks to existing EMR
clusters or auto-provision clusters using pre-configured templates to run jobs. You can collaborate
with others using repositories, and install custom Python libraries or kernels directly from Notebooks.

Easy to build applications

EMR Studio enables you to move from prototyping to production. You can trigger pipelines from
code repositories, simply run Notebooks as pipelines using orchestration tools like Apache Airflow or
Amazon Managed Workflows for Apache Airflow, or attach notebooks to a bigger cluster using a
single click.

Simplified debugging

With EMR Studio, you can debug jobs and access logs without logging into the cluster for both
active and terminated clusters. You can use native application interfaces such as Spark UI and
YARN timeline service directly from EMR Studio. EMR Studio also allows you to locate the cluster or
job to debug by using filters such as cluster state, creation time, and cluster ID.

Use cases:

Build data science and engineering applications

With EMR Studio, you can log in directly to managed notebooks without logging into the AWS
console, start notebooks in seconds, get onboarded with sample notebooks, and perform your data
exploration. You can collaborate with peers by sharing notebooks via GitHub and other repositories.


You can also customize your environment by loading custom kernels and Python libraries from
notebooks.

Deploy production pipelines

In EMR Studio, you can use code repository to trigger pipelines. You can also parameterize and
chain notebooks to build pipelines. You can integrate notebooks into scheduled workflows using
workflow orchestration services such as Apache Airflow or Amazon Managed Workflows for Apache
Airflow. EMR Studio also allows you to re-attach notebooks to a bigger cluster to run a job.

Simplify debugging applications

In EMR Studio, you can debug notebook applications from the notebook UI. You can also debug
pipelines by first narrowing down clusters using filters like cluster state, and diagnose jobs on both
active and terminated clusters with as few clicks as possible to open native debugging UIs like Spark
UI, Tez UI, and Yarn Timeline Service.

Amazon EMR Notebooks


Amazon EMR Notebooks, a managed environment based on Jupyter and JupyterLab notebooks,
enables users to interactively analyze and visualize data, collaborate with peers, and build
applications using EMR clusters. EMR Notebooks is designed for Apache Spark. It supports Spark
Magic kernels, which allows you to remotely run queries and code on your EMR cluster using
languages like PySpark, Spark SQL, Spark R, and Scala.

With EMR Notebooks, there is no software or instances to manage. You can either attach the
notebook to an existing cluster or provision a new cluster directly from the console. You can attach
multiple notebooks to a single cluster, detach notebooks and re-attach them to new clusters.

EMR Notebooks allows you to:

1. Monitor and debug Spark jobs directly from your notebook


2. Install notebook-scoped libraries on a running EMR cluster
3. Associate Git repositories with your notebook for version control, and simplified code collaboration
and reuse
4. Compare and merge two notebooks using the nbdime utility

Amazon EMR Serverless


Amazon EMR Serverless is a serverless option in Amazon EMR designed to help you run open-
source big data analytics frameworks without configuring, managing, and scaling clusters or servers.
Select the open-source framework you want to run for your application, such as Apache Spark and
Apache Hive, and EMR Serverless can automatically provision and manage the underlying compute
and memory resources, including scaling those resources to meet changing data volumes and
processing requirements.
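Submitting a job to EMR Serverless is then a matter of naming the application and an entry point. The dictionary below sketches the keyword shape of the emr-serverless start_job_run API; the IDs, role ARN, and script path are hypothetical:

```python
def serverless_job_params(application_id, role_arn, script_s3_uri):
    # Keyword arguments for emr_serverless.start_job_run(**params):
    # run a PySpark script without provisioning or sizing a cluster.
    return {
        "applicationId": application_id,
        "executionRoleArn": role_arn,
        "jobDriver": {
            "sparkSubmit": {"entryPoint": script_s3_uri},
        },
    }

job = serverless_job_params("app-example", "arn:aws:iam::111122223333:role/emr-exec",
                            "s3://my-bucket/jobs/etl.py")
# Would be started as: emr_serverless.start_job_run(**job)
```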

Use cases:

Variable workloads

With EMR Serverless, you can automatically scale application resources as workload demands
change, without having to preconfigure how much compute power and memory you need.


SLA-sensitive data pipelines

You can pre-initialize application resources in EMR Serverless to help speed up response time for
SLA-sensitive data pipelines.

Development and test environments

EMR Serverless can help you quickly spin up a development and test environment that automatically
scales with unpredictable usage.

Amazon EMR on Amazon EKS


Amazon EMR on Amazon EKS enables you to submit Apache Spark jobs on demand on Amazon
Elastic Kubernetes Service (EKS) without provisioning clusters. With EMR on EKS, you can
consolidate analytical workloads with your other Kubernetes-based applications on the same
Amazon EKS cluster to improve resource utilization and simplify infrastructure management.

With Amazon EMR on Amazon EKS, you can share compute and memory resources across all of
your applications and use a single set of Kubernetes tools to centrally monitor and manage your
infrastructure. You can also use a single EKS cluster to run applications that require different
Apache Spark versions and configurations, and take advantage of automated provisioning, scaling,
faster runtimes, and development and debugging tools that EMR provides.

Benefits:

Simplify management

EMR benefits for Apache Spark on EKS include managed versions of Apache Spark 2.4 and 3.0,
automatic provisioning, scaling, performance optimized runtime, and tools like EMR Studio for
authoring jobs and an Apache Spark UI for debugging.

Optimize performance

By running analytics applications on EKS, you can reuse existing EC2 instances in your shared
Kubernetes cluster and avoid the startup time of creating a new cluster of EC2 instances dedicated
for analytics.

Use cases:

Centralize resource management

With EMR on EKS, you can automate the provisioning, management, and scaling of Apache Spark,
and use a single set of tools to centrally manage and monitor your infrastructure.

Co-location of workloads

Run multiple EMR workloads that require different frameworks, versions, and configurations on the
same EKS cluster as your other application workloads.

Rapid adoption of new EMR versions

EMR on EKS provides a managed experience for developing, troubleshooting, and optimizing your
analytics. You can deploy configurations and start jobs to test new EMR versions on the same EKS
cluster without allocating dedicated resources.


Amazon EMR on AWS Outposts


AWS Outposts brings AWS services, infrastructure, and operating models to virtually any data center,
co-location space, or on-premises facility. Amazon EMR is available on AWS Outposts, allowing you
to set up, deploy, manage, and scale Apache Hadoop, Apache Hive, Apache Spark, and Presto
clusters in your on-premises environments, just as you would in the cloud. Amazon EMR provides
capacity in Outposts, while automating time-consuming administration tasks including infrastructure
provisioning, cluster setup, configuration, and tuning, freeing you to focus on your applications. You
can create managed EMR clusters on-premises using the same AWS Management Console, APIs,
and CLI for EMR. EMR clusters launched in an Outpost will appear in the AWS console just like any
other cluster, but will be running in your Outpost.

Benefits:

Augment on-premises processing capacity

Once your Outpost is set up, you can launch a new EMR cluster on-premises and connect to
existing HDFS storage. This allows you to respond when on-premises systems need additional
processing capacity. Adding capacity to on-premises Hadoop and Spark clusters helps meet
workload demands in periods of high utilization.

Process data that needs to remain on-premises

Apache Hadoop, Apache Hive, Apache Spark, and Presto are commonly used to process,
transform, and analyze data that is part of a larger data architecture. For data that needs to remain
on-premises for governance, compliance, or other reasons, you can use EMR to deploy and run
applications like Apache Hadoop and Apache Spark on-premises, close to your data. This reduces
the need to move large amounts of on-premises data to the cloud, reducing the overall time needed
to process that data.

Accelerate data and workload migrations

If you’re in the process of migrating data and Apache Hadoop workloads to the cloud and want to
start using EMR before your migration is complete, you can use AWS Outposts to launch EMR
clusters on-premises that connect to your existing HDFS storage. You can then gradually migrate
your data to Amazon S3 as part of an evolution to a cloud architecture.

Additional Information
For additional information about service controls, security features and functionalities, including, as
applicable, information about storing, retrieving, modifying, restricting, and deleting data, please
see https://docs.aws.amazon.com/index.html. This additional information does not form part of the
Documentation for purposes of the AWS Customer Agreement available
at http://aws.amazon.com/agreement, or other agreement between you and AWS governing your
use of AWS’s services.



Cloud development on Azure


 Article
 12/03/2022

You're a Python developer, and you're ready to develop cloud applications for Microsoft Azure.
To help you prepare for a long and productive career, this series of three articles orients you to
the basic landscape of cloud development on Azure.

What is Azure? Datacenters, services, and resources

Microsoft's CEO, Satya Nadella, often refers to Azure as "the world's computer." A computer, as
you well know, is a collection of hardware components that are managed by an operating system.
The operating system provides a platform upon which you can build software that helps
people apply the system's computing power to any number of tasks. (That's why we use the word
"application" to describe such software.)

In Azure, the computer's hardware isn't a single machine but an enormous pool of virtualized
server computers contained in dozens of massive datacenters around the world. The Azure
"operating system" is then composed of services that dynamically allocate and de-allocate
different parts of that resource pool as applications need them. Those dynamic allocations allow
applications to respond quickly to any number of changing conditions, such as customer demand.

Each allocation is called a resource, and each resource is assigned both a unique object
identifier (a GUID) and a unique URL. Examples of resources include virtual machines (CPU
cores and memory), storage, databases, virtual networks, container registries, container
orchestrators, web hosts, and AI and analytics engines.

Resources are the building blocks of a cloud application. The cloud development process thus
begins with creating the appropriate environment into which you can deploy the different parts of
the application. Put simply, you can't deploy any code or data to Azure until you've allocated and
configured—that is provisioned—the suitable target resources.

The process of creating the environment for your application involves identifying the relevant
services and resource types involved, and then provisioning those resources. The provisioning
process is essentially how you construct the computing system to which you deploy your
application. Provisioning is also the point at which you begin renting those resources from
Azure.
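The provision-then-deploy order can be sketched in Python. The resource types, names, and request shapes below are illustrative: the resource-group body mirrors the minimal body that azure-mgmt-resource's create_or_update expects, while the helper functions and plan structure are ours:

```python
def resource_group_request(location):
    # Minimal request body for creating a resource group: only the
    # datacenter location is required.
    return {"location": location}

def plan_provisioning(app_name, location="eastus"):
    # Order matters: the environment is provisioned before any code is
    # deployed into it. Each entry: (resource type, name, request body).
    rg_name = f"{app_name}-rg"
    return [
        ("resource_group", rg_name, resource_group_request(location)),
        ("app_service_plan", f"{app_name}-plan",
         {"location": location, "sku": {"name": "B1"}}),
        ("web_app", app_name, {"location": location}),
    ]

for kind, name, body in plan_provisioning("myapp"):
    print(kind, name, body)
```

With the real SDK, each entry would become one management-client call (for example, ResourceManagementClient.resource_groups.create_or_update for the first entry), after which code and data are deployed into the provisioned resources.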

There are hundreds of different types of Azure resources at your disposal. You can choose a
basic "infrastructure" resource like a virtual machine when you need to retain full control and


responsibility for the software you deploy. In other scenarios, you can choose higher-level
"platform" services that provide a more managed environment where you concern yourself with
only data and application code.

While finding the right services for your application and balancing their relative costs can be
challenging, it's also part of the creative fun of cloud development. To understand the many
choices, review the Azure developer's guide. Here, let's next discuss how you actually work with
all of these services and resources.

Note

You've probably seen and perhaps have grown weary of the terms IaaS (infrastructure-as-a-
service), PaaS (platform-as-a-service), and so on. The as-a-service part reflects the reality that
you generally don't have physical access to the datacenters themselves. Instead, you use tools
like the Azure portal, Visual Studio Code, the Azure CLI, or Azure's REST API to
provision infrastructure resources, platform resources, and so on. As a service, Azure is always
standing by waiting to receive your requests.

On this developer center, we spare you the IaaS, PaaS, etc. jargon because "as-a-service" is just
inherent to the cloud to begin with!

Note

A hybrid cloud refers to the combination of private computers and datacenters with cloud
resources like Azure, and has its own considerations beyond what's covered in the previous
discussion. Furthermore, this discussion assumes new application development; scenarios that
involve rearchitecting and migrating existing on-premises applications are not covered here. For
more information on those topics, see Get started with the Cloud Adoption Framework.

Note

You might hear the terms cloud native and cloud enabled applications, which are often discussed
as the same thing. There are differences, however. A cloud enabled application is often one that
is migrated, as a whole, from an on-premises datacenter to cloud-based servers. Oftentimes, such
applications retain their original structure and are simply deployed to virtual machines in the
cloud (and therefore across geographic regions). Such a migration allows the application to scale
to meet global demand without having to provision new hardware in your own datacenter.
However, scaling must be done at the virtual machine (or infrastructure) level, even if only one
part of the application needs increased performance.

A cloud native application, on the other hand, is written from the outset to take advantage of the
many different, independently scalable services available in a cloud such as Azure. Cloud native
applications are more loosely structured (using micro-service architectures, for example), which
allows you to more precisely configure deployment and scaling for each part. Such a structure
simplifies maintenance and often dramatically reduces costs because you need to pay for premium
services only where necessary.


For more information, see Build cloud-native applications in Azure and Introduction to
cloud-native applications, the principles of which apply to applications written in any language,
including Python.

Next step

Provisioning, accessing, and managing resources >>>

Recommended content

The Azure development flow

An overview of the Azure cloud development cycle, which involves provisioning (creating and
configuring), coding, testing, deployment, and management of Azure resources.

Provisioning, accessing, and managing resources in Azure

An overview of ways you can work with Azure resources, including Azure portal, VS Code,
Azure CLI, Azure PowerShell, and Azure SDKs.

Getting started with hosting Python apps on Azure

Index of getting started material in the Azure documentation for hosting Python app code.

Use the Azure libraries (SDK) for Python

Overview of the features and capabilities of the Azure libraries for Python that help developers
be more productive when creating, using, and managing Azure resources.
Show more

How to Create An EC2 Instance in AWS:


1. Create a Trial AWS Account with Your Email Address.
2. Login to Create AWS Account using AWS Console –
https://console.aws.amazon.com/
3. Open the EC2 Instance dashboard.


4. Create an AWS EC2 (Elastic Compute Cloud) instance in AWS:
a) Go to the AWS console and select the ‘EC2’ service.
b) Select Instances and click the Launch Instance button.
There are a number of operating systems you can select
as the server (Ubuntu, RedHat, Amazon Linux, SUSE Linux).
c) Select a Free Tier eligible instance and launch it.
d) Choose the storage type and add a Name tag
(for example, EC2Instance).
e) Configure the security group – SSH, etc.
f) During the launch, you will be asked to create a security
key. Select "Create a new key pair", give it the name
ec2instance, and download the key.
g) If you are using macOS or Linux, give the key file the
required permission using the following command:
h) >chmod 400 ec2instance.pem
5. From your laptop, log in to the created AWS EC2 instance
using SSH:
>ssh -i "ec2instance.pem" ec2-user@<public-DNS-of-your-instance>
(the default user name depends on the AMI; for Amazon Linux
it is ec2-user, and the public DNS ends in, for example,
east-2.compute.amazonaws.com)
6. Once you are logged in to the EC2 instance, you can execute
commands like hostname and nslookup.
7. Create a demo script file and execute it on the EC2
instance.


>vi learn.sh
(add the line: echo "learn and share - Welcome")
>chmod 777 learn.sh
>ls -a
>./learn.sh
8. Stop the EC2 instance; the SSH connection to the EC2
instance in the terminal will then be closed.

Image processing in Python


Install required library
pip install pillow
Image: open() and show()
#Import required library
from PIL import Image

#Open image
im = Image.open("TajMahal.jpg")

#Image rotate & show


im.rotate(45).show()
Converting to a grayscale image – convert()
TajMahal_gray = Image.open('TajMahal.jpg').convert('L')
TajMahal_gray.show()
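The snippets above assume a local file named TajMahal.jpg. As a self-contained variant (assuming only that Pillow is installed), the same open/convert workflow can be tried on an image created in memory, so no sample file is needed:

```python
from PIL import Image

# Create a small red RGB image in memory (no external file needed)
im = Image.new("RGB", (64, 64), color=(255, 0, 0))

# Convert to grayscale: mode 'L' is 8-bit luminance
gray = im.convert("L")

print(im.mode, gray.mode)  # RGB L
print(gray.size)           # (64, 64)
```

Calling gray.show() would display the result exactly as in the examples above, and gray.save("gray.png") would write it to disk.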


Showing an image in grayscale (using OpenCV):


#Import required libraries
import cv2
import numpy as np
from matplotlib import pyplot as plt

im = cv2.imread('TajMahal.jpg', cv2.IMREAD_GRAYSCALE)
cv2.imshow('image', im)
cv2.waitKey(0)
cv2.destroyAllWindows()
Another way to write the above program, with a line drawn
on the image to mark it:
import cv2
import numpy as np
from matplotlib import pyplot as plt

im = cv2.imread('TajMahal.jpg', cv2.IMREAD_GRAYSCALE)

plt.imshow(im, cmap='gray', interpolation='bicubic')

# to hide tick values on the X and Y axes
plt.xticks([]), plt.yticks([])
plt.plot([200, 300, 400], [100, 200, 300], 'c', linewidth=5)
plt.show()


How to Create an App in Django ?


Prerequisite – How to Create a Basic Project using MVT in
Django?
Django is famous for its unique and fully managed app
structure. For every functionality, an app can be created
as a completely independent module. This article will take
you through how to create a basic app and add
functionality using that app.
For example, if you are creating a blog, separate modules
should be created for comments, posts, login/logout, etc.
In Django, these modules are known as apps, with a
different app for each task.

Benefits of using Django apps –

 Django apps are reusable, i.e. a Django app can be used
with multiple projects.
 Components are loosely coupled, i.e. almost independent
of each other.
 Multiple developers can work on different components.
 Debugging and code organization are easy; Django has an
excellent debugger tool.
 Django has built-in features like admin pages, which
reduce the effort of building the same from scratch.
Pre-installed apps –
Django provides some pre-installed apps for users. To see


pre-installed apps, navigate to projectName –> projectName
–> settings.py
In your settings.py file, you will find INSTALLED_APPS.
The apps listed in INSTALLED_APPS are provided by Django
for the developer's convenience.


Creating an App in Django :

Let us start building an app.


Method-1
 To create a basic app in your Django project, go to
the directory containing manage.py and from there
enter the command:
python manage.py startapp projectApp
Method-2
 The same app can be created with django-admin: go to
the directory containing manage.py and from there
enter the command:
django-admin startapp projectApp
The new app directory (projectApp) now appears alongside manage.py in your project.


 To include the app in your project, add your app name
to the INSTALLED_APPS list as follows in
settings.py:
 Python3

# Application definition

INSTALLED_APPS = [

'django.contrib.admin',

'django.contrib.auth',

'django.contrib.contenttypes',

'django.contrib.sessions',


'django.contrib.messages',

'django.contrib.staticfiles',

'projectApp',

]

 So, we have finally created an app, but to render the app
using URLs we need to include the app in our main
project so that URLs redirected to that app can be
rendered. Let us explore it.
Move to projectName -> projectName -> urls.py and add the
below code in the header:
from django.urls import include
 Now, in the list of URL patterns, you need to specify the
app name to include your app's URLs. Here is the code
for it –
 Python3
 Python3

from django.contrib import admin

from django.urls import path, include

urlpatterns = [

path('admin/', admin.site.urls),

# Enter the app name in following


# syntax for this to work

path('', include("projectApp.urls")),

]

 Now you can use the default MVT model to create URLs,
models, views, etc. in your app and they will be
automatically included in your main project.
The main feature of Django apps is independence: every
app functions as an independent unit in support of the
main project.
At this point, the project's urls.py does not yet serve
the app's URLs.
To run your Django web application properly, the following
actions must be taken:
1. Create a file in the app's directory called urls.py
2. Include the following code:
 Python3

from django.urls import path

# now import the views.py file into this code
from . import views

urlpatterns = [

path('', views.index),

]


The above code will invoke the function defined in the
views.py file so that the response can be seen in the web
browser. Here it is assumed that views.py contains the
following code:
 Python3

from django.http import HttpResponse

def index(request):

return HttpResponse("Hello Geeks")

After adding the above code, go to the settings.py file,
which is in the project directory, and change the value of
ROOT_URLCONF from 'project.urls' to 'app.urls':

From this:
ROOT_URLCONF = 'project.urls'

To this:
ROOT_URLCONF = 'app.urls'
3. Then run the server (127.0.0.1:8000) and you will
get the desired output.


Security in cloud computing is a major concern. Data in the cloud should be stored in
encrypted form. To restrict clients from accessing the shared data directly, proxy and
brokerage services should be employed.

Security Planning
Before deploying a particular resource to the cloud, one should analyze several
aspects of the resource, such as:
 Select the resource that needs to move to the cloud and analyze its sensitivity to risk.
 Consider cloud service models such as IaaS, PaaS, and SaaS. These models
require the customer to be responsible for security at different levels of service.
 Consider the cloud type to be used, such as public, private,
community or hybrid.
 Understand the cloud service provider's policies for data storage and for data
transfer into and out of the cloud.
The risk in cloud deployment mainly depends upon the service models and cloud types.

Understanding Security of Cloud


Security Boundaries
A particular service model defines the boundary between the responsibilities of the
service provider and the customer. The Cloud Security Alliance (CSA) stack model defines
the boundaries between each service model and shows how different functional units
relate to each other.

Key Points to CSA Model


 IaaS is the most basic level of service, with PaaS and SaaS the next two levels
above it.
 Moving upwards, each service inherits the capabilities and security concerns of
the model beneath it.
 IaaS provides the infrastructure, PaaS provides the platform/development
environment, and SaaS provides the operating environment.
 IaaS has the least integrated functionality and integrated security, while
SaaS has the most.
 This model describes the security boundaries at which the cloud service provider's
responsibilities end and the customer's responsibilities begin.
 Any security mechanism below the security boundary must be built into the
system and maintained by the customer.
Although each service model has its own security mechanisms, the security needs also
depend upon where these services are located: in a private, public, hybrid or community cloud.
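As a rough illustration of these key points, the responsibility boundary can be expressed as set arithmetic. The layer names and sets below are simplified teaching assumptions, not the CSA's official taxonomy:

```python
# Illustrative summary of the CSA stack model's security boundaries.
# Layer names are simplified for teaching purposes.

managed_by_provider = {
    "IaaS": {"facilities", "hardware", "virtualization"},
    "PaaS": {"facilities", "hardware", "virtualization", "os", "runtime"},
    "SaaS": {"facilities", "hardware", "virtualization", "os", "runtime",
             "application", "data-handling"},
}

all_layers = managed_by_provider["SaaS"]

def customer_responsibility(model):
    # Everything the provider does not cover must be secured by the customer
    return all_layers - managed_by_provider[model]

print(sorted(customer_responsibility("IaaS")))
# ['application', 'data-handling', 'os', 'runtime']
print(sorted(customer_responsibility("SaaS")))
# []  (SaaS: the provider covers the most integrated security)
```

The empty set for SaaS mirrors the point above that SaaS has the most integrated security, while IaaS leaves the most layers to the customer.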


Understanding Data Security


Since all the data is transferred over the Internet, data security is a major concern in
the cloud. Here are the key mechanisms for protecting data:

 Access Control
 Auditing
 Authentication
 Authorization
All of the service models should incorporate security mechanisms operating in all of the
above-mentioned areas.
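A minimal sketch of how these four mechanisms interact; all names (users, roles, audit_log) are hypothetical in-memory stand-ins, not any real cloud provider's API:

```python
# Toy illustration of the four data-protection mechanisms listed above.

audit_log = []                      # Auditing: record every access attempt

users = {"alice": "s3cret"}         # Authentication: known credentials
roles = {"alice": {"read"}}         # Authorization: actions each user may take

def access_data(user, password, action):
    authenticated = users.get(user) == password      # Authentication
    authorized = action in roles.get(user, set())    # Authorization / access control
    allowed = authenticated and authorized
    audit_log.append((user, action, allowed))        # Auditing
    return "data" if allowed else None

print(access_data("alice", "s3cret", "read"))    # data
print(access_data("alice", "s3cret", "write"))   # None (not authorized)
print(len(audit_log))                            # 2 (both attempts recorded)
```

Note that even a denied request is appended to the audit log: auditing records attempts, not just successes.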

Isolated Access to Data


Since data stored in the cloud can be accessed from anywhere, we must have a
mechanism to isolate data and protect it from the client's direct access.
Brokered Cloud Storage Access is an approach for isolating storage in the cloud. In
this approach, two services are created:
 A broker with full access to storage but no access to the client.
 A proxy with no access to storage but access to both the client and the broker.

Working Of Brokered Cloud Storage Access System


When the client issues a request to access data:
 The client's data request goes to the external service interface of the proxy.
 The proxy forwards the request to the broker.
 The broker requests the data from the cloud storage system.
 The cloud storage system returns the data to the broker.
 The broker returns the data to the proxy.
 Finally, the proxy sends the data to the client.
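The steps above can be sketched in plain Python. The class and method names are illustrative only, not a real cloud storage API:

```python
# Illustrative sketch of Brokered Cloud Storage Access.
# The client talks only to the proxy; only the broker can reach storage.

class CloudStorage:
    def __init__(self):
        self._data = {"report.txt": "quarterly figures"}
    def read(self, key):
        return self._data[key]

class Broker:
    """Full access to storage, but never exposed to the client."""
    def __init__(self, storage):
        self._storage = storage
    def fetch(self, key):
        return self._storage.read(key)   # steps 3-4: fetch from storage

class Proxy:
    """External interface: talks to the client and the broker, never to storage."""
    def __init__(self, broker):
        self._broker = broker
    def request(self, key):
        return self._broker.fetch(key)   # steps 2 and 5-6: relay via broker

storage = CloudStorage()
proxy = Proxy(Broker(storage))

# Step 1: the client's request goes only to the proxy
print(proxy.request("report.txt"))   # quarterly figures
```

The key design point is that the client object never holds a reference to CloudStorage, so direct access is impossible by construction.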

Encryption
Encryption helps to protect data from being compromised. It protects both data that is
being transferred and data stored in the cloud. Although encryption helps to protect data
from unauthorized access, it does not prevent data loss.
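To make the idea concrete, here is a toy encrypt/decrypt round trip built on a simple XOR keystream. This is for illustration only and is not secure; real cloud systems use vetted algorithms such as AES (for example, via TLS for data in transit and server-side encryption for data at rest):

```python
from itertools import cycle

# Toy XOR "cipher" -- illustration only, NOT real cryptography.
def xor_crypt(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so the same function encrypts and decrypts
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

plaintext = b"student records"
key = b"course-key"

ciphertext = xor_crypt(plaintext, key)   # what would be stored in the cloud
recovered = xor_crypt(ciphertext, key)   # what the authorized client gets back

print(ciphertext != plaintext)   # True  (stored form is unreadable)
print(recovered == plaintext)    # True  (decryption restores the data)
```

The round trip shows the point made above: without the key the stored bytes are unreadable, but encryption alone does nothing to prevent the ciphertext itself from being lost or deleted.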


The Main Benefits and Challenges of Cloud Computing in Education


To help you decide if cloud education solutions are suitable for your institution, here’s a quick rundown
of the five main benefits and considerations of cloud computing in education.

5 Benefits of Cloud Computing

1. Long-Term Cost Savings


Cost reduction is a top benefit of cloud education software. Compared to managing an on-premise data
center, cloud migration supports the IT ecosystem by helping institutions shift from capital
expenses to predictable monthly operating expenses. These predictable monthly expenses bring several
benefits to institutions, including:

 Reduced data storage costs


 Minimal data center maintenance
 Less money spent on replacing aging physical IT hardware
This is a cost-effective way to enhance your learning environment and create new educational
opportunities.

2. Better Collaboration
Real-time collaboration is an important aspect of cloud computing in education. Cloud software helps to:

 Support student communication


 Create teacher management portals
 Power remote learning virtual classrooms
Cloud management in education creates plenty of new collaborative possibilities. It’s the easiest way to
create an environment where educators, students, and parents can stay on the same page.



3. Easy Access and Resource Availability


A cloud based education platform also improves physical and digital access to resources. It makes it
easier for students to access the same materials and learning resources, regardless of the devices or
internet browsers they use. Virtual solutions like cloud computing also provide ongoing learning
opportunities for all students. They are often implemented in accordance with the Web Content
Accessibility Guidelines (WCAG). The WCAG is a list of recommendations that improve the accessibility
of web-based content for people with disabilities and across mobile platforms. WCAG-compliant online
learning ensures that students with mobility issues or learning impairments can receive personalized
instructional programs that properly address their needs.

4. Scalability
Compared with scaling on-premise data centers, cloud based software helps to reduce costs associated
with facility growth. No matter how many students you have or higher education facilities you manage,
your cloud system can grow alongside you.

5. Modernizing Learning Environments


Solutions like the VMware cloud for education, available through Microsoft DaaS, are a good way to
prepare educational institutions for the future. These technologies make your school more desirable for
incoming students and allow you to provide a higher standard of learning.

5 Considerations of Cloud Computing

Despite its benefits, there are also a few cloud computing issues and challenges in education.

1. Dependence on Internet Service Providers


An unfortunate reality of cloud computing in education is its reliance on internet access. Unlike
traditional classrooms, service outages or poor bandwidth from internet service
providers can detract from online learning.

Working with a managed service provider helps to quickly determine whether the
source of the issue is the end user or the cloud provider. A solution can then be implemented to
provide you with improved access and connectivity.

2. Less Control
Although a benefit of the cloud is accessibility to services and platforms in the education sector (like
Blackboard), the concern is that you have less control over updates, training, and other features.

Since the solution is being handled “as a service,” the infrastructure is handled by the cloud
service provider and abstracted from your in-house team.


Everything is hosted off-site, so you’ll have less control over the infrastructure and the system
setup. These are handled by your cloud service provider.

3. Vendor Commitment
Cloud solutions for higher education depend on the services of a single vendor. You typically
can’t switch between service providers.

Working with an MSP can help you choose the right vendor for your needs. When moving
educational workloads to the cloud, picking the right provider is critical.

A good provider listens to you, understands your risk and manages it from beginning to end,
eliminating any unforeseen issues that may occur.

In most cases, once you sign with a provider, you will be locked into a service contract with
them. However, most providers will let you out of a contract, but will charge you a penalty for
breaking the contract early. This may not be a problem if you’re satisfied with your services, but
it’s worth mentioning all the same.

4. Security
Cloud-based education technology is secure when set up correctly, but there are inherent security
risks when all assets are hosted online. Improperly secured cloud systems may be vulnerable to
cyberattacks, and data security becomes a bigger concern.

This concern escalates when users access resources across devices. If a device with saved
credentials gets stolen, the cloud platform becomes accessible to an unauthorized user.

To avoid these issues, you’ll need to make security a priority. This begins with a proper setup of
your cloud infrastructure and ensuring that all users are trained in cloud security best practices.

For example, adopting mobile device management (MDM) or multi-factor authentication through an
MSP would offset many security concerns. This would also provide more protection against
end-user device vulnerabilities.

5. Up-Front Costs
While cost reduction is one of the primary benefits of cloud computing in education, there are
also some up-front costs.

The migration may be costly, depending on how many applications or services you’re moving to
the cloud. There’s also an opportunity cost in the time required to train staff on the new system
and security best practices.

The savings come from long-term reductions in operational IT costs, so administrators
will need to be prepared to wait for those long-term savings to be realized.
