Fog and Cloud Computing

The document provides an overview of various cloud computing concepts, including community clouds, hypervisors, virtualization, and cloud security. It explains deployment models such as public, private, and hybrid clouds, as well as the importance of virtualization in optimizing hardware resources. Additionally, it discusses energy efficiency in data centers, cloud security threats, and the management of data in cloud computing environments.


F AND C IMP QUES 2

2M
UNIT 3
1. WHAT IS COMMUNITY CLOUD
A Community Cloud is a cloud computing model where multiple organizations with similar
requirements share the same infrastructure. It can be managed by one or more organizations or a
third-party provider. This model allows organizations to share resources, costs, and services, while still
maintaining a higher level of privacy and control than public cloud services.

Examples

➢ Industry-specific cloud environments for sectors like healthcare, finance, or education.

Advantage

➢ Cost-effective for similar organizations as resources and infrastructure are shared.

Disadvantage

➢ Limited scalability compared to public cloud due to shared resources.

2. DEFINE XEN HYPERVISOR


Xen Hypervisor is a Type-1 hypervisor that enables multiple operating systems, referred to as virtual
machines or guests, to share one physical machine. It consists of an abstraction layer between the
hardware and the guest operating systems.

Features:

➢ Open-source and lightweight, widely used in enterprise and cloud environments


➢ Supports both para-virtualization and hardware-assisted virtualization
➢ Used by leading cloud providers such as Amazon Web Services
➢ Delivers high performance, security, and VM isolation

3. WHAT IS VIRTUALBOX AND VIRTUALISATION OS


VirtualBox is an open-source and free virtualization software created by Oracle that enables one to
execute various operating systems (guest OS) on a single physical machine (host OS) simultaneously.

Features:

➢ Type-2 Hypervisor – executes atop a host OS such as Windows, Linux, or macOS.


➢ Enables testing or utilization of various OSs (e.g., Linux atop Windows) without dual booting.
➢ Supports functionality such as snapshots, shared folders, and USB device support.

A Virtualization OS is an operating system that supports virtualization technology, allowing you to


create and run multiple virtual machines on one physical computer.

Examples:

➢ Windows with Hyper-V


➢ Linux with KVM
➢ macOS with Parallels
4. WHAT IS VMWARE, SYSTEM VM, PROCESS VM
VMware is a company that provides virtualization software, allowing you to run multiple operating
systems on a single physical machine. It helps create and manage virtual machines using tools like:

➢ VMware Workstation (for desktops)


➢ VMware vSphere/ESXi (for servers)
➢ VMware Fusion (for Mac)

Key Purpose: Efficient use of hardware by running multiple isolated environments (OSes) on one
system.

A System Virtual Machine is a full virtualization environment that emulates a complete physical
computer. It allows an OS and applications to run just like they would on actual hardware.

Examples:

➢ Running Linux inside Windows using VirtualBox or VMware


➢ Android emulators for app testing

Key Feature: It replicates a full hardware system for the guest OS, offering complete isolation from the
host OS.

A Process Virtual Machine is a type of virtual machine designed to run a single program or process
independently of the underlying operating system and hardware.

Features:

➢ It starts when the process starts and ends when the process ends.
➢ Provides a platform-independent environment for executing a single application.
➢ Mainly used for program execution, portability, and isolation.
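The idea of a process VM — a platform-independent execution environment that lives only as long as its program — can be sketched as a toy stack-based interpreter. The opcodes here are made up for illustration; real process VMs such as the JVM or CPython's bytecode loop are far more elaborate.

```python
# Toy stack-based process VM (illustrative only).
def run(program):
    """Execute a list of (opcode, arg) instructions on a stack machine."""
    stack = []
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack

# (2 + 3) * 4 expressed for the toy VM
result = run([("PUSH", 2), ("PUSH", 3), ("ADD", None),
              ("PUSH", 4), ("MUL", None)])  # → [20]
```

The "VM" here exists only for the duration of `run()` — exactly the start-with-the-process, end-with-the-process behaviour described above.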

5. WHAT IS VM MONITOR
A Virtual Machine Monitor (VMM) or Hypervisor is software, firmware, or hardware that creates and manages virtual machines (VMs).

VMM is executed on a host machine and supports multiple guest operating systems running on one
physical machine.

Resource Allocation:

➢ It assigns CPU, memory, and storage resources to every VM based on demand.

Types of VMM / Hypervisors:

➢ Type 1 (Bare Metal): Executes natively on hardware (e.g., VMware ESXi, Xen).
➢ Type 2 (Hosted): Executes atop an OS (e.g., VirtualBox, VMware Workstation).

Isolation & Security:

It offers isolation among VMs, which makes the system more secure and stable.

Use Case:

Widely used in cloud computing, data centres, and software test environments.
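The demand-based resource allocation described above can be sketched as a small function. The proportional-scaling policy is an illustrative assumption, not how any particular hypervisor actually schedules; real VMMs use shares, reservations, and limits.

```python
def allocate(host_capacity, demands):
    """Grant each VM its demand if possible; scale all demands down
    proportionally when they exceed the host's physical capacity."""
    total = sum(demands.values())
    if total <= host_capacity:
        return dict(demands)
    scale = host_capacity / total
    return {vm: d * scale for vm, d in demands.items()}

# 8 physical cores shared by three VMs asking for 10 in total
shares = allocate(8, {"vm1": 4, "vm2": 4, "vm3": 2})
```

With demand exceeding capacity, every VM receives 80% of its request, so the host is never oversubscribed.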
UNIT 4
1. HOW IS ENERGY EFFICIENCY ACHIEVED IN DATA CENTERS
Contemporary data centres seek to minimize energy use while ensuring performance and reliability.
Energy efficiency is attained through:

Virtualization

This minimizes the requirement for multiple physical servers by executing numerous virtual machines
on a single server, decreasing power and cooling needs.

Energy-Efficient Hardware

Devices designed for low power consumption reduce the facility's overall energy draw.

Green Energy Sources

Most data centres are powered with renewable sources such as solar and wind to cut down on carbon
footprint.
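The consolidation idea behind virtualization's energy savings — packing VMs onto fewer servers so idle machines can be powered down — can be illustrated with a first-fit packing sketch. The loads and capacity below are made-up numbers in arbitrary "CPU units".

```python
def consolidate(vm_loads, server_capacity):
    """First-fit placement: pack VM loads onto as few servers as
    possible so the remaining servers can be powered down."""
    servers = []     # remaining free capacity per active server
    placement = []   # server index chosen for each VM
    for load in vm_loads:
        for i, free in enumerate(servers):
            if load <= free:
                servers[i] -= load
                placement.append(i)
                break
        else:
            # no existing server fits; power on a new one
            servers.append(server_capacity - load)
            placement.append(len(servers) - 1)
    return placement, len(servers)

# Six VMs that would naively need six hosts fit on two
placement, active = consolidate([5, 3, 4, 2, 3, 2], server_capacity=10)
```

Here `active` comes out as 2: four fewer physical servers drawing power and needing cooling than a one-VM-per-machine deployment.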

2. EXPLAIN CLOUD SECURITY THREATS


Cloud security threats are risks and vulnerabilities associated with cloud-based systems, services, and data. Some of the threats are:

Data Breaches

Unsecured access to sensitive or confidential information stored in the cloud.

Insecure APIs

APIs are generally used to access cloud services, and if not securely configured, they can be
compromised.

Data Loss

Irreversible loss of data through malicious attacks, software flaws, or human mistakes without backup.

3. HOW IS DATA MANAGED IN CLOUD COMPUTING


Cloud computing data management is the manner in which data is stored, accessed, preserved, and
protected within a cloud system. It encompasses:

Data Storage

Data is stored in distributed cloud storage systems, usually in multiple data centres.

Data Backup and Recovery

Data is backed up regularly to ensure data security and recover data after failure or disaster.

Data Access and Sharing

Cloud facilitates easy access and sharing of data from any device, at any time.
4. WHAT IS CLOUD SIMULATOR AND FUNDAMENTAL CLOUD SECURITY
What is a Cloud Simulator?

A Cloud Simulator is an application that models cloud computing environments for research, testing, or teaching without using real cloud infrastructure.

Features:

➢ Assists in testing resource allocation, load balancing, scheduling, etc.


➢ Reduces costs by not deploying real clouds.
➢ Common examples: CloudSim, iFogSim, GreenCloud.

Applications:

➢ Cloud computing research.


➢ Performance evaluation.
➢ Teaching and learning about cloud concepts.
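A minimal flavour of what simulators like CloudSim model — dispatching tasks across hosts of different speeds and measuring the resulting load — can be sketched in a few lines. The round-robin policy, task lengths, and host speeds are illustrative assumptions, not CloudSim's actual API.

```python
import itertools

def simulate(task_lengths, host_speeds):
    """Round-robin dispatch of tasks to hosts; returns each host's
    total busy time (task length / host speed)."""
    busy = [0.0] * len(host_speeds)
    hosts = itertools.cycle(range(len(host_speeds)))
    for length in task_lengths:
        h = next(hosts)
        busy[h] += length / host_speeds[h]
    return busy

# Four tasks of 100 work units on two hosts, one twice as fast
busy_times = simulate([100, 100, 100, 100], [1.0, 2.0])  # → [200.0, 100.0]
```

Even this toy exposes the kind of question simulators answer: round-robin leaves the fast host idle half the time, so a load-aware scheduler would finish sooner — all learned without renting a single real machine.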

What is Fundamental Cloud Security?

Cloud Security is the collection of policies, technologies, and controls that are employed to safeguard
data, applications, and infrastructure in cloud environments.

Elements:

➢ Data Protection – Encryption and safe storage of user data.


➢ Identity & Access Management – Authentication ensuring that only the right users can access resources.
➢ Network Security – Employing firewalls, VPNs, and secure communication.
➢ Compliance & Privacy – Complying with legal and regulatory requirements.
➢ Threat Detection – Surveillance for malicious software, intrusions, and unwarranted access.

5. WHAT IS STORAGE SECURITY


Storage security is the group of technologies, policies, and practices that defend stored data in digital
storage systems (such as hard disks, SSDs, or cloud storage).

Secures Data at Rest

Guarantees stored data is secure against unauthorized access, theft, or loss.

Encryption

Employing encryption to render stored data unreadable unless the appropriate keys are available.

Access Control

Permits just approved users or systems to view or change stored data.

Backup & Recovery

Avoids data loss via constant backups and supports fast recovery on failure.
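The backup-and-verify step can be sketched with a checksum comparison. The file names are hypothetical, and real backup systems add scheduling, versioning, and off-site replication on top of this basic integrity check.

```python
import hashlib
import os
import shutil
import tempfile

def sha256(path):
    """Checksum a file's contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def backup(src, dst):
    """Copy a file, then confirm the copy is intact by comparing
    checksums of source and destination."""
    shutil.copyfile(src, dst)
    return sha256(src) == sha256(dst)

# Demo on a throwaway file
with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "data.txt")
    with open(src, "w") as f:
        f.write("sensor readings")
    ok = backup(src, src + ".bak")  # True when the copy verifies
```

Verifying after copying is what lets a recovery procedure trust the backup later, rather than discovering corruption only when restoring after a failure.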

Cloud & On-premises

Applies to local servers, data centres, and cloud storage platforms alike.
12M
UNIT 3
1.
A. EXPLAIN THE DEPLOYMENT MODELS-PUBLIC CLOUD, PRIVATE CLOUD AND HYBRID
CLOUD

PUBLIC CLOUD
➢ A public cloud is a publicly accessible cloud environment owned by a third-party cloud
provider.
➢ The IT resources on public clouds are usually provisioned via cloud delivery models.
➢ The cloud provider is responsible for the creation and on-going maintenance of the public
cloud and its IT resources.
➢ Available to everyone and anyone can go and sign up for the service.
➢ Economies of Scale due to Size.
➢ Some public cloud concerns
– Ownership
– Control
– Regulatory compliance
– Data/Application security
– Liability for SLA breaches
PRIVATE CLOUD

➢ A private cloud is owned by a single organization.


➢ Private clouds enable an organization to use cloud computing technology as a means of
centralizing access to IT resources by different parts, locations, or departments of the
organization.
➢ When a private cloud exists as a controlled environment, the risks and challenges typically associated with public clouds do not tend to apply.
➢ Cloud infrastructure built in house, Retains control of resources
➢ More security & privacy and can conform to regulatory requirement
➢ Needs capital investment and expertise to build and maintain

HYBRID CLOUD

➢ A hybrid cloud is a cloud environment comprised of two or more different cloud deployment
models. Best of Both World
➢ Workload is deployed mostly on private cloud
➢ Resources can be used from public cloud when there is a surge in peak load (Cloud Burst)
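The cloud-burst behaviour described above can be sketched as a small placement function. The workload names and capacity figures are made up for illustration.

```python
def route(workloads, private_capacity):
    """Place work on the private cloud until it is full, then 'burst'
    the overflow to the public cloud."""
    placement = {}
    used = 0
    for name, size in workloads:
        if used + size <= private_capacity:
            placement[name] = "private"
            used += size
        else:
            placement[name] = "public"
    return placement

# A surge: 12 units of work against 8 units of private capacity
placement = route([("web", 4), ("db", 3), ("batch", 5)], private_capacity=8)
# → web and db stay private; batch bursts to the public cloud
```

This is the "best of both worlds" in miniature: steady workloads keep private-cloud control, while peak load borrows public-cloud elasticity.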
B. WHAT IS HYPERVISOR
A hypervisor (also called a virtual machine monitor) is software that creates and runs virtual machines by mediating between guest operating systems and the physical hardware. A bare-metal hypervisor provides the most control, flexibility, and performance because it is not subject to the limitations of a host OS. It relies on its own software drivers for the hardware, which may limit portability to other platforms. Examples of this method are VMware ESX and IBM's mainframe z/VM.

Types of Hypervisors:

Type 1 (Bare Metal):

➢ Runs directly on hardware without a host OS.


➢ Offers full virtualization (complete hardware simulation).
➢ Examples: VMware ESX/ESXi, Oracle VM, LynxSecure, Wind River VxWorks.
➢ Guest OSes can be the same or different.
➢ CPU virtualization support (e.g., AMD-V, Intel VT-x) may need to be enabled in BIOS.

Type 2 (Hosted):

➢ Runs on top of an existing OS (host OS).


➢ Emulates hardware via software interfaces.
➢ Supports paravirtualization (host OS handles I/O via para-API).
➢ Examples: Microsoft Hyper-V, KVM, VMware Workstation, Parallels, Xen.
➢ Xen (used by AWS) runs on Linux and supports paravirtualization.

Hardware

The physical computer resources.

Micro Hypervisor

A very small, minimal hypervisor layer that sits directly on the hardware. It's responsible for basic
resource management and virtualization.

Service VM

A special virtual machine that runs the "Service OS". This OS handles device drivers and other system-
level services.

VMs

These run "Guest OS" and applications. Each VM is isolated from the others.
2. EXPLAIN VIRTUALISATION, TYPES AND NEED FOR VIRTUALISATION WITH EXAMPLE
Virtualization allows multiple operating system instances to run concurrently on a single computer; it is a means of decoupling the operating system from the underlying hardware.

In a traditional setup, an application runs directly on an operating system, which in turn interacts with the physical hardware. In a virtualized setup, the same hardware is managed by a virtualization layer (such as VMware's). This layer allows multiple virtual machines to run concurrently; each VM has its own operating system and applications, isolated from the others but sharing the underlying hardware resources.

Before Virtualization:

➢ Single OS image per machine


➢ Software and hardware tightly coupled
➢ Running multiple applications on same machine often creates conflict
➢ Underutilized resources
➢ Inflexible and costly infrastructure

After Virtualization:

➢ Hardware-independence of operating system and applications


➢ Virtual machines can be provisioned to any system
➢ Can manage OS and application as a single unit by encapsulating them into virtual machines

Types of Virtualization
Hardware: Virtualizes physical hardware resources.

➢ Full: Mimics the whole hardware setup.


➢ Bare-Metal: Hypervisor executes directly on the hardware (Type 1).
➢ Hosted: Hypervisor executes above an already installed OS (Type 2).
➢ Partial: Virtualizes only certain hardware elements.
➢ Para: Needs guest OS modifications for improved performance.

Network: Virtualizing network resources.

➢ Internal Network Virtualization: Establishing virtual networks inside one host.


➢ External Network Virtualization: Taking virtualization outside multiple physical networks.

Storage: Virtualizing storage resources.

➢ Block Virtualization: Logical block treatment of physical storage.


➢ File Virtualization: Hiding storage locations of files.

Memory: Memory virtualization.

➢ Application Level Integration: Application-specific memory virtualization.


➢ OS Level Integration: Operating system-based memory virtualization.

Software: Virtualization of software components.

➢ OS Level: Virtualization of whole operating systems.


➢ Application: Virtualization of standalone applications.
➢ Service: Virtualization of a specific service.

Data: Virtualization of data.

➢ Database: Virtualization of database systems.

Desktop: Virtualization of desktop environments.

➢ Virtual Desktop Infrastructure (VDI): Centralized management of virtualized desktops.


➢ Hosted Virtual Desktop: Server-based virtual desktops accessed over the internet.
Physical Hardware (Bottom Layer)

This includes:

➢ CPU (the brain of the computer)


➢ I/O (Input/Output) devices
➢ RAM (memory)
➢ Disk (storage)
➢ This is your actual, physical computer.

Virtualization Layer (Middle Layer)

This is software called a Hypervisor.

What it does:

➢ Sits between your physical hardware and the operating systems.


➢ Splits the physical machine into multiple virtual machines.
➢ Each virtual machine thinks it has its own CPU, memory, etc.

Virtual Hardware & Virtual Machines (Top Layer)

➢ Each virtual machine (VM) includes:


➢ Its own Operating System (OS)
➢ Its own Application (APP)

Advantages:

➢ Better hardware use – Multiple systems run on one physical machine.


➢ Cost savings – Less physical equipment needed.

Disadvantages:

➢ Performance hit – Slightly slower than running directly on hardware.


➢ Complex setup – Needs skilled management and planning.
Hardware Virtualization
It is the abstraction of computing resources from the software that uses cloud resources. It involves
embedding virtual machine software into the server's hardware components. That software is called
the hypervisor.

Advantages:

➢ Better resource utilization.


➢ Easier OS testing and deployment.

Disadvantages:

➢ Performance overhead from the hypervisor.


➢ Requires powerful hardware for efficiency.

Software Virtualization
Software virtualization is similar to hardware virtualization, except that it abstracts the software installation procedure and creates virtual software installations. Installing and distributing many applications has become a routine task for IT firms and departments, and the installation mechanism differs from one application to another; virtualized installations avoid this per-machine variation.

Advantages:

➢ Simplifies software deployment and testing.


➢ Avoids conflicts between different app versions.

Disadvantages:

➢ May not support all software features.


➢ Can be tricky to manage across many systems.

Server Virtualization
In this process, the server's resources are kept hidden from the user. The physical server is partitioned into several virtual environments, so that one virtual server can be dedicated to a single application or task. This technique is mainly used for web servers, where it reduces the cost of web-hosting services: instead of a separate physical system for each web server, multiple virtual servers can run on the same machine.

Advantages:

➢ Reduces hardware and hosting costs.


➢ Improves server efficiency and isolation.

Disadvantages:

➢ If one physical server fails, all VMs are affected.


➢ Can be complex to manage and secure.
3. A. EXPLAIN KERNEL VIRTUAL MACHINE
What is KVM?

KVM (Kernel-based Virtual Machine) is a Linux kernel module that enables a Linux system to function
as a hypervisor. It makes it possible for users to run several, isolated virtual environments (VMs) on a
single physical machine.

Features:

Full Virtualization

➢ Supports execution of unmodified guest OS (Linux, Windows, etc.) with full virtual hardware
access.

Uses Hardware Virtualization

➢ Needs hardware virtualization support (Intel VT-x or AMD-V).


➢ Offers improved performance over software-only virtualization.

Part of Linux Kernel

➢ Included by default in Linux kernel version 2.6.20 and later.


➢ No requirement for external package to install hypervisor.

Open Source

➢ Free to use and supported by large community as well as enterprise (Red Hat).

Support for Many Architectures

➢ Initially designed for x86 but now supports architectures such as IBM S/390, Intel IA-64, ARM.

Dynamic Allocation of Resources

➢ Supports allocation of CPU, RAM, storage dynamically among VMs.

Isolation for Security

➢ Each VM is completely isolated, with strong security boundaries.

Tool Support

➢ Usually paired with tools such as QEMU for hardware emulation and libvirt for management.

Advantages:

➢ High performance due to hardware acceleration.


➢ Strong integration with Linux systems.
➢ Scalable – can run lots of VMs depending on host capacity.

Disadvantages:

➢ Hardware support required (Intel VT/AMD-V).
➢ Performance is contingent upon host OS effectiveness.

Background:

➢ Initially developed by Qumranet, later acquired by Red Hat in 2008.
➢ Fully virtualized solution with a small code base for Linux on x86 hardware with virtualization extensions (Intel VT or AMD-V).
➢ Designed as a small, light kernel module to leverage the facilities provided by hardware support for virtualization.
➢ Originally supported x86 processors but has since been ported to IBM S/390 and Intel IA-64.
➢ Free, open-source virtualization architecture; the kernel component comes standard in vanilla Linux (2.6.20).

KVM hypervisor Type 1 or Type 2: Still up for discussion

➢ Implemented as a kernel module, allowing Linux to become a hypervisor simply by loading it.
➢ The device appears as /dev/kvm and is controlled through ioctl() system calls to create new VMs, assign memory, etc.
➢ Hardware emulation and platform virtualization are handled by QEMU.

➢ Linux Kernel: The base operating system kernel.


➢ KVM Driver: A kernel module loaded into the Linux kernel, enabling virtualization capabilities.
➢ Normal User Process: Represents standard applications running on the host OS.
➢ Guest Mode: Represents virtual machines running on top of KVM.
➢ Qemu I/O: Indicates that QEMU handles the I/O operations for the guest VMs.
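Because loading the KVM module exposes the /dev/kvm device node, a quick presence check gives a hint of whether a Linux host can use KVM. This is a hedged sketch — presence of the node indicates the module is loaded, but it is not a full capability test (permissions and BIOS settings also matter).

```python
import os

def kvm_available():
    """On Linux, loading the KVM kernel module exposes /dev/kvm;
    its presence suggests (but does not guarantee) KVM is usable."""
    return os.path.exists("/dev/kvm")

print("KVM device present:", kvm_available())
```

On a non-Linux host, or one without hardware virtualization enabled, this simply reports False rather than failing.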

B. WHAT IS EUCALYPTUS AND WHY IT IS USED


Eucalyptus (Elastic Utility Computing Architecture for Linking Your Programs To Useful Systems) is an open-source software platform used to build private and hybrid cloud environments. It helps organizations create their own cloud services, similar to Amazon Web Services (AWS).
1. Open Source & Extensible

➢ Fully open-source, thus users can customize it to suit their requirements.


➢ Implemented primarily in Java and C.
➢ Facilitates customization and integration into current systems.

2. IaaS Model Support

➢ Eucalyptus is mainly Infrastructure-as-a-Service (IaaS) oriented.


➢ Supports virtual machines, storage, and networks on demand.

3. Hybrid Cloud Capability

➢ Supports a hybrid cloud configuration by bridging private clouds with AWS.


➢ You can offload workloads to AWS if your resources in your private cloud are exhausted.

4. Compliant with Current Infrastructure

➢ May be installed on Linux servers.


➢ Uses KVM or Xen hypervisors to virtualize machines.
➢ No prior hardware requirements – utilizes what is already available.

5. Security Capabilities

➢ Provides role-based access control (RBAC).


➢ X.509 certificates for secure communication between parts.
➢ User and group administration to provide access to the cloud.

6. Components Overview

7. Monitoring & Management

➢ Offers web-based dashboard and CLI utilities.


➢ Monitors usage, status, and health of cloud resources.

8. Use Cases

➢ Private or hybrid cloud-building enterprises


➢ Government or research bodies requiring cloud control
➢ Educational institutions teaching cloud concepts
➢ Developers that require local AWS-like environments for testing
UNIT 4
1. EXPLAIN THE CLOUD DATA CENTRE WITH APPLICATIONS
What Is a Data Centre?

A data centre is a physical building that companies use to house and maintain computers, networking
equipment, and data. It's essential for hosting applications, storing data, and maintaining services.

Key Components of a Data Centre

➢ IT Equipment: Servers, storage systems, networking devices (switches, routers, firewalls).


➢ Infrastructure: Power supplies (UPS, generators), cooling systems (AC, ventilation), and
network connections.
➢ Security: Physical security (locked rooms, monitoring) and fire protection (chemical systems,
not sprinklers).

Data centre Types

Single-site:

➢ Centralized infrastructure.
➢ Simpler to manage with a single location.
➢ Appropriate for small to mid-size businesses.
➢ Lower operational and staffing expenses.
➢ Limited fault tolerance — single point of failure.

Multi-site:

➢ Infrastructure dispersed over multiple geographic locations.


➢ Minimizes latency by serving users from close-by sites.
➢ Supports load balancing and disaster recovery.
➢ Improves business continuity and uptime.
➢ Increased cost and complexity in setup and synchronization.
➢ Excellent for compliance requirements (HIPAA, GDPR, etc.) owing to physical and
electronic security.

Mobile Data Centres

➢ Completely enclosed, pre-configured, and pre-tested units.


➢ Usually contains servers, power, cooling, and networking in a single container.
➢ May be deployed in remote areas where it is not possible to have conventional data
centres.
➢ May be powered off-grid using generators or renewable sources.
➢ Quick deployment — ideal for use in military, disaster recovery, mining, or oil & gas
industries.
➢ Easy relocation — can be relocated as project requirements change.
➢ Suitable for temporary data processing, edge computing, or proof-of-concept projects.
➢ Limited in its expansion and not ideal for large-scale, long-term business operations.

Data centre Tiers
Design & Architecture Factors

➢ Selected due to location (natural disaster susceptibility, power accessibility, political climate).
➢ Must be able to accommodate equipment size, airflow, as well as employees' safety.

Energy Use & Efficiency

➢ Megawatts of power can be consumed by data centres.


➢ Green data centres target minimizing environmental footprints.
➢ Power Usage Effectiveness (PUE) = Total Power ÷ IT Equipment Power.
➢ Virtualization assists in minimizing energy consumption through workload consolidation.
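The PUE formula above is a one-line calculation; the 1.5 MW / 1.0 MW figures below are illustrative, not measurements from any real facility.

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: total facility power divided by the
    power reaching IT equipment. 1.0 is the theoretical ideal."""
    return total_facility_kw / it_equipment_kw

# A facility drawing 1.5 MW in total to run 1.0 MW of IT load
print(pue(1500, 1000))  # → 1.5
```

A PUE of 1.5 means that for every watt doing computing, another half watt goes to cooling, power conversion, and other overhead — which is why green data centres chase values closer to 1.0.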

Security & Safety

➢ Comprises badge access, video surveillance, and fire suppression (non-water systems).
➢ Constructed to securely accommodate equipment and prevent unauthorized access.

Infrastructure Management & Monitoring

➢ DCIM (Data centre Infrastructure Management) software facilitates remote monitoring and
control.
➢ Virtualization enables servers, storage, and networks to be pooled and made more efficient.

Why It Matters

➢ Modern businesses are highly dependent on cloud and data centres for:
➢ Quick, dependable services
➢ Optimized resource use
➢ Security and recovery from disasters
➢ Scalability and adaptability

Applications of Cloud Data Centres

➢ Web Hosting & E-commerce


➢ Cloud Storage Services
➢ Artificial Intelligence & Machine Learning
➢ Big Data Analytics

Features:

➢ Elasticity: Automatically scale up/down based on demand.


➢ Pay-as-you-go: Cost-effective, no hardware to buy.
➢ Global Reach: Companies like AWS, Azure, and Google Cloud have regions all over the world.
➢ Multi-tenancy: Single data centre serves multiple clients securely.
➢ Self-service Portals: Users can provision virtual machines, storage, and apps in seconds
through a web interface.
➢ API Access: Interconnect systems and automate processes through APIs.

Examples of Cloud Data centre Providers

➢ Amazon Web Services (AWS) – EC2, S3, Lambda, etc.


➢ Microsoft Azure – Azure Virtual Machines, Blob Storage.
➢ Google Cloud Platform (GCP) – Compute Engine, BigQuery.
➢ IBM Cloud, Oracle Cloud, Alibaba Cloud, etc.
2. WHAT IS SENSOR CLOUD EXPLAIN IN DETAIL
Sensor-Cloud is an emerging cloud computing paradigm that uses physical sensors to collect data and forwards all the sensor data to a cloud computing platform. Sensor-Cloud manages and processes this sensor data effectively, and the data is utilized in numerous monitoring applications.

Architecture

1. Data Analysis

➢ Definition: Processes great amounts of sensor data using cloud computing.


➢ Example: Analysing traffic sensor data for urban planning.
➢ Advantage: Provides profound insights into massive data sets.
➢ Disadvantage: Requires robust data security to avoid breaches.

2. Scalability

➢ Definition: Expands sensor networks simply by utilizing cloud resources.


➢ Example: Deploying hundreds of additional sensors to track environmental changes.
➢ Advantage: Scales on demand without additional hardware.
➢ Disadvantage: Costs can rise with increased use.
3. Collaboration

➢ Definition: Enables many to access and collaborate on sensor information.


➢ Example: Researchers at several universities collaborating using the same weather
information.
➢ Advantage: Fosters collaborative work and collaborative findings.
➢ Disadvantage: Can lead to data privacy or ownership problems.

4. Visualization

➢ Definition: Graphical facilities to visualize and interpret sensor data.


➢ Example: Heatmaps of pollution across city sensors.
➢ Advantage: Easier detection of trends and patterns.
➢ Disadvantage: High-level visualizations need training to decipher.

5. Storage & Processing

➢ Definition: Provides cloud-based storage and computing resources.


➢ Example: Storage of years of temperature information from farm fields.
➢ Advantage: No physical servers need to be purchased.
➢ Disadvantage: Internet reliance for access and computation.

6. Dynamic Access

➢ Definition: Anytime, anywhere access by the user.


➢ Example: Farmers accessing sensor information through smartphone applications.
➢ Advantage: Boosts mobility and ease of use.
➢ Disadvantage: Internet connectivity required at all times.

7. Multitenancy

➢ Definition: Shared cloud infrastructure for many users and services.


➢ Example: Several smart home applications accessing cloud-based stored sensor information.
➢ Advantage: Saves cost by sharing resources.
➢ Disadvantage: Potential performance degradation due to sharing of resources.

8. Automation

➢ Definition: Services and work are automatically handled and provided.


➢ Example: Auto-scaling cloud storage as data increases.
➢ Advantage: Accelerates operations and minimizes manual effort.
➢ Disadvantage: More difficult to manage if automation goes wrong or misses the mark.

9. Flexibility

➢ Definition: Can easily change and utilize different apps and services when needed.
➢ Example: Changing between different data visualization tools.
➢ Advantage: Quickly adjusts to evolving user requirements.
➢ Disadvantage: Overload of choices leads to confusion.
10. Resource Agility

➢ Definition: Quick setup and deployment of cloud resources.


➢ Example: Starting a sensor-based alert system during emergencies.
➢ Advantage: Quick reaction to new requirements.
➢ Disadvantage: Can increase expenses if not managed efficiently.

11. Resource Optimization

➢ Definition: Reduced use of computing and storage capacity.


➢ Example: Sharing server space with various weather monitoring services.
➢ Advantage: Reduces cost and waste of underutilized resources.
➢ Disadvantage: Optimization can result in slow performance with heavy loads.

12. Real-Time Response

➢ Definition: Real-time data processing and user feedback via the cloud.
➢ Example: Instant alerts from motion detectors in security systems.
➢ Benefit: Facilitates quick, well-informed decisions.
➢ Drawback: Delays are possible if network connectivity is low.
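The real-time response feature can be sketched as a simple threshold filter, the kind of check a sensor-cloud alerting service might run on incoming readings. The sensor names and the 0.5 threshold are made up for illustration.

```python
def check_readings(readings, threshold):
    """Return the (sensor, value) pairs that cross the threshold,
    as a real-time alerting service might flag them."""
    return [(sensor, value) for sensor, value in readings
            if value > threshold]

# Motion-sensor readings; only the second one trips the alert
alerts = check_readings([("motion-1", 0.2), ("motion-2", 0.9)],
                        threshold=0.5)  # → [("motion-2", 0.9)]
```

In a real deployment the filter would run continuously in the cloud and push notifications out, which is exactly where the network-latency drawback noted above bites.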

Security can be given by:

Data Encryption

➢ Definition: Encrypts data to allow only specific parties to view it.


➢ Example: Encryption of temperature sensor data prior to sending to cloud storage.
➢ Advantage: Maintains privacy and guards sensitive data throughout transmission and storage.
➢ Disadvantage: Consumes extra computation power, slowing devices down.

Authentication & Authorization

➢ Definition: authenticate user identity and permit access per role.


➢ Example: Two-factor authentication upon login by a scientist to be allowed access to certain
sensor data.
➢ Benefit: It keeps unauthorized users from accessing or changing data.
➢ Drawback: Weak implementation still results in breaches (e.g., poor passwords).
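The authorization side of the above can be sketched as a minimal role-permission lookup. The roles and permission sets are illustrative, not any specific IAM product's model, and real systems layer authentication, two-factor flows, and auditing on top.

```python
# Minimal role-based access sketch (roles and permissions are made up).
PERMISSIONS = {
    "admin": {"read", "write", "delete"},
    "scientist": {"read"},
    "guest": set(),
}

def allowed(role, action):
    """Permit an action only if the user's role grants it;
    unknown roles get no access at all."""
    return action in PERMISSIONS.get(role, set())

print(allowed("scientist", "read"))   # → True
print(allowed("scientist", "write"))  # → False
```

Defaulting unknown roles to the empty permission set is the fail-closed behaviour that keeps a weak or missing role assignment from silently granting access.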
3.
A. EXPLAIN THE MOBILE CLOUD COMPUTING SERVICE MODELS
Mobile Cloud Computing, or MCC, combines the rapidly expanding Cloud Computing Applications
market with the pervasive smartphone. One of the most revolutionary combinations of contemporary
technologies, MCC has already demonstrated itself to be extremely useful to all the cloud-based
service-providers and mobile users alike.

In this approach, easy-to-use mobile applications are created, driven by and hosted on cloud computing technology. The 'mobile cloud' strategy gives app developers the opportunity to build applications made specifically for mobile users, which can be used without being tied to the device's operating system or its capacity to store data. The data-processing and data-storage tasks are performed outside the mobile device.

What makes MCC distinct from the notion of 'Mobile Computing' is its ability to have the device execute cloud web applications rather than only native applications. Clients remotely access stored applications and their related data over the Internet whenever needed, by subscribing to cloud services. While the majority of devices already use a combination of web-based and native applications, the direction things are currently moving appears to be toward services and convenience provided by a mobile cloud.

Researchers are working toward a powerful, symbiotic platform, named the 'Third Platform', that
would unite the mobile device and the cloud. Professionals envision this platform further
accelerating the rise of MCC, which already gives its users a superior way to store and access
data, along with newer data-synchronization methods, improved reliability, and better
performance. These benefits have inspired many people to adopt MCC on their smartphones.

KEY ENABLERS BEHIND THE GROWTH AND SUCCESS OF MOBILE CLOUD COMPUTING

➢ Improved Broadband Connectivity


➢ Enhanced Cloud Storage Capabilities
➢ Emerging Web & Virtualization Technology Adoption
B. WRITE ABOUT MEMORY MANAGEMENT AND DATA MANAGEMENT IN CLOUD
COMPUTING
Cloud data management

Cloud data management is the process of storing, organizing, securing, and analyzing data in the
cloud. As companies move away from conventional data warehouses to cloud systems, proper
management is critical to derive value while maintaining security and efficiency.

Why It Matters

➢ Conventional data centres are expensive and difficult to maintain.


➢ Cloud provides accessibility, scalability, and cost-savings, but requires careful planning.
➢ Effective data management allows companies to fully leverage cloud capabilities.

Main Strategies in Cloud Data Management

1. Security First for Data

➢ Utilize encryption, firewalls, and monitoring.


➢ Ensure security at rest, in transit, and in non-production environments.
➢ Enforce standardized governance policies throughout the enterprise.
➢ Example: Encrypt sensitive customer data stored in AWS using native encryption and IAM
policies.
➢ Benefit: Guards against data breaches.
➢ Drawback: May be tricky to set up correctly.
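The IAM-policy part of the example above can be illustrated with a much-simplified sketch of how policy evaluation works: an explicit Deny always wins, an explicit Allow is otherwise required, and the default is deny. The statements and the `evaluate` helper below are illustrative inventions, not the actual AWS policy engine:

```python
# Simplified sketch of IAM-style policy evaluation.
policy = [
    {"Effect": "Allow", "Action": "s3:GetObject", "Resource": "customer-data/*"},
    {"Effect": "Deny",  "Action": "s3:DeleteObject", "Resource": "customer-data/*"},
]

def evaluate(policy, action, resource):
    """Return True only if an Allow matches and no Deny matches."""
    allowed = False
    for stmt in policy:
        prefix = stmt["Resource"].rstrip("*")     # crude wildcard matching
        if stmt["Action"] == action and resource.startswith(prefix):
            if stmt["Effect"] == "Deny":
                return False                      # explicit deny always wins
            allowed = True                        # explicit allow found
    return allowed                                # implicit deny by default

print(evaluate(policy, "s3:GetObject", "customer-data/report.csv"))     # True
print(evaluate(policy, "s3:DeleteObject", "customer-data/report.csv"))  # False
```

The "tricky to set up correctly" drawback shows up exactly here: one overly broad Allow, or one missing Deny, silently changes what the policy permits.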

2. Optimized Tiered Storage

➢ Store frequently accessed data in high-performance tiers.


➢ Store less-used data in low-cost, high-capacity storage.
➢ Decreases latency and increases efficiency.
➢ Example: Use Amazon S3 Standard for frequently accessed data and S3 Glacier for archival.
➢ Benefit: Reduces storage costs.
➢ Drawback: Delays in retrieval of archived data.
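A tiering decision can be sketched as a simple policy that maps access recency to a storage class. The tier names mirror the AWS example above, but the thresholds are invented for illustration and are not AWS defaults:

```python
def choose_tier(days_since_last_access):
    """Pick a storage tier from how recently an object was accessed.
    Thresholds are illustrative, not provider defaults."""
    if days_since_last_access <= 30:
        return "S3 Standard"       # hot: low latency, higher cost
    elif days_since_last_access <= 180:
        return "S3 Standard-IA"    # warm: infrequent access
    else:
        return "S3 Glacier"        # cold: cheapest, slow retrieval

print(choose_tier(5))    # S3 Standard
print(choose_tier(90))   # S3 Standard-IA
print(choose_tier(400))  # S3 Glacier
```

Real lifecycle rules (e.g. S3 Lifecycle policies) apply the same idea declaratively, transitioning objects between tiers as they age.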

3. Flexibility with Multi-Structured Data

➢ Supports multiple data types (structured, semi-structured, unstructured).


➢ A single, unified approach avoids extra storage and analysis expenses.
➢ Example: Google BigQuery for handling and querying JSON, CSV, and relational data.
➢ Benefit: Facilitates end-to-end data analysis.
➢ Drawback: Requires sophisticated data integration expertise.
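As a toy illustration of the same idea, semi-structured (JSON) and structured (CSV) records can be normalized into one uniform shape using only the standard library. The field names and sample records are invented for the example:

```python
import csv
import io
import json

json_data = '[{"id": 1, "name": "Ada"}]'   # semi-structured source
csv_data = "id,name\n2,Linus\n"            # structured source

rows = json.loads(json_data)
rows += list(csv.DictReader(io.StringIO(csv_data)))

# Normalize types so both sources can be queried together
# (CSV values arrive as strings).
for row in rows:
    row["id"] = int(row["id"])

print(rows)  # [{'id': 1, 'name': 'Ada'}, {'id': 2, 'name': 'Linus'}]
```

Engines like BigQuery do this normalization at much larger scale, which is exactly where the integration expertise mentioned in the drawback comes in.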
Memory management

Memory management in the cloud refers to how computing resources (RAM and virtual memory)
are allocated, scaled, and optimized across different workloads and users. It is handled by cloud
service providers automatically or via user configuration.

1. Dynamic Memory Allocation

➢ Cloud platforms allocate memory on-demand based on application requirements.


➢ If a program needs more memory, it is automatically provisioned.
➢ Example: AWS Lambda or Azure Functions auto-scale memory depending on function size.
➢ Advantage: Efficient resource use — you pay only for what you use.
➢ Disadvantage: Sudden spikes in demand may cause short delays or resource contention.
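A highly simplified model of on-demand allocation and pay-per-use billing: requests are rounded up to the platform's allocation step, and cost is charged per memory-second. The step size, limit, and price below are illustrative assumptions, not any provider's published configuration:

```python
def provision_memory(requested_mb, step_mb=128, max_mb=10240):
    """Round a request up to the platform's allocation step,
    mimicking how serverless platforms size function memory."""
    if requested_mb > max_mb:
        raise ValueError("request exceeds platform limit")
    steps = -(-requested_mb // step_mb)  # ceiling division
    return steps * step_mb

def cost(mem_mb, duration_s, price_per_gb_s=0.0000166667):
    """Pay only for what you use: GB-seconds consumed (illustrative price)."""
    return (mem_mb / 1024) * duration_s * price_per_gb_s

mem = provision_memory(200)        # rounds up to 256 MB
print(mem, round(cost(mem, 3), 8))
```

The "short delays" drawback corresponds to the moment when a spike forces the platform to provision capacity that is not yet warm.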

2. Virtualization and Hypervisors

➢ Virtual machines share underlying physical memory via a hypervisor.


➢ Enables multiple VMs to run on the same hardware while isolating each tenant’s memory
usage.
➢ Example: VMware ESXi, Microsoft Hyper-V.
➢ Advantage: Improves server utilization and cost efficiency.
➢ Disadvantage: Slight overhead due to virtualization layer.
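Memory sharing under a hypervisor can be sketched as simple capacity accounting on a host. This is a toy model under invented sizes; real hypervisors add overcommit, ballooning, and page sharing, which is where the overhead mentioned above comes from:

```python
class Host:
    """Toy hypervisor host that tracks physical memory handed to VMs."""

    def __init__(self, total_mb):
        self.total_mb = total_mb
        self.vms = {}  # each tenant's allocation is tracked in isolation

    def start_vm(self, name, mem_mb):
        used = sum(self.vms.values())
        if used + mem_mb > self.total_mb:
            raise MemoryError(f"not enough physical memory for {name}")
        self.vms[name] = mem_mb

host = Host(total_mb=8192)
host.start_vm("web", 2048)
host.start_vm("db", 4096)
print(sum(host.vms.values()))  # 6144 MB of 8192 MB in use
```

Packing several VMs onto one host this way is what improves the server utilization noted in the advantage bullet.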

3. Memory Optimization Tools

➢ Providers offer tools to monitor and optimize memory use.


➢ Example: AWS CloudWatch, Google Stackdriver.
➢ Advantage: Helps detect memory leaks, optimize performance.
➢ Disadvantage: Requires technical knowledge to interpret metrics correctly.
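Locally, the same idea of monitoring memory to catch leaks can be tried with Python's built-in `tracemalloc`. Unlike CloudWatch, this observes only the current process, and the growing list below simulates a leak:

```python
import tracemalloc

tracemalloc.start()

leak = []
for i in range(10_000):
    leak.append(str(i) * 100)  # simulate allocations that are never released

current, peak = tracemalloc.get_traced_memory()
print(f"current: {current / 1024:.0f} KiB, peak: {peak / 1024:.0f} KiB")
tracemalloc.stop()
```

Interpreting such numbers (steady growth vs. a one-off spike) is exactly the skill the drawback above refers to.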

4. Autoscaling and Load Balancing

➢ Based on memory consumption, autoscaling adjusts the number of instances.


➢ Load balancers distribute workloads to prevent overloading one instance.
➢ Advantage: Prevents application crashes due to memory overuse.
➢ Disadvantage: May introduce costs if scaling aggressively.
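The scaling decision itself can be sketched as a target-tracking rule: scale the instance count so that average memory utilization moves toward a target. The target and bounds are illustrative assumptions:

```python
import math

def desired_instances(current_instances, avg_memory_util,
                      target_util=0.6, min_instances=1, max_instances=10):
    """Target-tracking rule: size the fleet so average utilization
    approaches target_util, clamped to the allowed range."""
    desired = math.ceil(current_instances * avg_memory_util / target_util)
    return max(min_instances, min(max_instances, desired))

print(desired_instances(4, 0.9))  # 6: memory pressure -> scale out
print(desired_instances(4, 0.3))  # 2: underutilized -> scale in
```

The `max_instances` clamp is one simple guard against the cost blow-up mentioned in the drawback.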
