CC Midsem

The document discusses various computing technologies, including virtualization, cloud computing, distributed computing, and grid computing, highlighting their key concepts, advantages, and disadvantages. It outlines different types of hypervisors, cloud service models, and virtualization types, as well as their applications in various fields. Additionally, it addresses future trends in these technologies, such as containerization and edge computing.

Virtualization:
Virtualization is a technology that allows a single physical resource or application to be shared among multiple customers or organizations. It involves creating a logical (virtual) version of something, such as hardware, storage, or an operating system (OS).
Key Concepts:
1) Host Machine: The physical machine on which virtualization is implemented.
2) Guest Machine: The virtual machine (VM) created on the host machine.
3) Pointer System: Assigns a logical name to a physical resource, providing access when needed.
Advantages:
1) Efficient resource utilization.
2) Cost reduction by running multiple applications and operating systems on the same server.
3) Increased flexibility and hardware utilization.

Hypervisor:
A hypervisor, also known as a Virtual Machine Manager (VMM), is software or firmware that allows multiple guest operating systems to run on a single host system simultaneously.
Types of Hypervisors:
(a) Type 1 Hypervisor ("Bare Metal"):
1) Runs directly on the host hardware.
2) Does not require a base operating system.
3) Offers better performance and greater flexibility.
4) Commonly used in production environments.
5) Examples: VMware ESXi, Microsoft Hyper-V, XenServer.
(b) Type 2 Hypervisor ("Hosted"):
1) Runs on a host operating system.
2) Suitable for desktop and development environments.
3) Less efficient and less flexible than Type 1.
4) Examples: Oracle VirtualBox, VMware Workstation, Parallels Desktop.

Types of Virtualization:
1) Hardware Virtualization: Consolidates multiple physical servers into virtual servers on a single physical server. Subtypes:
(a) Full Virtualization: Hardware is fully simulated, with complete isolation between the guest OS and hardware (e.g., Microsoft and Parallels).
(b) Para-Virtualization: The guest OS is partially isolated and requires modifications to run in the virtual environment (e.g., VMware and Xen).
(c) Emulation Virtualization: The virtual machine simulates hardware, allowing software to run independently of the actual hardware.
2) Software Virtualization: Allows multiple virtual environments to run on the host machine. Subtypes:
(a) Operating System Virtualization: Hosts multiple OS on a native OS (e.g., Docker).
(b) Application Virtualization: Runs individual applications in a virtual environment (e.g., Citrix XenApp).
(c) Service Virtualization: Allows testing and development of services in a virtual setup.
3) Memory Virtualization: Aggregates physical memory across different servers into a single virtual memory pool. Subtypes:
(a) Application-Level Control: Applications directly access the memory pool.
(b) Operating System-Level Control: Memory access is managed by the OS.
4) Storage Virtualization: Combines multiple physical storage devices into a single logical storage device, improving performance, load balancing, and data management. Subtypes:
(a) Block Virtualization: Presents virtual disks to the OS and applications.
(b) File Virtualization: Provides virtual storage in the form of files and directories.
5) Data Virtualization: Abstracts data to present it as an independent layer, reducing input errors and enhancing data management.
6) Network Virtualization: Creates multiple sub-networks on a single physical network, improving security and allowing better data management. Subtypes:
(a) Internal Network Virtualization: Makes a single system function as a network.
(b) External Network Virtualization: Merges multiple networks into one or splits one network into several.
7) Desktop Virtualization: Stores a user's desktop on a remote server, allowing access from any device. Useful for remote work; enhances data security through secure protocols.

Benefits of Virtualization:
1) Cost Savings: Reduces the need for physical hardware and associated energy costs.
2) Resource Efficiency: Allows better utilization of available resources.
3) Flexibility and Scalability: Easy to create, back up, and migrate virtual machines.
4) Support for Legacy Systems: Allows running old software on new hardware.
5) Testing and Development: Enables safe environments for software testing.
Limitations of Virtualization:
1) Performance Overhead: Virtual machines may not perform as well as physical machines.
2) Complexity: Requires skilled administration and management.
3) Security Concerns: If the host system is compromised, all virtual machines are at risk.
4) Licensing Costs: While hardware costs may decrease, software licensing can become more complex and expensive.
Use Cases of Virtualization:
1) Cloud Computing: Acts as a foundational technology, enabling the deployment of IaaS, PaaS, and SaaS services.
2) Data Centers: Supports the creation of virtual servers and desktops.
3) Development and Testing: Allows for isolated environments for software testing.
4) Disaster Recovery: Simplifies backups and system restoration.
5) Enterprise IT: Enhances the flexibility of infrastructure and resource allocation.
Future Trends in Virtualization:
1) Containerization: Tools like Docker and Kubernetes are evolving traditional virtualization methods.
2) Edge and Fog Computing: Extends virtualization to distributed and localized environments.
3) Serverless Computing: Minimizes the need for traditional VM-based infrastructure.
4) Automation: Use of AI and machine learning to optimize resource allocation in virtual environments.

Cloud Computing:
Cloud computing is a technology that enables on-demand access to a shared pool of configurable computing resources (e.g., servers, storage, applications) over the Internet. These resources can be rapidly provisioned and released with minimal management effort or service provider interaction.
Key Concepts:
1) On-Demand Self-Service: Users can access computing resources as needed, without human intervention.
2) Broad Network Access: Resources are accessible over the network using standard devices like laptops and smartphones.
3) Resource Pooling: Resources are pooled to serve multiple consumers, with resources dynamically assigned and re-assigned based on demand.
4) Rapid Elasticity: Resources can be scaled up or down quickly to handle varying loads.
5) Measured Service: Usage of resources is monitored, and consumers only pay for what they use.
Cloud Service Models:
a. Infrastructure as a Service (IaaS):
1) Description: Provides virtualized computing resources over the internet.
2) Components: Virtual machines, storage, servers, and networks.
3) Use Cases: Hosting websites, storage, backup, and recovery.
4) Examples: Amazon Web Services (AWS), Microsoft Azure, Google Compute Engine.
5) Pros: High scalability, pay-as-you-go model, and full control over infrastructure.
6) Cons: Requires technical expertise to manage resources.
b. Platform as a Service (PaaS):
1) Description: Offers hardware and software tools over the internet, primarily for developers.
2) Components: Development tools, databases, operating systems, and runtime environments.
3) Use Cases: Developing, testing, and deploying applications.
4) Examples: Google App Engine, Microsoft Azure App Services, Manjrasoft Aneka.
5) Pros: Simplifies development, supports collaborative work, and reduces the complexity of managing infrastructure.
6) Cons: Limited control over the underlying infrastructure.
c. Software as a Service (SaaS):
1) Description: Delivers software applications over the internet, on a subscription basis.
2) Components: Hosted applications and software services accessible through web browsers.
3) Use Cases: Email services, CRM, collaboration tools.
4) Examples: Google Workspace, Microsoft 365, Salesforce.
5) Pros: Easy to use, no need for hardware or software installations, and accessible from anywhere.
6) Cons: Limited customization and dependency on the service provider's uptime.
Cloud Deployment Models:
a. Public Cloud:
1) Description: Services are provided over the public internet and shared across organizations.
2) Characteristics: Multi-tenant, scalable, cost-effective.
3) Examples: Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform.
4) Pros: Low cost, no maintenance, and high scalability.
5) Cons: Less control over data security and compliance concerns.
b. Private Cloud:
1) Description: Cloud infrastructure is dedicated to a single organization, either hosted on-premises or by a third-party provider.
2) Characteristics: Offers greater control and customization.
3) Pros: Enhanced security, better compliance, and control over data.
4) Cons: Higher costs and maintenance responsibilities.
c. Hybrid Cloud:
1) Description: Combines public and private clouds, allowing data and applications to be shared between them.
2) Characteristics: Offers flexibility and balanced control.
3) Pros: Scalable, cost-efficient, and provides control over critical data.
4) Cons: Complex to manage and integrate.
d. Community Cloud:
1) Description: Shared by several organizations with common interests, typically managed by a third party or internally.
2) Characteristics: Offers collaborative benefits with improved security.
3) Examples: Government agencies or healthcare organizations sharing infrastructure.
4) Pros: Cost-effective and tailored to community needs.
5) Cons: Limited availability, and shared resources might affect performance.
Key Technologies Enabling Cloud Computing:
1) Virtualization: Allows multiple virtual machines to run on a single physical machine, improving resource utilization.
2) Distributed Computing: Manages tasks across multiple machines, enabling the cloud to function as a unified system.
3) Hypervisors: Software like VMware, Hyper-V, and KVM that manage virtual machines.
4) Middleware: Acts as a bridge between applications and cloud resources, providing services like authentication, resource management, and data integration.
Advantages of Cloud Computing:
1) Cost Efficiency: Reduces the cost of hardware, software, and IT maintenance.
2) Scalability: Resources can be scaled up or down based on demand.
3) Flexibility: Accessible from anywhere with an internet connection.
4) Disaster Recovery: Simplifies data backup and disaster recovery plans.
5) Collaboration: Supports remote work and real-time data sharing.
Limitations of Cloud Computing:
1) Security and Privacy: Potential risk of data breaches and less control over data.
2) Downtime and Reliability: Dependence on internet connectivity and the service provider's uptime.
3) Limited Control: Especially with SaaS and PaaS models, where infrastructure and platform management is not directly accessible.
4) Vendor Lock-In: Switching providers can be complex and costly.
Use Cases of Cloud Computing:
1) E-commerce: Hosting websites and managing databases.
2) Healthcare: Storing patient records securely and enabling telemedicine.
3) Education: Providing online learning platforms and collaboration tools.
4) Financial Services: Offering secure transactions and data analytics.
5) Entertainment: Streaming services like Netflix and online gaming.
Future Trends in Cloud Computing:
1) Edge Computing: Bringing computing power closer to the data source to reduce latency.
2) Serverless Computing: Developers can run applications without managing server infrastructure.
3) AI and Machine Learning: Leveraging cloud power for advanced data analysis and automation.
4) IoT Integration: Connecting smart devices through the cloud.
5) Sustainability: Optimizing data centers to reduce energy consumption and carbon footprint.

Distributed Computing:
A distributed system consists of multiple independent computers that appear as a single coherent system to the user. The components are located on different machines and coordinate actions through communication, often over a network.
Characteristics:
1) No Shared Memory: Each machine has its own memory.
2) Message-Based Communication: Data exchange happens through network messages.
3) Autonomous Nodes: Each machine runs its own local operating system.
4) Heterogeneity: Different hardware and software can be part of the same distributed system.
Advantages:
1) Speed: Parallel processing can significantly enhance computational speed.
2) Reliability: If one component fails, others can take over.
3) Scalability: New nodes can be added easily.
Disadvantages:
1) Complex Software Development: Coordination among distributed components is challenging.
2) Security Risks: More points of vulnerability.
3) Network Issues: Performance depends on the network's reliability.
Examples:
1) Telecommunication Networks: Like cellular and telephone networks.
2) Online Gaming: Massively multiplayer online games (MMOs).
3) Web Services: The architecture behind the World Wide Web.

Cluster Computing:
Cluster computing involves a group of closely linked computers working together as a single system. The nodes are typically connected through a high-speed local network and coordinated by cluster middleware.
Architecture:
1) Master Node: Distributes tasks among other nodes.
2) Node Computers: Execute tasks and send results back to the master node.
3) Middleware: Tools like Message Passing Interface (MPI) allow communication within the cluster.
Types of Clusters:
1) High Availability (HA) Clusters: Provide redundancy, allowing services to remain available even if some nodes fail.
2) Load Balancing Clusters: Distribute workloads evenly across all nodes.
3) HA & LB Combination: Merges high availability and load balancing features.
Advantages:
1) Performance: High processing speed and optimized resource use.
2) Scalability: Can handle large applications efficiently.
Disadvantages:
1) Complex Development: Requires specialized programming techniques.
2) Debugging Challenges: Troubleshooting is more difficult in distributed environments.
Use Cases:
1) Web Servers: Load balancing clusters are often used for large-scale websites.
2) Scientific Research: Simulations and computations requiring high processing power.

Grid Computing:
Grid computing is a type of distributed computing where a "virtual supercomputer" is formed by combining the processing power of geographically dispersed computers, often across different organizations.
Key Concepts:
1) Resource Sharing: Allows sharing of computing power, storage, and data across organizations.
2) Middleware: Coordinates the resources and manages task execution.
3) Dynamic Nature: Resources are added and removed as needed.
Architecture Layers:
1) Fabric Layer: Physical resources like servers and storage devices.
2) Connectivity Layer: Provides secure access and communication.
3) Resource Layer: Manages access to shared resources.
4) Collective Layer: Handles services like resource discovery and monitoring.
5) Application Layer: User interface for interacting with the grid.
Types of Grid Systems:
1) Compute-Intensive Grids: Aggregate CPU power for large computations.
2) Data-Intensive Grids: Provide massive storage and data management capabilities.
3) Utility Grids: Dynamically pool resources for specific application needs.
4) Self-Organized Grids: Include intelligence for self-healing and dynamic management.
5) Real-Time Grids: Support real-time applications like disaster management.
Advantages:
1) Resource Utilization: Makes use of underutilized resources.
2) Cost Efficiency: Cheaper than building a supercomputer.
Disadvantages:
1) Security Concerns: Allowing external access to internal resources.
2) Complex Management: Requires robust middleware and management protocols.
Use Cases:
1) Scientific Research: CERN uses grid computing for processing data from particle accelerators.
2) Financial Modeling: High-performance computing for simulations and analytics.

Mobile Computing:
Mobile computing allows the transmission of data, voice, and video through wireless devices without needing a fixed physical connection.
Components:
1) Mobile Communication: Protocols, services, and infrastructure enabling wireless communication.
2) Mobile Hardware: Devices like smartphones, tablets, laptops, and PDAs.
3) Mobile Software: Operating systems and applications that run on mobile hardware.
Characteristics:
1) Portability: Devices can be carried anywhere.
2) Connectivity: Continuous access to the internet and other networks.
3) Personalization: Devices and apps tailored to user preferences.
4) Context Awareness: Apps can respond to the user's environment (e.g., GPS, sensors).
Advantages:
1) Flexibility: Access information and perform tasks from anywhere.
2) Productivity: Enables work and communication on the go.
3) Real-Time Access: Immediate availability of data and services.
Disadvantages:
1) Security Risks: Devices are prone to theft and data breaches.
2) Limited Resources: Battery life, processing power, and screen size constraints.
3) Network Dependence: Requires reliable wireless connectivity.
Use Cases:
1) E-commerce: Mobile banking and shopping apps.
2) Healthcare: Remote monitoring and telemedicine.
3) Navigation: GPS-based applications for travel and logistics.
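
The master/worker dispatch behind load-balancing clusters can be sketched in a few lines of Python. This is a toy model, not real cluster middleware such as MPI; the node names and task list are invented for illustration:

```python
from itertools import cycle
from collections import defaultdict

def distribute(tasks, nodes):
    """Round-robin dispatch: the master assigns each task to the next node in turn."""
    assignments = defaultdict(list)
    ring = cycle(nodes)  # endless iterator over the node pool
    for task in tasks:
        assignments[next(ring)].append(task)
    return dict(assignments)

# Hypothetical cluster of three node computers receiving six incoming tasks.
plan = distribute([f"task{i}" for i in range(6)], ["node-a", "node-b", "node-c"])
print(plan)  # each node ends up with two tasks
```

Real load balancers refine this with node health checks and weighting, but the core idea is the same: the master node owns the assignment decision, and the node computers only execute what they are given.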
Consistency Models:
Consistency models define the rules for how data is managed and synchronized across distributed systems, particularly in cloud environments. Purpose: To balance data consistency, system performance, and availability while ensuring data integrity across multiple nodes.
Types of Consistency Models:
1) Strong Consistency:
Definition: Guarantees that all read operations return the most recent write.
How It Works: After a data update, all subsequent reads reflect that update immediately across all replicas.
Advantages: (a) Ensures data accuracy and reliability. (b) Ideal for systems requiring transactional integrity (e.g., banking systems).
Disadvantages: (a) Can introduce latency due to the need to synchronize all nodes. (b) May impact performance in globally distributed systems.
Use Cases: Financial transactions, inventory management, and systems needing strict data accuracy.
2) Eventual Consistency:
Definition: Allows temporary data inconsistency but ensures that all replicas will become consistent over time.
How It Works: Updates are propagated asynchronously. During this process, some nodes may return outdated data until synchronization is complete.
Advantages: (a) Highly scalable and performant. (b) Well-suited for distributed systems with high availability requirements.
Disadvantages: Potential for stale reads, which may not be acceptable in all applications.
Use Cases: Social media feeds, DNS systems, and content delivery networks (CDNs).
3) Causal Consistency:
Definition: Ensures operations that are causally related are seen by all nodes in the same order. Independent operations may appear in different orders.
How It Works: Tracks the causal relationships between operations and maintains the order of dependent operations.
Advantages: (a) Balances consistency and availability. (b) Prevents anomalies in collaborative applications (e.g., document editing).
Disadvantages: More complex to implement than eventual consistency.
Use Cases: Collaborative tools, version control systems, and chat applications.
4) Monotonic Reads Consistency:
Definition: Guarantees that once a process has read a particular value, it will not see an older value in subsequent reads.
How It Works: Ensures that data versions only move forward, and not backward in time, for any given reader.
Advantages: (a) Provides a stable view of data, avoiding confusing rollbacks. (b) Useful in scenarios where data changes incrementally.
Disadvantages: Does not guarantee the most recent data, only a consistent progression of versions.
Use Cases: E-commerce order tracking, where it's important to avoid seeing older order statuses.
5) Read Your Writes Consistency:
Definition: Ensures that a user's read operation will always reflect their own most recent write.
How It Works: Updates made by a client are immediately visible to them, even if the system is still updating other nodes.
Advantages: Enhances the user experience by providing immediate feedback on updates.
Disadvantages: Consistency is only guaranteed for the user's own writes, not those made by others.
Use Cases: User profile updates, where changes made by the user should be visible immediately to them.

Server Consolidation:
Combining multiple servers into a single, more powerful server or a cluster using virtualization technology. Purpose: Enhances efficiency and cost-effectiveness in cloud environments by reducing the number of physical servers.
Types of Server Consolidation:
1) Logical Consolidation: Multiple virtual servers on a single physical server. Benefits: Cost savings, improved performance, flexibility, and easier scalability.
2) Physical Consolidation: Replaces multiple physical servers with fewer, more powerful ones. Benefits: Enhanced performance and a more efficient cloud environment.
3) Rationalized Consolidation: Groups servers based on workload and consolidates similar applications onto fewer servers. Benefits: Optimizes efficiency and reduces costs.
How Does Server Consolidation Work?
1) Virtualization: Allows several virtual servers to run on a single physical server by creating an abstraction layer between hardware and virtual servers.
2) Logical Servers: Each virtual server operates independently with its own OS and applications.
3) Resource Sharing: Physical resources (CPU, RAM, storage) are shared among virtual servers to maximize utilization.
Steps to Perform Server Consolidation:
1) Assessing the Environment: Identify servers with similar workloads and analyze resource utilization.
2) Identifying and Grouping Servers: Cluster servers that can be consolidated into fewer, powerful servers.
3) Planning the Consolidation: Choose the best approach (e.g., virtualization, physical consolidation).
4) Testing and Validation: Ensure the consolidation plan meets organizational needs.
5) Consolidating Servers: Migrate workloads and bring consolidated servers online.
6) Monitoring and Maintenance: Regularly evaluate performance and maintain server health.
7) Optimization: Continuously adjust settings to maintain optimal performance.
Benefits of Server Consolidation:
1) Cost Savings: Reduces hardware, power, and cooling expenses.
2) Improved Performance: Enhances resource utilization and processing speed.
3) Scalability and Flexibility: Allows easy addition or removal of virtual servers.
4) Management Simplicity: Simplifies server management with a single point of control.
5) Resource Efficiency: Leads to better performance and reduced operational costs.

Multitenancy:
Multitenancy in cloud computing allows multiple customers (tenants) to use the same computing resources while keeping their data completely separate.
Benefits of Multitenancy:
1) Resource Efficiency: Maximizes the use of available resources by sharing them among multiple tenants.
2) Cost Savings: Reduces costs for both customers and cloud vendors by avoiding the need for dedicated physical hardware.
3) Operational Savings: Lowers power consumption and cooling costs by consolidating resources.
4) Vendor Advantages: Simplifies management and reduces expenses for cloud providers.
5) User Isolation: Maintains data and process isolation to ensure security while sharing resources.
Drawbacks of Multitenancy:
1) Security Risks: Potential for security breaches and compliance issues, especially when storing sensitive data on shared infrastructure.
2) Data Vulnerability: Since data is stored on third-party servers, it may be exposed to unauthorized access or threats.
3) Resource Competition: Tenants may compete for shared resources, potentially affecting performance (noisy neighbor effect).
4) Single Point of Failure: Since there is only one resource instance, a failure could lead to service disruptions.
Types of Multitenant

Circuit Switching:
1) Dedicated Connection: Establishes a fixed, dedicated communication path between sender and receiver before data transfer.
2) Resource Reservation: Allocates fixed bandwidth and resources for the entire duration of the session.
3) Consistent Performance: Provides a guaranteed data transfer rate and low latency.
4) Connection-Oriented: Requires call setup and teardown phases (e.g., traditional telephone networks).
5) Inefficient for Burst Traffic: Resources remain reserved even when no data is being transmitted, leading to potential wastage.
Packet Switching:
1) No Dedicated Path: Data is split into packets that travel independently through the network.
2) Dynamic Resource Allocation: Shares network resources among all users, improving efficiency.
3) Adaptive Routing: Packets can take different routes to reach the destination, optimizing network traffic.
4) Connectionless or Connection-Oriented: Supports both datagram (connectionless) and virtual circuit (connection-oriented) methods.
5) Efficient for Data Traffic: Ideal for bursty data transfer, as bandwidth is used only when packets are sent (e.g., the Internet).
Routers: A router is a device that connects various local area networks and wide area networks in the Internet. It automatically selects and sets routes according to channel conditions, sending signals along the best path and in sequence.
Bridges: Operate at the OSI model's link layer, forwarding data frames based on MAC addresses to provide transparent network communication.

Technical Considerations for Cloud Adoption:
1) Infrastructure Compatibility: Assess existing IT systems and their suitability for the cloud. Consider hybrid or multi-cloud strategies.
2) Network Bandwidth and Latency: Ensure sufficient bandwidth and address latency with caching and optimization techniques.
3) Security and Compliance: Evaluate cloud provider security measures, encryption, access controls, and regulatory compliance (e.g., GDPR, HIPAA).
4) Data Migration: Choose the right migration strategy (Lift-and-Shift, Replatforming, Refactoring) and address integration challenges.
5) Disaster Recovery: Implement backup and recovery strategies, including Disaster Recovery as a Service (DRaaS).
Business Considerations for Cloud Adoption:
1) Cost Management: Compare CapEx (on-premise) vs. OpEx (cloud) and assess Total Cost of Ownership (TCO).
2) ROI Analysis: Align cloud adoption with business goals and estimate productivity gains and cost savings.
3) Vendor Selection: Evaluate cloud providers based on SLAs, quality of service (QoS), and support. Consider multi-vendor strategies to avoid vendor lock-in.
4) Scalability and Continuity: Plan for dynamic resource scaling and cloud bursting to handle demand spikes.
5) Performance Monitoring: Ensure cloud services maintain high performance and a good user experience.
On-Premise IT Resources:
1) Infrastructure Ownership: Hardware and software are owned, managed, and maintained by the organization on its premises.
2) Cost Structure: High upfront Capital Expenditure (CapEx) for purchasing and maintaining hardware and infrastructure.
3) Control and Customization: Full control over security, data management, and IT infrastructure customization.
4) Maintenance and Support: Requires in-house IT staff for updates, troubleshooting, and system management.
5) Scalability: Limited by physical hardware capacity; scaling up often requires purchasing additional equipment.
Cloud-Based IT Resources:
1) Service Model: IT resources (e.g., servers, storage, applications) are provided by third-party cloud service providers over the internet.
2) Cost Structure: Operates on an Operational Expenditure (OpEx) model, with pay-as-you-go pricing and reduced upfront costs.
3) Flexibility and Scalability: Offers on-demand resource scaling (up or down) without the need for physical hardware investments.
4) Maintenance and Management: Cloud providers handle maintenance, updates, and security, reducing the need for dedicated IT staff.
5) Accessibility: Accessible from anywhere with an internet connection, supporting remote work and global collaboration.
Standardization in Data Centers:
Standardization involves using predefined components, practices, and layouts consistently across a data center.
Key Benefits:
1) Affordability: Economies of scale reduce costs for components and maintenance.
2) Efficiency: Streamlined operations with predictable processes.
3) Scalability: Easy to expand by adding standardized units.
4) Reliability: Consistent designs minimize errors and improve uptime.
Modularity in

Virtual Server Provisioning:
Virtual server provisioning is the process of creating a new virtual machine on a physical host server and allocating the necessary computing resources to support it. It involves setting up and managing cloud resources for an organization, allowing rapid deployment of virtual environments with minimal manual effort.
Traditional vs. Virtualization Approach:
1) In traditional setups, provisioning a new server required significant time and effort, involving hardware installation, OS configuration, and security measures.
2) With virtualization, provisioning a new VM takes only minutes, enhancing speed, efficiency, and adherence to Service Level Agreement (SLA) and Quality of Service (QoS) standards.
What is VM Migration?
VM migration refers to the process of moving a virtual machine from one physical server to another with minimal downtime. It is especially useful during server maintenance, upgrades, or when balancing workloads across servers.
Advantages:
1) Time Efficiency: Migrations occur in milliseconds.
2) Operational Continuity: Keeps services active and minimizes customer disruption.
3) Resource Optimization: Helps maintain performance and reliability.
Virtualization Layer in VM Provisioning:
Functionality: The virtualization layer partitions the physical resources of a server into multiple virtual machines, each capable of handling different workloads. It gives each VM the illusion of owning the entire physical hardware, enhancing flexibility and resource management.
Roles of the Virtualization Layer:
1) Resource Scheduling: Manages the allocation of CPU, memory, and storage.
2) Physical Resource Allocation: Distributes hardware resources to virtual environments.
3) Abstraction: Ensures VMs operate independently of the underlying hardware.
VM Provisioning in Private and Hybrid Clouds:
Private Cloud:
1) Provides public cloud functionality on private resources while maintaining data control, security, and governance.
2) Can be set up within an organization's own data center or as a Virtual Private Cloud (VPC) within a vendor's infrastructure.
Characteristics:
1) Self-service provisioning for users.
2) Automated and well-managed virtual environments.
3) Optimized resource utilization and server efficiency.
Hybrid Cloud:
1) Combines private/internal and external/public cloud resources.
2) Uses Cloud Bursting: when on-premises infrastructure hits peak capacity, additional workloads are offloaded to a public cloud.
Cloud Bursting:
A method that uses public cloud resources when the private cloud hits peak capacity.
Advantages:
1) Scalability: Manages unexpected surges in demand.
2) Cost Efficiency: Avoids the need for excess on-premises infrastructure.
3) Agility: Quickly adapts to workload changes.
VM Provisioning Life Cycle:
1) IT Service Request: A request is made to create a new server for a specific service.
2) Resource Analysis: IT administration evaluates available server resources.
3) Provisioning: A virtual machine is created, configured, and started.
4) Operation: The VM serves requests, supports migration services, and scales resources on demand.
5) Decommissioning: When the service is no longer needed, the VM is released, and resources are reallocated.
Standardization in Cloud and Virtualization:
Importance of Standardization:
1) Ensures interoperability between different virtualization management vendors and cloud services.
2) Facilitates a consistent approach to managing virtual machines and cloud environments.
Key Initiatives:
(a) Distributed Management Task Force (DMTF): Developed standards for virtualization technology, including the Virtualization Management Initiative (VMAN).
1) VMAN introduces the Open Virtualization Format (OVF), a common format for packaging and securely distributing virtual appliances.
2) Supported by industry leaders like Dell, HP, IBM, Microsoft, and VMware.
(b) Open Grid Forum (OGF): Launched the Open Cloud Computing Interface Working Group (OCCIWG).
1) Focuses on delivering a standard API for the remote management of cloud IaaS.
2) Supports deployment, autonomic scaling, and monitoring of cloud resources.
VM Migration:
VM migration involves moving a virtual machine from one physical host to another, often without noticeable service disruption. It simplifies maintenance, upgrades, and load balancing in virtualized environments.
Live Migration (Hot or Real-Time Migration):
Moves a running VM from one physical host to another with minimal downtime (milliseconds). Ideal for load balancing and proactive maintenance.
Steps:
1) Preparation: Ensure sufficient resources and shared storage on the destination host.
2) Memory Pre-Copy: Iteratively copy memory pages while the VM is running.
3) State Transfer: Transfer the CPU state and remaining memory pages.
4) Network Redirection: Update the Address Resolution Protocol (ARP) entries for seamless traffic rerouting.
5) Activation: Start the VM on the destination host and shut it down on the source.
Benefits: Minimal downtime, no service interruption, supports maintenance without disruptions.
Cold Migration
Architecture:-(a)Single Application, Single Database-1)Description: Data Centers:-Modularity involves building data centers (Offline Migration):-Involves moving a powered-off VM to
Uses a single application and a shared database schema for all with interchangeable, self-contained units.Key another host. Suitable for planned maintenance and server
tenants.2)Benefits: Low cost and easier scaling.3)Drawbacks: Can suffer Benefits:1)Flexibility: Quickly add or remove modular upgrades.Steps:1)Preparation: Shut down the VM and ensure
from the noisy neighbor effect, affecting performance.(b)Single Application, components to adapt to changing needs.2)Serviceability: the destination host has adequate resources.2)Data Transfer:
Multiple Databases-1)Description: One application instance with separate Simplified maintenance by swapping modules without Move VM configuration files, disk images, and logs to the new
databases for each tenant.2)Benefits: Good for regulatory compliance and
disrupting the system.3)Resilience: Isolates failures, host.3)Registration: Add the VM to the destination host’s
data segregation.3)Drawbacks: Higher cost and complexity, but minimizes
allowing unaffected modules to continue inventory.4)Reconfiguration: Adjust network and resource
performance impacts from noisy neighbors.(c)Multiple Application,
Multiple Databases-1)Description: Each tenant has a separate application operating.4)Future-Proofing: Easily integrate new settings as needed.5)Activation: Power on the VM and validate
instance and database.2)Benefits: High security and flexibility; tenants can technologies by replacing or upgrading functionality.Benefits: No shared storage needed, simpler
be segmented based on specific criteria.3)Drawbacks: More complex to modules.Balancing Standardization and Modularity:- process, good for hardware changes.Live Migration vs. Cold
manage and maintain, higher operational costs.Multitenant Cloud:-1)Cost Challenges of Over-Standardization:-1)Limited Migration: Key Differences Live Migration (Hot/Real-Time
Efficiency: Shares infrastructure and resources among multiple tenants, Flexibility: Difficult to adapt to new technologies or Migration):-1)VM State: The virtual machine remains powered
leading to lower costs for both cloud providers and users.2)Resource specific business needs.2)Stifled Innovation: Standard on during the migration process.2)Downtime: Minimal (usually
Sharing: Utilizes virtualization to allow multiple tenants to access the same
practices can delay adopting emerging milliseconds), ensuring no noticeable impact on
hardware and software instance simultaneously.3)Maintenance and
solutions.3)One-Size-Fits-All Limitation: Not always end-users.3)Storage Requirements: Requires shared storage
Updates: Managed centrally by the cloud provider, ensuring all tenants
receive updates and maintenance automatically.4)Limited Customization: suitable for specialized workloads.Challenges of accessible by both the source and destination hosts.4)Use
Tenants have restricted ability to customize the software environment, as Excessive Modularity:-1)Complex Management: Too Cases: Ideal for load balancing, proactive maintenance, and
changes affect all users.5)Scalability: Allows dynamic allocation of many modular components can complicate avoiding service interruptions.5)Network Impact: Uses Address
resources to tenants based on demand, supporting flexible operations.2)Higher Costs: Custom modules may lack Resolution Protocol (ARP) updates for seamless traffic
scaling.Single-Tenant Cloud:-1)Dedicated Resources: Each tenant has cost efficiencies.3)Compatibility Issues: Risk of redirection.Cold Migration (Offline Migration):-1)VM State:
exclusive access to a dedicated server or cloud environment, enhancing integration problems with diverse components.Strategies The virtual machine must be powered off before migration
performance and security.2)Higher Costs: More expensive due to the need
for Balance:-1)Standardize Core Elements: Consistent begins.2)Downtime: Involves service downtime, as the VM is
for dedicated hardware and infrastructure management.3)Full Control:
practices for power, cooling, and not operational during the move.3)Storage Requirements:
Tenants manage their own updates, security settings, and customizations,
providing greater flexibility.4)Enhanced Security: Isolated environments infrastructure.2)Introduce Modularity in Key Areas: Use Does not require shared storage; files can be moved directly to
reduce the risk of data breaches and compliance issues, ideal for highly modular server and storage units for scalability.3)Hybrid the new host.4)Use Cases: Best for planned maintenance,
regulated industries.5)Consistent Performance: Avoids the "noisy Approach: Maintain a standardized backbone while server upgrades, and data center relocations.5)Network
neighbor" effect, ensuring that resource use by others does not affect allowing modular expansion. Impact: May require reconfiguration of network settings on the
performance. destination host.
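The iterative memory pre-copy at the heart of live migration can be sketched as a toy simulation. This is an illustrative model, not any hypervisor's actual implementation: the function name, its parameters, and the fixed dirty-rate assumption are all hypothetical.

```python
# Toy simulation of live migration's iterative memory pre-copy.
# Hypothetical model: real hypervisors track dirtied pages precisely;
# here each copy round simply dirties a fixed fraction of the pages sent.

def pre_copy_migration(total_pages, dirty_rate, stop_threshold=8, max_rounds=30):
    """Return (rounds, pages_sent_while_running, pages_left_for_stop_and_copy)."""
    to_send = total_pages              # round 1 copies all memory pages
    sent_while_running = 0
    rounds = 0
    while to_send > stop_threshold and rounds < max_rounds:
        rounds += 1
        sent_while_running += to_send
        # Pages dirtied while this round was copying; shorter rounds dirty fewer.
        to_send = int(to_send * dirty_rate)
    # Stop-and-copy phase: pause the VM, transfer the CPU state plus the
    # remaining `to_send` pages, update ARP, then activate on the destination.
    return rounds, sent_while_running, to_send

print(pre_copy_migration(1000, dirty_rate=0.2))   # (3, 1240, 8)
```

When the dirty rate is low the remainder shrinks geometrically and the VM is paused only for a handful of pages; a write-heavy workload may never converge, which is why the sketch (like real systems) caps the number of rounds before forcing the stop-and-copy.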
Cloud load balancing distributes workloads across computing resources in a cloud environment, balancing network traffic to improve performance and maintain service continuity.Function: Routes incoming traffic to multiple servers or resources, supporting workload demands, enhancing performance, and preventing service disruptions.Global Reach: Can distribute workloads across multiple geographic regions.

How Does Cloud Load Balancing Work?1)Software-Based Approach: Unlike traditional hardware load balancers, cloud load balancing uses software to manage traffic distribution.2)Placement: Sits between client devices and backend servers, routing requests based on algorithms and policies.3)Health Monitoring: Continuously checks the status of servers to ensure they are operational before sending traffic.

Load Balancing Techniques and Algorithms:-(a)Static Algorithms:-1)Round Robin: Distributes requests sequentially across servers.2)Weighted Round Robin: Assigns more requests to servers with greater capacity.3)IP Hash: Maps client IP addresses to specific servers using a hash function.(b)Dynamic Algorithms:-1)Least Connections: Sends traffic to the server with the fewest active connections.2)Least Response Time: Directs traffic to servers with the lowest response times.3)Least Bandwidth: Chooses the server currently using the least bandwidth.

Benefits of Cloud Load Balancing:-1)Performance Improvement: Automatically manages traffic to handle spikes efficiently.2)Reliability: Supports high availability by routing traffic around outages.3)Cost Efficiency: Eliminates the need for expensive on-premises hardware.4)Flexibility: Assists development teams during updates and maintenance.5)Enhanced Security: Defends against DDoS attacks by distributing traffic across servers.6)Scalability: Supports automatic scaling to manage workload changes.7)Health Checks: Monitors server health and reroutes traffic if needed.

Cloud Load Balancing:-1)Software-Based: Primarily uses software solutions within cloud environments, eliminating the need for physical hardware.2)High Scalability: Automatically adjusts resources to handle traffic spikes without manual intervention.3)Global Availability: Can route traffic to the nearest server, improving performance and reducing latency.4)Cost-Effective: Operates on a pay-as-you-go model, avoiding the high upfront costs of physical load balancers.5)Automation and Flexibility: Supports advanced features like auto-scaling, failover, health monitoring, and DDoS protection.

Traditional Load Balancing:-1)Hardware-Based: Requires dedicated physical appliances installed in on-premises data centers.2)Limited Scalability: Expanding capacity involves purchasing and configuring additional hardware.3)Localized Use: Typically confined to specific physical locations, making it challenging to manage global traffic.4)High Initial Costs: Involves significant investment in hardware, setup, and maintenance.5)Manual Management: Requires IT staff to handle maintenance, updates, and scaling, leading to higher operational complexity.

Types of Cloud Load Balancing:-(1)Application Load Balancing:-1)How It Works: Operates at the Application Layer (Layer 7) of the OSI model, using request content to make routing decisions.2)Routing Criteria: Analyzes HTTP headers, URL paths, cookies, SSL session IDs, and application data.3)Use Cases: Ideal for web applications, microservices architectures, and API gateways.4)Examples: AWS Application Load Balancer, Azure Application Gateway, and NGINX.5)Benefits:(a)Supports content-based routing.(b)Enables session persistence, ensuring a user's requests are consistently sent to the same server.(c)Supports WebSocket and HTTP/2 protocols for modern web applications.(2)Network Load Balancing:-1)How It Works: Operates at the Transport Layer (Layer 4) of the OSI model, distributing traffic based on IP addresses, port numbers, and protocols (e.g., TCP, UDP).2)Routing Criteria: Focuses on network-level information without analyzing application data.3)Use Cases: Best suited for applications requiring high throughput, low latency, and fast connections, such as VoIP, gaming servers, and real-time applications.4)Examples: AWS Network Load Balancer, Azure Load Balancer, and HAProxy.5)Benefits:(a)Supports millions of requests per second.(b)Efficient for low-level traffic distribution.(c)Ideal for non-HTTP traffic, providing fast failover.(3)Global Server Load Balancing (GSLB):-1)How It Works: Distributes traffic across servers located in different geographical regions to improve latency, availability, and disaster recovery.2)Routing Criteria: Takes into account client location, server health, and load status to connect users to the nearest and most responsive server.3)Use Cases: Ideal for global applications, content delivery networks (CDNs), and businesses with a global customer base.4)Examples: Akamai Global Traffic Manager, AWS Route 53, and Azure Traffic Manager.5)Benefits:(a)Reduces latency by connecting users to the closest server.(b)Provides failover by redirecting traffic to healthy servers in case of regional outages.(c)Supports disaster recovery plans by maintaining service availability across multiple data centers.(4)DNS Load Balancing:-1)How It Works: Uses the Domain Name System (DNS) to distribute network traffic by mapping a single domain name to multiple IP addresses.2)Routing Criteria: Allocates traffic based on availability, performance, and geographical proximity.3)Use Cases: Suitable for global distribution of services, balancing traffic among different data centers, and multi-cloud environments.4)Examples: AWS Route 53, Google Cloud DNS, and Cloudflare DNS Load Balancer.5)Benefits:(a)Simplifies multi-region deployments.(b)Automatically redirects traffic if a server becomes unavailable.(c)Enhances resilience by distributing DNS queries across multiple endpooints.

Cloud Data Replication:-1)Global Scope: Replicates data across cloud-based services and multiple geographic regions, enhancing global accessibility.2)Automatic Failover: Supports real-time or near-real-time replication with automatic failover for quick disaster recovery.3)High Availability: Ensures data is available even if a primary cloud region experiences an outage.4)Automation and Management: Uses advanced cloud tools to automate replication processes, reducing manual intervention.5)Wide Geographic Distribution: Distributes data across multiple data centers (e.g., San Francisco, New York, London) to improve performance and compliance.

Traditional Data Replication:-1)Local Scope: Typically involves data replication from a mobile device to a PC or from a PC to a networked database within a local or external network.2)Manual Failover: Requires manual intervention to activate replicated data during a failure.3)Limited Accessibility: Replicas are not accessible until the primary node fails, which can delay recovery.4)Simple Replication Levels: Often involves replication from local to external networks, focusing on basic backup needs.5)Performance Impact: May slow down systems due to manual processes and less efficient data synchronization.

Steps to Host a Web Application on a Cloud Platform:-1)Choose a Cloud Service Provider (CSP):-(a)Options: AWS, Microsoft Azure, Google Cloud Platform (GCP), DigitalOcean.(b)Considerations: Pricing, scalability, reliability, global data centers, services, and support.2)Select the Hosting Service Type:-(a)IaaS: Full control over infrastructure (e.g., AWS EC2, Azure VMs).(b)PaaS: Managed deployment and scalability (e.g., AWS Elastic Beanstalk, Google App Engine).(c)SaaS: If offering a full solution (e.g., Shopify, Salesforce).3)Prepare Your Web Application:-(a)Ensure configurations, dependencies, and database connections are ready.(b)Choose a tech stack (e.g., Node.js, Python, PHP).(c)Consider containerization with Docker for portability.4)Configure Storage and Databases:-(a)Databases: Relational (e.g., MySQL) or non-relational (e.g., MongoDB).(b)Storage: Use cloud storage (e.g., AWS S3, Azure Blob Storage) for media and backups.5)Set Up Domain Name and DNS:-(a)Register a domain (e.g., GoDaddy, Namecheap).(b)Configure DNS with services like AWS Route 53 or Google Cloud DNS.6)Configure SSL/TLS for Security:-(a)Install an SSL certificate (e.g., Let's Encrypt) to enable HTTPS.(b)Use managed SSL services from the CSP (e.g., AWS Certificate Manager).7)Deploy Your Web Application:-(a)Upload using Git, CI/CD pipelines, or FTP.(b)Utilize deployment tools (e.g., AWS CodeDeploy, Azure DevOps).8)Auto-Scaling and Load Balancing:-(a)Load Balancing: Distribute traffic with tools like AWS Elastic Load Balancer.(b)Auto-Scaling: Automatically adjust instances based on traffic (e.g., AWS Auto Scaling).9)Set Up Monitoring and Logging:-(a)Use monitoring tools (e.g., AWS CloudWatch, Azure Monitor).(b)Set up logging services (e.g., AWS CloudTrail, ELK Stack).10)Manage Backups and Recovery:-Regularly back up data and create a disaster recovery plan (e.g., AWS RDS backups).

Logical Network Perimeter is a virtual boundary that isolates and secures cloud resources using Software-Defined Networking (SDN) and virtual network technologies.Purpose: Controls traffic flow, ensures secure access, and protects cloud-based applications, data, and services.Difference from Physical Perimeters: Unlike physical perimeters that use hardware (e.g., routers, firewalls), logical perimeters rely on virtualized network controls.

How Does a Logical Network Perimeter Help?1)Access Control: Manages who can access cloud resources.2)Data Protection: Safeguards sensitive data from external threats.3)Traffic Segmentation: Separates internal and external network traffic, enhancing security.4)Dynamic Security: Adapts security policies to a multi-tenant and distributed cloud environment.

Key Components of a Logical Network Perimeter:-(a)Virtual Firewalls:-1)Function: Monitor and control network traffic within cloud environments.2)Use Case: Enforce security policies across virtual machines (VMs), containers, and cloud services.(b)Virtual Private Networks (VPNs):-1)Function: Create encrypted tunnels for secure communication between remote users and cloud networks.2)Benefit: Extends private networks securely over public internet connections.(c)Gateways:-1)Function: Act as intermediaries between cloud and external networks, managing and filtering traffic.2)Examples: API Gateways, Web Gateways.(d)Intrusion Detection Systems (IDS) & Intrusion Prevention Systems (IPS):-1)IDS: Detects and alerts administrators about potential security threats.2)IPS: Automatically blocks or prevents unauthorized activities.(e)Identity and Access Management (IAM) Tools:-1)Function: Control who can access cloud resources based on user roles and permissions.2)Features: User management, authentication, authorization, and access management.3)Example: AWS Identity and Access Management (IAM).

Security Implications:-1)Data Protection: Encrypts data at rest and in transit using firewalls, VPNs, and gateways.2)Minimized Attack Surface: Isolates critical resources from threats by segmenting the network.3)Traffic Control: Allows segmentation of internal and external communications, reducing exposure to risks.4)Compliance: Supports regulatory requirements like HIPAA and PCI-DSS by enforcing robust network security.

Data Backup:-1)Purpose: Primarily for data recovery in case of data loss, corruption, or accidental deletion.2)Process: Creates periodic snapshots of data, often stored in compressed formats.3)Storage: Typically stored in separate locations, including external drives, cloud storage, or off-site facilities.4)Recovery: Allows restoration of data to a specific point in time, useful for long-term data retention.5)Usage: Ideal for archival, compliance, and protection against data corruption or ransomware attacks.

Data Replication:-1)Purpose: Ensures data availability and consistency across multiple systems in real-time or near-real-time.2)Process: Continuously copies data from one system to another, keeping all instances synchronized.3)Storage: Replicated data is often stored in active environments, enabling immediate use if the primary system fails.4)Recovery: Supports high availability and quick failover during disasters but may not restore historical data.5)Usage: Best for disaster recovery, load balancing, and maintaining business continuity with minimal downtime.

Virtual Machine replication is the process of creating a copy of a virtual machine (VM) and maintaining it as a backup. The replicated VM can be activated if the original VM fails, ensuring business continuity and disaster recovery.Key Benefits:1)Enhances data availability and system reliability.2)Supports Disaster Recovery (DR) by providing a ready-to-use backup.3)Helps meet High Availability (HA) and Service Level Agreement (SLA) requirements.

Types of VM Replication:-(a)Real-Time Replication:-1)How It Works: Copies data to the replica VM as it is written on the primary VM.2)Pros: Provides the most precise backup with minimal data loss.3)Cons: Requires high bandwidth and robust hardware.4)Use Case: Ideal for mission-critical applications where data consistency is crucial.(b)Point-in-Time Replication:-1)How It Works: Takes snapshots of the VM at specific intervals or on request.2)Pros: Less resource-intensive than real-time replication.3)Cons: May result in some data loss if a failure occurs between snapshots.4)Use Case: Suitable for environments where a slight delay in data recovery is acceptable.

Steps of VM Replication:-1)Initial Replication: A full copy of the primary VM is created and stored at the replication site.2)Ongoing Synchronization: Incremental changes are continuously or periodically transferred to keep the replica updated.3)Failover Process: During a failure, the replicated VM is activated to replace the original VM.4)Failback: Once the primary VM is restored, changes from the replica are synchronized back to the original.

Key Components of VM Replication:-1)Replication Software: Tools like VMware Site Recovery Manager, Veeam Backup & Replication, and Microsoft Azure Site Recovery.2)Hypervisors: Platforms like VMware vSphere, Microsoft Hyper-V, and KVM support replication features.3)Storage Systems: Replicated data can be stored on-premises, in the cloud, or in a hybrid environment.

Disaster Recovery as a Service (DRaaS):-A cloud-based service that manages VM replication and disaster recovery processes.Features:1)Provides automated failover and failback.2)Offers geo-redundant storage to maintain data copies in multiple locations.3)Supports orchestration and testing of disaster recovery plans without affecting production environments.

Resource replication involves creating multiple copies of the same IT resource in cloud computing to improve availability, reliability, and performance.Key Benefits:1)Reliability: Ensures consistent access to resources despite hardware failures or network issues.2)Disaster Recovery: Maintains redundant data copies in different locations for quick recovery.3)Application Performance: Enhances performance, especially for mobile applications.

Remote replication is the process of creating and maintaining a synchronized copy of a virtual machine (VM) or data at a remote site.How Does Remote Replication Work?1)Initial Setup: A full copy of the VM or dataset is created and transferred to the remote site.2)Data Synchronization:-(a)Synchronous Replication: Data is updated simultaneously at both the primary and remote sites, ensuring no data loss but potentially affecting performance.(b)Asynchronous Replication: Updates are made to the remote site at scheduled intervals, reducing system load but introducing the risk of data lag.3)Failover Mechanism: In the event of a failure at the primary site, the system automatically switches to the replicated VM or data at the remote site.4)Failback Process: After resolving the issue at the primary site, data changes from the remote site are synchronized back, and operations resume normally.

Factors to Consider for Remote Replication:-1)Distance: Shorter distances are ideal for synchronous replication, while asynchronous replication works well over longer distances.2)Bandwidth: Sufficient internet speed and network connectivity are crucial for secure and rapid data transfer.3)Data Rate: The replication data rate must not exceed the available bandwidth to avoid network congestion.4)Replication Technology: Utilize advanced tools that support parallel replication tasks for efficiency.

Cloud-to-Cloud Data Replication:-Involves replicating data between different cloud services or between regions within a cloud.Benefit: Provides enhanced data redundancy, accessibility, and disaster recovery capabilities.

Synchronous Replication:-1)Real-Time Data Replication: Data is written to both the primary and remote storage simultaneously, ensuring data consistency.2)Zero Data Loss: Ideal for critical applications, as write operations are only considered complete when both locations acknowledge them.3)High Network Requirements: Needs high bandwidth and low-latency connections to avoid performance bottlenecks.4)Short Recovery Point Objective (RPO): Guarantees the most recent data is available during disaster recovery.5)Best for Short Distances: Works effectively when replication sites are geographically close to minimize latency.

Asynchronous Replication:-1)Deferred Data Replication: Data is first written to the primary storage, then replicated to the remote site at scheduled intervals.2)Potential Data Loss: There is a risk of losing the most recent data changes if a disaster occurs before synchronization.3)Lower Network Strain: Requires less bandwidth and is not affected by network latency, making it suitable for long-distance replication.4)Flexible Scheduling: Allows replication at intervals such as hourly, daily, or weekly, balancing performance and data safety.5)Cost-Effective: Typically more affordable than synchronous replication, making it suitable for non-critical applications.
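A few of the load-balancing algorithms discussed earlier (Round Robin, Least Connections, IP Hash) can be sketched in miniature. These are illustrative toy implementations, not the logic of any cloud provider's balancer; the server names are made up, and a CRC32 hash stands in for whatever hash function a real balancer would use.

```python
import zlib
from itertools import cycle

class RoundRobin:
    """Static algorithm: hand out servers in a fixed cyclic order."""
    def __init__(self, servers):
        self._cycle = cycle(servers)
    def pick(self):
        return next(self._cycle)

class LeastConnections:
    """Dynamic algorithm: route to the server with the fewest active connections."""
    def __init__(self, servers):
        self.active = {s: 0 for s in servers}
    def pick(self):
        server = min(self.active, key=self.active.get)
        self.active[server] += 1          # connection opened
        return server
    def release(self, server):
        self.active[server] -= 1          # connection closed

def ip_hash(client_ip, servers):
    """Static algorithm: the same client IP always maps to the same server."""
    return servers[zlib.crc32(client_ip.encode()) % len(servers)]

rr = RoundRobin(["s1", "s2", "s3"])
print([rr.pick() for _ in range(4)])      # ['s1', 's2', 's3', 's1']

lc = LeastConnections(["s1", "s2"])
a = lc.pick()                             # 's1' (both idle)
b = lc.pick()                             # 's2' (s1 now busy)
lc.release(a)
print(a, b, lc.pick())                    # s1 s2 s1
```

The contrast is the one the notes draw: the static schemes ignore server state entirely, while Least Connections consults live state on every routing decision.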
An automated scaling listener is a service agent that monitors workload demands and triggers auto-scaling actions to maintain performance and efficiency.Functionality:1)Tracks communication between cloud users and services.2)Helps detect when scaling is needed by monitoring workload status and backend demands.Importance:1)Supports dynamic scaling, adjusting resources automatically based on usage patterns.2)Installed near the firewall in cloud architecture for efficient monitoring.

How an Automated Scaling Listener Works:-1)Monitoring Workloads: Continuously tracks data, including the number of incoming requests and backend load.2)Analyzing Data: Evaluates usage patterns to identify when scaling is needed.3)Thresholds and Alerts: Uses predefined metrics (e.g., CPU, memory usage) to trigger alerts when limits are exceeded.4)Resource Allocation: Dynamically allocates or reduces resources based on current demand.5)Preventing Overload: Creates additional instances during spikes and scales down when demand decreases.

Auto Scaling:-1)Primary Function: Automatically adjusts the number of compute instances based on workload demand.2)Scalability: Scales resources up (adds instances) or down (removes instances) to maintain performance and cost efficiency.3)Triggers: Based on metrics like CPU usage, memory utilization, or custom metrics.4)Purpose: Ensures that applications have enough resources to handle peak loads while avoiding over-provisioning.5)Example Tools: AWS Auto Scaling, Google Cloud Autoscaler, Azure Autoscale.

Load Balancing:-1)Primary Function: Distributes incoming network traffic evenly across multiple servers or instances.2)High Availability: Improves application availability by directing traffic away from unhealthy or overloaded servers.3)Traffic Management: Uses algorithms like Round Robin, Least Connections, or IP Hashing to route requests.4)Purpose: Enhances performance, reduces latency, and prevents server overloads.5)Example Tools: AWS Elastic Load Balancer (ELB), Azure Load Balancer, Google Cloud Load Balancing.

Horizontal Auto Scaling:-1)Definition: Adds or removes instances (e.g., servers, VMs) to adjust capacity based on demand.2)Scalability: Supports near-unlimited scaling by increasing the number of resources in a resource pool.3)Availability: Enhances redundancy and fault tolerance by distributing the load across multiple instances.4)No Downtime: New instances can be added or removed seamlessly without interrupting services.5)Best Use Cases: Ideal for stateless applications, microservices, and environments with dynamic workloads (e.g., web servers).

Vertical Auto Scaling:-1)Definition: Increases or decreases the power (e.g., CPU, RAM) of an existing server or instance.2)Scalability: Limited by the capacity of a single machine, making it less scalable than horizontal scaling.3)Availability: Does not improve redundancy; if the instance fails, the application may face downtime.4)Possible Downtime: May require restarting the server or application, leading to service interruptions.5)Best Use Cases: Suitable for monolithic applications or when scaling a specific process (e.g., database servers needing more RAM).

Configuration of Scaling Policies:-Key Steps:1)Define Scaling Goals: Identify metrics like CPU utilization, memory usage, and request count.2)Create Scaling Policies:(a)Threshold-Based: Trigger actions when specific metrics exceed set limits.(b)Scheduled Policies: Scale resources during predictable usage periods.(c)Predictive Policies: Use machine learning to adjust resources proactively.3)Set Alarms and Triggers: Define conditions to initiate scaling actions.4)Define Scaling Actions:(a)Scale-Out: Add instances when demand increases.(b)Scale-In: Remove instances when demand decreases.5)Implement Cooldown Periods: Prevents rapid and repetitive scaling actions.6)Monitor and Optimize: Regularly review and adjust policies to maintain performance and efficiency.

Scheduling is the process of mapping a set of jobs to
virtual machines (VMs) or allocating VMs to available resources to meet user demands.

Objectives of Scheduling:-1)Throughput: Maximizes the number of completed tasks per time unit.2)Latency: Minimizes the turnaround time from task submission to completion.3)Response Time: Reduces the time between request submission and service initiation.4)Load Balancing: Distributes tasks evenly across resources to avoid bottlenecks.

Types of Scheduling:-1)First Come, First Served (FCFS):-How It Works: Tasks are executed in the order they arrive.Advantages: Simple and fair, with low complexity.Disadvantages: High waiting time and no prioritization, leading to inefficiency with large tasks.2)Round Robin (RR):-How It Works: Allocates a fixed time slice to each task in cyclic order.Advantages: Improves response time and handles time-sharing systems well.Disadvantages: Can lead to high context-switching overhead.3)Shortest Job First (SJF):-How It Works: Prioritizes tasks with the shortest execution time.Advantages: Minimizes waiting time and enhances efficiency.Disadvantages: Long tasks may experience starvation if smaller tasks keep arriving.4)Priority Scheduling:-How It Works: Assigns priority levels to tasks, executing higher-priority tasks first.Advantages: Suitable for critical tasks requiring immediate execution.Disadvantages: Low-priority tasks may suffer from starvation.5)Max-Min Scheduling:-How It Works: Sorts tasks by completion time, prioritizing longer tasks on the fastest VMs.Advantages: Utilizes resources efficiently and improves performance over FCFS and SJF.Disadvantages: Smaller tasks may wait longer if many large tasks are present.

Scheduling Levels in Cloud Computing:-Host-Level Scheduling: Distributes VMs across physical hosts based on policies to optimize resource use.VM-Level Scheduling: Allocates tasks to specific VMs, focusing on resource utilization and minimum make-span (total task completion time).

Task Scheduling Algorithms:-Classifications:-1)Immediate Scheduling:-Definition: Assigns tasks to VMs as soon as they arrive.Use Case: Real-time processing where tasks cannot be queued.2)Batch Scheduling:-Definition: Groups tasks into batches before scheduling, allowing better resource allocation strategies.Example: Mapping events in batch processing systems.3)Static Scheduling:-Definition: Uses prior information about the system to allocate tasks uniformly across VMs.Algorithms: Round Robin, Random Scheduling.Advantage: Simplicity and low overhead.Disadvantage: Does not adapt to real-time changes in resource availability.4)Dynamic Scheduling:-Definition: Considers the current state of VMs and allocates tasks based on available capacity.Advantage: Adapts to system changes, improving resource efficiency.5)Preemptive vs. Non-Preemptive Scheduling:-Preemptive: Tasks can be interrupted and moved to another resource, offering flexibility.Non-Preemptive: Tasks run to completion without interruption, ensuring stability but less adaptability.

Advantages of Good Scheduling Algorithms:-1)Performance Management: Enhances cloud service performance and Quality of Service (QoS).2)Resource Optimization: Maximizes CPU and memory utilization while minimizing task execution time.3)Fairness: Balances resource allocation among tasks.4)System Throughput: Increases the number of completed tasks per time unit.5)Load Balancing: Prevents resource overloading by distributing tasks evenly.
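The effect of algorithm choice on waiting time can be seen with a small worked example comparing FCFS and SJF on one machine. The task lengths are made-up illustrative numbers, and the sketch deliberately ignores arrival times and context-switch costs.

```python
# Average waiting time under FCFS vs. SJF for one batch of tasks
# (single-machine sketch; burst times are illustrative, not from the notes).

def avg_waiting_time(burst_times):
    """Each task waits for the sum of all bursts scheduled before it."""
    waits, elapsed = [], 0
    for burst in burst_times:
        waits.append(elapsed)
        elapsed += burst
    return sum(waits) / len(waits)

tasks = [8, 1, 2, 4]                      # arrival order
fcfs = avg_waiting_time(tasks)            # FCFS: run in arrival order
sjf = avg_waiting_time(sorted(tasks))     # SJF: shortest job first

print(fcfs, sjf)                          # 7.0 2.75
```

Putting the long task first (FCFS) makes every later task wait behind it, which is exactly the "inefficiency with large tasks" the notes attribute to FCFS; sorting by burst length (SJF) minimizes the average wait, at the cost of possible starvation for long tasks.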
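The threshold-based scaling decision described in the automated scaling listener and scaling-policy sections earlier can be condensed into a single function, including the cooldown that prevents rapid, repetitive scaling actions. The thresholds, cooldown length, and CPU metric are illustrative assumptions, not any provider's defaults.

```python
# Threshold-based scale-out/scale-in decision with a cooldown period
# (illustrative sketch of the scaling-policy logic; all limits are made up).

def scaling_decision(cpu_percent, instances, last_action_age,
                     high=80, low=20, cooldown=3, min_instances=1):
    """Return the new instance count for the current metric reading."""
    if last_action_age < cooldown:
        return instances                  # still cooling down: no change
    if cpu_percent > high:
        return instances + 1              # scale out on high load
    if cpu_percent < low and instances > min_instances:
        return instances - 1              # scale in on low load
    return instances                      # within the normal band

print(scaling_decision(92, 2, last_action_age=5))   # 3 (scale out)
print(scaling_decision(92, 2, last_action_age=1))   # 2 (cooldown blocks it)
print(scaling_decision(10, 3, last_action_age=5))   # 2 (scale in)
```

The cooldown check implements step 5 of the policy configuration: without it, a metric hovering near a threshold would trigger a scale action on every evaluation cycle.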
