Unit III CC
VIRTUALIZATION:
DESKTOP VIRTUALIZATION:
BENEFITS:
Security: IT professionals rate security as their biggest challenge year after year.
By removing OS and application concerns from user devices, desktop
virtualization enables centralized security control, with hardware security needs
limited to virtualization servers, and an emphasis on identity and access
management with role-based permissions that limit users to only those
applications and data they are authorized to access. Additionally, if an employee
leaves an organization, there is no need to remove applications and data from
user devices; any data on the user device is ephemeral by design and does not
persist when a virtual desktop session ends.
The three most popular types of desktop virtualization are virtual desktop
infrastructure (VDI), remote desktop services (RDS), and desktop-as-a-service (DaaS).
VDI simulates the familiar desktop computing model as virtual desktop sessions
that run on VMs, either in an on-premises data center or in the cloud.
Organizations that adopt this model manage the desktop virtualization server as
they would any other application server on premises. Since all end-user
computing is moved from users back into the data center, the initial deployment
of servers to run VDI sessions can be a considerable investment, tempered by
eliminating the need to constantly refresh end-user devices.
NETWORK VIRTUALIZATION:
STORAGE VIRTUALIZATION:
Storage virtualization also allows for convenient features like moving data
between different storage devices or creating backups effortlessly. In simpler
terms, storage virtualization helps keep data organized, accessible, and secure
in a more efficient and flexible way.
Think of your computer as a space with a finite number of shelves, that is, a
fixed amount of storage space. Now imagine storing a range of papers and
documents, knowing that not all of them will fit on a single shelf. It becomes
difficult to remember where each file is saved and to use the available space
effectively.
File servers: The operating system writes the data to a remote location with no
need to understand how to write to the physical media.
WAN Accelerators: Instead of sending multiple copies of the same data over
the WAN, WAN accelerators cache the data locally and serve re-requested
blocks at LAN speed, without impacting WAN performance.
SAN and NAS: Storage is presented over a network to the operating system.
NAS presents the storage as file-level operations (like NFS). SAN
technologies present the storage as block-level storage (like Fibre Channel).
With SAN, the operating system issues instructions as if the storage were a
locally attached device.
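As an illustration from a Linux host's point of view (the hostname, address,
and device names below are hypothetical, and iSCSI stands in here as a block
protocol that, like Fibre Channel, presents storage as a local device):

```
# NAS (file-level): mount a remote NFS export; the OS works with files
# and directories, never the underlying media
mount -t nfs nas.example.com:/export /mnt/nas

# SAN (block-level): discover and log in to an iSCSI target; the OS then
# sees a raw block device (e.g., /dev/sdb) it can format like a local disk
iscsiadm -m discovery -t sendtargets -p 192.168.1.10
iscsiadm -m node --login
mkfs.ext4 /dev/sdb
mount /dev/sdb /mnt/san
```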
Storage Tiering: Using the storage pool concept as a stepping stone, storage
tiering analyzes the most commonly used data and places it on the highest-performing
storage pool. The least frequently used data is placed on the lowest-performing
storage pool.
Benefits of storage virtualization include:
1. Data is stored in more convenient locations, away from a specific host.
In the case of a host failure, the data is not necessarily compromised.
2. The storage devices can perform advanced functions like replication,
deduplication, and disaster recovery.
3. By abstracting the storage layer, IT operations become more flexible in
how storage is provided, partitioned, and protected.
Virtualization in the cloud also brings challenges, including:
3. Data Security and Privacy: Ensuring the security and privacy of data
in a virtualized environment is essential. Organizations need to implement
robust security measures to protect sensitive information.
4. Network Latency: Depending on the cloud provider and the
geographical location of data centers, network latency can be a concern,
especially for applications that require low-latency responses.
Conclusion
Virtual clusters and resource management are concepts often associated with
cloud computing and virtualization technologies. Here's an explanation of these
terms:
Containers and virtual machines (VMs) are both technologies used in the field
of virtualization, but they have some fundamental differences in how they
operate and their use cases. Here's a comparison of containers and virtual
machines:
Containers:
Virtual Machines:
1. Isolation: Virtual machines provide a higher level of isolation because
each VM has its own complete virtualized operating system. This makes
them more secure since the VMs are fully independent.
3. Portability: VMs are less portable than containers since they encapsulate
the entire operating system. Moving VMs between different
environments can be more challenging due to potential compatibility
issues.
4. Scalability: VMs are better suited for vertical scaling, where you allocate
more resources (CPU, memory) to a single VM. While you can create
multiple VMs for scaling, it's typically less efficient and slower than
container scaling.
Use Cases:
Introduction to Docker
Docker is a powerful platform for developing, shipping, and running
applications within containers. Containers are lightweight, standalone, and
executable packages that include everything needed to run a piece of software,
including the code, runtime, system tools, libraries, and settings. Docker has
revolutionized software development and deployment by making it easier to
build, package, and distribute applications across various environments, from
development to production. In this comprehensive guide, we'll explore the core
components of Docker, how containers work, Docker images, and repositories.
Docker Components
Docker comprises several key components that work together to enable the
creation, deployment, and management of containers. These components
include:
1. Docker Engine:
The Docker Engine is the core component that runs on the host operating
system and manages containers. It consists of the Docker daemon, REST
API, and command-line interface (CLI). The daemon is responsible for
building, running, and managing containers, while the CLI is used to
interact with the daemon (see the sketch after this list).
2. Docker Client:
The Docker client is the command-line tool that users interact with. It
accepts commands such as docker build and docker run and sends them to
the Docker daemon, which carries them out.
3. Docker Images:
Docker images are the blueprints or templates for containers. They are
read-only and consist of a snapshot of a file system, application code,
libraries, environment variables, and configuration files. Docker images
are used to create containers, and they can be stored and shared in Docker
repositories.
4. Docker Containers:
Docker containers are runnable instances of Docker images, described in
detail in the next section.
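As a quick sketch of how these components fit together (assuming a local
Docker installation):

```
# The CLI calls the daemon's REST API under the hood; this prints
# version details for both the client (CLI) and the server (daemon)
docker version

# Summarize daemon state: containers, images, storage driver, and more
docker info
```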
Docker Containers
Docker containers are at the heart of Docker's value proposition. They offer
several key benefits:
1. Isolation:
Each container runs in its own isolated environment, with its own file
system and processes, while sharing the host operating system kernel.
2. Portability:
Because a container packages an application together with everything it
needs to run, it behaves consistently across development, testing, and
production environments.
3. Resource Efficiency:
Containers are lighter than virtual machines because they do not each
carry a full guest operating system, so more of them can run on the same
hardware.
4. Scalability:
Containers are well-suited for horizontal scaling. You can replicate and
scale containers to meet changing demands quickly. Container
orchestration platforms like Kubernetes facilitate efficient container
management (see the sketch after this list).
5. Version Control:
Docker allows version control for both images and containers. You can
track changes, roll back to previous versions, and maintain consistency in
your application stack.
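As a sketch of the horizontal scaling mentioned above, assuming a
hypothetical Kubernetes Deployment named web is already running in the
cluster:

```
# Scale the (hypothetical) "web" Deployment to 5 identical replicas
kubectl scale deployment web --replicas=5

# Confirm the new replica count
kubectl get deployment web
```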
To create a Docker container, you typically start with a Docker image, which
serves as a snapshot of a specific application or service. Docker images can be
built manually using Dockerfiles or obtained from Docker registries, such as
Docker Hub. Dockerfiles are text files that define the steps required to build an
image, specifying a base image, application code, dependencies, and
configurations.
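As an example, a minimal Dockerfile for a hypothetical Python application
with an app.py entry point and a requirements.txt file might look like this:

```
# Base image: an official slim Python image
FROM python:3.12-slim

# Working directory inside the image
WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy the application code
COPY app.py .

# Configuration and the command the container runs on start
ENV APP_ENV=production
CMD ["python", "app.py"]
```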
Once you have an image, you can run it to create a running container using the
docker run command. Docker will start a new process with its own file system
and environment, based on the image. Containers can be customized with
runtime parameters, including port mappings, environment variables, and more.
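For instance, building the sketch above and starting a container from it (the
image name, port mapping, and environment values are illustrative):

```
# Build an image from the Dockerfile in the current directory and tag it
docker build -t myapp:1.0 .

# Run a container: -d detaches, -p maps host port 8080 to container port 80,
# and -e overrides an environment variable inside the container
docker run -d -p 8080:80 -e APP_ENV=staging --name myapp-test myapp:1.0
```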
Docker Images
Docker images serve as the starting point for creating containers. An image is a
snapshot of a file system that includes application code, runtime, libraries,
environment variables, and configurations. Images are immutable and
read-only. You can create, modify, and share images, but each change results
in a new image version.
Image Layers
Docker images are composed of layers. Each layer represents a set of changes to
the file system. Layers are cached to improve efficiency and reduce duplication.
When an image is built or updated, only the modified layers need to be
transferred, making image distribution faster and more efficient.
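To inspect the layers of an image (using the illustrative myapp:1.0 tag from
earlier):

```
# List an image's layers, newest first, along with the instruction
# that created each layer and its size
docker history myapp:1.0
```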
Docker Repositories
Docker images are typically stored and shared in Docker repositories. A Docker
repository is a collection of related image versions tagged with unique
identifiers. Repositories are organized on Docker registries, which can be public
or private.
Docker Hub
Docker Hub is the default public registry for Docker images. It hosts thousands
of pre-built images shared by the Docker community, covering various software
applications, tools, and operating systems. Docker Hub provides an accessible
and convenient resource for finding and using Docker images.
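For example, finding and downloading a public image from Docker Hub:

```
# Search Docker Hub for images matching "nginx"
docker search nginx

# Download the official nginx image; the "latest" tag is assumed if omitted
docker pull nginx:latest
```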
Private Registries
Organizations can also host images in private registries, which restrict access
to authorized users and are commonly used for proprietary applications and
internal tooling.
Image Tags
Docker images can have multiple versions, each identified by a tag. Tags are
used to specify which version of an image you want to use when running a
container. The default tag is latest, but you can assign custom tags to images to
manage different versions and configurations.
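A short sketch of working with tags (the myapp image name is illustrative):

```
# Build an image with an explicit version tag
docker build -t myapp:1.0 .

# Add a second tag that points at the same image
docker tag myapp:1.0 myapp:latest

# Run a specific version by referencing its tag
docker run -d myapp:1.0
```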
Conclusion