Understanding Virtualization Technologies

Chapter 2 discusses virtualization, which allows multiple virtual machines to run on a single physical machine, enhancing resource utilization and IT service delivery. It covers various types of virtualization, including data, server, operating system, and network functions virtualization, along with their benefits and challenges. Additionally, it introduces Linux containers, command line interfaces, and user management in Linux systems.


Chapter 2:

Virtualization

Virtualization:
“Creating multiple virtual machines (VMs) on a single physical machine, allowing multiple operating systems to run simultaneously.”

Virtualization technology:
It creates useful IT services using resources that are traditionally bound to hardware. It allows users to use a physical machine’s full capacity by distributing its capabilities among many users or environments.

Working

✓ Software called a hypervisor separates the physical resources from the virtual
environments.
✓ Hypervisors can sit on top of an operating system (like on a laptop) or be installed directly onto
hardware (like a server), which is how most enterprises virtualize.
✓ Hypervisors take physical resources and divide them up so that virtual environments can use
them. Resources are partitioned as needed from the physical environment to the many virtual
environments.
✓ Users interact with and run computations within the virtual environment (typically called a
guest machine or virtual machine).
✓ The virtual machine functions as a single data file. Like any digital file, it can be moved
from one computer to another, opened on either one, and be expected to work the same.
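On a Linux host, you can check from the command line whether the CPU exposes the hardware extensions that hypervisors rely on. A minimal sketch, assuming a Linux system with /proc mounted; the exact counts and paths will vary by machine:

```shell
# Count the CPU flags that indicate hardware virtualization support:
# vmx = Intel VT-x, svm = AMD-V. A count of 0 means the extensions are
# absent or hidden (common when running inside a guest VM).
grep -E -c '(vmx|svm)' /proc/cpuinfo || true

# KVM, the Linux kernel's built-in hypervisor, exposes this device when loaded.
ls -l /dev/kvm 2>/dev/null || echo "/dev/kvm not present (KVM not loaded)"
```

If the flag count is non-zero and /dev/kvm exists, the machine can run hardware-accelerated virtual machines.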

Types of Virtualization

Data virtualization

Data virtualization tools sit in front of multiple data sources and
allow them to be treated as a single source, delivering the needed
data—in the required form—at the right time to any application or
user.
Server virtualization

Servers are computers designed to process a high volume
of specific tasks really well so other computers—like laptops and
desktops—can do a variety of other tasks. Virtualizing a server
lets it do more of those specific functions and involves
partitioning it so that its components can be used to serve
multiple functions.

Operating system virtualization


Operating system virtualization happens at the kernel—the central task manager of an operating
system. It’s a useful way to run Linux and Windows environments side by side. Enterprises can also
push virtual operating systems to computers, which:

• Reduces bulk hardware costs, since the computers don’t require such high out-of-the-box capabilities.
• Increases security, since all virtual instances can be monitored and isolated.
• Limits time spent on IT services like software updates.

Network functions virtualization


Network functions virtualization (NFV) separates a network's key functions (like directory
services, file sharing, and IP configuration) so they can be distributed among environments.

Once software functions are independent of the physical machines they once lived on, specific
functions can be packaged together into a new network and assigned to an environment.

Virtualizing networks reduces the number of physical
components—like switches, routers, servers, cables, and hubs—that
are needed to create multiple, independent networks, and it’s
particularly popular in the telecommunications industry.

Challenges of Virtualization
1. Determining Individual Needs
2. Licensing Restrictions
3. Resource Estimations
4. VM Management
5. Virtual Backups
Potentials of Virtualization
1. Slash your IT expenses
2. Reduce downtime and enhance resiliency in disaster recovery situations
3. Increase efficiency and productivity
4. Control independence and DevOps
5. Move to be more green-friendly (organizational and environmental)

Linux containers (LXC)

Definition:

“A lightweight operating system virtualization method that runs multiple isolated Linux
systems on a single host.”

Containers are packaged computing environments that combine various IT components and isolate them
from the rest of the system. They differ from virtual machines mainly in terms of scale and portability.

Working:

Containers are typically measured by the megabyte. They don’t package anything bigger
than an app and all the files necessary to run it, and are often used to package single functions that
perform specific tasks (known as a microservice). The lightweight nature of containers—and their shared
operating system (OS)—makes them very easy to move across multiple environments.

Containers hold a microservice or app and everything it needs to run. Everything within a container is
preserved in something called an image—a code-based file that includes all libraries and
dependencies.
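The lifecycle described above maps onto the standard LXC command-line tools. A sketch, not a definitive recipe: the container name `demo` and the Alpine image are arbitrary choices, and creating containers requires root and the lxc package, so the commands are guarded to do nothing on machines without them:

```shell
#!/bin/sh
# Typical LXC container lifecycle, guarded so this sketch is a no-op
# on machines without LXC tools or root privileges.
if command -v lxc-create >/dev/null 2>&1 && [ "$(id -u)" -eq 0 ]; then
    # Build a container named "demo" from a downloaded Alpine image.
    lxc-create -n demo -t download -- -d alpine -r 3.19 -a amd64
    lxc-start -n demo                  # boot the container
    lxc-ls --fancy                     # list containers with state and IP address
    lxc-attach -n demo -- uname -a     # run a command inside the container
    lxc-stop -n demo
    lxc-destroy -n demo || true        # remove the container and its root filesystem
else
    echo "LXC not available here; commands shown for illustration"
fi
```

Note how little there is to "install" inside the container: the image already carries the app's libraries and dependencies, and the host kernel is shared.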

Linux Command Line Interface (CLI)


✓ The Command Line Interface (CLI) is a non-graphical, text-based interface to the
computer system, where the user types in a command and the computer then executes it.
✓ The terminal is the program that provides the command line interface (CLI)
environment to the user.
✓ The terminal accepts the commands that the user types and passes them to a shell.
✓ The shell then interprets what the user has typed into instructions that can
be executed by the OS (operating system).
✓ If the command produces output, that text is displayed in the terminal.
✓ If a problem with the command is found, an error message is displayed.
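The loop described above—type a command, the shell interprets it, output or an error comes back—can be seen with any simple commands:

```shell
# A valid command: the shell finds the program and the terminal shows its output.
echo "hello from the CLI"

# Ask the shell which file a command name resolves to.
command -v ls

# A failing command prints an error message (silenced here) and sets a
# non-zero exit status, which $? holds immediately afterwards.
ls /no/such/directory 2>/dev/null
echo "exit status: $?"
```

The exact error text and exit code depend on the command; only the pattern—success produces output, failure produces a message and a non-zero status—is universal.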

Graphical User Interface

• In graphical mode (GUI), the user can have many shells open and perform tasks on
multiple or remote computers. After successfully logging in, the user is taken to the OS desktop,
where installed applications can be used.
• Non-graphical mode (CLI) starts off with a text-based login: the user is prompted for a
username and password. If the login is successful, an execution shell is provided. In the command
line interface, there are no windows to move around.

Linux Shell

Definition:

“A shell provides an interface to the Linux system. It gathers input from the user and
executes commands based on that input. When a program finishes executing, it displays
that program’s output.”

“A shell is an environment in which commands, programs, and shell scripts can be run.”

There are different flavors of shell. Each flavor has its own set of recognized commands
and functions.

Shell Prompt

The prompt, $, which is called the command prompt, is issued by the shell. While the
prompt is displayed, a command can be entered. The shell reads the input after the user presses
Enter, and determines the command to be executed by looking at the first word of the input.
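The "first word" rule is easy to see in practice: the shell treats the first word of the line as the command and every remaining word as an argument to it:

```shell
# "printf" is the command; '%s\n' and the three color names are its arguments.
# Prints red, green, blue on separate lines.
printf '%s\n' red green blue

# "echo" is the command; the rest of the line is its arguments, printed back out.
echo first word picks the command
```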

Shell Types
There are two types of shells –
Bourne shell – The Bourne shell is the original Unix shell, developed by Stephen Bourne.
It has the following subcategories:
• Bourne shell (sh) – still widely used today, particularly for scripting.
• Korn shell (ksh) – developed by David Korn; offers many additional
features.
• Bourne Again shell (bash) – an improvement over the original Bourne shell that offers many
additional features.
• POSIX shell (sh) – a shell designed to be compatible with the POSIX standard.
C shell – The C shell is a shell that was developed at the University of California, Berkeley.
In C-type shells, the % character is the default prompt.
It has the following subcategories:
• C shell (csh) – known for its C-like syntax and often used for interactive work.
• TENEX/TOPS C shell (tcsh) – an enhanced version of the C shell, with additional features and
improvements.
The original Unix shell was written in the mid-1970s by Stephen R. Bourne while he was at the
AT&T Bell Labs in New Jersey. Bourne shell was the first shell to appear on Unix systems, thus it is
referred to as "the shell". Bourne shell is usually installed as /bin/sh on most versions of Unix. For
this reason, it is the shell of choice for writing scripts that can be used on different versions of Unix.
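On a live system you can see which shells are installed and which one is in use. A small sketch, assuming a typical Linux layout:

```shell
# The login shell recorded for the current user (an environment variable,
# ultimately taken from /etc/passwd).
echo "$SHELL"

# /etc/shells lists the shells considered valid login shells, where present.
cat /etc/shells 2>/dev/null || echo "/etc/shells not present"

# The Bourne-compatible shell is conventionally installed as /bin/sh.
command -v sh
```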

Linux user

Users are accounts that can be used to log in to a system. Each user is identified by a unique
identification number, or UID, by the system. All the information about a system’s users is stored in
the /etc/passwd file. The hashed passwords for users are stored in the /etc/shadow file.

Users can be divided into two categories on the basis of the level of access:

Super user/root/administrator: has access to all files on the system.

Normal users: have limited access.

When a new user is created, the system takes the following actions by default:

✓ Assigns a UID to the user.
✓ Creates a home directory under /home/ named after the user.
✓ Sets the default shell of the user to /bin/sh.
✓ Creates a private user group, named after the username itself.
✓ Copies the contents of /etc/skel to the home directory of the new user.
✓ Copies .bashrc, .bash_profile, and .bash_logout to the home directory of the new user. These
files provide environment variables for the user’s session.
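These account details can be inspected with standard, read-only commands (actually creating users with `useradd` requires root, so that step is not shown):

```shell
# UID, primary GID, and group memberships of the current user.
id

# One line per account in /etc/passwd, with colon-separated fields:
#   name:x:UID:GID:comment:home-directory:login-shell
# ("x" means the real password hash lives in /etc/shadow.)
grep '^root:' /etc/passwd

# Skeleton files copied into every new user's home directory.
ls -a /etc/skel 2>/dev/null || echo "/etc/skel not present"
```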

