
Chapter 4: Resource Monitoring and Management



4.1. Resource Monitoring & Management
As stated earlier, a great deal of system administration revolves around resources and their efficient
use. By balancing various resources against the people and programs that use those resources, you
waste less money and make your users as happy as possible. However, this leaves two questions:
i. What are resources?
ii. How is it possible to know what resources are being used (and to what extent)?
The purpose of this chapter is to enable you to answer these questions by helping you to learn more
about resources and how they can be monitored.
Before you can monitor resources, you first have to know what resources there are to monitor. All
systems have the following resources available:
➢ CPU power
➢ Bandwidth
➢ Memory
➢ Storage
These resources have a direct impact on system performance, and therefore, on your users' productivity
and happiness. At its simplest, resource monitoring is nothing more than obtaining information
concerning the utilization of one or more system resources.
However, it is rarely this simple. First, one must take into account the resources to be monitored. Then
it is necessary to examine each system to be monitored, paying particular attention to each system's
situation.
The systems you monitor fall into one of two categories:
➢ The system is currently experiencing performance problems at least part of the time and you
would like to improve its performance.
➢ The system is currently running well and you would like it to stay that way.
The first category means you should monitor resources from a system performance perspective, while
the second category means you should monitor system resources from a capacity planning perspective.
Because each perspective has its own unique requirements, the following sections explore each
category in more depth.

System Performance Monitoring


System performance monitoring is normally done in response to a performance problem. Either the
system is running too slowly, or programs (and sometimes even the entire system) fail to run at all. In
either case, performance monitoring is normally done as the first and last steps of a three-step process:
i. Monitoring to identify the nature and scope of the resource shortages that are causing the
performance problems


ii. The data produced from monitoring is analyzed and a course of action (normally performance
tuning and/or the procurement of additional hardware) is taken to resolve the problem
iii. Monitoring to ensure that the performance problem has been resolved
Because of this, performance monitoring tends to be relatively short-lived in duration and more
detailed in scope.
Note: System performance monitoring is an iterative process, with these steps
being repeated several times to arrive at the best possible system performance.
The primary reason for this is that system resources and their utilization tend
to be highly interrelated, meaning that often the elimination of one resource
bottleneck uncovers another one.

Monitoring System Capacity


Monitoring system capacity is done as part of an ongoing capacity planning program. Capacity
planning uses long-term resource monitoring to determine rates of change in the utilization of system
resources. Once these rates of change are known, it becomes possible to conduct more accurate long-
term planning regarding the procurement of additional resources.
Monitoring done for capacity planning purposes is different from performance monitoring in two ways:
i. The monitoring is done on a more-or-less continuous basis
ii. The monitoring is usually not as detailed
The reason for these differences stems from the goals of a capacity planning program. Capacity
planning requires a "big picture" viewpoint; short-term or anomalous resource usage is of little concern.
Instead, data is collected over a period of time, making it possible to categorize resource utilization in
terms of changes in workload. In more narrowly-defined environments (where only one application is run, for example), it is possible to model the application's impact on system resources. This can be done
with sufficient accuracy to make it possible to determine, for example, the impact of 5 more customer
service representatives running the customer service application during the busiest time of the day.
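As a rough illustration of such a projection (the numbers below are hypothetical, not taken from any real system), the rate of change gathered by long-term monitoring can be turned into a simple forecast:

    # Hypothetical capacity-planning forecast: estimate when a volume fills up,
    # given the average growth rate observed through long-term monitoring.
    used_gb = 620.0            # space currently used (assumed measurement)
    capacity_gb = 1000.0       # total capacity of the volume
    growth_gb_per_week = 9.5   # average growth rate from historical data

    weeks_left = (capacity_gb - used_gb) / growth_gb_per_week
    print(f"At the current rate, the volume fills up in about {weeks_left:.0f} weeks")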

4.1.1. What to Monitor?


As stated earlier, the resources present in every system are CPU power, bandwidth, memory, and
storage. At first glance, it would seem that monitoring would need only consist of examining these four
different things.
Unfortunately, it is not that simple. For example, consider a disk drive. What things might you want to
know about its performance?
➢ How much free space is available?
➢ How many I/O operations on average does it perform each second?
➢ How long on average does it take each I/O operation to be completed?
➢ How many of those I/O operations are reads? How many are writes?
➢ What is the average amount of data read/written with each I/O?
There are more ways of studying disk drive performance; these points have only scratched the surface.
The main concept to keep in mind is that there are many different types of data for each resource.


The following subsections explore the types of utilization information that would be helpful for each of
the major resource types.
4.1.1.1. Monitoring CPU Power
In its most basic form, monitoring CPU power can be no more difficult than determining if CPU
utilization ever reaches 100%. If CPU utilization stays below 100%, no matter what the system is
doing, there is additional processing power available for more work.
However, it is a rare system that does not reach 100% CPU utilization at least some of the time. At that
point it is important to examine more detailed CPU utilization data. By doing so, it becomes possible to
start determining where the majority of your processing power is being consumed. Here are some of
the more popular CPU utilization statistics:
➢ User Versus System
➢ Context Switches
➢ Interrupts
➢ Runnable Processes
A process may be in different states. For example, it may be:
➢ Waiting for an I/O operation to complete
➢ Waiting for the memory management subsystem to handle a page fault
In these cases, the process has no need for the CPU.
However, eventually the process state changes, and the process becomes runnable. As the name
implies, a runnable process is one that is capable of getting work done as soon as it is scheduled to
receive CPU time. However, if more than one process is runnable at any given time, all but one
(assuming a single-processor computer system) of the runnable processes must wait for their turn at the
CPU. By monitoring the number of runnable processes, it is possible to determine how CPU-bound
your system is.
Other performance metrics that reflect an impact on CPU utilization tend to include different services
the operating system provides to processes. They may include statistics on memory management, I/O
processing, and so on. These statistics also reveal that, when system performance is monitored, there
are no boundaries between the different statistics. In other words, CPU utilization statistics may end up
pointing to a problem in the I/O subsystem, or memory utilization statistics may reveal an application
design flaw.
Therefore, when monitoring system performance, it is not possible to examine any one statistic in complete isolation; only by examining the overall picture is it possible to extract meaningful information from any performance statistics you gather.
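As a minimal sketch of gathering these statistics together (using the third-party Python library psutil, which is just one of many possible tools, not the only way to do this), the user/system split, context switches, interrupts, and the run queue can be sampled as follows; the load-average call is available only on Unix-like systems:

    import os
    import psutil   # third-party library: pip install psutil

    times = psutil.cpu_times_percent(interval=1)   # sampled over one second
    stats = psutil.cpu_stats()

    print(f"user {times.user}%  system {times.system}%  idle {times.idle}%")
    print(f"context switches: {stats.ctx_switches}  interrupts: {stats.interrupts}")

    # Run-queue pressure (runnable processes) on Unix-like systems only
    if hasattr(os, "getloadavg"):
        print("load average (1/5/15 min):", os.getloadavg())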
4.1.1.2. Monitoring Bandwidth
Monitoring bandwidth is more difficult than monitoring the other resources described here, because performance statistics tend to be device-based, while most of the places where
bandwidth is important tend to be the buses that connect devices. In those instances where more than
one device shares a common bus, you might see reasonable statistics for each device, but the aggregate
load those devices place on the bus would be much greater.


Another challenge to monitoring bandwidth is that there can be circumstances where statistics for the
devices themselves may not be available. This is particularly true for system expansion buses and
datapaths. However, even though 100% accurate bandwidth-related statistics may not always be
available, there is often enough information to make some level of analysis possible, particularly when
related statistics are taken into account.
Some of the more common bandwidth-related statistics are:
➢ Bytes received/sent: Network interface statistics provide an indication of the bandwidth
utilization of one of the more visible buses -- the network.
➢ Interface counts and rates: These network-related statistics can give indications of excessive
collisions, transmit and receive errors, and more. Through the use of these statistics (particularly
if the statistics are available for more than one system on your network), it is possible to
perform a modicum of network troubleshooting even before the more common network
diagnostic tools are used.
➢ Transfers per Second: Normally collected for block I/O devices, such as disk and high-
performance tape drives, this statistic is a good way of determining whether a particular device's
bandwidth limit is being reached. Due to their electromechanical nature, disk and tape drives
can only perform so many I/O operations every second; their performance degrades rapidly as
this limit is reached.
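A minimal sketch of collecting some of these counters, again assuming the third-party psutil library, derives per-second rates from two samples taken one second apart:

    import time
    import psutil

    net1, disk1 = psutil.net_io_counters(), psutil.disk_io_counters()
    time.sleep(1)
    net2, disk2 = psutil.net_io_counters(), psutil.disk_io_counters()

    print("bytes sent/s:", net2.bytes_sent - net1.bytes_sent)
    print("bytes received/s:", net2.bytes_recv - net1.bytes_recv)
    print("receive errors so far:", net2.errin, " dropped:", net2.dropin)

    # Transfers per second for block I/O devices (reads plus writes)
    transfers = (disk2.read_count + disk2.write_count) - (disk1.read_count + disk1.write_count)
    print("disk transfers/s:", transfers)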
4.1.1.3. Monitoring Memory
If there is one area where a wealth of performance statistics can be found, it is in the area of monitoring
memory utilization. Due to the inherent complexity of today's demand-paged virtual memory operating
systems, memory utilization statistics are many and varied. It is here that the majority of a system
administrator's work with resource management takes place.
The following statistics represent a cursory overview of commonly-found memory management
statistics:
➢ Page Ins/Page Outs: These statistics make it possible to gauge the flow of pages from system
memory to attached mass storage devices (usually disk drives). High rates for both of these
statistics can mean that the system is short of physical memory and is thrashing, or spending
more system resources on moving pages into and out of memory than on actually running
applications.
➢ Active/Inactive Pages: These statistics show how heavily memory-resident pages are used. A
lack of inactive pages can point toward a shortage of physical memory.
➢ Free, Shared, Buffered, and Cached Pages: These statistics provide additional detail over the
more simplistic active/inactive page statistics. By using these statistics, it is possible to
determine the overall mix of memory utilization.
➢ Swap Ins/Swap Outs: These statistics show the system's overall swapping behavior. Excessive
rates here can point to physical memory shortages.
Successfully monitoring memory utilization requires a good understanding of how demand-paged
virtual memory operating systems work, which alone could take up an entire book.
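Still, the headline numbers are easy to sample. The sketch below uses the third-party psutil library; the active, inactive, buffers, cached, and shared fields are reported only on some platforms (notably Linux), and sin/sout are cumulative swap-in/swap-out counters:

    import psutil

    vm = psutil.virtual_memory()
    sw = psutil.swap_memory()

    print(f"physical memory: {vm.percent}% used, {vm.available} bytes available")
    for field in ("active", "inactive", "buffers", "cached", "shared"):
        if hasattr(vm, field):            # platform-specific fields
            print(field, getattr(vm, field))
    print("swap ins:", sw.sin, " swap outs:", sw.sout)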


4.1.1.4. Monitoring Storage


Monitoring storage normally takes place at two different levels:
➢ Monitoring for sufficient disk space
➢ Monitoring for storage-related performance problems
The reason for this is that it is possible to have dire problems in one area and no problems whatsoever
in the other. For example, it is possible to cause a disk drive to run out of disk space without once
causing any kind of performance-related problems. Likewise, it is possible to have a disk drive that has
99% free space, yet is being pushed past its limits in terms of performance.
However, it is more likely that the average system experiences varying degrees of resource shortages in
both areas. Because of this, it is also likely that -- to some extent -- problems in one area impact the
other. Most often this type of interaction takes the form of poorer and poorer I/O performance as a disk
drive nears 0% free space although, in cases of extreme I/O loads, it might be possible to slow I/O
throughput to such a level that applications no longer run properly.
In any case, the following statistics are useful for monitoring storage:
➢ Free Space: This is probably the one resource all system administrators watch most closely; it would be a rare administrator who never checks on free space (or has some automated way of doing so).
➢ File System-Related Statistics: These statistics (such as number of files/directories, average
file size, etc.) provide additional detail over a single free space percentage. As such, these
statistics make it possible for system administrators to configure the system to give the best
performance, as the I/O load imposed by a file system full of many small files is not the same as
that imposed by a file system filled with a single massive file.
➢ Transfers per Second: This statistic is a good way of determining whether a particular device's
bandwidth limitations are being reached.
➢ Reads/Writes per Second: A slightly more detailed breakdown of transfers per second, these
statistics allow the system administrator to more fully understand the nature of the I/O loads a
storage device is experiencing. This can be critical, as some storage technologies have widely
different performance characteristics for read versus write operations.
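As an illustration, the free-space and read/write figures for one volume can be sampled with the third-party psutil library (the path below is just an example; use a drive letter such as "C:\\" on Windows):

    import time
    import psutil

    usage = psutil.disk_usage("/")       # example path; "C:\\" on Windows
    print(f"free space: {usage.free / 2**30:.1f} GiB ({100 - usage.percent:.1f}% free)")

    before = psutil.disk_io_counters()
    time.sleep(1)
    after = psutil.disk_io_counters()
    print("reads/s:", after.read_count - before.read_count)
    print("writes/s:", after.write_count - before.write_count)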

4.1.2. Monitoring Tools


As your organization grows, so does the number of servers, devices, and services you depend on. The
term system covers all of the computing resources of your organization. Each element in the system
infrastructure relies on underlying services or provides services to components that are closer to the user.
In networking, it is typical to think of a system as a layered stack. User software sits at the top of the
stack and system applications and services on the next layer down. Beneath the services and
applications, you will encounter operating systems and firmware. The performance of software
elements needs to be monitored as an application stack.
Users will notice performance problems with the software that they use, but those problems rarely arise
within that software. All layers of the application stack need to be examined to find the root cause of
performance issues. You need to head off problems with real-time status monitoring before they occur.
Monitoring tools help you spot errors and service failures before they start to impact users.


The system stack continues on below the software. Hardware issues can be prevented through
hardware monitoring. You will need to monitor servers, network devices, interface performance, and
network link capacity. You need to monitor many different types of interacting system elements to keep
your IT services running smoothly.

Why do System Performance Monitoring?


Knowing whether a computer has issues is fairly straightforward when the computer is right in front of
you. Knowing what’s causing the problem? That’s harder. But a computer sitting by itself is not as
useful as it could be. Even the smallest small-office/home-office network has multiple nodes: laptops,
desktops, tablets, WiFi access points, Internet gateway, smartphones, file servers and/or media servers,
printers, and so on. That means you are in charge of “infrastructure” rather than just “equipment.” Any
component might start misbehaving and could cause issues for the others.
You most likely rely on off-premises servers and services, too. Even a personal website raises the
nagging question, “Is my site still up?” And when your ISP has problems, your local network’s
usefulness suffers. You need an activity monitor. Organizations rely more and more on servers and
services hosted in the cloud: SaaS applications (email, office apps, business packages, etc); file storage;
cloud hosting for your own databases and apps; and so on. This requires sophisticated monitoring
capabilities that can handle hybrid environments.
Bandwidth monitoring tools and NetFlow and sFlow based traffic analyzers help you stay aware of
the activity, capacity, and health of your network. They allow you to watch traffic as it flows through
routers and switches, or arrives at and leaves hosts.
But what of the hosts on your network, their hardware, and the services and applications running there?
Monitoring activity, capacity, and health of hosts and applications is the focus of system monitoring.

System Monitoring Software Essentials


To keep your system fit for purpose, your monitoring activities need to cover the following priorities:
➢ Acceptable delivery speeds
➢ Constant availability
➢ Preventative maintenance
➢ Software version monitoring and patching
➢ Intrusion detection
➢ Data integrity
➢ Security monitoring
➢ Attack mitigation
➢ Virus prevention and detection
Lack of funding may cause you to compromise on monitoring completeness. The expense of
monitoring can be justified because it:
➢ reduces user/customer support costs
➢ prevents loss of income caused by system outages or attack vulnerability
➢ prevents data leakage leading to litigation
➢ prevents hardware damage and loss of business-critical data


Minimum system monitoring software capabilities


A more sophisticated system monitoring package provides a much broader range of capabilities, such as:
➢ Monitoring multiple servers. Handling servers from various vendors running various
operating systems. Monitoring servers at multiple sites and in cloud environments.
➢ Monitoring a range of server metrics: availability, CPU usage, memory usage, disk space,
response time & upload/download rates. Monitoring CPU temperature & power supply voltages
➢ Monitoring applications. Using deep knowledge of common applications and services to
monitor key server processes, including web servers, database servers, and application stacks.
➢ Automatically alerting you of problems, such as servers or network devices that are
overloaded or down, or worrisome trends. Customized alerts that can use multiple methods to
contact you – email, SMS text messages, pager, etc.
➢ Triggering actions in response to alerts, to handle certain classes of problems automatically.
➢ Collecting historical data about server and device health and behavior.
➢ Displaying data. Crunching the data and analyzing trends to display illuminating visualizations
of the data.
➢ Reports. Besides displays, generating useful predefined reports that help with tasks like
forecasting capacity, optimizing resource usage, and predicting needs for maintenance and
upgrades.
➢ Customizable reporting. A facility to help you create custom reports.
➢ Easy configurability, using methods like auto-discovery and knowledge of server and
application types.
➢ Non-intrusive: imposing a low overhead on your production machines and services. Making
smart use of agents to offload monitoring where appropriate.
➢ Scalability: Able to grow with your business, from a small or medium business (SMB) to a
large enterprise.
4.1.2.1. Windows Task Manager
Task Manager (formerly known as Windows Task Manager) is a task manager, system monitor, and startup
manager included with all versions of Microsoft Windows since Windows NT 4.0 and Windows 2000.
Windows Task Manager provides information about computer performance and shows detailed
information about the programs and processes running on the computer, including name of running
processes, CPU load, commit charge, I/O details, logged-in users, and Windows services; if connected
to the network, you can also view the network status and quickly understand how the network works.
Microsoft improves the task manager between each version of Windows, sometimes quite dramatically.
Specifically, the task managers in Windows 10 and Windows 8 are very different from those in
Windows 7 and Windows Vista, and the task managers in Windows 7 and Vista are very different from
those in Windows XP. A similar program called Tasks exists in Windows 98 and Windows 95.

How to Open the Task Manager


There are several easy and quick ways to open Task Manager. Some of them may come in handy if you do not know how to open it, or if you cannot open it the way you are used to.


You are probably familiar with pressing Ctrl+Alt+Delete on your keyboard. Before Windows Vista, this took you directly to Task Manager. Starting with Windows Vista, pressing Ctrl+Alt+Delete leads to the Windows Security screen, which provides options for locking your PC, switching users, signing out, changing a password, and running Task Manager.
The quickest way to start Task Manager is to press Ctrl+Shift+Esc, and it will take you directly to it.
If you prefer using a mouse over a keyboard, one of the quickest ways to launch Task Manager is to right-click on any blank area of the taskbar and select Task Manager. It takes just two clicks.
You can also run Task Manager by hitting Windows+R to open the Run box, typing taskmgr and
then hitting Enter or clicking OK.
You can also open Task Manager from the Start menu, from Windows Explorer, or by creating a shortcut, among other methods.

Figure 4.1. How to Start Task Manager


The four convenient methods listed above should be enough for most situations.


Explanation of the Tabs in Task Manager


The following sections discuss the useful tabs you can find in Task Manager today, mostly as they appear in Windows 8 and Windows 10.

Figure 4.2. Sample Screen Shot of a Task Manager Window


Processes
The Processes tab contains a list of all running programs and applications on your computer (listed
under Apps), as well as any background processes and Windows processes that are running.
In this tab, you can close running programs, see how each program uses your computer resources, and
more.
The Processes tab is available in all versions of Windows. Starting with Windows 8, Microsoft has
combined the Applications and Processes tab into the Processes tab, so Windows 8/10 displays all
running programs in addition to processes and services.
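For comparison, a rough programmatic analogue of this listing (not the Task Manager API itself, just an illustration using the third-party psutil library) walks the running processes and prints their CPU and memory use:

    import psutil

    for proc in psutil.process_iter(["pid", "name", "cpu_percent", "memory_percent"]):
        info = proc.info
        name = (info["name"] or "?")[:30]     # name can be unavailable for some processes
        print(f"{info['pid']:>6}  {name:30}  cpu {info['cpu_percent']}%  mem {info['memory_percent']}%")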
Performance


The Performance tab, available in all versions of Windows, is a summary of what is going on overall with your major hardware components, including CPU, memory, disk drive, Wi-Fi, and network usage. It displays how much of the computer's available system resources are being used, so you can check this valuable information at a glance.
For example, this tab makes it easy to see your CPU model and maximum speed, RAM slots in use, disk transfer rate, and your IP address. Newer versions of Windows also display usage charts. In addition, there is a quick link to Resource Monitor at the bottom of this tab.
App History
The App History tab displays the CPU time and network utilization that each Windows app has consumed from the date listed on the screen up to the time you open Task Manager. App History is only available in Task Manager in Windows 10 and Windows 8.
Startup
The Startup tab shows every program that is launched automatically each time you start your computer, along with several important details about each program, including the Publisher, Status, and Startup impact. Startup impact is the most valuable piece of information, showing an impact rating of high, medium, or low.
This tab is great for identifying and then disabling programs that you do not need to run automatically. Disabling Windows auto-start programs is a very simple way to speed up your computer. The Startup tab is only available in Task Manager in Windows 10 and Windows 8.
Users
The Users tab shows the users currently signed in to the computer and the processes running under each of them. The Users tab is available in all Windows versions of Task Manager, but only shows the processes each user is running in Windows 10 and Windows 8.
Details
The Details tab contains full details of each process running on your computer. The information provided in this tab is useful during advanced troubleshooting. The Details tab is available in Task Manager in Windows 10 and Windows 8; in earlier versions of Windows, similar information appeared on the Processes tab.
Services
The Services tab, available in Task Manager in Windows 10, 8, 7, and Vista, shows all of the Windows services currently on the computer along with each service's Description and Status. The status is Running or Stopped, and you can change it.

What to Do in the Task Manager?


Task Manager gives you some limited control over running tasks, such as setting process priorities and processor affinity, starting and stopping services, and forcibly terminating processes.
One of the most common things done in Task Manager is to use End Task to stop programs
from running. If a program no longer responds, you can select End Task from the Task Manager to
close the program without restarting the computer.


4.1.2.2. Windows Resource Monitoring (Resmon)


Resource Monitor (Resmon) is a system application included in Windows Vista and later versions of Windows that allows users to look at the presence and allocation of resources on a computer. It lets administrators and other users determine how system resources are being used by a particular hardware setup.

How to start Resource Monitor


Users and administrators have several options to start Resource Monitor. It is included in several
versions of Windows, and some options to start the tool are only available in select versions of the
operating system.
The first two methods should work on all versions of Windows that are supported by Microsoft.
(1) Windows-R to open the run box. Type resmon.exe, and hit the Enter-key.
(2) Windows-R to open the run box. Type perfmon.exe /res, and hit the Enter-key.
(3) On Windows 10: Start → All Apps → Windows Administrative Tools → Resource Monitor
(4) Old Windows: Start → All Programs → Accessories → System Tools → Resource Monitor
(5) Open Task Manager with Ctrl+Shift+Esc → Performance tab → click Open Resource Monitor.

Figure 4.3. Opening Resource Monitor from Task Manager


The Resource Monitor interface looks the same on Windows 7, 8.1, and 10. The program uses tabs to separate data: Overview, CPU, Memory, Disk, and Network are its five tabs. It loads the Overview tab when you start it, listing all the processes that use the resources.


The sidebar displays graphs that highlight the CPU, Disk, Network, and Memory use over a period of
60 seconds.
You can hide and show elements by clicking the arrow icon in their title bars. You can also customize the interface by moving the mouse cursor over the dividers between sections and dragging them to increase or decrease the visible area of each element.
You may want to hide the graphs, for instance, to make more room for more important data, and run the Resource Monitor window at as large a size as possible.
The overview tab is a good starting point, as it gives you an overview of the resource usage. It
highlights CPU and memory usage, disk utilization, and network use in real-time.
Each particular listing offers a wealth of information. The CPU box lists process names and IDs, the
network box IP addresses and data transfers, the memory box hard faults, and the disk box read and
write operations.
One interesting option you have here is to select one or more processes under CPU to apply filters to the Disk, Network, and Memory tabs.
If you select a particular process under CPU, Resource Monitor lists only that process's disk, network, and memory usage in its interface. This is one of the differences from Task Manager, which cannot do this.

Figure 4.4. Sample Screen Shot of Resource Monitor


Monitor CPU Usage with Resource Monitor


You need to switch to the CPU tab if you want to monitor CPU utilization in detail. You find the processes listing of the Overview page there, along with three new listings: Services, Associated Handles, and Associated Modules.
You can filter by processes to display data only for those processes. This is quite handy, as it is a quick way to see links between processes, services, and other files on the system. Note that the graphs are different from the ones displayed before: the graphs on the CPU tab list the usage of each core, service CPU usage, and total CPU usage.
Associated Modules lists files, such as dynamic link libraries, that are used by a process. Associated Handles point to system resources such as files or Registry values. These offer very specific information but are useful at times. You can search the handles, for instance, to find out why you cannot delete a particular file at that point in time.
Resource Monitor gives you some control over processes and services on the CPU tab. Right-click on
any process to display a context menu with options to end the selected process or entire process tree, to
suspend or resume processes, and to run a search online.
The Services context menu is limited to starting, stopping and restarting services, and to search online
for information.
Processes may be displayed using colors. A red process indicates that it is not responding, and a blue one that it is suspended.

Memory in Resource Monitor


The Memory tab lists processes just like the CPU tab does, but with a focus on memory usage. On top of that, it features a physical memory view that visualizes the distribution of memory on the Windows machine.
If this is your first time accessing the information, you may be surprised that quite a bit of memory may
be hardware reserved. The graphs highlight the used physical memory, the commit charge, and the hard
faults per second. Each process is listed with its name and process ID, the hard faults, and various
memory related information.
➢ Commit: Amount of virtual memory reserved by the operating system for the process.
➢ Working Set: Amount of physical memory currently in use by the process.
➢ Shareable: Amount of physical memory in use by the process that can be shared with other
processes.
➢ Private: Amount of physical memory in use by the process that cannot be used by other
processes.
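As a loose illustration of reading such per-process counters programmatically (using the third-party psutil library; the mapping between its field names and Resource Monitor's columns is approximate), the current process can be inspected like this:

    import psutil

    proc = psutil.Process()              # the current process, as an example
    mem = proc.memory_info()
    print("working set (rss):", mem.rss, "bytes")
    print("committed virtual memory (vms):", mem.vms, "bytes")
    # num_page_faults is reported on Windows only; other platforms fall back to "n/a"
    print("page faults so far:", getattr(mem, "num_page_faults", "n/a"))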

Disk Activity information


The Disk tab of the Windows Resource Monitor lists the disk activity of processes and storage
information. It visualizes the disk usage in total and for each running process. You get a reading of each process's disk read and write activity, and can use the filtering options to filter by a particular process
or several processes.


The Storage listing at the bottom lists all available drives, the available and total space on each drive, and the active time. The graphs visualize the disk queue length; this is an indicator of pending requests for a particular disk and is a good way to find out whether disk performance can keep up with the I/O load.

Network Activity in Resource Monitor


The Network tab lists network activity, TCP connections and listening ports. It lists network activity of
any running process in detail. It is useful, as it tells you right away if processes connect to the Internet.
You do get TCP connection listings that highlight remote servers that processes connect to, the
bandwidth use, and the local listening ports.
Bandwidth
Bandwidth describes the maximum data transfer rate of a network. It measures how much data can be
sent over a specific connection in a given amount of time. For example, a gigabit Ethernet connection
has a bandwidth of 1,000 Mbps (125 megabytes per second). An Internet connection via cable modem
may provide 25 Mbps of bandwidth.
While bandwidth is used to describe network speeds, it does not measure how fast bits of data move
from one location to another. Since data packets travel over electronic or fiber optic cables, the speed of
each bit transferred is negligible. Instead, bandwidth measures how much data can flow through a
specific connection at one time.
When visualizing bandwidth, it may help to think of a network connection as a tube and each bit of
data as a grain of sand. If you pour a large amount of sand into a skinny tube, it will take a long time
for the sand to flow through it. If you pour the same amount of sand through a wide tube, the sand will
finish flowing through the tube much faster. Similarly, a download will finish much faster when you
have a high-bandwidth connection rather than a low-bandwidth connection.
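Using the figures quoted above, the difference is easy to work out (the 1 GB file size is just an example):

    # Time to transfer a 1 GB file (about 8,000 megabits) at two bandwidths.
    file_megabits = 1000 * 8
    for name, mbps in [("25 Mbps cable modem", 25), ("gigabit Ethernet", 1000)]:
        seconds = file_megabits / mbps
        print(f"{name}: about {seconds:.0f} seconds")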
Data often flows over multiple network connections, which means the connection with the smallest
bandwidth acts as a bottleneck. Generally, the Internet backbone and connections between servers have
the most bandwidth, so they rarely serve as bottlenecks. Instead, the most common Internet bottleneck
is your connection to your ISP.
Bandwidth vs. Speed
Internet speed is a major concern for any Internet user. Even though Internet speed and data transfer mostly revolve around bandwidth, your Internet speed can differ from what the bandwidth figure leads you to expect. What tends to make it complicated is that the terms bandwidth, speed, and bandwidth speed are used interchangeably, but they are actually different things. Most people refer to speed as how long it takes to upload and download files, videos, livestreams, and other content.
Bandwidth is the size of the pipe or the overall capacity for data. Keep in mind that you could have
great bandwidth and not so great speed if your end system, your network, can’t handle all of the flow of
information. The key is making sure everything matches up. If you want to know more about your
Internet performance, you can use an Internet speed test. This could help you see if your Internet
service provider is providing the actual Internet connection that you are expecting, or if there are
problems at the network level with being able to handle the data.


Network bandwidth
Bandwidth use can also be monitored by a network bandwidth monitor. Network bandwidth is a fixed commodity, and there are several ways to make good use of it. First, you can control the data flow over your Internet connection, that is, streamline data from one point to another. Second, you can optimize data so that it consumes less of the bandwidth that is allocated.
In summary, bandwidth is the amount of information an Internet connection can handle in a given period. An Internet connection operates faster or slower depending on whether the bandwidth is large or small; with a larger bandwidth, data transmission is much faster than over an Internet connection with a lower bandwidth.
4.1.3. Network Printers
Network printing allows us to use printing resources efficiently. With network printing we first connect all of our workstations to a network and then we implement a network printer. In general there are two ways this can be done. In the first method we take a regular printer and plug it into the back of one of the PCs. In the picture below that PC is named Workstation 1. Then we share that printer on the network
by going to the printer properties in Windows.

Figure 4.5. Sample Shared Printer through a workstation


In this configuration other hosts on the network can send print jobs through the network to Workstation 1, which then sends them to the print device. This is the cheaper method, but we depend on Workstation 1, which has to be turned on all the time. If someone is using that computer, then we depend on that person too. This method is used in home or small-office scenarios. To connect to the shared printer we can use a UNC path in the format: \\computername\sharename.
UNC (Universal Naming Convention) path is a standard for identifying servers,
printers and other resources in a network. It uses double slashes (for Unix and
Linux) or backslashes (for Windows) to precede the name of the computer.
➢ //servername/path Unix
➢ \\servername\path DOS/Windows

In the second method we implement the type of printer that has its own network interface installed (either
wired or wireless). This way we can connect our printer directly to the network so the print jobs can be
sent from workstations directly to that network printer.


Figure 4.6. Shared printer with its own dedicated NIC (Network Interface Card)
The print job does not have to go through a workstation as in the first case. To connect to a
network attached printer we can create a printer object using a TCP/IP port. We use the IP address and
port name information to connect to the printer.

Print Port
When a client needs to send a print job to a network printer, the client application formats the print job and sends it to the print driver. Just as with a traditional print job, it is saved in the spool on the local workstation. Then the job is sent from the spool to the printer. In a traditional setup the computer sends the job through a parallel or USB cable to the printer. In a network printing setup, the job is redirected: it goes out through the network board, across the network, and arrives at the destination network printer.

Drivers
Each network host that wants to use the network printer must have the corresponding printer driver
installed. When we share a printer in Windows, the current printer driver is automatically delivered to
clients that connect to the shared printer. If the client computers run a different version of Windows, we
can add the necessary printer drivers to the printer object. To add drivers for network users we can use
the ‘Advanced’ and ‘Sharing’ tabs in printer properties.

Print Server
An important component of any network printer that we have is the print server. The print server
manages the flow of documents sent to the printer. Using a print server lets us customize when and how
documents print. There are different types of print servers. In the first scenario where we have attached
an ordinary printer to our workstation, the printer has no print server hardware built in. In this case the
operating system running on Workstation 1 functions as a print server. It receives the jobs from the
other clients, saves them locally in a directory on the hard drive and spools them off to the printer one
at a time as the printer becomes ready. The computer can fill other roles on the network in addition to
being the print server. Most operating systems include print server software.


Some printers, like our printer from the second scenario, have a built in print server that’s integrated
into the hardware of the printer itself. It receives the print jobs from the various clients, queues them
up, gives them priority and sends them on through the printing mechanism as it becomes available. We
often refer to this type of print server as an internal print server. We use special management software to
connect to this kind of print server and manage print jobs.
Print servers can also be implemented in another way. We can purchase an external print server. The
external print server has one interface that connects to the printer (parallel or USB interface), and it also
has a network jack that plugs into our hub or switch. It provides all the print server functions but it’s
all built into the hardware of the print server itself. So, when clients send a job to the printer, the jobs
are sent through the network to the hardware print server which then formats, prioritizes, saves them in
the queue, and then spools them off to the printer one at a time as the printer becomes available.
Different operating systems implement servers in different ways, and different external or internal print
servers also function in different ways. Because of that we need to check our documentation to see how
to set it up with our specific hardware or software.
Remember: We can share our existing printers on the network or we can set up a
printer which has its own NIC and which is then directly connected to the
network. Print server formats, prioritizes, queues and then spools print jobs.

4.2. Remote Administration


Remote administration is an approach for controlling a computer system, a network, an application, or all three from a remote location. Simply put, remote administration refers to any method of controlling a computer from a remote location. A remote location may refer to a computer in the next room or one on the other side of the world. It may also refer to both legal and illegal remote administration. Generally, remote administration is adopted when it is difficult or impractical for a person to be physically present at a system's terminal to administer it.
4.2.1. Requirements to Perform Remote Administration
Internet connection
One of the fundamental requirements for performing remote administration is network connectivity. Any computer with an Internet connection, TCP/IP, or a LAN connection can be remotely administered. For non-malicious administration, the user must install or enable server software on the host system to be viewed. The user/client can then access the host system from another computer using the installed software.
Usually, both systems should be connected to the Internet, and the IP address of the host/server system must be known. Remote administration is therefore less practical if the host uses a dial-up modem, which is not constantly online and often has a dynamic IP address.

Connecting
When the client connects to the host computer, a window showing the desktop of the host usually appears.
The client may then control the host as if he/she were sitting right in front of it. Windows has a built-in
remote administration package called Remote Desktop Connection. A free cross-platform alternative
is VNC, which offers similar functionality.


4.2.2. Common Tasks/Services for which Remote Administration is Used


Generally, remote administration is needed for user management, file system management, software installation/configuration, network management, network security/firewalls, VPNs, infrastructure design, network file servers, auto-mounting, and kernel optimization/recompilation. The following are some of the tasks/services for which remote administration needs to be done:
➢ General
• Controlling one's own computer from a remote location (e.g. to access the software or data
on a personal computer from an Internet café).
➢ ICT Infrastructure Management
• Remote administration is needed to administer the ICT infrastructure, such as servers, routing and switching components, security devices, and other related equipment.
➢ Shutdown
• Shutting down or rebooting a computer over a network.
➢ Accessing Peripherals
• Using a network device, like printer
• retrieving streaming data, much like a CCTV system.
➢ Modifying
• Editing another computer's Registry settings,
• remotely connect to another machine to troubleshoot issues
• modifying system services,
• installing software on another machine,
• modifying logical groups.
➢ Viewing
• remotely run a program or copy a file
• remotely assisting others,
• supervising computer or Internet usage (monitor the remote computers activities)
• access to a remote system's "Computer Management" snap-in.
➢ Hacking
• Computers infected with malware, such as Trojans, sometimes open back doors into
computer systems which allow malicious users to hack into and control the computer. Such
users may then add, delete, modify or execute files on the computer to their own ends.
4.2.3. Remote Desktop Solutions
Most people who are used to a Unix-style environment know that a machine can be reached over the
network at the shell level using utilities like telnet or ssh. And some people realize that X Windows
output can be redirected back to the client workstation. But many people don’t realize that it is easy to
use an entire desktop over the network. The following are some of proprietary and open source
applications that can be used to achieve this.
SSH (Secure Shell)
Secure Shell (SSH) is a cryptographic network protocol for secure data communication between two networked computers. It connects a server and a client (running SSH server and SSH client programs, respectively) via a secure channel over an insecure network. The protocol specification distinguishes between two major versions, referred to as SSH-1 and SSH-2.


The best-known application of the protocol is access to shell accounts on Unix-like operating systems (GNU/Linux, OpenBSD, FreeBSD), but it can also be used in a similar fashion for accounts on Windows. SSH is generally used to log into a remote machine and execute commands. It also supports tunneling and forwarding of TCP ports and X11 connections, and it can transfer files using the associated SSH File Transfer Protocol (SFTP) or Secure Copy (SCP) protocols. SSH uses the client-server model.
SSH is important in cloud computing to solve connectivity problems, avoiding the security issues of
exposing a cloud-based virtual machine directly on the Internet. An SSH tunnel can provide a secure
path over the Internet, through a firewall to a virtual machine.
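As a small sketch of non-interactive SSH use (the host name and credentials below are placeholders, and paramiko is a third-party Python SSH library, not part of SSH itself), a command can be executed on a remote machine like this:

    import paramiko   # third-party library: pip install paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())   # convenient for a lab, not for production
    client.connect("server.example.com", username="admin", password="secret")   # placeholder credentials

    stdin, stdout, stderr = client.exec_command("uptime")   # run a command remotely
    print(stdout.read().decode())
    client.close()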
OpenSSH (OpenBSD Secure Shell)
OpenSSH is a tool providing encrypted communication sessions over a computer network using the
SSH protocol. It was created as an open source alternative to the proprietary Secure Shell software
suite offered by SSH Communications Security.
Telnet
Telnet is used to connect to a remote computer over a network. It provides a bidirectional, interactive, text-oriented communication facility using a virtual terminal connection over the Internet or local area networks. Telnet provides a command-line interface on a remote host. Most network equipment and operating systems with a TCP/IP stack support a Telnet service for remote configuration (including systems based on Windows NT). Telnet establishes a connection to Transmission Control Protocol (TCP) port number 23, where a Telnet server application (telnetd) is listening.
Experts in computer security recommend that the use of Telnet for remote logins should be
discontinued under all normal circumstances, for the following reasons:
➢ Telnet, by default, does not encrypt any data sent over the connection (including passwords),
and so it is often practical to eavesdrop on the communications and use the password later for
malicious purposes; anybody who has access to a router, switch, hub or gateway located on the
network between the two hosts where Telnet is being used can intercept the packets passing by
and obtain login, password and whatever else is typed with a packet analyzer.
➢ Most implementations of Telnet have no authentication that would ensure communication is
carried out between the two desired hosts and not intercepted in the middle.
➢ Several vulnerabilities have been discovered over the years in commonly used Telnet daemons.
rlogin
rlogin is a utility for Unix-like operating systems that allows users to log in on another host remotely over a network, communicating through TCP port 513. It has several security problems: all information, including passwords, is transmitted unencrypted, so it is vulnerable to interception. Therefore, it is rarely used across untrusted networks (like the public Internet), or even in closed networks.
rsh
The remote shell (rsh) can connect to a remote host across a computer network. The remote system to which rsh connects runs the rsh daemon (rshd). The daemon typically uses the well-known Transmission Control Protocol (TCP) port number 514. From a security point of view, it is not recommended.


VNC (Virtual Network Computing)


VNC is a remote display system which allows the user to view the desktop of a remote machine anywhere on the Internet. It can also be tunneled through SSH for security.
Install the VNC server on one computer (the server) and install the client on the local PC. Setup is extremely easy and the server is very stable. On the client side, set the resolution and connect to the IP address of the VNC server.

FreeNX
FreeNX allows you to access a desktop from another computer over the Internet. One can use it to log in graphically to a desktop from a remote location. One example of its use would be to set up a FreeNX server on a home computer and graphically log in to the home computer from a work computer, using a FreeNX client.

Wireless Remote Administration


Remote administration software has recently started to appear on wireless devices such as the
BlackBerry, Pocket PC, and Palm devices, as well as some mobile phones.
Generally these solutions do not provide the full remote access seen on software such as VNC or
Terminal Services, but do allow administrators to perform a variety of tasks, such as rebooting
computers, resetting passwords, and viewing system event logs, thus reducing or even eliminating the
need for system administrators to carry a laptop or be within reach of the office.
AetherPal and Netop are tools used for full wireless remote access and administration on Smart phone
devices. Wireless remote administration is usually the only way to maintain man-made objects in space.

Remote Desktop Connection (RDC)


Remote Desktop Connection (RDC) is a Microsoft technology that allows a local computer to connect
to and control a remote PC over a network or the Internet. It is done through a Remote Desktop Service
(RDS) or a terminal service that uses the company's proprietary Remote Desktop Protocol (RDP).
Remote Desktop Connection is also known simply as Remote Desktop.
Typically, RDC requires the remote computer to enable the RDS and to be powered on. The connection
is established when a local computer requests a connection to a remote computer using RDC-enabled
software. On authentication, the local computer has full or restricted access to the remote computer.
Besides desktop computers, servers and laptops, RDC also supports connecting to virtual machines.
This technology was introduced in Windows XP.
Remote admin, alternatively referred to as remote administration, is a way to control another computer without physically being in front of it. Some examples of how remote administration can be used:
➢ Remotely run a program or copy a file.
➢ Remotely connect to another machine to troubleshoot issues.
➢ Remotely shutdown a computer.
➢ Install software to another computer.
➢ Monitor the remote computer's activity.


Remote admin tools allow system administrators or support personnel to remotely access an administration console such as Officelinx Admin from their own workstation, eliminating the need to be in front of the server in order to perform administrative functions.
4.2.4. Disadvantages of Remote Administration
Remote administration also has disadvantages. The first and foremost is security: generally, certain ports must be open at the server level to allow remote administration, and hackers/attackers can take advantage of open ports to compromise the system. It is advised that remote administration be used only in emergency or essential situations. In normal situations, it is ideal to block these ports to prevent remote administration.

4.3. Performance
4.3.1. Redundant Array of Inexpensive (or Independent) Disks (RAID)
RAID is a data storage virtualization technology that combines multiple physical disk drive
components into one or more logical units for the purposes of data redundancy, performance
improvement, or both. This was in contrast to the previous concept of highly reliable mainframe disk
drives referred to as Single Large Expensive Disk (SLED).
Data is distributed across the drives in one of several ways, referred to as RAID levels, depending on
the required level of redundancy and performance. The different schemes, or data distribution layouts,
are named by the word "RAID" followed by a number, for example RAID 0 or RAID 1. Each scheme,
or RAID level, provides a different balance among the key goals: reliability, availability,
performance, and capacity. RAID levels greater than RAID 0 provide protection against
unrecoverable sector read errors, as well as against failures of whole physical drives.
4.3.1.1. Standard levels
Originally, there were five standard levels of RAID, but many variations have evolved, including
several nested levels and many non-standard levels (mostly proprietary). RAID levels and their
associated data formats are standardized by the Storage Networking Industry Association (SNIA) in the
Common RAID Disk Drive Format (DDF) standard:
RAID 0 consists of striping, but no mirroring or parity. Compared to a spanned
volume, the capacity of a RAID 0 volume is the same; it is the sum of the capacities
of drives in the set. But because striping distributes contents of each file among all
drives in the set, the failure of any drive causes the entire RAID 0 volume and all
files to be lost. In comparison, a spanned volume preserves the files on the unfailing
drives. The benefit of RAID 0 is that the throughput of read and write operations to
any file is multiplied by the number of drives because, unlike spanned volumes,
reads and writes are done concurrently. The cost is increased vulnerability to drive
failures: since the failure of any drive in a RAID 0 setup causes the entire volume to be lost,
the average failure rate of the volume rises with the number of attached drives.
Figure 4.7. RAID 0 setup
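A quick illustrative calculation (with an assumed, hypothetical per-drive failure rate) shows why: every drive must survive for the volume to survive.

    # Assumed 3% annual failure rate per drive; figures are illustrative only.
    annual_survival_per_drive = 0.97
    for drives in (1, 2, 4, 8):
        volume_survival = annual_survival_per_drive ** drives
        print(f"{drives} drive(s): {volume_survival:.1%} chance the volume survives the year")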


NOTES:
In data storage, data striping is the technique of segmenting logically sequential data, such as a file, so that consecutive segments are stored on different physical storage devices. It is useful when a processor requests data more quickly than a single storage device can provide it. By spreading segments across
multiple devices which can be accessed concurrently, total data throughput is increased.

In data storage, disk mirroring is the replication of logical disk volumes onto separate physical hard
disks in real time to ensure continuous availability. It is most commonly used in RAID 1. A mirrored
volume is a complete logical representation of separate volume copies.

A parity stripe or parity disk in a RAID array provides error correction. Parity bits are written at the rate of one parity bit per n bits, where n is the number of disks in the array. When a read error occurs, each bit in the error region is recalculated from its set of n bits. In this way, using one parity bit creates "redundancy" for a region ranging from the size of one bit to the size of one disk.
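The behavior of single parity can be illustrated with a tiny worked example: the parity block is the XOR of the data blocks, so any one missing block can be rebuilt by XOR-ing the parity with the surviving blocks (the block contents below are arbitrary).

    d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"                    # three data blocks
    parity = bytes(a ^ b ^ c for a, b, c in zip(d1, d2, d3))  # parity block

    # Suppose the drive holding d2 fails; rebuild it from parity and the survivors.
    rebuilt = bytes(p ^ a ^ c for p, a, c in zip(parity, d1, d3))
    assert rebuilt == d2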

RAID 1 consists of data mirroring, without parity or striping. Data is written identically to two or more drives, thereby producing a "mirrored set" of drives. Thus,
any read request can be serviced by any drive in the set. If a request is broadcast to
every drive in the set, it can be serviced by the drive that accesses the data first
(depending on its seek time and rotational latency), improving performance. Sustained
read throughput, if the controller or software is optimized for it, approaches the sum of
throughputs of every drive in the set, just as for RAID 0. Actual read throughput of
most RAID 1 implementations is slower than the fastest drive. Write throughput is
always slower because every drive must be updated, and the slowest drive limits the
write performance. The array continues to operate as long as at least one drive is
functioning.
Figure 4.8. RAID 1 setup

RAID 2 consists of bit-level striping with dedicated Hamming-code parity. All disk spindle rotation
is synchronized and data is striped such that each sequential bit is on a different drive. Hamming-code
parity is calculated across corresponding bits and stored on at least one parity drive. This level is of
historical significance only; as of 2014 it is not used by any commercially available system.

Figure 4.9. RAID 2 setup


RAID 3 consists of byte-level striping with dedicated parity. All disk spindle rotation is
synchronized and data is striped such that each sequential byte is on a different drive. Parity is
calculated across corresponding bytes and stored on a dedicated parity drive. Although
implementations exist, RAID 3 is not commonly used in practice. The following figure shows a RAID 3 setup with six-byte blocks and two parity bytes; blocks of data are shown in different colors.

Figure 4.10. RAID 3 setup

RAID 4 consists of block-level striping with dedicated parity. The main advantage of RAID 4 over
RAID 2 and 3 is I/O parallelism: in RAID 2 and 3, a single read I/O operation requires reading the
whole group of data drives, while in RAID 4 one I/O read operation does not have to spread across all
data drives. As a result, more I/O operations can be executed in parallel, improving the performance of
small transfers. The figure below shows a RAID 4 setup with a dedicated parity disk, with each color representing the group of blocks covered by the respective parity block (a stripe).

Figure 4.11. RAID 4 setup

RAID 5 consists of block-level striping with distributed parity. Unlike RAID 4, parity information
is distributed among the drives, requiring all drives but one to be present to operate. Upon failure of a
single drive, subsequent reads can be calculated from the distributed parity such that no data is lost.


RAID 5 requires at least three disks. Like all single-parity concepts, large RAID 5 implementations
are susceptible to system failures because of trends regarding array rebuild time and the chance of drive
failure during rebuild. Rebuilding an array requires reading all data from all disks, opening a chance for
a second drive failure and the loss of the entire array. The figure below shows a RAID 5 layout, with each color representing a group of data blocks and its associated parity block (a stripe).

Figure 4.12. RAID 5 layout

RAID 6 consists of block-level striping with double distributed parity. Double parity provides fault
tolerance up to two failed drives. This makes larger RAID groups more practical, especially for high-
availability systems, as large-capacity drives take longer to restore. RAID 6 requires a minimum of
four disks. As with RAID 5, a single drive failure results in reduced performance of the entire array
until the failed drive has been replaced. With a RAID 6 array, using drives from multiple sources and
manufacturers, it is possible to mitigate most of the problems associated with RAID 5. The larger the
drive capacities and the larger the array size, the more important it becomes to choose RAID 6 instead
of RAID 5. RAID 10 also minimizes these problems. The figure below shows a RAID 6 setup, which is
identical to RAID 5 other than the addition of a second parity block.

Figure 4.13. RAID 6 setup
