Answers To The Questions From Stallings' Book
Review Questions:
3.1. What does a trace of instructions consist of?
(...) The behavior of an individual process can be characterized by listing the sequence of instructions executed for that process. This list is called the process trace. (...) (Page 106)
3.2. What are the events that usually lead to the creation of a process?
Table 3.1. Reasons for process creation
New batch job: the operating system is provided with a batch job control stream, usually on tape or disk. When the operating system is ready to take on new work, it reads the next sequence of job control commands.
Interactive logon: a user logs on to the system from a terminal.
Created by the OS to provide a service: the operating system can create a process to carry out a function on behalf of a user program, without the user having to wait (for example, a process to control printing).
(Page 111)
3.3. Briefly describe each state of the process model in Figure 3.5.
New: a process that has just been created but has not yet been admitted by the operating system into the pool of executable processes. Normally, a new process is not yet loaded into main memory.
Ready: a process that is prepared to execute when given the opportunity.
Running: the process that is currently executing. (...)
Blocked: a process that cannot execute until some event occurs, such as the completion of an I/O operation.
Exit (Terminated): a process that has been released by the operating system from the pool of executable processes, either because it halted or because it was aborted for some reason.
(Page 113)
3.8. For what types of entities does the operating system maintain information tables intended to facilitate management?
Memory, I/O, files, and processes (page 122)
3.11. What are the steps that an operating system takes to create a new process?
Assign a unique identifier to the new process
Allocate space for the process
Initialize the process control block.
Establish the appropriate links.
Create or expand data structures. (Page 131)
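A minimal sketch in C of how these steps might be realized inside a kernel; the structure and field names are illustrative assumptions for the sketch, not code from the book:

#include <stdlib.h>

/* Illustrative process control block; real kernels carry many more fields. */
struct pcb {
    int         pid;            /* unique process identifier       */
    void       *address_space;  /* memory allocated to the process */
    int         state;          /* process state, initially Ready  */
    struct pcb *next;           /* linkage into a scheduling queue */
};

static int next_pid = 1;
static struct pcb *ready_queue = NULL;

struct pcb *create_process(size_t image_size)
{
    struct pcb *p = malloc(sizeof *p);      /* 5. create a new data structure */
    if (p == NULL) return NULL;
    p->pid = next_pid++;                    /* 1. assign a unique identifier  */
    p->address_space = malloc(image_size);  /* 2. allocate space              */
    p->state = 0;                           /* 3. initialize the PCB (Ready)  */
    p->next = ready_queue;                  /* 4. link into the ready queue   */
    ready_queue = p;
    return p;
}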
3.14 What is the difference between mode change and process change?
It is clear, then, that a mode change is a concept distinct from a process change. A mode change can occur without changing the state of the process that is currently in the Running state; in that case, saving the context and later restoring it involves only a small extra cost. However, if the currently running process must move to another state (Ready, Blocked, etc.), the operating system has to make substantial changes to its environment. (Page 134)
Chapter 4 - Threads, SMP, and Microkernels
Summary
Some operating systems distinguish between the concepts of process and thread: the former refers to ownership of resources and the latter to program execution. This approach can lead to improved efficiency and more convenient programming. In a multithreaded system, multiple concurrent threads can be defined within a single process. This can be done with user-level threads or with kernel-level threads. The operating system is unaware of user-level threads: they are created and managed by a thread library that runs in the user space of a process. User-level threads are very efficient, because no mode change is needed to switch from one thread to another. However, at any given moment only one user-level thread of a process can execute, and if one thread blocks, the entire process blocks. Kernel-level threads are threads within a process that the kernel manages. Because the kernel is aware of them, multiple threads of the same process can run in parallel on a multiprocessor, and the blocking of one thread does not block the entire process. However, a mode change is required to switch from one thread to another.
Symmetric multiprocessing is a method of organizing a multiprocessor system in which any process (or thread) can run on any processor; this includes kernel code and processes. An SMP architecture raises new operating system design issues and offers greater performance than a uniprocessor system under similar conditions.
In recent years, much attention has been paid to the microkernel approach in operating system design. In its pure form, a microkernel operating system consists of a very small kernel that runs in kernel mode and contains only the most essential and critical operating system functions. The remaining operating system functions are implemented to run in user mode and to use the microkernel for critical services. The microkernel design leads to a flexible and highly modular implementation. However, doubts remain about the performance of this architecture.
Review Questions:
4.1. Table 3.5 lists the most common elements of a process control block for a non-threaded operating system.
Which ones should belong to a thread control block and which to a process control block in a multithreading system?
In a multithreaded environment, a process is defined as the unit of protection and the unit of resource allocation. The following elements are associated with a process:
A virtual address space that holds the process image.
Protected access to processors, other processes (for interprocess communication), files, and I/O resources (devices and channels).
Within a process there may be one or more threads, each with the following:
A thread execution state (Running, Ready, etc.).
A saved processor context when it is not running; one way to view a thread is as an independent program counter operating within a process.
An execution stack.
Some per-thread static storage for local variables.
Access to the memory and resources of its process, shared with all the other threads of that process. (Page 151)
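As an illustration, the split suggested above might be sketched in C like this; all names are assumptions made for the sketch, not structures from the book:

struct process_cb {                /* unit of resource ownership */
    int   pid;
    void *virtual_address_space;   /* process image           */
    int  *open_files;              /* files and I/O resources */
    /* protection information, accounting, ... */
};

struct thread_cb {                 /* unit of dispatching */
    int    tid;
    int    state;                  /* Running, Ready, Blocked, ...          */
    void  *saved_context;          /* program counter and registers         */
    char  *stack;                  /* per-thread execution stack            */
    struct process_cb *owner;      /* the process whose resources it shares */
};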
4.2. Explain why a context switch between threads can be less costly than a context switch between processes.
Because the threads of a process share the same address space and resources, a thread switch only requires saving and restoring the thread's processor context (registers and stack pointers); it does not require the address-space and resource switching that a full process switch involves.
4.3. What are the two different and potentially independent characteristics that express the concept of process?
Unit of resource ownership
Unit of dispatching (page 150)
4.4. Give four general examples of the use of threads in a single-user multiprogrammed system.
Foreground and background work
Asynchronous processing
Speed of execution
Modular program structure (page 153)
4.5. What resources do the threads of a process normally share?
... all the threads of a process share the state and resources of the process. They reside in the same address space and have
access to the same data. (...) (page 152)
4.6. List three advantages of ULTs over KLTs.
There are a number of advantages to using ULTs instead of KLTs, including the following:
1. Thread switching does not require kernel-mode privileges, because all of the thread management data structures are in the user address space of a single process. The process therefore does not need to switch to kernel mode to manage threads, which avoids the overhead of two mode changes (user to kernel and kernel back to user).
2. Scheduling can be application specific. A round-robin scheduler may suit one application, while priority scheduling may suit another. The scheduling algorithm can be tailored to the application without disturbing the underlying operating system scheduler.
3. ULTs can run on any operating system. No changes to the underlying kernel are needed to support them; the thread library is a set of application-level utilities shared by all applications.
(page 159)
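A small demonstration of advantage 1, using the POSIX ucontext routines (obsolescent but still widely available): the switch between the two execution contexts is decided and performed by user-space code, with no thread-related system call. A minimal sketch, not a full thread library:

#include <stdio.h>
#include <stdlib.h>
#include <ucontext.h>

static ucontext_t main_ctx, th_ctx;

static void thread_body(void)
{
    puts("user-level thread running");
    swapcontext(&th_ctx, &main_ctx);   /* yield back to main: a user-space context switch */
}

int main(void)
{
    char *stack = malloc(64 * 1024);
    getcontext(&th_ctx);                 /* initialize the context structure   */
    th_ctx.uc_stack.ss_sp   = stack;     /* give the thread its own stack      */
    th_ctx.uc_stack.ss_size = 64 * 1024;
    th_ctx.uc_link = &main_ctx;          /* where to go if thread_body returns */
    makecontext(&th_ctx, thread_body, 0);
    swapcontext(&main_ctx, &th_ctx);     /* dispatch the thread: no kernel scheduler involved */
    puts("back in main");
    free(stack);
    return 0;
}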
Single instruction/single data (SISD) stream: a single processor executes a single instruction stream to operate on data stored in a single memory.
Single instruction/multiple data (SIMD) stream: a single machine instruction controls the simultaneous execution of a number of processing elements in lockstep. Each processing element has an associated data memory, so that each instruction is executed on a different set of data by the different processors. Vector and array processors fall into this category.
Multiple instruction/single data (MISD) stream: a sequence of data is transmitted to a set of processors, each of which executes a different instruction sequence. This structure has never been implemented.
Multiple instruction/multiple data (MIMD) stream: a set of processors simultaneously executes different instruction sequences on different data sets. (page 164)
4.10. List the key design elements for an SMP operating system.
Simultaneous concurrent processes or threads
Scheduling
Synchronization
Memory management
Reliability and fault tolerance (page 166)
4.11. Give examples of functions and services of a conventional monolithic operating system that can be external subsystems in a microkernel operating system.
Client processes, device drivers, file servers, process servers, virtual memory. (page 168)
4.12. List and briefly explain seven potential advantages of a microkernel design over a monolithic design.
uniform interface
extensibility
flexibility
portability
reliability
support for distributed systems
support for object-oriented operating systems (page 169)
4.13. Explain the potential disadvantage of the performance of an operating system with a microkernel.
A frequently cited potential disadvantage of microkernels is their performance. It takes longer to build and send a message, and to accept and decode the reply, through the microkernel than through a simple system call. (...) (Page 170)
4.14. List three functions you would expect to find even in an operating system with a minimal microkernel.
A microkernel must include those functions that depend directly on the hardware and those whose functionality is needed to support the applications and servers that run in user mode. These functions fall into the following general categories: low-level memory management, interprocess communication, and interrupt and I/O management. (...) (page 171)
4.15. What is the basic form of communication between processes or threads in an operating system with a microkernel?
Message passing.
Chapter 5 - Concurrency, mutual exclusion, and synchronization
Summary
The central themes of modern operating systems are multiprogramming, multiprocessing, and distributed processing. Fundamental to these themes, and to operating system design technology, is concurrency. When multiple processes execute concurrently, whether actually in a multiprocessor system or virtually in a multiprogrammed uniprocessor system, issues of conflict resolution and cooperation arise.
Concurrent processes can interact in various ways. Processes that are unaware of each other may compete for resources such as processor time or I/O devices. Processes may be indirectly aware of each other because they share access to common objects, such as a block of main memory or a file. Finally, processes may be directly aware of each other and cooperate by exchanging information. The key issues that arise in this interaction are mutual exclusion and deadlock.
Mutual exclusion is a condition in which there is a set of concurrent processes, only one of which is able to access a given resource or perform a given function at any time. Mutual exclusion techniques can be used to resolve conflicts, such as competition for resources, and to synchronize processes so that they can cooperate. An example of the latter is the producer/consumer model, in which one process puts data into a buffer and one or more processes extract data from it.
Several software algorithms have been developed to provide mutual exclusion, the best known of which is Dekker's algorithm. Software solutions tend to have high overhead, and the risk of logical errors in the program is also high. A second set of methods for supporting mutual exclusion involves the use of special machine instructions. These methods reduce the overhead, but they are still inefficient because they use busy waiting.
Another way to support mutual exclusion is to provide the features within the operating system. Two of the most common techniques are semaphores and message passing. Semaphores are used for signaling between processes and can readily be used to enforce a mutual exclusion discipline. Messages are useful for enforcing mutual exclusion and also provide an effective means of interprocess communication.
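As a concrete illustration of the producer/consumer model enforced with semaphores, here is a minimal bounded-buffer sketch using POSIX threads and semaphores; the book uses its own semWait/semSignal primitives, and sem_wait/sem_post play the same roles here:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 8                        /* bounded buffer capacity */
static int buf[N], in = 0, out = 0;
static sem_t empty, full, mutex;

static void *producer(void *arg)
{
    (void)arg;
    for (int i = 1; i <= 20; i++) {
        sem_wait(&empty);              /* wait for a free slot       */
        sem_wait(&mutex);              /* mutual exclusion on buf    */
        buf[in] = i; in = (in + 1) % N;
        sem_post(&mutex);
        sem_post(&full);               /* signal one item available  */
    }
    return NULL;
}

static void *consumer(void *arg)
{
    (void)arg;
    for (int i = 0; i < 20; i++) {
        sem_wait(&full);               /* wait for an item           */
        sem_wait(&mutex);
        int v = buf[out]; out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty);              /* free one slot              */
        printf("consumed %d\n", v);
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    sem_init(&empty, 0, N); sem_init(&full, 0, 0); sem_init(&mutex, 0, 1);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL); pthread_join(c, NULL);
    return 0;
}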
Review Questions:
5.1. List four design elements for which the concept of concurrency is necessary.
inter-process communication
sharing and competition for resources
synchronization in the execution of processes
assignment of processor time to processes (page 192)
5.3. What are the basic requirements for the execution of concurrent processes?
It will be found that the basic requirement for supporting concurrent processes is the ability to enforce mutual exclusion, that is, the ability to prevent all other processes from performing an action when one process has been granted permission to do so. (...) (Page 192)
Definition: Two statements S1 and S2 can be executed concurrently, producing the same result as if they were executed sequentially, if and only if the following conditions hold:
1) R(S1) ∩ W(S2) = ∅
2) W(S1) ∩ R(S2) = ∅
3) W(S1) ∩ W(S2) = ∅
(W(Sx) = set of variables written by Sx; R(Sx) = set of variables read by Sx) (notes, volume 1, page 192)
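A worked example of these conditions (illustrative): let S1 be a = x + y and S2 be b = x * z. Then R(S1) = {x, y}, W(S1) = {a}, R(S2) = {x, z}, W(S2) = {b}; all three intersections are empty, so S1 and S2 can run concurrently. If S2 were instead b = a * z, then W(S1) ∩ R(S2) = {a} ≠ ∅, and the two statements could not safely be executed concurrently.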
5.4. List three degrees of awareness between processes and briefly define each.
Processes unaware of each other: these are independent processes that are not designed to work together. The best example of this situation is the multiprogramming of multiple independent processes. These can be batch jobs, interactive sessions, or a mixture of both. Although the processes do not work together, the operating system has to manage the competition for resources. For example, two independent applications may both want to access the same disk, file, or printer. The operating system must regulate these accesses.
Processes indirectly aware of each other: the processes do not necessarily know each other by their process identifiers, but they share access to some object, such as an I/O buffer. Such processes exhibit cooperation in sharing the common object.
Processes directly aware of each other: the processes are able to communicate with each other by process identifier and are designed to work jointly on some activity. Such processes also exhibit cooperation. (page 197)
5.5. What is the difference between competitive processes and cooperative processes?
Processes competing for resources are unaware of one another, while cooperating processes are aware of one another and cooperate either by sharing a common object or by communication.
5.6. List the three control problems associated with competition between processes and briefly define each.
Need for mutual exclusion: ensure that two processes do not access a critical resource at the same time.
Deadlock: processes hold resources that other processes need, so that none of the processes involved can proceed.
Starvation: a process never obtains the resources it needs.
1. Mutual exclusion must be enforced: among all processes that have critical sections for the same resource or shared object, only one process at a time may be allowed into its critical section.
2. A process that halts in its noncritical section must do so without interfering with other processes.
3. It must not be possible for a process requiring access to a critical section to be delayed indefinitely: neither deadlock nor starvation may be allowed.
4. When no process is in its critical section, any process that requests entry to its own must be allowed to enter without delay.
5. No assumptions may be made about the relative speeds of the processes or the number of processors.
6. A process remains inside its critical section for a finite time only.
5.9. What is the difference between general semaphores and binary semaphores?
Binary semaphores can only take the values 0 and 1, while general (counting) semaphores can take arbitrary integer values.
5.10. What is the difference between weak and strong semaphores?
Strong semaphores release blocked processes in FIFO order; weak semaphores do not specify the order in which blocked processes are released.
5.12. What is the difference between blocking and nonblocking with respect to messages?
With blocking primitives, the process that issues a send or a receive is suspended until the operation completes (the message is transmitted or arrives).
With nonblocking primitives, control returns to the issuing process immediately.
5.13. What are the conditions generally associated with the readers/writers problem?
1) Any number of readers may simultaneously read the file.
2) Only one writer at a time may write to the file.
3) While a writer is accessing the file, no reader may read it.
NOTE:
Other important topics of this chapter:
Software solutions for mutual exclusion: Dekker's and Peterson's algorithms.
Figure 5.6. A definition of semaphore primitives (signal shown):

void signal(semaphore s)
{
    s.counter++;
    if (s.counter <= 0)
    {
        remove a process P from s.queue;
        place process P on the ready queue;
    }
}

Figure 5.7. A definition of binary semaphore primitives (signalB shown):

void signalB(binary_semaphore s)
{
    if (s.queue.is_empty())
        s.value = 1;
    else
    {
        remove a process P from s.queue;
        place process P on the ready queue;
    }
}
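The companion wait primitives for these figures can be reconstructed along the same lines (a sketch following the book's definitions, not a verbatim copy):

void wait(semaphore s)
{
    s.counter--;
    if (s.counter < 0)
    {
        place this process in s.queue;
        block this process;
    }
}

void waitB(binary_semaphore s)
{
    if (s.value == 1)
        s.value = 0;
    else
    {
        place this process in s.queue;
        block this process;
    }
}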
Chapter 6 - Concurrency, deadlock, and starvation
Summary
Deadlock is the blocking of a set of processes that either compete for system resources or communicate with one another. The blockage is permanent unless the operating system takes some extraordinary action, such as killing one or more processes or forcing one or more processes to roll back. Deadlock may involve reusable or consumable resources. A consumable resource is one that is destroyed when it is acquired by a process; examples include messages and the information in I/O buffers. A reusable resource is one that is not depleted or destroyed by use, such as an I/O channel or a region of memory.
There are three general approaches to dealing with deadlock: prevention, detection, and avoidance. Deadlock prevention guarantees that deadlock will not occur by ensuring that one of the necessary conditions for deadlock is not met. Deadlock detection is needed if the operating system is always willing to grant resource requests; periodically, the operating system must check for deadlock and take action to break it. Deadlock avoidance involves analyzing each new resource request to determine whether it could lead to deadlock, and granting it only if deadlock is not possible.
Review Questions:
6.2. What are the three conditions that must be met for deadlock to be possible?
1 - Mutual exclusion
2 - Hold and wait
3 - No preemption
6.3. What are the four conditions that give rise to deadlock?
1 - Mutual exclusion
2 - Hold and wait
3 - No preemption
4 - Circular wait
6.6. How can the circular wait condition be prevented?
The circular wait condition can be prevented by defining a linear ordering of resource types. If a process has been allocated resources of type R, it may subsequently request only resources of types that follow R in the ordering. (Page 264)
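A minimal sketch of this discipline with POSIX mutexes; the resource names are assumptions made for the example. Because every thread acquires the locks in the same global order, no cycle of waiting can form:

#include <pthread.h>

static pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;  /* lower in the ordering  */
static pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;  /* higher in the ordering */

void *worker1(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&A);     /* always A before B */
    pthread_mutex_lock(&B);
    /* ... use both resources ... */
    pthread_mutex_unlock(&B);
    pthread_mutex_unlock(&A);
    return NULL;
}

void *worker2(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&A);     /* same order: never B then A, so no circular wait */
    pthread_mutex_lock(&B);
    /* ... use both resources ... */
    pthread_mutex_unlock(&B);
    pthread_mutex_unlock(&A);
    return NULL;
}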
6.7. What is the difference between deadlock avoidance, detection, and prevention?
Deadlock prevention constrains resource requests so that at least one of the four deadlock conditions cannot occur. (...)
With deadlock avoidance, the three necessary conditions may hold, but judicious choices are made so that the deadlock point is never reached. (...) (page 264)
(...) With deadlock detection, requested resources are granted to processes whenever possible. Periodically, an algorithm is run to detect the circular wait condition. (...) (page 270)
Note:
Also important in this chapter: the banker's algorithm (deadlock avoidance), safe and unsafe states. (page 266)
Deadlock detection algorithm. (page 270)
The dining philosophers problem. (page 272)
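As a sketch of the safety check at the heart of the banker's algorithm (sizes and names are illustrative assumptions, not the book's code): a state is safe if the processes can be ordered so that each one's remaining need can be met from the available resources plus those released by the processes ordered before it.

#include <stdbool.h>
#include <string.h>

#define P 3   /* processes      */
#define R 2   /* resource types */

bool is_safe(int avail[R], int alloc[P][R], int need[P][R])
{
    int  work[R];
    bool finished[P] = { false };
    memcpy(work, avail, sizeof work);

    for (int done = 0; done < P; ) {
        bool progress = false;
        for (int i = 0; i < P; i++) {
            if (finished[i]) continue;
            bool can_run = true;
            for (int j = 0; j < R; j++)
                if (need[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {                      /* simulate i running to completion */
                for (int j = 0; j < R; j++)
                    work[j] += alloc[i][j];     /* it releases its resources */
                finished[i] = true;
                progress = true;
                done++;
            }
        }
        if (!progress) return false;            /* no process can finish: unsafe */
    }
    return true;                                /* all processes can finish: safe */
}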
Chapter 7 - Memory management
Summary
One of the most important and complex tasks of an operating system is memory management. Memory management involves treating main memory as a resource to be allocated to and shared among a number of active processes. For efficient use of the processor and of the I/O services, it is desirable to keep as many processes as possible in main memory. In addition, it is desirable to free programmers from size restrictions in program development.
The basic tools of memory management are paging and segmentation. With paging, each process is divided into relatively small, fixed-size pages. Segmentation allows the use of pieces of variable size. It is also possible to combine segmentation and paging in a single memory management scheme.
Review Questions:
7.1. What are the requirements that memory management should aim to satisfy?
Relocation
Protection
Sharing
Physical organization
Logical organization (page 292)
7.4. What are some of the reasons for allowing two or more processes to access a particular memory region?
In order to share data structures between different processes.
7.5. In a static partitioning scheme, what are the advantages of using partitions of different sizes?
Reduce internal fragmentation, and provide somewhat greater flexibility than a static partitioning scheme with equal-size partitions.
7.6. What is the difference between internal fragmentation and external fragmentation?
Internal fragmentation: a process occupies less space than the partition allocated to it by the operating system.
External fragmentation: unused gaps of memory appear between the processes that occupy it.
7.7. What are the differences between logical addresses, relative addresses, and physical addresses?
A logical address is a reference to a memory location independent of the current assignment of data to memory; a translation to a physical address must be made before the memory can be accessed. A relative address is a particular case of logical address, in which the address is expressed as a location relative to some known point, usually the beginning of the program. A physical address, or absolute address, is an actual location in main memory.
(Page 305)
Note:
Also important: the buddy system.
Chapter 8 - Virtual Memory
Summary
For efficient use of the processor and of the I/O services, it is desirable to keep as many processes as possible in main memory. In addition, it is desirable to free programmers from size restrictions in program development.
The way to address both problems is virtual memory. With virtual memory, all address references are logical references that are translated into real addresses at run time. This allows a process to be located anywhere in main memory and to change location over time. Virtual memory also allows a process to be divided into pieces; these pieces need not be contiguously located in main memory during execution, and it is not even necessary for all of the pieces of the process to be in memory during execution.
The two basic approaches to virtual memory are paging and segmentation. With paging, each process is divided into
fixed-size and relatively small pages. Segmentation allows the use of variable-sized segments. It is also possible
combine segmentation and paging into a single memory management scheme.
A virtual memory management scheme requires both hardware and software support. The hardware support is provided by
the processor. This support includes the dynamic translation of virtual addresses to physical addresses and the generation of interrupts
when a referenced page or segment is not in main memory. These interrupts activate the management software
operating system memory.
A number of design issues relate to operating system support for virtual memory management:
* Fetch policies: process pages can be loaded on demand, or a prepaging policy can be used; the latter clusters the input activity by loading several pages at once.
* Placement policies: in a pure segmentation system, an incoming segment must be fitted into an available space in memory.
* Replacement policies: when memory is full, a decision must be made about which page or pages are to be replaced.
* Resident set management: the operating system must decide how much main memory to allocate to a particular process when that process is loaded. The allocation can be static, made at process creation time, or it can change dynamically.
* Cleaning policies: modified process pages can be written out to disk at the time of replacement, or a precleaning policy can be used; the latter clusters the output activity by writing out several pages at once.
* Load control: load control determines the number of processes that are resident in main memory at a given time.
Review Questions:
8.1. What is the difference between simple paging and virtual memory paging?
In the study of simple paging, it was shown that each process has its own page table, and that when all of its pages are loaded into main memory, a page table is created and loaded into main memory. Each page table entry contains the frame number of the corresponding page in main memory. The same structure, a page table, is needed for a virtual memory scheme based on paging. Again, it is typical to associate a single page table with each process. In this case, however, the page table entries become more complex. Because only some of the pages of a process may be in main memory, a bit is needed in each page table entry to indicate whether the corresponding page is present (P) in main memory or not. If the bit indicates that the page is in memory, the entry also includes the frame number of that page.
Another control bit needed in the page table entry is the modified (M) bit, indicating whether the contents of the corresponding page have been altered since the page was last loaded into main memory. If there have been no changes, it is not necessary to write the page out when it is replaced in the frame that it currently occupies. Other control bits may also be present; for example, if protection or sharing is managed at the page level, more bits are needed for that purpose.
(Pages 328 and 329)
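An illustrative page table entry with the bits described above might be declared as follows; the exact layout is an assumption (real hardware formats differ, and C bit-field ordering is implementation defined):

#include <stdint.h>

struct pte {
    uint32_t present  : 1;   /* P bit: page is in main memory               */
    uint32_t modified : 1;   /* M bit: page altered since it was loaded     */
    uint32_t control  : 4;   /* protection / sharing bits, if managed here  */
    uint32_t frame    : 26;  /* frame number; meaningful only if present==1 */
};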
8.3. Why is the principle of locality crucial to the use of virtual memory?
The preceding arguments are based on the principle of locality, which was introduced in Chapter 1 (see especially Appendix 1A). In summary, the principle of locality states that program and data references within a process tend to cluster. Hence, the assumption that only a few pieces of a process will be needed over a short period of time is valid. Also, it is possible to make intelligent guesses about which pieces of a process will be needed in the near future, and thus to avoid thrashing. (Page 326)
8.7. What is the difference between resident set management and page replacement policy?
The page replacement policy deals with selecting, from among the candidate pages in memory, the page to be replaced. Resident set management deals with how many page frames are allocated to each process and with whether the set of pages considered for replacement is limited to those of the faulting process or includes all frames in main memory.
8.8. What is the relationship between the FIFO and clock page replacement algorithms?
The algorithms are similar, but the clock algorithm adds a use bit: the pointer sweeps the frames in FIFO order, yet skips (and clears the use bit of) recently referenced pages instead of unconditionally replacing the oldest page.
8.10. Why is it not possible to combine a global replacement policy and a fixed allocation policy?
Because a fixed allocation policy gives a process a fixed number of frames in which to execute, while a global replacement policy considers all pages in memory as candidates for replacement, regardless of the process to which they belong; the two are therefore incompatible.
8.11. What is the difference between a resident set and a working set?
A working set is the set of pages of a process that have been referenced during a recent window of (virtual) time, while the resident set is the set of pages of the process that are actually in main memory at a given instant.
8.12. What is the difference between demand cleaning and precleaning?
Demand cleaning: a page is written out to secondary memory only when it has been selected for replacement.
Precleaning: pages are written out to secondary memory before their frames are needed, so that pages can be written out in batches.
Chapter 9 - Uniprocessor scheduling
Summary
The operating system makes three types of scheduling decisions that affect the execution of processes. Long-term scheduling determines when new processes are admitted to the system. Medium-term scheduling is part of the swapping function and determines when a process is brought partially or fully into main memory so that it can be executed. Short-term scheduling determines which of the ready processes will be executed next by the processor. This chapter focuses on issues related to short-term scheduling.
A wide variety of criteria are used in the design of a short-term scheduler. Some of these criteria relate to the behavior of the system as perceived by the user (user oriented), while others consider the overall effectiveness of the system in meeting the needs of all users (system oriented). Some of the criteria refer specifically to quantitative measures of performance, while others are more qualitative in nature. From the user's point of view, the most important characteristic of a system is, in general, response time, while from the system's point of view it is throughput or processor utilization.
A wide variety of algorithms have been developed to make the short-term scheduling decision among ready processes. Among these are:
First-come-first-served (FCFS): select the process that has been waiting the longest for service.
Round robin: use time slicing to limit any running process to a short burst of processor time, rotating among the ready processes.
Shortest process next (SPN): select the process with the shortest expected processing time, without preempting the processor.
Shortest remaining time (SRT): select the process with the shortest expected remaining processing time. A process may be preempted when another process becomes ready.
Highest response ratio next (HRRN): base the scheduling decision on an estimate of the normalized turnaround time.
Feedback: establish a set of scheduling queues and allocate processes to queues based on, among other criteria, execution history.
The choice of scheduling algorithm will depend on expected performance and implementation complexity.
Review Questions:
9.3. What is the difference between turnaround time and response time?
Turnaround time is the total time a process spends in the system (waiting time plus service time), so it is oriented to the completion of the process; response time is the time from the submission of a request until the response begins to arrive, and is therefore user oriented.
9.4. In process planning, does a low priority value represent low or high priority?
Low priority.
9.10. Briefly define highest response ratio next (HRRN) scheduling.
The scheduling decision is based on an estimate of the normalized turnaround time.
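The ratio being estimated is R = (w + s) / s, where w is the time the process has spent waiting and s is its expected service time; the scheduler dispatches the ready process with the largest R. A tiny illustrative helper:

/* Response ratio R = (w + s) / s; the scheduler picks the
   ready process with the largest R. Illustrative sketch. */
double response_ratio(double waiting_time, double service_time)
{
    return (waiting_time + service_time) / service_time;
}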
Chapter 10 - Multiprocessor and real-time scheduling
Summary
In a tightly coupled multiprocessor, several processors have access to the same main memory. With this configuration, the scheduling structure is somewhat more complex. For example, a given process may be assigned to the same processor for its entire life, or it may be dispatched to a different processor each time it enters the Running state. Performance studies suggest that the differences among the various scheduling algorithms are less significant in a multiprocessor system.
A real-time process or task is one that is executed in connection with some process, function, or set of events external to the computer system, and that must meet one or more deadlines to interact correctly and efficiently with the external environment. A real-time operating system is one that manages real-time processes. In this context, the traditional criteria for selecting a scheduling algorithm do not apply; rather, the key factor is meeting deadlines. Algorithms that rely heavily on preemption and on reacting to relative deadlines are appropriate in this context.
Review Questions:
(page 427)
Among the many proposals for multiprocessor thread scheduling and processor assignment, four general approaches stand out:
Load sharing: processes are not assigned to a particular processor. A global queue of ready threads is maintained, and each processor, when idle, selects a thread from the queue. The term load sharing distinguishes this strategy from load balancing schemes, in which work is assigned on a more permanent basis.
Gang scheduling: a set of related threads is scheduled to run on a set of processors at the same time, on a one-to-one basis.
Dedicated processor assignment: the opposite of the load-sharing approach; it provides an implicit scheduling defined by the assignment of threads to processors. While a program runs, it is allocated a number of processors equal to its number of threads. When the program terminates, the processors return to the general pool for possible assignment to other programs.
Dynamic scheduling: the number of threads in a program can be altered during the course of execution.
(page 432)
First-come-first-served (FCFS): when a job arrives, each of its threads is placed consecutively at the end of the shared queue. When a processor becomes idle, it picks the next ready thread and runs it until it completes or blocks.
Smallest number of threads first: the shared ready queue is organized as a priority queue, with highest priority given to the threads of jobs with the smallest number of unscheduled threads. Jobs of equal priority are ordered by arrival. As with FCFS, a scheduled thread runs to completion or until it blocks.
Preemptive smallest number of threads first: highest priority is given to jobs with the smallest number of unfinished threads. An arriving job with a smaller number of threads than an executing job will preempt the threads of the scheduled job. (Page 433)
10.4. What is the difference between hard real-time tasks and soft real-time tasks?
A hard real-time task must meet its deadline; otherwise, it will cause unacceptable damage or a fatal error in the system. A soft real-time task has an associated deadline that is desirable but not mandatory; even if the deadline has passed, it still makes sense to schedule and complete the task. (page 438)
10.5. What is the difference between periodic and aperiodic real-time tasks?
An aperiodic task has a deadline by which it must start or finish, or it may have a constraint on both its start and its finish. In the case of a periodic task, the requirement may be stated as "once per period T" or "exactly T units apart." (page 438)
10.6. List and briefly define five general areas of requirements for a real-time operating system.
Determinism: operations are performed at fixed, predetermined times or within predetermined time intervals.
Responsiveness: the time the system takes to service an interrupt after acknowledging it.
User control: allow the user fine-grained control over task priority.
Reliability: a measure of the quality of the system.
Fault tolerance: the ability to keep responding in the presence of failures.
10.7. List and briefly define the four classes of real-time scheduling algorithms.
Static table-driven approaches: these perform a static analysis of feasible dispatching schedules. The result of the analysis is a schedule that determines, at run time, when a task must begin execution.
Static priority-driven preemptive approaches: a static analysis is also performed, but no schedule is drawn up. Instead, the analysis is used to assign priorities to tasks, so that a conventional priority-driven preemptive scheduler can be used.
Dynamic planning-based approaches: feasibility is determined at run time (dynamically) rather than before execution starts (statically). An arriving task is accepted for execution only if it is feasible to meet its time constraints. One of the results of the feasibility analysis is a schedule or plan that is used to decide when to dispatch each task.
Dynamic best-effort approaches: no feasibility analysis is performed. The system tries to meet all deadlines and aborts any started process whose deadline has been missed.
Ready time: the time at which the task becomes ready for execution. In the case of a repetitive or periodic task, this is actually a sequence of times known in advance. In the case of an aperiodic task, this time may be known in advance, or the operating system may only become aware of it when the task is actually ready.
Starting deadline: the time by which the task must begin.
Completion deadline: the time by which the task must be completed. Typical real-time applications have either a starting deadline or a completion deadline, but not both.
Processing time: the time required to execute the task to completion. In some cases this is supplied; in others, the operating system measures an exponential average. In other scheduling systems this information is not used.
Resource requirements: the set of resources (other than the processor) that the task requires while it executes.
Priority: measures the relative importance of the task. Hard real-time tasks may have an "absolute" priority, with the system failing if a deadline is missed. If the system is to keep running no matter what, both hard and soft real-time tasks may be given relative priorities as a guide to the scheduler.
Subtask structure: a task may be decomposed into a mandatory subtask and an optional subtask. Only the mandatory subtask has a hard deadline.
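For completion deadlines, a common policy consistent with the parameters above is earliest deadline first: dispatch the ready task whose deadline is nearest. A minimal illustrative sketch (structure and names are assumptions):

/* Earliest-deadline-first selection over ready tasks:
   the task whose deadline is closest is dispatched next. */
struct rt_task { int id; long deadline; };

int pick_edf(const struct rt_task *tasks, int n)
{
    int best = 0;
    for (int i = 1; i < n; i++)
        if (tasks[i].deadline < tasks[best].deadline)
            best = i;
    return best;   /* index of the task to run next */
}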
Chapter 11 - I/O Management and Disk Scheduling
Summary
The interface of a computer system with the outside world is the I/O architecture. This architecture is designed to provide a
systematic means of controlling interaction with the outside world and providing the operating system with the information it needs to
manage I/O activity effectively.
I/O functions are generally divided into a number of levels, where the lower levels deal with details close to the physical functions to be performed and the higher levels deal with I/O in a logical and general way. The result is that changes in hardware parameters need not affect most of the I/O software.
A key aspect of I/O is the use of buffers controlled by I/O utilities rather than by application processes. Buffering smooths out the differences between the internal speeds of the computer system and the speeds of the I/O devices. The use of buffers also decouples the actual I/O transfers from the address space of the application process, which allows the operating system greater flexibility in performing its memory management functions.
The aspect of I/O that has the greatest impact on overall system performance is disk I/O. Consequently, more research has been done and more design effort invested here than in any other area of I/O. Two of the methods most frequently used to improve disk I/O performance are disk scheduling and the disk cache.
At any given moment there may be a queue of I/O requests for the same disk. It is the task of disk scheduling to satisfy these requests in a way that minimizes the mechanical seek time of the disk and therefore improves performance. This brings into play the physical layout of the pending requests, as well as considerations of locality.
A disk cache is a buffer, usually held in main memory, that functions as a cache of disk blocks between disk memory and the rest of main memory. By the principle of locality, the use of a disk cache should substantially reduce the number of block I/O transfers between main memory and disk.
Review Questions:
11.2. What is the difference between logical I/O and device I/O?
One deals with the general I/O functions requested by user processes (logical I/O); the other issues the appropriate I/O instructions to the device (device I/O).
11.3. What is the difference between a block-oriented device and a stream-oriented device? Give an example of each.
For the study of the various buffering methods, it is sometimes important to distinguish two types of devices: block-oriented devices and stream-oriented devices. Block-oriented devices store information in blocks, usually of fixed size, and transfer one block at a time. Generally, it is possible to refer to the data by block number. Disks and tapes are examples of block-oriented devices. Stream-oriented devices transfer data as a stream of bytes; they have no block structure. Terminals, printers, communication ports, mice and other pointing devices, and most other devices that are not secondary storage are stream-oriented devices.
(Page 471)
11.4. Why would you expect a performance improvement using double buffering for I/O rather than single buffering?
An improvement over single buffering can be obtained by assigning two system buffers to the operation (Figure 11.6c). In this way, a process can transfer data to (or from) one buffer while the operating system empties (or fills) the other. (page 473)
11.5. What are the delays involved in a disk read or write?
Seek time
Rotational delay
Transfer time
11.6. Briefly define the disk scheduling policies illustrated in Figure 11.8.
. FIFO: requests are processed in order of arrival.
. SSTF (shortest service time first): select the request that requires the shortest seek from the current head position.
. SCAN (look): the arm services requests in one direction until it reaches the last one in that direction, then reverses.
. C-SCAN: requests are serviced in one direction only (ascending or descending); the arm then returns to the opposite end.
. (Also important: N-step-SCAN, FSCAN, and C-LOOK.)
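As an illustrative sketch of how one of these policies chooses its next request, here is SSTF in C (names are assumptions; SCAN and C-SCAN differ only in how they constrain the search direction):

#include <stdlib.h>

/* SSTF: among pending requests, serve the track nearest
   the current head position. */
int pick_sstf(const int *req, int n, int head)
{
    int best = 0;
    for (int i = 1; i < n; i++)
        if (abs(req[i] - head) < abs(req[best] - head))
            best = i;
    return best;   /* index of the request with the shortest seek */
}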
11.7. Briefly define the seven RAID levels.
Striping distributes data across multiple disks for increased performance but, by itself, offers no redundancy.
Level 0: groups two or more physical disks into one logical disk (striping); provides no data redundancy.
Level 1: mirroring; all the information on the first set of disks is copied onto a second set.
Level 2: redundancy through a Hamming code computed across the disks and stored on additional disks; the number of redundant disks is proportional to the log of the number of data disks. This technique has been superseded.
Level 3: bit-interleaved parity; requires a single redundant disk.
Level 4: block-interleaved parity; requires one redundant disk and stores parity computed block by block.
Level 5: the same as level 4, except that the parity is distributed across all the disks in an interleaved fashion.
Level 6: the same as level 5, except that it adds a second, independently computed parity check and requires two disks' worth of parity.
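The parity used by levels 3 through 5 is the bytewise XOR of the corresponding data blocks, so any single lost block can be rebuilt by XORing the survivors. A minimal illustrative sketch:

/* Parity block = XOR of the data blocks at the same stripe position. */
void compute_parity(unsigned char *parity, unsigned char *blocks[],
                    int nblocks, int blocksize)
{
    for (int b = 0; b < blocksize; b++) {
        unsigned char p = 0;
        for (int i = 0; i < nblocks; i++)
            p ^= blocks[i][b];
        parity[b] = p;
    }
}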
Chapter 12 - File management
Summary
A file is a collection of records. The way these records are accessed determines the file's logical organization and, to some extent, its physical organization on disk. If a file is to be processed essentially in its entirety, a sequential organization is the simplest and most appropriate. If sequential access is needed but random access to individual records is also desired, an indexed sequential file may give the best performance. If access to the file is mainly random, an indexed file or a hashed file may be the most appropriate.
Whatever file structure is chosen, a directory service is also needed. This allows files to be organized hierarchically. The hierarchy is useful to the user for keeping track of files, and to the file management system for providing access control and other services to users.
File records, even when of fixed size, generally do not match the size of a physical disk block, so some kind of blocking strategy is needed. The blocking strategy chosen is a trade-off among complexity, performance, and space utilization.
A key function of any file management scheme is the management of disk space. Part of this function is the strategy for allocating disk blocks to files. A wide variety of methods and data structures have been used to keep track of the location of each file. In addition, the unused space on disk must be managed; this function consists mainly of maintaining a disk allocation table indicating which blocks are free.
Review Questions:
12.6. Why is the average search time for a record shorter in an indexed sequential file than in a sequential file?
Because it is accessed directly through the index.
12.7. What are the typical operations that can be performed on a directory?
search
create file
delete file
list directory
update directory (page 525)
12.8. What is the relationship between a path name and a working directory?
A path name identifies a file by listing the directories along the path from the root to the file; the working directory is a directory the user designates as current, so that files can be referenced by path names relative to it.
12.9. What are the typical access rights that can be granted or denied to a user over a file?
none
knowledge
execution
reading
appending
updating
changing protection
deletion (page 528)
Fixed blocking: fixed-length records are used, and an integral number of records is stored in each block. There may be unused space at the end of each block; this is internal fragmentation.
Variable-length spanned blocking: variable-length records are used and packed into blocks with no unused space. Thus some records must span two blocks, with the continuation indicated by a pointer to the next block.
Variable-length unspanned blocking: variable-length records are used, but records do not span blocks. In most blocks there will be wasted space, because the remainder of a block cannot be used if the next record is larger than the remaining unused space. (Page 530)
Chapter 13 - Distributed processing, client/server, and clusters
Summary
Normally, the client system provides a graphical user interface (GUI) that enables the user to exploit multiple applications with minimal training and relative ease. Servers support shared utilities, such as database management systems. The actual application is divided between client and server in a way intended to optimize ease of use and performance.
The key mechanism required in any distributed system is interprocess communication. Two techniques are generally used. A message-passing facility generalizes the message passing used within a single system; the same kinds of conventions and synchronization rules apply. Another approach is the remote procedure call: a technique by which two programs on different machines interact using the syntax and semantics of procedure calls and returns. Both the calling and the called program behave as if the partner program were running on the same machine.
A cluster is a group of interconnected complete computers working together as a unified computing resource, which can create the illusion of being a single machine. The term complete computer means a system that can run on its own, apart from the cluster.
Review Questions:
Relational database: a database in which information access is limited to the selection of rows that satisfy all search criteria.
Server: a computer, usually a high-powered workstation, a minicomputer, or a mainframe, that houses information for manipulation by networked clients.
13.2. What distinguishes the client-server process from any other type of distributed data process?
User-friendly applications reside on the user systems.
Applications are dispersed, while corporate databases are centralized.
There is a commitment to open and modular systems.
Networking is fundamental to the operation.
(Pages 558 and 559)
13.3. What is the role of a communications architecture such as TCP/IP in a client/server environment?
It is the most commonly used communication protocol for interoperability between client and server.
13.4. Discuss the reasons for locating applications on the client, on the server, or split between client and server.
Ideally, the actual application functions can be divided between client and server in a way that optimizes the use of network and platform resources, as well as users' ability to perform various tasks and to cooperate with one another in the use of shared resources. In some cases these requirements dictate that the bulk of the application software run on the server, while in other cases most of the application logic is located at the client. (page 560)
13.5. What are thick clients and thin clients and what are the philosophical differences between the two methods?
Fat (thick) client: a machine connected to a server but with significant local processing capacity.
Thin client: a machine connected to a powerful server that does little or no local computing.
The most important philosophical difference concerns where data and application processing are centralized.
13.6. Suggest the pros and cons of the fat client and thin client strategies.
(see pages 562, 563, and 564)
13.7. Suggest the reasons behind the three-tier client/server architecture.
better control of incoming and outgoing data
possibility of applying better security
possibility of scalability without affecting the interfaces
...
13.9. Since there are standards like TCP/IP, why is middleware necessary?
TCP/IP provides communication between machines; middleware sits above the communications software and provides a uniform interface through which applications reach the various platforms, databases, and services beneath it.
13.10. List some advantages and disadvantages of blocking and non-blocking primitives in message passing.
(...) With nonblocking, or asynchronous, primitives, a process is not suspended as a result of issuing a send or receive. Thus, when a process issues a send primitive, the operating system returns control as soon as the message has been queued for transmission or a copy has been made. If no copy is made, any changes the sender makes to the message before or during transmission are made at its own risk. When the message has been transmitted, or copied to a safe place for later transmission, the sending process is interrupted to be informed that the message buffer may be reused. Similarly, a nonblocking receive is issued by a process that then continues executing; when a message arrives, the process is informed by an interrupt, or it can periodically poll for status.
Nonblocking primitives provide efficient and flexible use of the message-passing facility. The disadvantage of this approach is that programs that use these primitives are difficult to test and debug. Irreproducible, timing-dependent sequences can create subtle and difficult problems. (...) (page 569)
13.11. List some advantages and disadvantages of persistent and nonpersistent binding in RPC.
Nonpersistent binding means that a logical connection is established between the two processes at the time of the remote procedure call and that the connection is dropped as soon as the values are returned. Because a connection requires maintaining state information at both ends, it consumes resources; the nonpersistent style is used to conserve those resources. On the other hand, the overhead of setting up connections makes nonpersistent binding unsuitable for remote procedures that the same caller invokes frequently.
With persistent binding, a connection set up for a remote procedure call is maintained after the procedure returns. The connection can then be used for future remote procedure calls. If a specified period of time passes with no activity on the connection, it is closed. For applications that make repeated remote procedure calls, persistent binding keeps the logical connection alive and allows a sequence of calls and returns to use the same connection.
(Page 574)
Chapter 14 - Distributed process management
Review Questions:
14.1. Discuss some of the reasons for implementing process migration.
load sharing
communications performance
availability
utilization of special capabilities
14.3. What are the motivations for preemptive and nonpreemptive process migration?
Load sharing (?)
14.5. What is the difference between distributed mutual exclusion using a centralized approach and using a distributed approach?
In the centralized approach, one node is designated as the control node and mediates access to the shared resource; in the distributed approach, the algorithm is distributed across all the nodes and operates by message passing.
Chapter 15 - Security
Review Questions:
15.1. What are the fundamental requirements that computer security addresses?
Secrecy (confidentiality)
Integrity
Availability
Authenticity
15.2. What is the difference between active and passive security threats?
Passive threats: eavesdropping on, or monitoring of, transmissions.
Active threats: alteration of the data stream or the creation of a false stream.
15.3. List and briefly define the categories of passive and active security threats.
Passive:
release of message contents
traffic analysis
Active:
masquerade
replay
modification of messages
denial of service
15.4. What elements are needed in the most common access control techniques?
Processors, memory, I/O devices, programs, and data
15.5. In access control, what is the difference between a subject and an object?
A subject is an entity capable of accessing objects, and objects are the elements that must be controlled.
15.7. Explain the difference between statistical anomaly-based intrusion detection and rule-based intrusion detection.
One attempts to detect intrusion as a significant deviation from normal behavior (statistical anomaly detection); the other detects deviation from correct behavior as defined by a set of rules (rule-based detection).
15.8. The malicious programs spread in 1999 and 2000 via e-mail attachments or VBS code (for example, Melissa or Love Letter) are called e-mail viruses in the media. Would the term e-mail worm be more accurate?
Yes, because they propagate themselves across the network using e-mail services, remote execution capability, and remote login capability, which is characteristic of worms.
15.10. What are the two general approaches to attacking a classical encryption scheme?
Cryptanalysis and brute-force attack (trying every possible key).
15.13. What evaluation criteria were used to judge the AES candidates?
Security, (computational) efficiency, memory requirements, suitability for hardware and software implementation, and flexibility.
15.14. Explain the difference between classical encryption and public key encryption.
Classical (symmetric) encryption is based on simple operations on bit patterns (substitutions and permutations), while public-key encryption is based on mathematical functions.
15.15. What are the differences between the terms public key, private key, secret key?
The key used in classical (symmetric) encryption is called a secret key. The two keys used in public-key encryption are known as the public key and the private key. The private key must, of course, also be kept secret, but it is called a private key to avoid confusion with symmetric encryption's secret key. (page 681)