Key Concepts in Operating Systems

Mapua

Uploaded by

n9cnh4pnpn

M1SUMA (82/100)

1. Rigid metal or glass platters covered with magnetic recording material.

Magnetic Disk

2. The basic idea behind ______________ is to remove all non-essential services


from the kernel, and implement them as system applications instead, thereby making
the kernel as small and efficient as possible.
Microkernels

3. When both the send and the receive are blocking in process synchronization, it is
called __________________.

Rendezvous

4. Extension of main memory that provides large nonvolatile storage capacity.

Secondary Storage

5. Which of the following are process scheduling queues?


ready queue
device queue
job queue

6. It is a number assigned to every peripheral that is used to communicate
with the CPU.
Bus

7. This is when absolute code is generated, since it is known where a process will
reside in memory.
Compile Time

8. Used for high-speed I/O devices able to transmit information at close to memory
speeds without CPU intervention.
DMA

9. In computer architecture, this is where the results of the operations are being
stored.
accumulator

10. It pertains to a request for an operating system service.


system call

11. Copying information into a faster storage system; main memory can be viewed as
the last cache for secondary storage.
Caching

12. When a user clicks an application icon on the desktop, it creates a new process.
This process waits in the ________________ for its turn prior to CPU execution.
Ready Queue

13. Analogy:

Windows : Threads | Linux : ___________


tasks

14. In a client-server architecture, a __________ system offers an interface


for clients to store and retrieve files.
file-server

15. An operation that needs to be executed before a file can be available to


processes in a system.
Mounting

16. Which of the following are considered primary storage?


registers
cache
main memory

17. It refers to a signal coming from an input/output device to temporarily halt
CPU execution.
interrupt

18. In computer organization, when code is executed, it is fetched
from memory and loaded into _______________.
instruction register

19. Assume you have the following jobs to execute with one processor, with the jobs
arriving at the following times and requiring the following amounts of CPU time.

What is the second (2nd) process that will complete if a Round-Robin algorithm
using a time quantum equal to 2 milliseconds is used? (Type number ONLY)
2

20. Which of the following schedulers are used by a process during
scheduling?
long-term scheduler
short-term scheduler
medium-term scheduler

21. A process retains control of the CPU until the process is blocked or
terminated.
Non-preemptive

22. A computer system category that includes PDAs and cellular telephones.


Handheld Systems

23. Which of the following are Operating Systems services for programming language
support?
compilers
linkers
debuggers
assemblers

24. Burst time is the amount of time a process needs to complete its
execution. Burst time is also called _______________.
execution time

25. Which of the following pertain to communications in Client-Server network


systems?
Ports
RPC
IP

26. In computer organization, high-speed devices are placed in a single chip
called _________________.
microprocessor

27. Which of the following are areas where multicore CPUs present
challenges to programmers?
Identifying tasks
Balance
Data splitting
Data dependency
Testing and debugging

28. Using databases on a client-server architecture is an example of which
cooperating process component?
Modularity

29. The term used interchangeably with process.


job

30. A set of all processes residing in main memory, ready and waiting to execute.
ready queue

31. Another term for Preemptive Shortest-Job-First is __________________________.


Shortest Remaining Time First

32. The only large storage media that the CPU can access directly.
Main Memory

33. In computer architecture, this is where the results of the operations are being
stored.
accumulator

34. A hardware signal sent to the processor that temporarily stops a running
program and allows a special program to run in its place.
Interrupt Request

35. It pertains to operating system implementation that can allow an OS to execute


on non-native hardware.
Emulation

36. Which of the following includes physical implementation of communication link?


network
shared memory
hardware bus

37. In UNIX systems, __________ are used to inform a process that an


event has happened.
signals

38. What is the average waiting time if the FCFS scheduling algorithm is used? (Type
numerical value ONLY)
5.5
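Since the quiz's job table is not reproduced above, the job set below is hypothetical, but the FCFS computation itself can be sketched as follows: serve jobs strictly in arrival order, and a job's waiting time is its start time minus its arrival time.

```python
def fcfs_average_wait(jobs):
    """jobs = [(arrival, burst)]; returns the average waiting time under FCFS."""
    total_wait, clock = 0, 0
    for arrival, burst in sorted(jobs):      # served strictly in arrival order
        start = max(clock, arrival)          # CPU may be busy when the job arrives
        total_wait += start - arrival        # waiting time = start - arrival
        clock = start + burst
    return total_wait / len(jobs)

# Hypothetical job set (arrival, burst), not the quiz's actual table:
print(fcfs_average_wait([(0, 8), (1, 4), (2, 9), (3, 5)]))  # 8.75
```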

39. Which of the following are characteristics of Java ports?


Connection-Oriented
Connectionless
Multicast Socket

40. Which of the following are user-level thread libraries?


POSIX Pthreads
Java Threads

41. Pipelining is an example of which cooperating process component?


Information Sharing

42. A system that is categorized as either hard or soft.
Real-time system

43. A program that acts as an intermediary between the user and the computer hardware.


Operating System

44. Which of the following are unique among multiple threaded processes?
registers
Program Counter
stack

45. Data fetched from memory and moved into the CPU travels over the ________.
data bus

46. Windows OS : Interrupt | Unix : __________


signal

47. To which thread would the signal be delivered if a multi-threaded process
receives a signal?
to the subprocess to which the signal applies
to every subprocess in the job
to certain subprocesses in the job

48. Assume you have the following jobs to execute with one processor, with the jobs
arriving at the following times and requiring the following amounts of CPU time.
8.25

49. What term best describes the requirement that, when the CPU switches to another
process, the system must save the state of the old process and load the saved state
of the new process?
Context Switch

50. In a certain system of processes, process A arrived at time 0, process B at


time 1, and process C at time 2. Process A needs 5 seconds in the CPU, process B
needs 3 seconds, and process C needs 1 second. All processes are totally CPU bound
and process-switching time is negligible so that after 9 seconds all processes have
completed. At what time does process A complete if the process-scheduling algorithm
is priority scheduling, where each process has a higher priority than the previous one?
(Type the number only)
6

Common questions


Using Direct Memory Access (DMA) allows high-speed I/O devices to transfer data directly to and from memory without CPU intervention, which significantly reduces CPU overhead and increases efficiency. This can improve system performance, especially when handling large volumes of data or high-speed data transfer requirements. By delegating data transfer tasks to the DMA controller, the CPU is freed to perform other processing tasks, optimizing overall system throughput.

Context switching involves saving the state of the current process and loading the state of the next process to be executed by the CPU. While it allows multiple processes to share the CPU, promoting concurrent execution, context switching can be resource-intensive and cause system overhead due to the frequent saving and loading of process states, especially if many processes are ready to run. The performance impact is observed as increased kernel time and decreased CPU time available for actual process execution. Efficient process scheduling strategies and minimizing context switches can lessen these effects and improve overall system performance.
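The save-then-load step can be modeled with a rough sketch; this is not how any real kernel stores its state, and `PCB`, `context_switch`, and the `cpu` dict are all illustrative names:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Toy process control block: the state saved on a context switch."""
    pid: int
    program_counter: int = 0
    registers: dict = field(default_factory=dict)

def context_switch(cpu, old, new):
    # Save the running process's state into its PCB...
    old.program_counter = cpu["pc"]
    old.registers = dict(cpu["regs"])
    # ...then load the next process's saved state onto the CPU.
    cpu["pc"] = new.program_counter
    cpu["regs"] = dict(new.registers)
    return new.pid  # pid of the process now running

a = PCB(pid=1, program_counter=40, registers={"ax": 7})
b = PCB(pid=2, program_counter=100, registers={"ax": 3})
cpu = {"pc": 44, "regs": {"ax": 9}}          # process 1 has been running
running = context_switch(cpu, a, b)
print(running, cpu["pc"], a.program_counter)  # 2 100 44
```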

Long-term schedulers, also known as admission schedulers, control the degree of multiprogramming by deciding which jobs or processes are admitted to the system for processing. Short-term schedulers, or CPU schedulers, decide which of the ready, memory-resident processes will be executed next by the CPU, focusing on maintaining high CPU utilization and responsiveness. Medium-term schedulers, involved in swapping, manage processes that are temporarily swapped out of RAM due to resource constraints, optimizing the balance between I/O-bound and CPU-bound demands. Together, these schedulers optimize CPU cycles, memory usage, and overall process throughput, ensuring efficient system performance.

Secondary storage serves as an extension to main memory by providing a larger, non-volatile space for data storage. Its significance lies in expanding storage capacity beyond what is possible with just primary memory, offering a cost-effective solution for storing large datasets that do not fit into RAM. While slower than primary memory, effective use of secondary storage through virtual memory schemes and caching techniques minimizes its performance drawbacks and maximizes storage and retrieval efficiency, vital for large-scale applications.

Caching involves storing frequently accessed data in faster storage layers, such as main memory, to minimize access time and improve performance. It acts as a bridge between slow secondary storage and the CPU, ensuring quick data retrieval. Challenges include cache coherency, where multiple cached copies can lead to inconsistencies, and cache hit ratio optimization, balancing size and speed. Efficient caching requires sophisticated algorithms to predict usage patterns and manage storage resources effectively to maintain performance advantages.
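One common eviction policy, least-recently-used (LRU), can be sketched minimally; the class and the toy `slow_storage` mapping are illustrative, not any particular OS's implementation:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: evicts the least recently used entry when full."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()
        self.hits = self.misses = 0

    def get(self, key, load):
        if key in self.data:
            self.hits += 1
            self.data.move_to_end(key)          # mark as most recently used
        else:
            self.misses += 1                    # miss: fetch from "slow" storage
            self.data[key] = load(key)
            if len(self.data) > self.capacity:
                self.data.popitem(last=False)   # evict least recently used
        return self.data[key]

slow_storage = {b: f"block-{b}" for b in range(10)}
cache = LRUCache(capacity=2)
for block in [1, 2, 1, 3, 1]:
    cache.get(block, slow_storage.__getitem__)
print(cache.hits, cache.misses)  # 2 3
```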

The key components of a microkernel architecture include the minimal kernel that handles only essential functions such as communication between hardware and software, basic process management, and simple inter-process communication. Non-essential services like device drivers, file systems, and network protocols are moved to user space as separate processes. This separation minimizes the kernel size, reduces complexity, and enhances system stability and security, as faults in user-space services do not affect the entire system.
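The separation can be sketched as message passing: the "kernel" below does nothing but route messages, while a hypothetical file-system service runs as ordinary user-level code (all names here are invented for illustration):

```python
# Registry of user-space services the toy "kernel" can route to.
services = {}

def register(name, handler):
    services[name] = handler          # a user-space service registers itself

def kernel_send(service, message):
    # The kernel's job reduces to IPC: deliver the message, return the reply.
    if service not in services:
        return ("error", "no such service")
    return services[service](message)

# A user-space "file system" service (hypothetical).
fs_store = {}
def fs_service(msg):
    op, name, *rest = msg
    if op == "write":
        fs_store[name] = rest[0]
        return ("ok", None)
    if op == "read":
        return ("ok", fs_store.get(name))
    return ("error", "bad op")

register("fs", fs_service)
print(kernel_send("fs", ("write", "a.txt", "hello")))  # ('ok', None)
print(kernel_send("fs", ("read", "a.txt")))            # ('ok', 'hello')
```

A crash inside `fs_service` would (in a real microkernel) take down only that service process, not the kernel itself.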

Multi-core CPUs challenge programmers with issues such as task identification, load balancing, data dependency management, and debugging. Programmers must write concurrent code that effectively divides tasks across cores, ensuring balanced workload distribution and minimizing data contention between tasks sharing data. Strategies include using parallel programming models like OpenMP or MPI, algorithms that account for data locality, and advanced debugging tools to track concurrency issues. Implementing fine-grained synchronization and data-sharing techniques like lock-free programming can also help in overcoming these multi-core challenges.
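The data-splitting and balance concerns can be illustrated with Python's standard `concurrent.futures`; the equal-size chunking scheme is just one simple choice, and the function names are illustrative:

```python
# Divide an array into chunks, sum each chunk on a separate worker, combine.
from concurrent.futures import ProcessPoolExecutor

def chunk_sum(chunk):
    return sum(chunk)

def parallel_sum(data, workers=4):
    # Data splitting: carve the input into roughly equal, independent chunks
    # (no shared writes, so there is no data dependency between tasks).
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(chunk_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum(list(range(1000))))  # 499500
```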

In a client-server architecture, a file-server system provides an interface for clients to store and retrieve files. It processes client requests, manages access to stored data, and performs operations on that data without exposing the underlying storage complexity to clients. This separation of responsibilities allows clients to focus on data usage while the server handles storage logistics, ensuring efficient data management and scalability. The system can concurrently serve multiple clients by leveraging task distribution and network protocols, optimizing resource use and throughput.
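A toy, in-memory sketch of a server that stores and retrieves files for clients (class and method names are invented for illustration):

```python
class FileServer:
    """Toy in-memory server: clients store and retrieve named files
    without seeing how the bytes are actually kept."""
    def __init__(self):
        self._blocks = {}                 # hidden storage detail

    def store(self, client_id, name, data):
        # Each client gets its own namespace, so requests don't collide.
        self._blocks[(client_id, name)] = data
        return "ok"

    def retrieve(self, client_id, name):
        return self._blocks.get((client_id, name))

server = FileServer()
server.store("client-1", "notes.txt", b"hello")
server.store("client-2", "notes.txt", b"world")   # same name, no clash
print(server.retrieve("client-1", "notes.txt"))   # b'hello'
```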

In UNIX systems, signals provide a mechanism for process management and inter-process communication by allowing processes to receive asynchronous notifications of significant events. Signals are used to interrupt processes, allowing them to execute signal handlers when specific conditions occur, such as termination requests or division-by-zero errors. This capability facilitates dynamic and flexible management of process execution and can be critical in real-time systems where timely response to events is required.
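A minimal, POSIX-only sketch using Python's standard `signal` module; the handler just records that the event arrived:

```python
import os
import signal

events = []

def handler(signum, frame):
    # Runs asynchronously when the signal is delivered to this process.
    events.append(signal.Signals(signum).name)

signal.signal(signal.SIGUSR1, handler)   # register the handler
os.kill(os.getpid(), signal.SIGUSR1)     # send SIGUSR1 to ourselves
print(events)  # ['SIGUSR1']
```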

The round-robin scheduling algorithm assigns a fixed time quantum for process execution, which in this case is 2 milliseconds. The algorithm cycles through the processes in the ready queue, giving each process a chance to run for at most the time quantum (or less, if the process needs less time). This approach leads to fair sharing of CPU time across processes but might result in longer average waiting and turnaround times compared to algorithms like Shortest Remaining Time First (SRTF) or Priority Scheduling, which optimize those metrics.
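The cycle described above can be simulated directly. The job set below is hypothetical (the quiz's own table is not reproduced here), with the question's quantum of 2; arrivals during a time slice are admitted to the queue before the preempted process is requeued, which is the usual textbook convention:

```python
from collections import deque

def round_robin(jobs, quantum=2):
    """Simulate round-robin; jobs = [(name, arrival, burst)].
    Returns process names in order of completion."""
    jobs = sorted(jobs, key=lambda j: j[1])
    remaining = {name: burst for name, _, burst in jobs}
    time, i, done = 0, 0, []
    ready = deque()
    while len(done) < len(jobs):
        while i < len(jobs) and jobs[i][1] <= time:
            ready.append(jobs[i][0]); i += 1      # admit new arrivals
        if not ready:
            time = jobs[i][1]                     # idle until next arrival
            continue
        name = ready.popleft()
        run = min(quantum, remaining[name])       # one time slice
        time += run
        remaining[name] -= run
        while i < len(jobs) and jobs[i][1] <= time:
            ready.append(jobs[i][0]); i += 1      # arrivals during the slice
        if remaining[name] == 0:
            done.append(name)
        else:
            ready.append(name)                    # requeue the preempted job
    return done

# Hypothetical jobs (name, arrival, burst):
print(round_robin([("P1", 0, 4), ("P2", 1, 3), ("P3", 2, 2)]))  # ['P3', 'P1', 'P2']
```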
