Process Management in Operating Systems: Scheduling, States, and Context Switch
Process Management — Operating Systems Exercises
1 — Batch vs Interactive and Monoprogramming vs Multiprogramming
Question: Analyze the differences between:
- Batch processes and interactive processes.
- Monoprogramming and multiprogramming.
- Compare the diagrams in the text (process management, Figures 9.5 and 9.7) and establish similarities and differences.
- When we speak of the process (program execution), do we refer to batch processes, interactive, or both?
- What is the difference between simultaneous execution and concurrent execution?
Answer:
- Batch processes are executed without interactive user input during execution; jobs are collected and run by the system. Interactive processes require user interaction (e.g., a shell session, GUI application).
- Monoprogramming means the system runs one program at a time. Multiprogramming allows multiple programs to be in memory so the CPU can switch between them, improving CPU utilization.
- Comparing Figures 9.5 and 9.7 (process management):
- Similarities: both show sequences of process states and transitions during execution, I/O, and waiting.
- Differences: one figure represents a non-preemptive model where context switches occur at I/O or termination, while the other shows a preemptive/time-sharing model with the OS distributing CPU time among processes.
- When we speak of a process (program execution), we can refer to batch or interactive processes — both are types of processes.
- Simultaneous execution means tasks actually run at the same time (requires multiple processors or cores). Concurrent execution means tasks make progress in overlapping time periods on a single processor by interleaving execution; fast switching gives the illusion of simultaneous activity.
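The interleaving idea behind concurrent execution can be sketched in a few lines. This is an illustrative model only (the task names and step counts are made up): Python generators play the role of cooperative processes, and a single loop plays the role of one CPU handing out time slices.

```python
# Sketch: concurrency as interleaving on a single "CPU". Python generators
# stand in for processes; each yield marks the end of one time slice.

def task(name, steps):
    """A cooperative task that yields after each unit of work."""
    for i in range(1, steps + 1):
        yield f"{name}: step {i}"

def run_concurrently(tasks):
    """One CPU interleaves the ready tasks, round-robin style."""
    trace = []
    while tasks:
        still_ready = []
        for t in tasks:
            try:
                trace.append(next(t))   # give the task one time slice
                still_ready.append(t)
            except StopIteration:
                pass                    # task terminated
        tasks = still_ready
    return trace

print(run_concurrently([task("A", 2), task("B", 2)]))
# Interleaved order: A step 1, B step 1, A step 2, B step 2
```

Only one step executes at any instant, yet both tasks make progress in overlapping time: that is concurrency. Simultaneous (parallel) execution would require a second CPU running a second loop at the same time.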
2 — Process in Memory or on Disk; State Transitions
Question: A process can be in memory or on disk (see the sharing of main memory and disk, Figure 9.11). Analyze all possible transitions between states. Analyze Figure 9.11 and the subsequent text. Can the following transitions occur?
- ready (swapped out) → blocked (swapped out)
- running → blocked
Answer:
- Ready (swapped out) → blocked (swapped out): No. A swapped-out process resides on disk and is not executing, and a process becomes blocked only by executing an operation that must wait (I/O, synchronization). The process must first be swapped back into memory, be dispatched, and only then can it block. In general, only a running process can transition to blocked.
- Running → Blocked: Yes, a running process can transition to blocked (waiting) if it performs an operation that requires waiting (for I/O, synchronization, etc.). If a severe error occurs the running process may transition to a terminated/closed state instead.
- Refer to Figure 9.11 and the text for the complete state transition diagram, which typically includes: New → Ready → Running → Blocked → Ready (after I/O completion) → Terminated, plus additional swapped-in/swapped-out variants when memory/disk swapping is used.
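The legal transitions of the classic five-state model described above can be captured in a small table. This is a sketch under the state names used in most textbooks (swapped-in/swapped-out variants omitted for brevity):

```python
# Minimal sketch of the five-state process model and its legal transitions.
# Note that "blocked" is reachable only from "running": blocking requires
# executing an operation that waits.

TRANSITIONS = {
    "new":        {"ready"},                            # admitted by the OS
    "ready":      {"running"},                          # dispatched by the scheduler
    "running":    {"ready", "blocked", "terminated"},   # preempted / waits / exits
    "blocked":    {"ready"},                            # I/O or event completes
    "terminated": set(),
}

def can_transition(src, dst):
    """Return True if the model allows a direct src -> dst transition."""
    return dst in TRANSITIONS.get(src, set())

print(can_transition("running", "blocked"))  # True: a running process may wait
print(can_transition("ready", "blocked"))    # False: only a running process blocks
```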
3 — Tasks Performed During a Context Switch
Question: What two essential tasks must be performed when switching from one process to another? Who carries them out?
Answer:
- The two essential tasks during a context switch are:
- Save the outgoing process state: save CPU registers, program counter, stack pointer, and any processor-specific state and privileges.
- Restore the incoming process state: load the saved registers, program counter, stack pointer, memory mappings, and privileges so the incoming process can resume execution.
- Who performs them: the dispatcher (part of the short-term scheduler) implemented in the operating system kernel carries out context switching. The short-term scheduler decides which process runs next; the dispatcher performs the actual switch.
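The two save/restore tasks can be sketched as operations on process control blocks (PCBs). The field names (`pc`, `sp`, `regs`) and the concrete values are illustrative, not a real kernel interface:

```python
# Sketch of a context switch: save the outgoing process's CPU state into its
# PCB, then restore the incoming process's saved state into the CPU.

class PCB:
    """Process control block holding a saved CPU context."""
    def __init__(self):
        self.pc, self.sp, self.regs = 0, 0, {}

class CPU:
    """The live processor state the dispatcher manipulates."""
    def __init__(self):
        self.pc, self.sp, self.regs = 0, 0, {}

def context_switch(cpu, outgoing, incoming):
    # Task 1: save the outgoing process state into its PCB.
    outgoing.pc, outgoing.sp, outgoing.regs = cpu.pc, cpu.sp, dict(cpu.regs)
    # Task 2: restore the incoming process state so it can resume.
    cpu.pc, cpu.sp, cpu.regs = incoming.pc, incoming.sp, dict(incoming.regs)

cpu = CPU()
p1, p2 = PCB(), PCB()
cpu.pc = 100          # p1 was executing at (illustrative) address 100
p2.pc = 200           # p2 was previously saved at address 200
context_switch(cpu, p1, p2)
print(p1.pc, cpu.pc)  # 100 200: p1's state is saved, p2's is now live
```

In a real kernel this is privileged assembly executed by the dispatcher, and it also switches memory mappings and privilege levels; the sketch shows only the register save/restore pair.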
4 — Difference Between Figure 9.7 and Figure 9.8
Question: Within multiprogramming, what is the difference between FIGURE 9.7 and FIGURE 9.8?
Answer: The first figure depicts a non-preemptive scheduling model where a process yields the CPU only when it performs an I/O operation or terminates. The second figure depicts a preemptive (time-sharing) model where the operating system can preempt a running process and distribute CPU time among processes without waiting for I/O; this allows better responsiveness and fairness for interactive jobs.
5 — Scheduling Algorithms and Non-Preemptive Multiprogramming
Question: When it comes to scheduling algorithms, does it make sense to refer to non-preemptive multiprogramming? List some scheduling algorithms to consider.
Answer: Yes — whether preemptive or non-preemptive, a scheduling algorithm decides which process is assigned the processor. Non-preemptive multiprogramming is a valid model where the scheduler chooses the next process but does not forcibly preempt running processes.
Some common scheduling algorithms include:
- First-Come, First-Served (FCFS)
- Shortest Job Next (SJN) / Shortest Remaining Time
- Priority Scheduling (preemptive and non-preemptive)
- Round Robin (RR)
- Multilevel Queue Scheduling
- Multilevel Feedback Queue
6 — CPU Utilization in Multiprogramming
Question: The processor utilization rate represents the processor usage time over the total time. In multiprogramming, the processor is shared among different processes. When is the processor not being used?
Answer:
- The CPU is not being used when it is idle — above all, when every process in the system is blocked waiting for I/O or another event, so there is nothing ready to execute.
- Brief idle moments can also occur when control passes to the kernel and the dispatcher must select the next process; if no ready process exists at that point, the CPU remains idle until new work arrives (e.g., an I/O completion makes a process ready).
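A well-known back-of-the-envelope model (popularized by Tanenbaum's textbooks) quantifies this: if each process spends fraction p of its time blocked on I/O, and n independent processes are in memory, the CPU is idle only when all n block at once, so utilization is approximately 1 − pⁿ. The numbers below are illustrative:

```python
# Sketch of the classic multiprogramming utilization approximation:
# utilization ≈ 1 - p**n, where p is the I/O-wait fraction of each process
# and n is the degree of multiprogramming (processes assumed independent).

def cpu_utilization(p_io_wait, n_processes):
    return 1 - p_io_wait ** n_processes

for n in (1, 2, 4):
    print(n, round(cpu_utilization(0.8, n), 3))
# With 80% I/O wait: n=1 -> 0.2, n=2 -> 0.36, n=4 -> 0.59 utilization
```

This shows why multiprogramming raises utilization: adding processes makes it less likely that the CPU finds nothing ready to run.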
7 — Response Coefficient and System Slowness
Question: The response coefficient is a measure of the slowness of a computer. It is defined as the ratio of machine time to processing time. When is the machine faster: when the response coefficient is high or low?
Answer: The machine is faster when the response coefficient is lower (i.e., shorter machine time relative to processing time). A lower response coefficient indicates better responsiveness and less perceived slowness.
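A small worked example makes the definition concrete (the times are made-up values): a job needing 10 s of pure processing that spends 12 s in the machine has a lower coefficient than one that spends 30 s, so the first machine is faster.

```python
# Sketch of the response coefficient as defined above: total machine time
# (time the job spends in the system, including waiting) divided by the
# pure processing time. Values are illustrative.

def response_coefficient(machine_time, processing_time):
    return machine_time / processing_time

fast = response_coefficient(12.0, 10.0)  # little waiting
slow = response_coefficient(30.0, 10.0)  # heavy waiting
print(fast, slow)  # 1.2 3.0 — the lower coefficient is the faster machine
```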
Note: "distributor" in the original text refers to the dispatcher. The figures referenced (e.g., 9.5, 9.7, 9.8, 9.11) should be consulted in the textbook or course materials for the exact diagrams.