Operating System Concepts: Hardware Interaction and Scheduling


Operating System Fundamentals

The Operating System (OS) serves several critical roles:

  • It acts as a resource manager, controlling access to the hardware.
  • It provides an abstraction layer, allowing user processes to call functions that access hardware via system calls.

User Mode vs. Supervisor Mode (Kernel Mode)

The CPU enforces separation between user processes and the OS kernel:

  • User Mode: Prohibits privileged instructions.
  • Kernel Mode (Supervisor Mode): Allows access to all hardware and privileged operations.

Program Status Word (PSW)

The PSW is a special register holding vital information, such as:

  • Access privilege mode.
  • Runtime execution conditions (e.g., condition codes).
  • Program Counter (PC) and Stack Pointer (SP).

Simplified Interrupt Handling Flow

  1. A device signals the interrupt controller via bus lines.
  2. The controller signals the CPU and places the interrupt number associated with the device on the bus.
  3. If interrupts are enabled on the CPU:
    1. Save PC, SP, and PSW.
    2. Disable further interrupts.
    3. Shift to supervisor mode.
    4. Execute the interrupt service routine (ISR) associated with the interrupt number from the interrupt service table.
    5. Restore PC, SP, & PSW and resume process execution.
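The steps above can be sketched as a toy CPU model (all names here are illustrative, not a real architecture): a vector table maps interrupt numbers to ISRs, and raising an interrupt saves context, disables interrupts, switches mode, dispatches, then restores.

```python
# Toy model of the interrupt-handling flow above (hypothetical names).
class CPU:
    def __init__(self):
        self.pc, self.sp, self.mode = 100, 0xFF00, "user"
        self.interrupts_enabled = True
        self.vector_table = {}            # interrupt number -> ISR
        self.log = []

    def register_isr(self, number, isr):
        self.vector_table[number] = isr

    def raise_interrupt(self, number):
        if not self.interrupts_enabled:
            return False                         # interrupt ignored in this sketch
        saved = (self.pc, self.sp, self.mode)    # 1. save PC, SP, PSW
        self.interrupts_enabled = False          # 2. disable further interrupts
        self.mode = "kernel"                     # 3. shift to supervisor mode
        self.vector_table[number](self)          # 4. run ISR from the service table
        self.pc, self.sp, self.mode = saved      # 5. restore context and resume
        self.interrupts_enabled = True
        return True

cpu = CPU()
cpu.register_isr(14, lambda c: c.log.append("disk ISR ran"))
cpu.raise_interrupt(14)
print(cpu.log, cpu.mode)   # -> ['disk ISR ran'] user
```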

Process Execution Contexts

  • User (Process/Thread): Runs process code and library functions; cannot directly access devices or the Memory Management Unit (MMU).
  • Kernel (OS): Runs on trap/interrupt paths; owns privileged operations (scheduling, VM management, I/O drivers).

A user process calls a library function, which issues a trap (software interrupt). The kernel validates arguments and performs privileged work (e.g., accessing a device or MMU). The hardware executes the operation (e.g., DMA or interrupt). The kernel then returns to user mode with the result.

Why are some instructions privileged? If user code could change page tables or disable interrupts, it could corrupt data belonging to the OS or other processes. The CPU enforces a mode bit so only kernel mode can execute these instructions.

A function call stays in user mode. A system call executes a special instruction (trap) that pushes the current context, flips to kernel mode, vectors to a handler, validates arguments, performs privileged work (e.g., MMU or device access), then executes return-from-trap to resume user mode.
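A minimal sketch of that trap path (all names hypothetical): the "system call" flips a mode bit, validates its arguments in the kernel, performs the privileged device access, and drops back to user mode on return.

```python
# Toy trap path: user -> kernel mode -> validate -> privileged work -> return.
MODE = ["user"]            # mode bit; only "kernel" may touch the device below
DISK = {0: b"hello"}       # pretend device: block number -> data

def sys_read(block):
    MODE[0] = "kernel"                 # trap: enter kernel mode
    try:
        if block not in DISK:          # kernel validates arguments
            return -1
        return DISK[block]             # privileged device access
    finally:
        MODE[0] = "user"               # return-from-trap, always

print(sys_read(0))    # b'hello', and MODE is back to "user"
print(sys_read(9))    # -1 (invalid argument rejected by the kernel)
```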


Memory Hierarchy and Locality

Cache: Small, fast memory near the CPU; coherence (multi-core copies agree) and consistency are key concerns.

  • Spatial Locality: Neighbors of recently accessed data are likely to be used next (e.g., accessing elements in row-major arrays).
  • Temporal Locality: Recently used data is likely to be reused soon (e.g., loop variables).

Cache Write Policies

  • Write-through Policy: Every write updates both the cache line and main memory (simpler, but slower for write-heavy workloads).
  • Write-back Policy: Mark the cache line as "dirty"; write back to main memory only upon eviction (faster for repeated writes to the same location).

Regarding repeated variable updates, a write-back cache is generally better because it delays updates to main memory until the line is evicted, avoiding numerous slow memory writes.
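The difference can be counted directly with a toy one-line cache (hypothetical model): repeated stores to the same address cost one memory write per store under write-through, but only one write at eviction under write-back.

```python
# Count main-memory writes under the two policies for repeated updates.
class Cache:
    def __init__(self, policy, capacity=1):
        self.policy = policy
        self.capacity = capacity
        self.lines = {}                # cached addresses -> dirty flag
        self.mem_writes = 0

    def write(self, addr):
        if self.policy == "write-through":
            self.mem_writes += 1       # every store also updates main memory
            self.lines[addr] = False   # line is never dirty
        else:                          # write-back
            self.lines[addr] = True    # just mark the line dirty
        while len(self.lines) > self.capacity:
            victim, dirty = next(iter(self.lines.items()))
            del self.lines[victim]
            if dirty:                  # dirty line written back only on eviction
                self.mem_writes += 1

wt, wb = Cache("write-through"), Cache("write-back")
for _ in range(1000):
    wt.write(0x40); wb.write(0x40)     # 1000 stores to the same address
print(wt.mem_writes, wb.mem_writes)    # 1000 vs 0 (line still cached, dirty)
```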

If only one core is active, cache incoherency is not an issue. However, in a multi-core system accessing shared data, cache incoherency becomes a problem if different cores modify the same data concurrently.


Thread Multiplexing Models

The choice between Many:1 and 1:1 thread models impacts how blocking I/O affects processes:

  • Many:1 (User-level threads on one kernel thread): One blocking system call blocks all user threads, as the single kernel thread sleeps. Good for CPU-bound tasks on a single core where threads take turns.
  • 1:1 (Kernel threads): The kernel schedules around blocking I/O, allowing other threads in the process to continue. Provides true parallelism at the cost of heavier kernel overhead.

If a thread performs blocking system calls (I/O bound), the 1:1 model is preferred so the kernel can deschedule only that thread while others continue. With Many:1, the single kernel thread would sleep, stalling the entire process.

For CPU-bound threads working concurrently, 1:1 multiplexing allows them to run in parallel on separate cores. Many:1 would force them to take turns.

For CPU-bound threads that take turns, Many:1 might suffice as they do not need simultaneous execution.
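Python's `threading` module maps each thread 1:1 onto a kernel thread, so a blocking call in one thread does not stall the others, which is the behavior the notes recommend for I/O-bound work:

```python
# 1:1 threads: a blocking "I/O" call in one thread lets the other keep running.
import threading, time

progress = []

def io_bound():
    time.sleep(0.2)              # blocking call: only this thread sleeps
    progress.append("io done")

def cpu_bound():
    sum(range(100_000))          # keeps running while the other thread blocks
    progress.append("cpu done")

t1 = threading.Thread(target=io_bound)
t2 = threading.Thread(target=cpu_bound)
t1.start(); t2.start()
t1.join(); t2.join()
print(progress)                  # 'cpu done' finishes before 'io done'
```

Under a Many:1 model, the sleep would have put the single kernel thread to sleep and stalled the CPU-bound work too.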


CPU Scheduling Algorithms

First-Come, First-Served (FCFS)

  • Rule: Run jobs in the order they arrive; non-preemptive.
  • Goal: Simplicity; good if job lengths are similar.
  • Con: One long job at the front causes everyone else to wait (convoy effect).
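The convoy effect is easy to quantify with a hypothetical workload (all jobs arriving at time 0): putting the long job first inflates everyone's wait.

```python
# FCFS wait times: each job waits for all earlier jobs to finish.
def fcfs_wait_times(bursts):
    waits, clock = [], 0
    for b in bursts:
        waits.append(clock)
        clock += b
    return waits

print(fcfs_wait_times([24, 3, 3]))   # [0, 24, 27] -> average wait 17
print(fcfs_wait_times([3, 3, 24]))   # [0, 3, 6]   -> average wait 3
```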


Shortest Job First (SJF)

  • Rule: Pick the job with the shortest total CPU burst next (non-preemptive version).
  • Goal: Minimizes average waiting time if burst lengths are known.
  • Cons: Can starve long jobs; usually requires estimates of job length.
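On the same hypothetical workload, SJF is simply FCFS on the sorted bursts, which is why it minimizes average wait when burst lengths are known:

```python
# SJF wait times (non-preemptive, all jobs arriving at time 0).
def sjf_wait_times(bursts):
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits, clock = [0] * len(bursts), 0
    for i in order:          # run shortest remaining job next
        waits[i] = clock
        clock += bursts[i]
    return waits

print(sjf_wait_times([24, 3, 3]))    # [6, 0, 3] -> average 3 (vs 17 for FCFS)
```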

Round Robin (RR)

  • Rule: Uses a time slice (quantum, $q$). Run the front job for up to $q$ time units, then preempt and move it to the back if incomplete.
  • Goal: Fairness and responsiveness for interactive workloads.
  • Con: If $q$ is too small, context switching overhead increases; if $q$ is too large, it behaves like FCFS.
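The rule above can be simulated directly (toy model, all jobs arriving at time 0): run the front job for at most $q$, requeue it if unfinished.

```python
# Round-robin completion times for a given quantum q.
from collections import deque

def rr_completion_times(bursts, q):
    done = [0] * len(bursts)
    remaining = list(bursts)
    queue, clock = deque(range(len(bursts))), 0
    while queue:
        i = queue.popleft()
        run = min(q, remaining[i])   # run for up to one quantum
        clock += run
        remaining[i] -= run
        if remaining[i]:
            queue.append(i)          # preempted: back of the queue
        else:
            done[i] = clock          # finished inside this slice
    return done

print(rr_completion_times([24, 3, 3], q=4))   # [30, 7, 10]
```

With a very large quantum (say q=100), the result degenerates to the FCFS order, as the notes warn.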

Preemption Characteristics

Non-preemptive Scheduling

  • Once the CPU is allocated to a process, it keeps running until it voluntarily yields (finishes its burst or blocks for I/O).
  • Examples: FCFS, SJF (non-preemptive).
  • Pros: Simple; low context-switch overhead.
  • Cons: Poor responsiveness (a long job can stall others $\rightarrow$ convoy effect); can starve short/interactive jobs.

Preemptive Scheduling

  • The OS can forcibly take the CPU away based on policy (e.g., a timer interrupt after a time slice, or when a higher-priority/shorter job arrives).
  • Pros: Better responsiveness and fairness; suitable for interactive systems.
  • Cons: Higher context-switch overhead; starvation is possible if priorities are not managed (aged).
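One common fix for the starvation mentioned above is aging: a job's effective priority grows the longer it waits. A minimal sketch (all job names and rates hypothetical):

```python
# Aging: effective priority = base priority + waiting time * aging rate.
def pick_next(jobs, clock, aging_rate):
    return max(jobs, key=lambda j: j["prio"] + (clock - j["arrival"]) * aging_rate)

jobs = [{"name": "batch", "prio": 1, "arrival": 0},
        {"name": "interactive", "prio": 5, "arrival": 90}]

print(pick_next(jobs, clock=100, aging_rate=0)["name"])    # interactive (batch starves)
print(pick_next(jobs, clock=100, aging_rate=0.1)["name"])  # batch (1 + 100*0.1 = 11 > 6)
```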

OS Structure and Virtual Machines

Microkernel vs. Monolithic Kernel

  • Microkernel: Only the minimal core (IPC, scheduling, memory management) resides in the kernel; services run in user space (e.g., Mach, Minix). Smaller kernel, but higher IPC overhead.
  • Monolithic Kernel: Most services run in kernel space for performance.

The choice often balances microkernel for isolation/fault containment against monolithic for performance.

Virtual Machines (VMs)

A hypervisor manages VMs (Type 1 on bare metal, Type 2 on a host OS); each guest sees virtual hardware.

When a guest OS executes privileged operations (like I/O), the hypervisor intercepts and emulates them to maintain VM isolation.

A user program inside a VM runs in user mode. A call like read() triggers a trap to the guest kernel. Since the guest lacks direct hardware access, the CPU transfers control to the hypervisor, which performs the hardware operation and returns control to the guest kernel to complete the system call.
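That nested trap path can be pictured as a call chain (purely illustrative names, no real hypervisor API): user program, then guest kernel, then hypervisor, which alone touches the hardware.

```python
# Toy call chain for a read() inside a VM: user -> guest kernel -> hypervisor.
trace = []

def hypervisor_io(block):
    trace.append("hypervisor: real device access")
    return b"data"

def guest_kernel_read(block):
    trace.append("guest kernel: handle read() trap")
    return hypervisor_io(block)      # privileged op traps to the hypervisor

def user_read(block):
    trace.append("user: read() system call")
    return guest_kernel_read(block)

user_read(0)
print(trace)
```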

Process Metrics

Multiprogramming: Many processes share the CPU; the OS context switches to keep the CPU busy and overlap I/O waits.

PCB (Process Control Block): Contains PID, register state/PC, state, priority, page-table pointer, and open file table.

  • Wait Time: Time a process spends in the READY queue.
  • Turnaround Time: Time from submission to completion; equals total CPU bursts + I/O bursts + wait time.
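A worked example of the two metrics (hypothetical single process): 10 ms of CPU bursts, 5 ms of I/O bursts, and 8 ms spent in the READY queue.

```python
# Turnaround time = CPU bursts + I/O bursts + wait time.
cpu_bursts, io_bursts, wait = 10, 5, 8      # milliseconds
turnaround = cpu_bursts + io_bursts + wait
print(wait, turnaround)                     # wait 8 ms, turnaround 23 ms
```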


Thread Multiplexing Application Example

When deciding on thread multiplexing:

  • I/O Bound Threads (B, E): Should use 1:1 multiplexing so that when one blocks, the kernel can schedule other threads, preventing process stalls.
  • CPU Bound Threads (D, F): Should use 1:1 multiplexing to run in parallel on separate cores. Many:1 would force them to take turns.
  • CPU Bound Threads (A, C) taking turns: Many:1 multiplexing might be suitable if they do not require simultaneous execution.
