CPU Scheduling: Understanding Processes and Threads

Posted by miko_rodri and classified in Computers

1. Processes

A process is a program in execution: it is a unit of work within the system. A program is a passive entity, while a process is an active one. A process needs resources to accomplish its task (CPU, memory, I/O, files), and process termination requires the reclamation of any reusable resources.

A single-threaded process has one program counter, specifying the location of the next instruction to execute; it executes instructions sequentially, one at a time, until completion. A multithreaded process has one program counter per thread. Concurrency is achieved by multiplexing the CPUs among the processes or threads.
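The distinction above can be made concrete with a small sketch: several threads inside one process, each with its own flow of control (its own "program counter"), all sharing the process's address space. The thread names, step count, and `results` dictionary are illustrative only.

```python
import os
import threading

results = {}

def worker(name: str, steps: int) -> None:
    # Each thread executes its own instruction stream sequentially.
    count = 0
    for _ in range(steps):
        count += 1
    results[name] = (count, os.getpid())

threads = [threading.Thread(target=worker, args=(f"t{i}", 1000)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# All threads ran inside the same process (same PID) but advanced independently.
print(results)
```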

2. Process States

As a process executes, it changes its state:

  • New: The process is being created.
  • Running: Instructions are being executed.
  • Waiting: The process is waiting for some event to occur.
  • Ready: The process is waiting to be assigned to a CPU.
  • Terminated: The process has finished execution.
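The five states above form a small state machine. A minimal sketch of its legal transitions (real kernels use more elaborate state machines, so the table below is illustrative):

```python
from enum import Enum

class State(Enum):
    NEW = "new"
    READY = "ready"
    RUNNING = "running"
    WAITING = "waiting"
    TERMINATED = "terminated"

# Legal transitions in the five-state model.
TRANSITIONS = {
    State.NEW: {State.READY},          # admitted into the ready queue
    State.READY: {State.RUNNING},      # dispatched onto a CPU
    State.RUNNING: {State.READY, State.WAITING, State.TERMINATED},
    State.WAITING: {State.READY},      # the awaited event occurred
    State.TERMINATED: set(),
}

def can_transition(src: State, dst: State) -> bool:
    return dst in TRANSITIONS[src]

print(can_transition(State.RUNNING, State.WAITING))  # True: a running process may block
print(can_transition(State.WAITING, State.RUNNING))  # False: it must pass through READY first
```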

3. Context Switch

When the CPU switches to another process, the system must save the state of the old process and load the saved state for the new process. Context-switch time is overhead; the system does not perform useful work while switching.
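A toy illustration of the save/restore idea: Python generators suspend and resume, so each generator frame plays the role of a saved process state, and the round-robin loop below plays the role of the system alternating between them. This is only an analogy, not how a kernel implements context switching.

```python
def process(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"   # suspend here; the frame (the "state") is saved

ready_queue = [process("A", 3), process("B", 3)]
trace = []
while ready_queue:
    proc = ready_queue.pop(0)        # pick the next process
    try:
        trace.append(next(proc))     # restore its state and run one step
        ready_queue.append(proc)     # context switch: requeue it, take another
    except StopIteration:
        pass                         # the process terminated

print(trace)  # → ['A:0', 'B:0', 'A:1', 'B:1', 'A:2', 'B:2']
```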

4. Process Control Block (PCB)

The Process Control Block contains information associated with each process:

  • Process state
  • Program counter
  • CPU registers
  • CPU scheduling information
  • Memory-management information
  • Accounting information
  • I/O status information (process environment)
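The PCB fields above can be sketched as a plain record. The field names and types here are illustrative and not tied to any real kernel's layout:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int
    state: str = "new"               # process state
    program_counter: int = 0         # address of the next instruction
    registers: dict = field(default_factory=dict)    # saved CPU registers
    priority: int = 0                # CPU-scheduling information
    page_table: dict = field(default_factory=dict)   # memory-management information
    cpu_time_used: float = 0.0       # accounting information
    open_files: list = field(default_factory=list)   # I/O status information

pcb = PCB(pid=42)
pcb.state = "ready"
print(pcb.pid, pcb.state)
```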

5. Single and Multithreaded Processes

Benefits of Multithreading:

  • Responsiveness
  • Easy Resource Sharing
  • Economy
  • Utilization of Multiprocessor Architectures

6. Threads

A thread is the basic unit of CPU scheduling. One process can have multiple threads, all sharing the process's address space and resources.

Advantages:

  • Creating a thread is faster than creating a process.
  • Context switching is faster for threads.
  • Better design of parallel applications with threads (more transparent design).

Example: File server in a LAN.
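The file-server example can be sketched as thread-per-request handling: each request runs in its own thread, so a slow request does not hold up the others. Here `read_file` is a stand-in for real disk or network I/O, and the client names are made up:

```python
import threading

responses = {}

def read_file(name: str) -> str:
    return f"contents of {name}"     # placeholder for actual file I/O

def handle(client: str, filename: str) -> None:
    # One thread serves one client request.
    responses[client] = read_file(filename)

clients = [("c1", "a.txt"), ("c2", "b.txt"), ("c3", "c.txt")]
threads = [threading.Thread(target=handle, args=c) for c in clients]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(responses)
```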

Many-to-one model: Many user-level threads are mapped onto a single kernel thread. Used by older operating systems whose kernels lack thread support; if one thread makes a blocking system call, the entire process blocks.

One-to-one model: Each user thread is mapped to its own kernel thread, so the kernel can schedule threads independently and a blocking system call blocks only the thread that made it.

7. Scheduler

Long-term scheduler: Selects which processes should be brought into the ready queue. It is invoked very infrequently and controls the degree of multiprogramming.

Medium-term scheduler: Selects which processes to swap out to free memory, or to swap back in when memory becomes available. It partially belongs to the memory manager.

Short-term scheduler (or CPU scheduler): Selects which process should be executed next and allocates the CPU. It is invoked very frequently and must be fast.

CPU scheduling decisions may take place when a process:

  1. Switches from running to waiting state.
  2. Switches from running to ready state.
  3. Switches from waiting to ready state.
  4. Terminates.

Scheduling under 1 and 4 is non-preemptive, while 2 and 3 are preemptive.

8. Dispatcher

The dispatcher module gives control of the CPU to the process selected by the short-term scheduler. Dispatch latency is the time it takes for the dispatcher to stop one process and start another running.

9. Scheduling Criteria and Optimization

  • CPU utilization: Keep the CPU as busy as possible (maximize it).
  • Throughput: The number of processes that complete their execution per time unit (maximize).
  • Turnaround time: The amount of time to execute a particular process (minimize).
  • Waiting time: The amount of time a process has been waiting in the ready queue (minimize).
  • Response time: The amount of time it takes from when a request was submitted until the first response is produced, not the final output (for time-sharing and interactive environments) (minimize).
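Two of these criteria follow directly from a completed schedule: turnaround time is completion time minus arrival time, and waiting time is turnaround time minus the CPU burst. A small sketch with made-up numbers:

```python
jobs = {
    # name: (arrival, burst, completion) -- illustrative values
    "P1": (0, 5, 5),
    "P2": (1, 3, 8),
    "P3": (2, 1, 9),
}

metrics = {}
for name, (arrival, burst, completion) in jobs.items():
    turnaround = completion - arrival       # total time in the system
    waiting = turnaround - burst            # time spent in the ready queue
    metrics[name] = (turnaround, waiting)

print(metrics)  # → {'P1': (5, 0), 'P2': (7, 4), 'P3': (7, 6)}
```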

10. First-Come, First-Served (FCFS) Scheduling

This is the simplest non-preemptive scheduling algorithm: the CPU is allocated to processes in the order they arrive. Its main drawback is the convoy effect, where short processes queued behind a long one inflate the average waiting time.
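A minimal FCFS sketch, assuming all jobs arrive at time 0 (burst lengths are illustrative): running one long job before two short ones inflates the average waiting time, the classic convoy effect.

```python
def fcfs_waiting_times(bursts):
    # Jobs run in the given (arrival) order; each waits for all earlier jobs.
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)       # time spent queued before starting
        clock += burst
    return waits

long_first = fcfs_waiting_times([24, 3, 3])    # waits: [0, 24, 27]
short_first = fcfs_waiting_times([3, 3, 24])   # waits: [0, 3, 6]
print(sum(long_first) / 3, sum(short_first) / 3)  # → 17.0 3.0
```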

11. Shortest Job First (SJF) Scheduling

Associate with each process the length of its next CPU burst. Use these lengths to schedule the process with the shortest time.

Two schemes:

  • Non-preemptive: Once the CPU is given to the process, it cannot be preempted until it completes its CPU burst.
  • Preemptive: If a new process arrives with a CPU-burst length shorter than the remaining time of the currently executing process, the new process preempts it. This scheme is known as Shortest-Remaining-Time (SRT).

SJF is optimal: it gives the minimum average waiting time for a given set of processes.
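A sketch of the non-preemptive scheme: among the jobs that have already arrived, always pick the one with the shortest next CPU burst and run it to completion. The job set `(arrival, burst, name)` below is made up for illustration.

```python
import heapq

def sjf_order(jobs):
    # jobs: list of (arrival, burst, name); returns execution order.
    jobs = sorted(jobs)                       # by arrival time
    clock, i, ready, order = 0, 0, [], []
    while i < len(jobs) or ready:
        # Admit everything that has arrived by the current time.
        while i < len(jobs) and jobs[i][0] <= clock:
            arrival, burst, name = jobs[i]
            heapq.heappush(ready, (burst, arrival, name))
            i += 1
        if not ready:                         # CPU idle until the next arrival
            clock = jobs[i][0]
            continue
        burst, _, name = heapq.heappop(ready) # shortest burst among the ready jobs
        order.append(name)
        clock += burst                        # runs to completion (non-preemptive)
    return order

print(sjf_order([(0, 7, "P1"), (2, 4, "P2"), (4, 1, "P3"), (5, 4, "P4")]))
# → ['P1', 'P3', 'P2', 'P4']
```

Note that P1 runs first even though shorter jobs arrive later: under the non-preemptive scheme it cannot be interrupted once started.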
