Operating System Memory Management and Deadlock Prevention
Operating System Memory Management Fundamentals
The Operating System (OS) is responsible for crucial memory decisions: determining which programs reside in memory, where they are placed, how memory is protected, and what actions to take when memory resources are exhausted.
Parkinson's Law Applied to Computing
Parkinson’s Law states that programs expand to fill the memory available to hold them.
Models for Organizing Memory
Three primary models exist for structuring memory:
- Model A (OS in RAM at the bottom, user program on top):
  - Pros: Fast execution.
  - Cons: No protection (e.g., used in MS-DOS).
- Model B (OS in ROM at the top, user program below):
  - Pros: The OS is protected.
  - Cons: Slow and inflexible.
- Model C (device drivers in ROM at the top, user program in the middle, OS in RAM at the bottom):
  - Pros: Fast and secure.
  - Cons: Complex to implement (e.g., used in Windows 11).
 
Memory Protection and Allocation
- Base Register: Defines the starting physical address in memory where a process is loaded.
- Limit Register: Specifies the maximum amount of memory a process can access. Together, these two registers ensure that no process "wanders" into another process's memory space (see the sketch below).
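A minimal sketch of that check, using made-up register values (real hardware performs the comparison on every memory reference; the function below only illustrates the logic):

```python
BASE = 0x4000    # hypothetical start of this process's partition in RAM
LIMIT = 0x1000   # hypothetical partition size (4 KiB)

def translate(logical_addr: int) -> int:
    """Relocate a process-relative address and enforce protection."""
    if not 0 <= logical_addr < LIMIT:
        raise MemoryError(f"address {logical_addr:#x} is outside the limit {LIMIT:#x}")
    return BASE + logical_addr           # physical address = base + offset

print(hex(translate(0x0FF0)))            # within bounds -> 0x4ff0
# translate(0x2000) would raise MemoryError: the process tried to leave its region
```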
 
Swapping and Fragmentation
Swapping is the process of moving programs or processes in and out of main memory (RAM). If RAM is full, the OS swaps out a program to secondary storage (disk/hard drive), loads a new program, and later swaps the old program back in.
Fragmentation occurs when there is free memory available, but it is not contiguous (not in one large chunk). This results in "holes" in memory, preventing new programs from fitting even if the total free space is sufficient.
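A toy illustration with made-up hole sizes: the free space adds up to more than the request, yet no single contiguous hole can hold it:

```python
holes = [30, 20, 40, 10]   # free holes between allocated regions (sizes in KB, hypothetical)
request = 60               # size of the program we want to load (KB)

print(sum(holes) >= request)              # True: 100 KB is free in total
print(any(h >= request for h in holes))   # False: no single hole is large enough
```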
Tracking Free Memory
- Bitmaps: Divide memory into fixed-size allocation units and use one bit per unit to mark it as free or used. Easy to manage, but allocation can be slow for very large memories because the OS must search the bitmap for a run of enough consecutive free bits (see the sketch below).
- Linked List: Keeps track of both allocated segments and free "holes" in memory. Generally faster than using bitmaps.
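A rough sketch of bitmap-based tracking, assuming 4 KB allocation units and a small made-up memory; the slow part is the linear scan for a long enough run of free bits:

```python
UNIT = 4096          # assumed allocation unit size (4 KB)
bitmap = [0] * 32    # one bit per unit: 0 = free, 1 = used (128 KB of memory total)

def allocate(num_bytes: int) -> int:
    """Return the index of the first run of free units big enough, or -1 if none exists."""
    needed = -(-num_bytes // UNIT)               # ceiling division: units required
    run = 0
    for i, bit in enumerate(bitmap):
        run = run + 1 if bit == 0 else 0
        if run == needed:                        # found a large-enough run of free units
            start = i - needed + 1
            bitmap[start:i + 1] = [1] * needed   # mark those units as used
            return start
    return -1

print(allocate(10_000))   # needs 3 units -> placed at unit 0
print(allocate(4_096))    # needs 1 unit  -> placed at unit 3
```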
 
Memory Allocation Algorithms (Hole Fitting)
These algorithms determine which free hole in memory should be used for a new process:
- First Fit: Picks the first hole encountered that is large enough to fit the process.
 - Best Fit: Picks the smallest hole that is large enough, minimizing wasted space.
 - Worst Fit: Picks the largest hole available, maximizing the remaining free space (often leading to larger fragmentation).
 - Next Fit: Similar to First Fit, but continues the search from the position where the last allocation occurred.
 - Quick Fit: Maintains separate lists for popular memory sizes, allowing for very fast allocation.
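A compact sketch of First Fit, Best Fit, and Worst Fit over the same made-up hole list (Next Fit and Quick Fit are left out for brevity); each function only returns the index of the hole it would choose:

```python
holes = [25, 10, 60, 15, 40]   # sizes of the free holes in KB (hypothetical)

def first_fit(holes, size):
    return next((i for i, h in enumerate(holes) if h >= size), None)

def best_fit(holes, size):
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(fits)[1] if fits else None    # smallest hole that still fits

def worst_fit(holes, size):
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(fits)[1] if fits else None    # largest hole available

request = 12
print(first_fit(holes, request))   # 0 -> the 25 KB hole (first one big enough)
print(best_fit(holes, request))    # 3 -> the 15 KB hole (tightest fit)
print(worst_fit(holes, request))   # 2 -> the 60 KB hole (most space left over)
```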
 
Virtual Memory and Address Translation
Virtual Memory allows a program to execute even when only part of it is loaded into RAM. The program is conceptually split into pages (fixed-size virtual units), which are mapped to frames (physical units of the same size in RAM). The remaining pages stay on disk, and the OS swaps pages in and out as needed.
Address Translation Components
- Page Table: Used by the OS to map a Virtual Address (the logical address used by the program) to a Physical Address (the actual location in RAM).
- MMU (Memory Management Unit): A dedicated part of the CPU responsible for translating virtual addresses into physical addresses using the page table.
- TLB (Translation Lookaside Buffer): A high-speed cache that stores recent address translations.
  - Without the TLB, every memory access requires a slow lookup in the page table.
  - With the TLB, lookups are significantly faster, improving performance.
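A simplified sketch of the translation path, assuming 4 KB pages and a tiny made-up page table; a plain dictionary stands in for the TLB so repeated accesses to the same page skip the page-table walk:

```python
PAGE_SIZE = 4096                 # assumed page size (4 KB)
page_table = {0: 5, 1: 2, 2: 7}  # virtual page number -> physical frame number (made up)
tlb = {}                         # cache of recently used translations

def translate(virtual_addr: int) -> int:
    page, offset = divmod(virtual_addr, PAGE_SIZE)
    if page in tlb:                    # TLB hit: fast path
        frame = tlb[page]
    else:                              # TLB miss: walk the page table, then cache the result
        frame = page_table[page]       # a missing entry here would mean a page fault
        tlb[page] = frame
    return frame * PAGE_SIZE + offset

print(hex(translate(0x1234)))   # page 1, offset 0x234 -> frame 2 -> 0x2234
print(hex(translate(0x1500)))   # same page again: this time the TLB answers
```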
 
 
Page Replacement Algorithms
When a new page needs to be loaded but RAM is full, the OS must decide which existing page to remove:
- FIFO (First-In, First-Out): Removes the page that has been in memory the longest (the oldest page).
 - Second Chance: An improvement on FIFO that gives a page a "second chance" before removal if it has been recently used.
 - Clock Algorithm: Similar to Second Chance, but uses a circular list structure for efficiency.
 - LRU (Least Recently Used): Removes the page that has not been accessed for the longest period of time.
 - Optimal Algorithm: Removes the page that will not be used for the longest time in the future (theoretically ideal but impossible to implement in practice).
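A small simulation of FIFO and LRU on the same made-up reference string with room for three frames; counting page faults makes the policies easy to compare:

```python
from collections import OrderedDict, deque

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]   # hypothetical page reference string
FRAMES = 3

def fifo(refs):
    memory, queue, faults = set(), deque(), 0
    for p in refs:
        if p not in memory:
            faults += 1
            if len(memory) == FRAMES:
                memory.discard(queue.popleft())   # evict the page loaded longest ago
            memory.add(p)
            queue.append(p)
    return faults

def lru(refs):
    memory, faults = OrderedDict(), 0             # insertion order doubles as recency order
    for p in refs:
        if p in memory:
            memory.move_to_end(p)                 # mark the page as most recently used
        else:
            faults += 1
            if len(memory) == FRAMES:
                memory.popitem(last=False)        # evict the least recently used page
            memory[p] = True
    return faults

print("FIFO faults:", fifo(refs))   # 9
print("LRU faults: ", lru(refs))    # 10 -- LRU is usually good, but not always better
```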
 
Deadlocks: Causes and Conditions
A Deadlock occurs when a set of processes is blocked because each process in the set is waiting for a resource that is currently held by another process in the same set.
Resource Types
- Preemptable Resources: Resources that can be taken away from a process without causing failure (e.g., RAM).
 - Nonpreemptable Resources: Resources that cannot be taken away without causing issues (e.g., stopping a printer mid-print job).
 
The Four Necessary Conditions for Deadlock
All four of these conditions must be true simultaneously for a deadlock to occur:
- Mutual Exclusion: Only one process at a time can use a specific resource.
 - Hold and Wait: A process holds at least one resource while waiting to acquire additional resources held by other processes.
 - No Preemption: A resource cannot be forcibly taken away from a process; it must be voluntarily released by the holding process.
 - Circular Wait: A closed chain of processes exists, where each process is waiting for a resource held by the next process in the chain.
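A toy demonstration of the four conditions using two Python threads and two locks as stand-in resources; a timeout replaces the blocking acquire so the script reports the circular wait instead of actually hanging:

```python
import threading
import time

lock_a, lock_b = threading.Lock(), threading.Lock()

def worker(first, second, name):
    with first:                          # mutual exclusion + hold and wait
        time.sleep(0.1)                  # give the other thread time to grab its first lock
        if second.acquire(timeout=1):    # a plain acquire() here would block forever
            print(f"{name}: acquired both resources")
            second.release()
        else:
            print(f"{name}: circular wait detected, giving up")

# Thread 1 holds A and wants B; Thread 2 holds B and wants A.
t1 = threading.Thread(target=worker, args=(lock_a, lock_b, "T1"))
t2 = threading.Thread(target=worker, args=(lock_b, lock_a, "T2"))
t1.start(); t2.start()
t1.join(); t2.join()
```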
 
Resource Allocation Graph Notation
In resource allocation graphs:
- An arrow drawn from a process to a resource signifies a request for that resource.
 - An arrow drawn from a resource to a process signifies ownership (the resource is currently allocated to that process).
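Circular wait shows up as a cycle in this graph. A small sketch with a made-up graph stored as adjacency lists, where a process-to-resource edge is a request and a resource-to-process edge is an assignment:

```python
# Hypothetical situation: P1 holds R1 and requests R2; P2 holds R2 and requests R1.
graph = {
    "P1": ["R2"],   # request edge: P1 -> R2
    "R2": ["P2"],   # assignment edge: R2 is held by P2
    "P2": ["R1"],   # request edge: P2 -> R1
    "R1": ["P1"],   # assignment edge: R1 is held by P1
}

def has_cycle(graph) -> bool:
    """Depth-first search; revisiting a node that is still on the current path means a cycle."""
    visited, on_path = set(), set()

    def dfs(node):
        visited.add(node)
        on_path.add(node)
        for nxt in graph.get(node, []):
            if nxt in on_path or (nxt not in visited and dfs(nxt)):
                return True
        on_path.discard(node)
        return False

    return any(dfs(n) for n in graph if n not in visited)

print(has_cycle(graph))   # True: P1 -> R2 -> P2 -> R1 -> P1 is a circular wait (deadlock)
```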
 