Operating System Concepts: Memory, Deadlocks, and I/O
Memory Models in Operating Systems
- Model A (MS-DOS):
- Structure: OS in RAM at the bottom, user program in the large space above it.
- Performance: Fast execution, but a long boot time (the OS must be loaded from disk).
- Protection: None; a user program can overwrite the OS.
- Model B:
- Structure: OS in ROM at the top, user program in the space below.
- Performance: The OS is protected, but slow (the entire OS must be read from ROM).
- Flexibility: Not flexible; the OS in ROM cannot be updated easily.
- Model C (Windows 11):
- Structure: Device drivers in ROM at the top, large user space in the middle, OS in RAM at the bottom.
- Performance: Fast and secure (key drivers are stored in ROM).
Memory Protection and Management
Core Memory Management Issues
- Relocation Problem: Without memory abstraction, programs are compiled against absolute addresses, so loading multiple programs at once makes them reference the wrong memory locations.
- Base and Limit Registers:
- Base: Added to every address the program issues, relocating it to the program's actual start in RAM.
- Limit: Checked on every access to prevent the program from exceeding its allocated memory space.
- Swapping: Bringing each program fully into RAM to run, then writing it back to disk to free the space for other programs.
- Fragmentation: Occurs when free memory is split into pieces too small to use, so there is not enough contiguous space and memory is wasted.
- Room for Growth: Preemptively allocating extra space above or below a program's current memory block.
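The base-and-limit scheme above can be sketched in a few lines (a minimal sketch; the register values are invented for illustration):

```python
def physical_address(virtual_addr, base, limit):
    """Relocate an address via the base register, trapping if it exceeds the limit."""
    if virtual_addr >= limit:
        raise MemoryError("protection fault: address beyond the limit register")
    return base + virtual_addr  # the base register is added to every address

# Hypothetical program loaded at address 16384 with 4096 bytes allocated
print(physical_address(100, base=16384, limit=4096))  # → 16484
```

Every memory reference pays the cost of one addition and one comparison, which is why both registers live in hardware.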
Memory Management Allocation Algorithms
These algorithms determine where a process is placed in memory:
- First Fit: Uses the first available hole that is large enough.
- Benefit: Fast.
- Drawback: May waste space.
- Next Fit: Starts the search immediately after the location of the last placement.
- Benefit: Better distribution of memory usage.
- Best Fit: Finds the closest size match for the required space.
- Benefit: Maximizes space utilization.
- Drawback: Slower search time.
- Worst Fit: Chooses the biggest available hole, so the leftover piece stays usable. (Simulations show it performs poorly; mainly of interest for comparison and benchmarking.)
- Quick Fit: Keeps separate lists of holes of common sizes, so a hole of the requested size can be found almost instantly.
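The trade-off between the fits can be seen in a small sketch over a free-hole list (the hole layout is invented for illustration):

```python
def first_fit(holes, size):
    """holes: list of (start, length) free blocks. Return the start of the first hole that fits."""
    for start, length in holes:
        if length >= size:
            return start
    return None  # no hole is large enough

def best_fit(holes, size):
    """Return the start of the smallest hole that still fits (slower: scans every hole)."""
    candidates = [(length, start) for start, length in holes if length >= size]
    return min(candidates)[1] if candidates else None

holes = [(0, 5), (10, 16), (30, 4), (40, 8)]
print(first_fit(holes, 6))  # → 10 (first hole big enough, leaves a 10-unit remainder)
print(best_fit(holes, 6))   # → 40 (tightest match, leaves only 2 units free)
```

First fit stops at the first match; best fit must examine every hole, which is why it trades speed for utilization.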
Virtual Memory Concepts
Virtual Memory Fundamentals
- Concept: Older systems split programs into overlays that were swapped between disk and memory by hand; virtual memory automates this by dividing the address space into pages, so a program's memory is not restricted to contiguous blocks.
Benefits of Virtual Memory
- Can fit blocks of different sizes.
- The program uses memory as if it were contiguous.
- Not limited by the physical RAM size.
Drawbacks of Virtual Memory
- Requires many instructions for management.
- Cannot guarantee consistent access speed (due to potential disk access).
Paging System Architecture
- MMU (Memory Management Unit): Hardware that translates the virtual addresses issued by the CPU into physical RAM addresses.
- Page: The fundamental memory reference unit.
- Page Size: The amount of memory allocated at once (e.g., 4 KB, 8 KB).
- Page Table: Maps virtual page numbers to physical page frames. Each entry contains:
- Status bits (present/absent, referenced, modified, caching disabled).
- Protection flags.
- The page frame number.
- TLB (Translation Lookaside Buffer): A specialized cache inside the MMU that speeds up paging by storing recent virtual-to-physical address translations.
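The MMU's job can be sketched as a page-table lookup (4 KB pages as in the example sizes above; the table contents are hypothetical):

```python
PAGE_SIZE = 4096  # 4 KB pages

# Hypothetical page table: virtual page number -> physical frame number
page_table = {0: 5, 1: 2, 2: 7}

def translate(virtual_addr):
    """Split the address into page number and offset, then map the page to its frame."""
    vpn, offset = divmod(virtual_addr, PAGE_SIZE)
    if vpn not in page_table:
        raise LookupError(f"page fault: virtual page {vpn} is not in memory")
    return page_table[vpn] * PAGE_SIZE + offset

print(translate(4100))  # page 1, offset 4 -> frame 2 -> 8196
```

A real MMU consults the TLB first and walks the page table in memory only on a TLB miss.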
Page Replacement Algorithms
Algorithms used to decide which page to remove from memory when a new page needs to be loaded:
- Optimal: Evicts the page that will not be needed for the longest time; requires perfect knowledge of the future, so it serves only as a theoretical benchmark.
- NRU (Not Recently Used): Uses the Referenced and Modified bits to pick a victim, preferring a page that is neither recently used nor modified.
- FIFO (First-In, First-Out): Evicts the oldest page loaded into memory, regardless of how heavily it is still used.
- Second-Chance: A variation of FIFO that skips a page on the first attempt if it has been recently used.
- Clock: Keeps the pages in a circular list with a "hand"; the hand skips recently used pages (clearing their Referenced bit) and evicts the first unused page it reaches.
- LRU (Least Recently Used): Removes the page that has been unused for the longest period of time.
- Working Set: Replaces pages that are outside the current working set (the set of pages actively being used by a process).
- WSClock: Combines the Clock algorithm and the Working Set concept.
- Belady's Anomaly: The counter-intuitive phenomenon where increasing the number of available page frames can increase the number of page faults (observed with FIFO replacement).
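FIFO replacement and Belady's Anomaly can both be demonstrated with a short simulation (the reference string is Belady's classic example):

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement with the given number of frames."""
    memory = deque()  # oldest page sits at the left end
    faults = 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.popleft()  # evict the oldest resident page
            memory.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]  # Belady's classic reference string
print(fifo_faults(refs, 3))  # → 9
print(fifo_faults(refs, 4))  # → 10 (more frames, *more* faults: the anomaly)
```

LRU and the other stack algorithms never show this anomaly; it is specific to FIFO-style policies.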
Segmentation
Segmentation Concepts and Benefits
- Concept: Breaks programs into logical address spaces, similar to rooms with different functions.
- Benefits:
- Easier location and protection of data.
- Segments can exceed physical memory size.
- Allows different protection levels for procedures and data.
- Dynamic table sizes are easier to handle.
- Facilitates sharing procedures between processes.
Deadlocks and Resource Management
Resource Types
- Preemptable: Resources that can be taken away and given back without disrupting the computation (e.g., RAM).
- Non-preemptable: Resources where modifying or taking them away causes problems (e.g., Printer, Keyboard).
Resource Use Process
- Request: Ask for the resource (acquire its semaphore lock).
- Use: Do the work with the resource.
- Release: Give the resource back (release the semaphore lock).
Deadlock Conditions (All Four Required)
A deadlock occurs only if all four of these conditions are met simultaneously:
- Mutual Exclusion: The resource is locked for exclusive use by one process.
- Hold and Wait: A process keeps resources it already holds while waiting for additional resources.
- No Preemption: Resources cannot be forcibly taken away from the process holding them.
- Circular Wait: A closed chain of processes exists, where each process is waiting for a resource held by the next process in the chain.
Deadlock Handling Strategies
- Ostrich Solution: Ignoring the problem (the most common approach in practice for non-critical systems).
- Detection and Recovery: Allowing deadlocks to occur, then detecting and fixing them afterward.
- Dynamic Avoidance: Careful resource allocation based on future needs.
- Prevention: Eliminating one of the four required deadlock conditions.
Deadlock Detection Methods
- Resource Graph: Uses arrows to show ownership and requests. Cycles in the graph indicate a deadlock.
- Resource Allocation Matrix: Tracks existing, allocated, and still-requested resources against what is currently available. A system is in a safe state if it can satisfy all demands in some sequential order.
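Cycle detection on a wait-for graph can be sketched with a depth-first search (the graph shapes below are invented for illustration):

```python
def has_cycle(wait_for):
    """wait_for maps each process to the processes whose resources it waits on."""
    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / on current DFS path / finished
    color = {p: WHITE for p in wait_for}

    def dfs(p):
        color[p] = GRAY
        for q in wait_for.get(p, []):
            if color.get(q, WHITE) == GRAY:  # back edge: a cycle, hence deadlock
                return True
            if color.get(q, WHITE) == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and dfs(p) for p in list(wait_for))

print(has_cycle({"A": ["B"], "B": ["C"], "C": ["A"]}))  # → True (A→B→C→A)
print(has_cycle({"A": ["B"], "C": ["B"], "B": []}))     # → False (no cycle)
```

With single-instance resources a cycle is both necessary and sufficient for deadlock; with multiple instances the matrix-based algorithm is needed.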
Deadlock Recovery Methods
- Preemption: Taking a resource away from a process. Risk: May corrupt data.
- Rollback: Restoring the system to a previous snapshot. Drawback: Does not fix the root cause.
- Process Killing: Terminating one or more processes involved in the deadlock. Drawback: Leads to loss of work or data.
Deadlock Avoidance
- Banker's Algorithm: Calculates safe allocation paths to ensure the system remains in a safe state.
- Principle: Like a banker who extends more credit than cash on hand, the system commits more resources than it physically has, assuming not all processes will need their maximum simultaneously, and grants a request only if a safe completion order still exists afterward.
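The safety check at the heart of the Banker's Algorithm can be sketched as follows (the matrix values are a textbook-style example, not from the source):

```python
def is_safe(available, allocation, need):
    """Return (safe?, completion order): repeatedly run any process whose
    remaining need fits in the free pool, then reclaim its allocation."""
    work = available[:]
    finished = [False] * len(allocation)
    order = []
    progress = True
    while progress:
        progress = False
        for i in range(len(allocation)):
            if not finished[i] and all(n <= w for n, w in zip(need[i], work)):
                # Process i can run to completion; reclaim everything it holds
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                order.append(i)
                progress = True
    return all(finished), order

available  = [3, 3, 2]                       # free units of 3 resource types
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe(available, allocation, need))  # → (True, [1, 3, 4, 0, 2])
```

A request is granted only if the state that would result still passes this check; otherwise the requesting process waits.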
Deadlock Prevention Strategies
Strategies focusing on eliminating the four necessary conditions:
- Mutual Exclusion: Allowing sharing (Note: Allowing sharing causes race conditions, so this is often impractical for non-sharable resources).
- Hold and Wait: Requiring processes to request all necessary resources at once, or only hold one resource at a time and use it immediately.
- No Preemption: Allowing preemption (Note: Allowing preemption can corrupt data for non-preemptable resources).
- Circular Wait: Imposing a global numerical ordering on resources, so processes must request them in increasing order and no cycle can form. Risk: May cause starvation.
Other Concurrency Issues
- Livelock: Processes are not blocked but make no progress, endlessly changing state in response to one another; because they appear active, the OS does not detect it as a true deadlock.
- Halting Problem: The theoretical impossibility of determining, for an arbitrary program, whether it will eventually stop or run forever.
Input/Output (I/O) Systems
I/O Device Types
- Block Devices (e.g., SSD): Store information in fixed-size blocks and transfer data in units of entire blocks.
- Character Devices (e.g., Mouse, Keyboard): Deliver or accept a stream of characters. They are generally not addressable and do not support seek operations.
I/O Addressing Schemes
- Separate I/O and Memory Space: Devices are reached through special I/O instructions (e.g., IN/OUT), so the system always knows what an address refers to, but it is less flexible.
- Memory-Mapped I/O: Allocates a portion of the RAM address space to the device registers.
- Benefit: Flexible.
- Drawback: More devices mean less program space available.
- Hybrid Solution: Uses some separation and some memory allocation, often keeping certain critical devices separate.
I/O Data Transfer Methods
- Bus: Acts as a freeway for generic data. The main bus connects all devices; specific buses exist between memory and the CPU.
- Direct I/O: The CPU is heavily involved in the transfer process, which wastes CPU time.
- Direct Memory Access (DMA): The CPU hands off the transfer task to a DMA controller, significantly reducing CPU time wasted on I/O.
DMA Process Steps
- CPU programs the DMA controller.
- DMA requests the data transfer.
- Data is moved directly between the device and memory.
- The controller sends an acknowledgment (Ack).
- The DMA controller interrupts the CPU when the transfer is finished.
Interrupts and CPU Execution
Interrupt Types
- Precise Interrupt:
- The Program Counter (PC) is saved in a known place.
- All instructions before the PC are fully executed.
- No instructions after the PC are executed.
- The execution state of the PC instruction is known.
- Imprecise Interrupt: One or more of the precise interrupt properties are not met (typically, instructions are not fully executed).
Instruction Execution Order
- In-order Instructions: Run in program order; the CPU waits if an instruction cannot run yet. Naturally produces precise interrupts.
- Out-of-order Instructions: Run whenever their operands are ready, not necessarily in program order. Tends to produce imprecise interrupts.
- Speculative Execution: The CPU predicts the next instruction to run (e.g., branch prediction) to maximize throughput.
Security Vulnerabilities (Meltdown & Spectre)
- Meltdown: A vulnerability that breaks the process isolation barrier, allowing unauthorized reading of kernel memory and other address spaces.
- Spectre: Exploits speculative execution by tricking the branch predictor into speculatively running code whose side effects leak data through the cache.
- Mitigation: Restricting speculative and out-of-order execution or isolating kernel page tables, at a performance cost (Meltdown primarily affected Intel CPUs).
I/O Software Goals and Implementation
I/O Software Design Goals
- Device Independence: Programs should be able to access any device without specifying it in advance (e.g., reading a file from SSD, USB stick, or network share with the same code).
- Uniform Naming: Using the same addressing conventions for all devices.
- USB (Universal Serial Bus): A generic standard designed to connect all types of devices.
- Error Handling: Errors should be handled as close to the hardware as possible (by the controller or driver), so upper layers rarely see them.
- Synchronous vs. Asynchronous: Synchronous I/O requires the CPU to wait for completion; asynchronous I/O allows the CPU to continue processing while the I/O operation runs.
- Buffering: Temporary data storage used to meet device constraints and balance data transfer rates.
I/O Implementation Methods
- Programmed I/O (PIO): The CPU wastes time performing I/O operations directly.
- Interrupt-Driven I/O: The controller interrupts the CPU only when the operation is finished. The CPU is involved only at the beginning and end.
- I/O Using DMA: The CPU hands off the task entirely to the DMA unit, minimizing CPU involvement.
I/O Software Layers (From Top to Bottom)
- User-level I/O Software: The actual application (e.g., Discord).
- Device-independent OS Software: Functions common to all devices (naming, buffering, error reporting) that let the system use any device generically.
- Device Drivers: The interface between the device and the OS, typically written by the hardware designer.
- Device-independent Drivers: Generic class drivers that cover whole categories of devices (e.g., one driver for all standard USB keyboards).
- Interrupt Handlers: Software routines that handle interruption signals.
- Hardware: The physical components.
File Systems and Storage Management
File System Types and Compatibility
- Different Types: Apple (HFS+ or APFS), Windows (NTFS), Linux (ext4, Btrfs), FAT32.
- Compatibility: FAT32 works across multiple operating systems, but it does not store file permissions and cannot hold large files (over 4 GB).
File System Components
- File Table: Records where files live on disk; fast access is important, since millions of small files can slow lookups significantly.
- Paging File: A file on the regular file system used as backing store for virtual memory (generally slower than a dedicated partition).
- Swap Space: A separate disk partition dedicated to virtual memory usage.
- File Sizes:
- FAT32 (32-bit size field): Maximum file size is 4 GB.
- 64-bit addressing: Supports file sizes up to 16 Exabytes (EB).
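Both limits follow directly from the width of the size field (a quick arithmetic check):

```python
GIB = 2**30  # one gibibyte
EIB = 2**60  # one exbibyte

print(2**32 // GIB)  # → 4   (a 32-bit size field caps files at 4 GB, as in FAT32)
print(2**64 // EIB)  # → 16  (a 64-bit size field allows files up to 16 EB)
```

In other words, an n-bit size field can count at most 2^n bytes, and the marketing units (GB, EB) are just that count divided by the appropriate power of two.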
File System Structures
- NTFS (Windows): All information is stored in a single Master File Table (MFT). Windows sections off a portion of the hard drive for this.
- Linux/ext4: Uses "inodes" instead of a single table; each inode stores a file's metadata (owner, permissions, timestamps, block pointers) in a dedicated section of the disk.
Journaling File Systems
- Non-Journaling (e.g., EXT2): No transaction tracking. High risk of data loss upon improper shutdown.
- Journaling (e.g., EXT3): Tracks transaction progress, safeguarding against corruption. It knows the transaction state but uses overhead space.
- EXT4: Solves many startup and performance issues found in earlier EXT versions.
UNIX File Permissions
- Principle: Everything is treated as a file and thus has permissions assigned, giving one consistent protection model across devices, pipes, and regular files.
- Octal Numbers: Permissions are represented by octal values:
- Execute (x) = 1
- Write (w) = 2
- Read (r) = 4
- Three Classes: Permissions are defined for three classes:
- User/Owner
- Group
- Other
- Examples:
- 700 (rwx------): Read, Write, Execute for Owner only.
- 664 (rw-rw-r--): Read and Write for Owner and Group; Read only for Others.
- 511 (r-x--x--x): Read and Execute for Owner; Execute only for Group and Others.
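The octal examples can be checked mechanically (a small helper written for this note, not part of any standard library):

```python
def mode_to_string(mode):
    """Render a 3-digit octal mode (written as e.g. 700) in rwxrwxrwx notation."""
    bits = [("r", 4), ("w", 2), ("x", 1)]  # read, write, execute flag values
    out = []
    for digit in (int(d) for d in f"{mode:03d}"):  # owner, group, other digits
        for ch, val in bits:
            out.append(ch if digit & val else "-")
    return "".join(out)

print(mode_to_string(700))  # → rwx------
print(mode_to_string(664))  # → rw-rw-r--
print(mode_to_string(511))  # → r-x--x--x
```

Each digit is just the sum of the flags it grants (e.g., 6 = 4 + 2 = read + write), one digit per class in owner, group, other order.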
Trusted Platform Module (TPM)
- Function: Provides kernel verification signatures, checking if the operating system kernel has been tampered with before booting.