Operating System Fundamentals: Concurrency, Memory, and Sockets
Concurrency Issues in Process Execution
Understanding Concurrency Problems
Concurrency issues arise when multiple processes or threads execute simultaneously and access shared resources. These issues can lead to incorrect results or system instability if not managed properly.
Mutual Exclusion
Mutual exclusion ensures that when one process enters its critical section (the part of the code that accesses shared resources), it must complete its execution within that section before any other process can enter its own critical section for the same shared resource. This mechanism is crucial for preventing data corruption and maintaining data integrity.
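As an illustration, here is a minimal sketch in C that uses a POSIX mutex to enforce mutual exclusion around a shared counter; the names (counter, lock, worker) and the loop count are chosen for this example only, and error checking is omitted for brevity (compile with gcc -pthread).

#include <pthread.h>
#include <stdio.h>

static long counter = 0;                        /* shared resource */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);              /* enter the critical section */
        counter++;                              /* only one thread at a time runs this */
        pthread_mutex_unlock(&lock);            /* leave the critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);         /* always 200000 with the mutex held */
    return 0;
}

Without the lock/unlock pair, the two increments could interleave and the final count would often be lower than expected.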
Deadlock
Deadlock occurs when two or more processes each hold a resource while waiting for a resource held by another process in the group. Consequently, every process in the cycle waits indefinitely for a release that never comes, leading to a complete standstill in which none of them can proceed. Deadlock arises only when four conditions hold simultaneously: mutual exclusion, hold and wait, no preemption, and circular wait.
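The classic way this happens in code is two threads taking the same two locks in opposite order. The C sketch below (with illustrative names lock_a and lock_b) intentionally hangs: each thread holds one mutex and waits forever for the other, so all four conditions are satisfied at once.

#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

static void *thread_one(void *arg) {
    pthread_mutex_lock(&lock_a);   /* holds A ... */
    sleep(1);                      /* give the other thread time to take B */
    pthread_mutex_lock(&lock_b);   /* ... and waits for B: circular wait */
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

static void *thread_two(void *arg) {
    pthread_mutex_lock(&lock_b);   /* holds B ... */
    sleep(1);
    pthread_mutex_lock(&lock_a);   /* ... and waits for A: neither can proceed */
    pthread_mutex_unlock(&lock_a);
    pthread_mutex_unlock(&lock_b);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, thread_one, NULL);
    pthread_create(&t2, NULL, thread_two, NULL);
    pthread_join(t1, NULL);        /* never returns: the program hangs */
    pthread_join(t2, NULL);
    return 0;
}

One common remedy is to acquire locks in a single agreed-upon order, which breaks the circular-wait condition.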
Starvation
Starvation is a situation where a process is repeatedly denied access to a shared resource, even though the resource becomes available. For example, if three processes (P1, P2, P3) need access to a shared resource, and the scheduler continuously grants access to P1 and P2, P3 might never get a chance to use the resource, effectively being starved.
Threads and Multithreading
Introduction to Threads (Pthreads)
Threads are lightweight units of execution within a process. They share the same memory space and resources of their parent process but have their own program counter, stack, and set of registers. This allows for concurrent execution within a single program, improving responsiveness and resource utilization.
Common Pthread functions for thread management include:
pthread_t thread;: Declares a variable that identifies a thread.
pthread_create(&thread, NULL, function, NULL);: Creates a new thread that executes the specified function, which must have the signature void *function(void *arg).
pthread_join(thread, NULL);: Waits for the specified thread to terminate before the calling thread continues its execution.
pthread_exit(NULL);: Terminates the calling thread.
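Putting these calls together, a minimal runnable sketch might look like the following; the function name print_message and the messages it prints are illustrative only (compile with gcc -pthread).

#include <pthread.h>
#include <stdio.h>

/* Thread functions take and return void* so pthread_create can call them. */
static void *print_message(void *arg) {
    printf("Hello from the new thread\n");
    pthread_exit(NULL);                                 /* terminate the calling thread */
}

int main(void) {
    pthread_t thread;                                   /* thread identifier */
    pthread_create(&thread, NULL, print_message, NULL); /* start the new thread */
    pthread_join(thread, NULL);                         /* wait for it to terminate */
    printf("Thread has terminated; main continues\n");
    return 0;
}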
Virtual Memory and Buffer Management Schemes
What is Virtual Memory?
Virtual memory is a fundamental feature of modern operating systems that allows a program to use more memory than is physically available in RAM. Its primary task is to manage the movement of program pages between disk storage and RAM, translating the addresses a program uses into valid physical memory references for the CPU. This creates the illusion of a large, contiguous memory space for each process, simplifying programming and enhancing system efficiency.
Memory Partitioning Schemes
Memory management schemes determine how memory is allocated to processes. Two basic approaches are:
Fixed Partitioning
In fixed partitioning, RAM is divided into partitions of the same predetermined size. This scheme limits the number of processes that can run concurrently to the number of available partitions. A significant drawback of fixed partitioning is its susceptibility to internal fragmentation.
Dynamic Partitioning
Dynamic partitioning assigns memory space to processes as needed, with partitions varying in size according to the process's requirements. While more flexible than fixed partitioning, this scheme is prone to external fragmentation over time.
Memory Fragmentation Issues
Internal Fragmentation
Problem: Internal fragmentation occurs primarily in fixed partitioning schemes (and also in paging, though minimized). It happens when a process is allocated a block of memory that is larger than its actual requirement. The unused space within that allocated block cannot be used by any other process, leading to wasted memory.
Solution in Modern Operating Systems: The problem of internal fragmentation is largely addressed by paging. In paging, both physical memory (frames) and logical memory (pages) are divided into fixed-size, small blocks. This ensures that memory is allocated in precise, small units, significantly minimizing the amount of wasted space within allocated blocks.
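A rough illustration with hypothetical numbers: under paging with 4 KB pages, an 18 KB process receives 5 pages (20 KB) and wastes only 2 KB inside its last page, whereas placing the same process in a fixed 32 KB partition would waste 14 KB.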
External Fragmentation
Problem: External fragmentation occurs in dynamic partitioning schemes. As processes are loaded into and removed from memory, the free memory space becomes broken up into many small, non-contiguous blocks. Although the total amount of free memory might be sufficient to satisfy a new process's request, no single contiguous block is large enough, preventing the process from being loaded.
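A rough illustration with hypothetical numbers: if free memory consists of separate holes of 8 KB, 6 KB, and 10 KB, there are 24 KB free in total, yet a 16 KB process cannot be loaded because no single contiguous hole is large enough.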
Solution in Modern Operating Systems: Segmentation is a memory management scheme that helps solve external fragmentation. In segmentation, processes are divided into a set of logical units called segments, which do not need to have the same length. Each segment can be loaded into a different, non-contiguous block of physical memory, allowing for more efficient utilization of fragmented free space.
Sockets for Inter-Process Communication
Defining a Socket
A socket serves as an endpoint for communication between processes. Sockets are created by server programs and client applications when they need to establish communication with other processes, whether on the same machine or across a network. They act as a fundamental communication channel, enabling potentially unrelated processes to share data and interact.
Client-Server Socket Communication Flow
The basic structure for establishing communication between a client and a server using sockets involves a series of well-defined steps:
Client-Side Operations:
socket(): Creates a new socket endpoint.
connect(): Initiates a connection to the server's socket (opens the connection).
write(): Sends data (e.g., a request) to the server.
read(): Receives data (e.g., a response) from the server.
close(): Closes the socket connection.
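A minimal TCP client sketch in C that follows these steps is shown below; the loopback address 127.0.0.1, port 8080, and the request text are assumptions for illustration, and error checking is omitted for brevity.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);                /* socket(): new TCP endpoint */

    struct sockaddr_in server = {0};
    server.sin_family = AF_INET;
    server.sin_port = htons(8080);                           /* assumed server port */
    inet_pton(AF_INET, "127.0.0.1", &server.sin_addr);       /* assumed server address */

    connect(fd, (struct sockaddr *)&server, sizeof(server)); /* connect(): open the connection */

    const char *request = "hello";
    write(fd, request, strlen(request));                     /* write(): send the request */

    char reply[128];
    ssize_t n = read(fd, reply, sizeof(reply) - 1);          /* read(): receive the response */
    if (n > 0) {
        reply[n] = '\0';
        printf("server said: %s\n", reply);
    }

    close(fd);                                               /* close(): release the socket */
    return 0;
}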
Server-Side Operations:
socket(): Creates a new socket endpoint.
bind(): Associates the socket with a specific local IP address and port number.
listen(): Puts the socket into a passive listening mode, waiting for incoming client connection requests.
accept(): Accepts an incoming client connection, creating a new socket specifically for that client.
read(): Receives data (e.g., a request) from the connected client.
write(): Sends data (e.g., a response) back to the client.
close(): Closes the socket connection with the client.
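A matching minimal TCP server sketch in C, handling a single client; the port 8080 and the backlog value are assumptions for illustration, and error checking is omitted for brevity.

#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int listener = socket(AF_INET, SOCK_STREAM, 0);          /* socket(): new TCP endpoint */

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);                /* accept on any local interface */
    addr.sin_port = htons(8080);                             /* assumed port */

    bind(listener, (struct sockaddr *)&addr, sizeof(addr));  /* bind(): attach address and port */
    listen(listener, 5);                                     /* listen(): passive mode, backlog of 5 */

    int client = accept(listener, NULL, NULL);               /* accept(): new socket for this client */

    char request[128];
    ssize_t n = read(client, request, sizeof(request) - 1);  /* read(): receive the request */
    if (n > 0) {
        request[n] = '\0';
        printf("client said: %s\n", request);
    }

    const char *reply = "ok";
    write(client, reply, strlen(reply));                     /* write(): send the response */

    close(client);                                           /* close(): end this client's connection */
    close(listener);
    return 0;
}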