Linux Memory Management and Scheduling Objectives
Linux Memory Management
There are two components to memory management under Linux. First, the physical memory-management system deals with allocating and freeing pages, groups of pages, and small blocks of memory. The second component handles virtual memory, which is memory mapped into the address space of running processes.
Management of Physical Memory
The primary physical memory manager in the Linux kernel is the page allocator. This allocator is responsible for allocating and freeing all physical pages, and is capable of allocating ranges of physically contiguous pages on request. The allocator uses the buddy-heap algorithm to keep track of available physical pages. A buddy-heap allocator pairs adjacent units of allocatable memory together; hence its name. Each allocatable memory region has an adjacent partner, or buddy, and whenever two allocated partner regions are both freed up, they are combined to form a larger region.
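The sketch below is a toy buddy allocator in C, not the kernel's implementation: the sizes, free-list layout, and function names are invented for the example. It shows the two operations described above: splitting a larger block to satisfy a request, and merging a freed block with its buddy, which is located by flipping one bit of the block's index.

```c
/* Toy buddy-heap sketch (not the Linux page allocator): blocks are
 * powers of two pages; a block's buddy is its index XOR 1 at the same
 * order, and free buddies are merged into a larger block. */
#include <stdio.h>
#include <stdbool.h>

#define MAX_ORDER   4                  /* largest block: 2^4 = 16 pages */
#define TOTAL_PAGES (1 << MAX_ORDER)

static bool free_map[MAX_ORDER + 1][TOTAL_PAGES]; /* free_map[order][index] */

/* Split larger blocks until a free block of the requested order exists. */
static int alloc_block(int order)
{
    for (int o = order; o <= MAX_ORDER; o++) {
        for (int i = 0; i < (TOTAL_PAGES >> o); i++) {
            if (!free_map[o][i])
                continue;
            free_map[o][i] = false;
            /* Split downward, keeping the upper half of each split free. */
            int idx = i;
            for (int cur = o; cur > order; cur--) {
                idx <<= 1;
                free_map[cur - 1][idx + 1] = true;
            }
            return idx << order;       /* first page number of the block */
        }
    }
    return -1;                         /* out of memory */
}

/* Free a block and coalesce it with its buddy while possible. */
static void free_block(int page, int order)
{
    int idx = page >> order;
    while (order < MAX_ORDER && free_map[order][idx ^ 1]) {
        free_map[order][idx ^ 1] = false;  /* absorb the free buddy    */
        idx >>= 1;                         /* move up to the parent    */
        order++;
    }
    free_map[order][idx] = true;
}

int main(void)
{
    free_map[MAX_ORDER][0] = true;         /* one 16-page region free  */
    int a = alloc_block(1);                /* 2 contiguous pages       */
    int b = alloc_block(0);                /* 1 page                   */
    printf("a=%d b=%d\n", a, b);
    free_block(b, 0);
    free_block(a, 1);                      /* buddies merge back up    */
    return 0;
}
```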
Management of Virtual Memory
The Linux virtual-memory system is responsible for maintaining the address space visible to each process. It creates pages of virtual memory on demand, and manages the loading of those pages from disk or their swapping back out to disk as required. Under Linux, the virtual-memory manager maintains two separate views of a process's address space: as a set of separate regions, and as a set of pages.
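As a loose illustration of the "set of regions" view (these structures are simplified stand-ins, not the kernel's actual vm_area_struct or page tables), an address space can be pictured as a list of ranges that a fault handler consults before deciding whether to demand-load a page:

```c
/* Simplified stand-ins for the region view of an address space:
 * a fault is valid only if some region covers the faulting address. */
#include <stdio.h>
#include <stdbool.h>
#include <stddef.h>

struct vm_region {
    unsigned long start, end;      /* [start, end), page aligned       */
    bool writable;                 /* protection recorded per region   */
    struct vm_region *next;
};

/* Find the region covering an address; NULL means an invalid access. */
static struct vm_region *find_region(struct vm_region *head, unsigned long addr)
{
    for (struct vm_region *r = head; r; r = r->next)
        if (addr >= r->start && addr < r->end)
            return r;
    return NULL;
}

int main(void)
{
    /* Two illustrative regions: read-only text and a writable heap. */
    struct vm_region heap = { 0x601000, 0x700000, true,  NULL };
    struct vm_region text = { 0x400000, 0x401000, false, &heap };

    unsigned long fault = 0x400abc;
    struct vm_region *r = find_region(&text, fault);
    printf("fault at %#lx: %s\n", fault,
           r ? (r->writable ? "demand-load page (read/write)"
                            : "demand-load page (read-only)")
             : "segmentation fault");
    return 0;
}
```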
Scheduling Objectives
Many objectives must be considered in the design of a scheduling discipline. In particular, a scheduling discipline should:
- Be fair: A scheduling discipline is fair if all processes are treated the same, and no process can suffer indefinite postponement.
- Maximize throughput: A scheduling discipline should attempt to service the largest possible number of processes per unit time.
- Maximize the number of interactive users receiving acceptable response times (i.e. at most a few seconds).
- Be predictable: A given job should run in about the same amount of time and at about the same cost regardless of the load on the system.
- Minimize overhead: Interestingly, this is not generally considered to be one of the most important objectives. Overhead is commonly viewed as wasted resources. But a certain portion of system resources invested as overhead can greatly improve overall system performance.
- Balance resource use: The scheduling mechanisms should keep the resources of the system balanced.
UNIX File System
The UNIX file system has both absolute path names and relative path names. Absolute path names start at the root of the file system and are distinguished by a slash at the beginning of the path name; /usr/local/font is an absolute path name. Relative path names start at the current directory, which is an attribute of the process accessing the path name. Thus, local/font indicates a file or directory named font in the directory local in the current directory, which might or might not be /usr.
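As a small illustration (using the path from the text, which may or may not exist on a given machine), the same relative name resolves against whatever the process's current directory happens to be, while an absolute name is independent of it:

```c
/* Absolute vs. relative path resolution; /usr/local/font is the
 * example path from the text and may not exist on this system. */
#include <stdio.h>
#include <stdlib.h>
#include <limits.h>
#include <unistd.h>

int main(void)
{
    char resolved[PATH_MAX];

    /* Absolute path: always interpreted from the root directory. */
    printf("absolute: %s\n", "/usr/local/font");

    /* Relative path: interpreted against the current directory. */
    if (chdir("/usr") == 0 && realpath("local/font", resolved))
        printf("relative from /usr: %s\n", resolved);
    else
        perror("resolve local/font");   /* fails if the file is absent */
    return 0;
}
```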
Critical Section Problem
Larger segments of code sometimes need to be declared atomic, or at least to behave atomically with respect to certain operations of other processes. Such a segment of code is called a critical section (CS), and the task of making a CS behave atomically is known as the critical-section problem. Generally, solutions to the CS problem can be subdivided into two groups:
Hardware Solution:
Disables interrupts, so that no other process can be given CPU time while a critical section is executing.
Software Solution:
Lets interrupts happen, but prevents any other process that modifies shared resources from entering. The sections of code that modify shared resources are declared critical, and a software routine arbitrates access to them (i.e., lets them be executed one at a time).
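One concrete sketch of such a serializing routine is shown below. The names cs_enter/cs_exit are invented for this example, and it leans on a C11 atomic test-and-set flag as its primitive rather than being a purely software algorithm:

```c
/* Minimal lock-routine sketch: entry and exit calls that let critical
 * sections execute one at a time.  cs_enter/cs_exit are illustrative
 * names, not from any particular library. */
#include <stdatomic.h>

static atomic_flag cs_lock = ATOMIC_FLAG_INIT;

/* Spin until the flag is acquired; only one caller holds it at a time. */
static void cs_enter(void)
{
    while (atomic_flag_test_and_set(&cs_lock))
        ;   /* busy-wait while another process is inside its CS */
}

/* Release the flag so the next waiter may enter its CS. */
static void cs_exit(void)
{
    atomic_flag_clear(&cs_lock);
}

int main(void)
{
    cs_enter();
    /* ... code that modifies shared resources (the critical section) ... */
    cs_exit();
    return 0;
}
```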
Consider a system consisting of n processes {P0, P1, ..., Pn-1}. Each process has a segment of code, called a critical section, in which the process may be changing common variables, updating a table, writing a file, and so on. The algorithm below solves the CS problem.
Two-Process Solutions
The processes are numbered P0 and P1. For convenience, when presenting Pi, we use Pj to denote the other process; that is, j = 1 - i.
Algorithm:
Our first approach is to let the processes share a common integer variable turn, initialized to 0 (or 1). If turn == i, then process Pi is allowed to execute in its critical section. The structure of process Pi is shown in Figure 1.19. The solution ensures that only one process at a time can be in its critical section.
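A runnable sketch of this turn-based structure is given below, using two threads in place of P0 and P1 and the shared variable turn from the text; the counter and iteration count are invented for the demonstration.

```c
/* Turn-based two-process solution, sketched with two threads.
 * turn == i means Pi may enter its critical section. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int turn = 0;      /* whose turn it is: 0 or 1          */
static long counter = 0;         /* shared data protected by the CS   */

static void *process(void *arg)
{
    int i = *(int *)arg;         /* this thread's id: 0 or 1 */
    int j = 1 - i;               /* the other thread         */

    for (int k = 0; k < 100000; k++) {
        while (atomic_load(&turn) != i)
            ;                    /* entry section: wait for our turn  */

        counter++;               /* critical section                  */

        atomic_store(&turn, j);  /* exit section: hand over the turn  */
    }
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;

    pthread_create(&t0, NULL, process, &id0);
    pthread_create(&t1, NULL, process, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);

    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}
```

Because the two threads strictly alternate turns, at most one of them is ever inside the critical section; the trade-off of this design is that a process can enter only when it holds the turn, even if the other process has no need to enter.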