CPU Memory Hierarchy and Cache Mapping Techniques
Memory Hierarchy: Faster, Smaller, Costlier
The computer memory hierarchy organizes storage based on speed, size, and cost, moving from the fastest (CPU internal) to the slowest (external storage):
- Registers: CPU internal (Fastest, Smallest)
- Cache: L1, L2, L3 (Fast)
- Main Memory: RAM (DRAM)
- Secondary Storage: HDD, SSD
- Tertiary Storage: Optical, Tape (Slowest, Largest)
What Is Cache Memory?
Cache memory is a small, high-speed memory located close to the CPU. It stores frequently accessed data and instructions so the processor doesn’t have to fetch them repeatedly from slower main memory (RAM).
Cache significantly reduces memory access time, thereby improving overall system performance.
Why Cache Outperforms Main Memory (RAM)
Cache memory achieves superior speed due to several architectural factors:
- Proximity to CPU: It is physically located on or very near the processor chip, reducing signal latency.
- Faster Technology: Cache is built using Static RAM (SRAM), which is significantly faster than the Dynamic RAM (DRAM) used for main memory.
- Smaller Size: Its limited size makes it easier and quicker to search and access data.
- Parallel Access: The cache can often be accessed in parallel with other memory operations that are still in progress.
Cache Mapping Techniques
When data is loaded from RAM to cache, mapping techniques determine where the data block is placed. There are three primary types:
1. Direct Mapping
Each block of main memory maps to exactly one specific cache line.
Mapping Formula: Main Memory Block → (Block Number MOD Number of Cache Lines) → Cache Line
Example: If the cache has 8 lines, Block 12 from memory maps to line 4 (12 MOD 8 = 4).
- Pros: Simple implementation, fast lookup.
- Cons: High chance of collisions (conflicts), leading to poor utilization.
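As a quick illustration (a minimal Python sketch, not tied to any real hardware), the direct-mapping rule above reduces to a single modulo operation:

```python
def direct_mapped_line(block_number: int, num_cache_lines: int) -> int:
    """Return the one cache line a given memory block maps to."""
    return block_number % num_cache_lines

# Block 12 in an 8-line cache lands in line 4 (12 MOD 8 = 4).
print(direct_mapped_line(12, 8))  # 4

# Blocks 4, 12, and 20 all collide on line 4, illustrating conflict misses.
print([direct_mapped_line(b, 8) for b in (4, 12, 20)])  # [4, 4, 4]
```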
2. Fully Associative Mapping
A memory block can be placed anywhere in the cache, allowing maximum flexibility.
Mapping Concept: Main Memory Block → Any Free Cache Line
- Pros: Highly flexible, resulting in fewer conflicts and a better hit rate.
- Cons: Requires complex hardware to search all cache lines simultaneously, making it expensive and slower than direct mapping.
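The sketch below (an illustrative Python model, with FIFO eviction chosen arbitrarily since no replacement policy is specified above) shows the defining property of fully associative mapping: a lookup must search every line, but a block can be placed in any free one.

```python
from collections import deque

class FullyAssociativeCache:
    """Illustrative fully associative cache: any block may occupy any line.
    FIFO replacement is an assumption; real caches often use LRU or similar."""
    def __init__(self, num_lines: int):
        self.lines = deque(maxlen=num_lines)  # holds resident block numbers

    def access(self, block: int) -> bool:
        hit = block in self.lines      # every line must be checked
        if not hit:
            self.lines.append(block)   # place anywhere; evict oldest if full
        return hit

cache = FullyAssociativeCache(num_lines=4)
print([cache.access(b) for b in (12, 7, 12, 3)])  # [False, False, True, False]
```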
3. Set-Associative Mapping
This is a hybrid approach. The cache is divided into sets, and each memory block maps to a specific set, but can be placed into any line within that set.
Mapping Formula: Main Memory Block → (Block Number MOD Number of Sets) → Specific Set → Any Line in Set
Example: In a 4-way set-associative cache with 8 sets (each set has 4 lines), Block 12 maps to Set 4 (12 MOD 8 = 4) and can occupy any of the 4 lines within that set.
- Pros: Offers balanced performance and complexity, achieving a much better hit rate than direct mapping.
- Cons: Slightly slower lookup than direct mapping due to searching within the set.
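Continuing the example, here is a minimal Python sketch of a 4-way set-associative cache with 8 sets (32 lines total); the set count and the FIFO replacement within each set are illustrative assumptions.

```python
from collections import deque

class SetAssociativeCache:
    """Illustrative N-way set-associative cache. FIFO replacement within a
    set is an assumption here, not something specified in the text."""
    def __init__(self, num_sets: int, ways: int):
        self.num_sets = num_sets
        self.sets = [deque(maxlen=ways) for _ in range(num_sets)]

    def access(self, block: int) -> bool:
        index = block % self.num_sets   # Block Number MOD Number of Sets
        target_set = self.sets[index]
        hit = block in target_set       # search only the lines in this set
        if not hit:
            target_set.append(block)    # any line in the set; evict oldest
        return hit

cache = SetAssociativeCache(num_sets=8, ways=4)
print(12 % 8)            # 4 -> Block 12 maps to Set 4
print(cache.access(12))  # False (first access is a miss)
print(cache.access(12))  # True  (now a hit within Set 4)
```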
Cache Hit vs. Cache Miss
- Cache Hit: The requested data is found in the cache, resulting in fast access.
- Cache Miss: The requested data is not in the cache, requiring a fetch from slower main memory (RAM).
Hit Ratio and Miss Ratio
These metrics quantify cache efficiency:
- Hit Ratio: Calculated as (Number of Cache Hits) / (Total Memory Accesses).
- Miss Ratio: Calculated as 1 − Hit Ratio.
A higher hit ratio indicates better system performance.
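A short worked example of the two formulas (the access counts below are made up purely for illustration):

```python
# Hypothetical counts, for illustration only.
hits = 450
total_accesses = 500

hit_ratio = hits / total_accesses   # 450 / 500 = 0.9
miss_ratio = 1 - hit_ratio          # 1 - 0.9  = 0.1

print(f"Hit ratio:  {hit_ratio:.2f}")   # 0.90
print(f"Miss ratio: {miss_ratio:.2f}")  # 0.10
```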
Calculating Address and Data Lines
This calculation determines the bus width required for a specific memory configuration.
Example Calculation: Given a 16K × 8 memory.
- 16K = 16 × 1024 = 16,384 memory locations.
- Each location stores 8 bits (1 byte).
Address Lines Determination
To address 16,384 unique locations, we solve $2^n = 16,384$, which gives $n = 14$ (since $2^{14} = 16,384$).
Result: 14 address lines are required.
Data Lines Determination
Since each location is 8 bits wide, the system requires 8 data lines.
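The same calculation can be expressed in a few lines of Python (the 16K × 8 figures come from the example above; `math.ceil` covers sizes that are not exact powers of two):

```python
import math

locations = 16 * 1024   # 16K addressable locations
word_width_bits = 8     # each location stores 8 bits

address_lines = math.ceil(math.log2(locations))  # 2^14 = 16,384 -> 14
data_lines = word_width_bits                     # one line per data bit

print(address_lines, data_lines)  # 14 8
```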