Memory words in a cache are combined into small groups, known as cache blocks (also called lines or frames). Each block is assigned an address, referred to as a tag. The collection of tag addresses currently assigned to the cache, which can be non-contiguous, is stored in a directory.
The figure shows the organization of a cache/main-memory system. Consider a main memory consisting of 2^n addressable words, each having a unique n-bit address.
For mapping purposes, this memory is considered to consist of fixed-length blocks of K words each, so there are M = 2^n / K blocks. The cache consists of C lines of K words each, and the number of lines is considerably less than the number of main memory blocks (C < M). At any time, some subset of the memory blocks resides in the cache lines. Each line includes a tag that identifies which memory block is currently being stored; the tag is usually a portion of the memory address. For the cache to improve performance, the time required to check the tag addresses and access data from the cache must be less than the time required to access main memory.
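The relationships above (M = 2^n / K blocks, C lines, and a tag taken from the address) can be sketched in a few lines of code. This is a minimal illustration, assuming a direct-mapped cache and illustrative parameters (n = 16 address bits, K = 4 words per block, C = 128 lines) that are not taken from the text:

```python
import math

# Illustrative parameters (assumed, not from the text):
n = 16           # address bits, so 2^n addressable words
K = 4            # words per block (block/line length)
C = 128          # number of cache lines
M = 2**n // K    # number of main memory blocks: M = 2^n / K

# For a direct-mapped cache, the n-bit address splits into
# tag | line | word-offset fields:
offset_bits = int(math.log2(K))   # selects a word within a block
line_bits   = int(math.log2(C))   # selects a cache line
tag_bits    = n - line_bits - offset_bits

def split_address(addr):
    """Split an n-bit address into (tag, line, word offset)."""
    offset = addr & (K - 1)
    line   = (addr >> offset_bits) & (C - 1)
    tag    = addr >> (offset_bits + line_bits)
    return tag, line, offset

tag, line, offset = split_address(0xBEEF)
```

The tag field is what gets stored in the directory; on each access, the stored tag for the selected line is compared with the tag field of the requested address to detect a hit or miss.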
A cache can be arranged within a computer in two general ways: look-aside and look-through.
In the look-through design, the processor communicates with the cache via a separate local bus that is isolated from the main system bus. The system bus is therefore available for use by other units to communicate with main memory. A look-through cache also allows the local bus to be wider than the system bus, which speeds up cache/main-memory transfers. The disadvantages of this design are higher complexity, higher cost, and a longer time for main memory (M2) to respond to the processor when a cache miss occurs.
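The miss penalty just described can be made concrete with the standard average-access-time formula: in a look-through design the cache is always checked first, so a miss pays the cache lookup time plus the main-memory (M2) access time. A minimal sketch, with illustrative timing values that are assumptions rather than figures from the text:

```python
def avg_access_time(hit_ratio, t_cache, t_mem):
    """Average access time for a look-through cache.

    The cache is checked first on every access; on a miss, the
    main-memory (M2) access time is added on top of the cache
    lookup, which is why a look-through miss is comparatively slow.
    """
    return hit_ratio * t_cache + (1 - hit_ratio) * (t_cache + t_mem)

# Illustrative numbers (assumed): 95% hit ratio, 1 ns cache, 50 ns memory.
t_avg = avg_access_time(0.95, 1.0, 50.0)
```

Even a small miss ratio dominates the average, which is why the hit ratio and the cache lookup time both matter for this design.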