Option 1: Doubling the block size and halving the number of sets will reduce capacity misses.

False. Doubling the block size while halving the number of sets leaves the total cache capacity unchanged, so capacity misses are not reduced. What a larger block size does change is that more adjacent words are fetched on each miss, so references to those words will not cause compulsory misses; this exploits spatial locality.

Option 2: Workloads with high spatial locality benefit from smaller cache block sizes.

False. The advantage of a block size greater than one is that when a miss occurs and a word is fetched into the cache, the adjacent words in the block are fetched as well. Subsequent accesses are therefore more likely to hit because of spatial locality.

Option 3: Workloads with high temporal locality benefit from smaller cache block sizes.

False. When the CPU accesses a main memory location to read data or an instruction, that data or instruction is also stored in the cache, on the assumption that the same data or instruction may be needed again in the near future. So workloads with high temporal locality benefit from larger cache block sizes.

Option 4: The last-level cache is designed for high capacity rather than low latency.

True. The cache is designed to speed up the back-and-forth of information between main memory and the CPU; a miss in the last-level cache goes all the way to main memory, so that level is sized for a high hit rate (capacity) rather than for low access latency.
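As a sketch of the reasoning behind Options 1 and 2, the following Python model (the class and parameter names are my own, not from the question) compares two direct-mapped caches of identical total capacity on a sequential, spatially local access stream. The larger block size halves the compulsory misses, while the capacity itself is unchanged.

```python
# Minimal direct-mapped cache model (illustrative sketch, word-addressed).

class DirectMappedCache:
    def __init__(self, num_sets, block_words):
        self.num_sets = num_sets          # number of sets (cache lines)
        self.block_words = block_words    # words per block
        self.tags = [None] * num_sets     # stored tag per set; None = empty
        self.misses = 0

    def access(self, addr):
        block = addr // self.block_words  # memory block containing this word
        index = block % self.num_sets     # set index
        tag = block // self.num_sets      # tag
        if self.tags[index] != tag:       # miss: fetch the whole block
            self.tags[index] = tag
            self.misses += 1

# Sequential word addresses: a stream with high spatial locality.
stream = list(range(64))

small_blocks = DirectMappedCache(num_sets=16, block_words=2)  # 32-word capacity
large_blocks = DirectMappedCache(num_sets=8,  block_words=4)  # same 32-word capacity

for a in stream:
    small_blocks.access(a)
    large_blocks.access(a)

# Capacity is identical (16*2 == 8*4 words), yet doubling the block size
# halves the compulsory misses on this spatially local stream.
print(small_blocks.misses)  # 32 misses (one per 2-word block)
print(large_blocks.misses)  # 16 misses (one per 4-word block)
```

Note that halving the number of sets while doubling the block size keeps the product (total words of storage) constant, which is why only compulsory misses, not capacity misses, are affected here.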