
Understanding Temporal Locality

When writing a program, it is tempting to count instructions to analyze its performance. Surely a program that executes 100 instructions should be faster than one that executes 110? But many factors can muddy this analysis.

Consider the processor's memory cache. When the CPU reads data from main memory, recently used bytes are kept in its small, fast cache. If we access the same region of memory again soon afterward, the read is served from the cache with minimal delay. This means that operations that act on a region of memory are faster if they are done nearer in time to one another.
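As a sketch of this effect, the program below processes the same chunks of memory in two orders: grouped, where each chunk is scanned twice back to back, and scattered, where all chunks are scanned once before any is scanned again. The chunk count, chunk size, and the processChunk stage are illustrative choices, and the measured difference depends on the machine's cache sizes.

```go
package main

import (
	"fmt"
	"time"
)

// processChunk simulates work on a chunk of memory by summing its bytes.
func processChunk(chunk []byte) int {
	sum := 0
	for _, b := range chunk {
		sum += int(b)
	}
	return sum
}

func main() {
	const chunks = 64
	const size = 1 << 16 // 64 KB per chunk; 4 MB total, larger than most L1/L2 caches.
	data := make([][]byte, chunks)
	for i := range data {
		data[i] = make([]byte, size)
	}

	// Grouped: two passes over each chunk back to back, so the
	// second pass finds the chunk's bytes still in the cache.
	start := time.Now()
	total := 0
	for _, c := range data {
		total += processChunk(c)
		total += processChunk(c)
	}
	fmt.Println("grouped:  ", time.Since(start), total)

	// Scattered: one pass over every chunk, then a second pass.
	// By the time we return to a chunk, it may have been evicted.
	start = time.Now()
	total = 0
	for _, c := range data {
		total += processChunk(c)
	}
	for _, c := range data {
		total += processChunk(c)
	}
	fmt.Println("scattered:", time.Since(start), total)
}
```

Both orderings do the same amount of arithmetic and produce the same totals; only the timing should differ, with the grouped version typically faster.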

This concept is known as temporal locality, and it has some implications:

Reordering function calls in a program to increase repeated accesses to the same regions of memory can lead to performance gains.
Counting instructions is not enough to optimize a program—we must consider the memory cache.
Generally, operating on one file at a time, and then moving on to the next file, is best for performance.

At times I have had a program that opens many files and then tries to process them all at once, interleaving the work. But by understanding temporal locality, it is possible to optimize for the CPU cache: we open a file, process it fully, and then move on to the next file. This can lead to a measurable performance improvement.
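The cache-friendly ordering described above can be sketched as follows. The file names and the countLines and countBytes stages are hypothetical stand-ins for real per-file work, and the files are simulated as in-memory byte slices to keep the example self-contained.

```go
package main

import (
	"bytes"
	"fmt"
)

// Hypothetical per-file stages; a real program might parse,
// transform, and write each file instead.
func countLines(data []byte) int { return bytes.Count(data, []byte("\n")) }
func countBytes(data []byte) int { return len(data) }

func main() {
	// Simulated file contents, standing in for data read from disk.
	files := map[string][]byte{
		"a.txt": []byte("one\ntwo\n"),
		"b.txt": []byte("three\n"),
	}

	// Cache-friendly order: run every stage on a file while its
	// bytes are still warm in the cache, then move to the next file,
	// rather than running stage 1 on all files and then stage 2 on all files.
	for name, data := range files {
		lines := countLines(data)
		size := countBytes(data)
		fmt.Printf("%s: %d lines, %d bytes\n", name, lines, size)
	}
}
```

The key design choice is that the inner loop body performs all of the work for one file before the outer loop advances, so each file's data is touched during a single window of time.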

© 2007-2025 Sam Allen