
Why has the size of L1 cache not increased very much over the last 20 years?

The Intel i486 has 8 KB of L1 cache. The Intel Nehalem has 32 KB L1 instruction cache and 32 KB L1 data cache per core.

Asked by: Guest | Views: 312
Total answers/comments: 3
Guest [Entry]

"30K of Wikipedia text isn't as helpful as an explanation of why too large of a cache is less optimal. When the cache gets too large the latency to find an item in the cache (factoring in cache misses) begins to approach the latency of looking up the item in main memory. I don't know what proportions CPU designers aim for, but I would think it is something analogous to the 80-20 guideline: You'd like to find your most common data in the cache 80% of the time, and the other 20% of the time you'll have to go to main memory to find it. (or whatever the CPU designers intended proportions may be.)

EDIT: I'm sure it's nowhere near 80%/20%, so substitute X and 1-X. :)"
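
A quick way to see that trade-off is the standard average memory access time formula, AMAT = hit time + miss rate × miss penalty. The sketch below plugs in purely illustrative latencies (the 1 ns and 3 ns hit times and the 100 ns memory penalty are assumptions, not vendor figures) to show that, at the same hit rate, a larger-but-slower L1 is strictly worse; it only wins if the extra capacity buys a correspondingly better hit rate.

```c
#include <stdio.h>

/* Illustrative average memory access time (AMAT) model:
 *   AMAT = hit_time + miss_rate * miss_penalty
 * All latencies below are made-up round numbers, not measured values. */
int main(void) {
    const double small_l1_hit_ns = 1.0;   /* assumed: small, fast L1    */
    const double big_l1_hit_ns   = 3.0;   /* assumed: larger, slower L1 */
    const double mem_penalty_ns  = 100.0; /* assumed: trip to DRAM      */

    for (double hit_rate = 0.80; hit_rate <= 0.99; hit_rate += 0.05) {
        double miss_rate  = 1.0 - hit_rate;
        double amat_small = small_l1_hit_ns + miss_rate * mem_penalty_ns;
        double amat_big   = big_l1_hit_ns   + miss_rate * mem_penalty_ns;
        printf("hit rate %.2f: small L1 %.1f ns, big L1 %.1f ns\n",
               hit_rate, amat_small, amat_big);
    }
    return 0;
}
```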
Guest [Entry]

"Cache size is influenced by many factors:

Speed of electric signals (should be if not the speed of light, something of same order of magnitude):

300 meters in one microsecond.
30 centimeters in one nanosecond.

Economic cost (circuits at different cache levels may be built differently, and some cache sizes may simply not be worth the expense).

Doubling cache size does not double performance (even if physics allowed a cache of that size to run at speed): for small sizes, doubling gives far more than double the benefit; for large sizes, doubling gives almost no extra performance.
On Wikipedia you can find a chart showing, for example, how little is gained by making caches bigger than 1 MB (bigger caches do exist, but keep in mind that those serve multiple processor cores).
For L1 caches there should be similar charts (that vendors don't publish) showing that 64 KB is a convenient size.
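
To make the signal-speed factor concrete, here is a back-of-the-envelope sketch that computes how far a signal could travel in one clock cycle at a few example frequencies. The assumed propagation speed of 20 cm per nanosecond (roughly two thirds of the speed of light) and the frequency list are illustrative only, but they show the order of magnitude that forces L1 to stay small and physically close to the core.

```c
#include <stdio.h>

/* Back-of-the-envelope: distance a signal can cover in one clock cycle.
 * Propagation speed is assumed to be ~20 cm/ns (about 2/3 of c),
 * which is only an order-of-magnitude figure. */
int main(void) {
    const double cm_per_ns = 20.0;                 /* assumed propagation speed */
    const double freqs_ghz[] = { 1.0, 3.0, 5.0 };  /* example clock frequencies */

    for (int i = 0; i < 3; i++) {
        double cycle_ns = 1.0 / freqs_ghz[i];      /* one cycle in nanoseconds */
        double reach_cm = cm_per_ns * cycle_ns;    /* one-way reach, no logic delay */
        printf("%.1f GHz: one cycle = %.3f ns, signal reach ~ %.1f cm\n",
               freqs_ghz[i], cycle_ns, reach_cm);
    }
    return 0;
}
```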

If L1 cache size hasn't grown past 64 KB, it's because growing it further was no longer worth it. Also note that there is now a greater "culture" around caches: many programmers write "cache-friendly" code and/or use prefetch instructions to reduce latency.
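
As a rough illustration of what "cache-friendly" means in practice, the sketch below sums a matrix in row-major order (the order in which C lays the data out) versus column-major order, and also shows an explicit prefetch using the GCC/Clang __builtin_prefetch builtin; the matrix size and the 64-element prefetch distance are arbitrary choices for the example.

```c
#include <stddef.h>

#define N 1024  /* arbitrary matrix dimension for the example */

/* Row-major traversal touches memory sequentially, so every cache line
 * fetched from RAM is fully used before it is evicted. */
double sum_row_major(const double a[N][N]) {
    double sum = 0.0;
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            sum += a[i][j];
    return sum;
}

/* Column-major traversal of the same data jumps N * sizeof(double) bytes
 * between accesses, wasting most of every cache line it pulls in. */
double sum_col_major(const double a[N][N]) {
    double sum = 0.0;
    for (size_t j = 0; j < N; j++)
        for (size_t i = 0; i < N; i++)
            sum += a[i][j];
    return sum;
}

/* Optional explicit prefetch (GCC/Clang builtin): request data a fixed
 * distance ahead so it is already in cache when the loop reaches it. */
double sum_with_prefetch(const double *a, size_t n) {
    double sum = 0.0;
    for (size_t i = 0; i < n; i++) {
        if (i + 64 < n)
            __builtin_prefetch(&a[i + 64]);
        sum += a[i];
    }
    return sum;
}
```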

I once tried writing a simple program that accessed random locations in an array of several megabytes:
that program almost froze the computer, because each random read pulled a whole cache line in from RAM (not just the requested bytes), and since that happened constantly, the simple program drained all the memory bandwidth and left very few resources for the OS.
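
A minimal version of that experiment, under some assumptions (a 256 MB buffer, a 64-byte cache line, and POSIX clock_gettime for timing), might look like the sketch below: it touches one byte per cache line across the buffer, once in sequential order and once in a shuffled order, and prints the time for each pass. The exact gap depends entirely on the machine.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define BUF_SIZE (256u * 1024u * 1024u)  /* assumed: 256 MB, far larger than any cache */
#define LINE     64u                      /* assumed cache-line size in bytes           */

/* Time one pass over the buffer, visiting one byte per cache line
 * in the order given. Uses POSIX clock_gettime for wall-clock timing. */
static double touch(unsigned char *buf, const size_t *order, size_t n) {
    struct timespec t0, t1;
    volatile unsigned long sum = 0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < n; i++)
        sum += buf[order[i] * LINE];
    clock_gettime(CLOCK_MONOTONIC, &t1);
    (void)sum;
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void) {
    size_t n = BUF_SIZE / LINE;
    unsigned char *buf = calloc(BUF_SIZE, 1);
    size_t *order = malloc(n * sizeof *order);
    if (!buf || !order) return 1;

    /* Sequential, cache-friendly order (this pass also faults the pages in). */
    for (size_t i = 0; i < n; i++)
        order[i] = i;
    printf("sequential: %.3f s\n", touch(buf, order, n));

    /* Shuffle the visit order to make it cache-hostile. */
    srand(1);
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = (size_t)rand() % (i + 1);
        size_t tmp = order[i]; order[i] = order[j]; order[j] = tmp;
    }
    printf("random:     %.3f s\n", touch(buf, order, n));

    free(order);
    free(buf);
    return 0;
}
```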
Guest [Entry]

Actually, L1 cache size IS the biggest bottleneck for speed in modern computers. The pathetically tiny L1 cache sizes may be the sweet spot for price, but not for performance. L1 cache can be accessed at GHz frequencies, the same rate as processor operations, unlike RAM access, which is some 400x slower. A large L1 is expensive and difficult to implement in the current two-dimensional designs; however, it is technically doable, and the first company that does it successfully will have computers hundreds of times faster that still run cool, something which would produce major innovations in many fields that are currently accessible only through expensive and hard-to-program ASIC/FPGA configurations.

Some of these issues come down to proprietary/IP concerns and corporate greed spanning decades, where a small and ineffectual cadre of engineers are the only ones with access to the inner workings, and they are mostly given marching orders to squeeze out cost-effective, obfuscated, protectionist designs. Overly privatized research always leads to this kind of technological stagnation or throttling (as we have seen in aerospace and autos from the big manufacturers, and soon in pharma). Open source, and more sensible patent and trade-secret regulation that benefits inventors and the public (rather than company bosses and stockholders), would help a lot here. Developing much larger L1 caches should be a no-brainer, and it could and should have been done decades ago. We would be a lot further ahead in computers, and in the many scientific fields that use them, if it had.