Tuesday, November 29, 2016

Making the Most of Memory


Always so much to learn. For example, did you know that the time it takes to access your computer's main memory is MANY times longer than the time it takes to access the CPU caches? What does this mean for how we structure our programs and architectures? If we care at all about application performance, we want to take full advantage of the small, fast L1 and L2 caches, and that means structuring our components and code so that the caches can actually be used.
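Here is a tiny C sketch of that idea (the array size and timing approach are mine, not from the article). Both loops touch exactly the same 64 MB of data, but the row-major loop walks memory sequentially and reuses every cache line it pulls in, while the column-major loop strides across rows and misses the caches constantly, so on a typical machine it runs several times slower.

#include <stdio.h>
#include <time.h>

#define SIZE 4096

static int grid[SIZE][SIZE];      /* 64 MB, far larger than L1/L2 */
static volatile long long sink;   /* keeps the compiler from deleting the loops */

static double time_sum(int by_rows)
{
    clock_t start = clock();
    long long sum = 0;
    for (int i = 0; i < SIZE; i++)
        for (int j = 0; j < SIZE; j++)
            sum += by_rows ? grid[i][j]  /* sequential: one cache line serves many ints */
                           : grid[j][i]; /* strided: nearly every access pulls a new line */
    sink = sum;
    return (double)(clock() - start) / CLOCKS_PER_SEC;
}

int main(void)
{
    printf("row-major:    %.3f s\n", time_sum(1));
    printf("column-major: %.3f s\n", time_sum(0));
    return 0;
}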


Even if we go only out to DRAM (and not to disk), each cell holds its charge for just a matter of milliseconds before it has to be refreshed, roughly every 64 ms per the article (published in 2007). A row that is being refreshed can't be accessed in the meantime, so refresh cycles add to the already high latency of main memory.


All of this has implications for the design of your objects, structs, etc. The SRAM that makes up the L1 and L2 caches is limited in capacity because of its power consumption and cost. Given this, it seems logical that keeping your data bags as small as possible would be a great advantage for performance. Additionally, processing data in cache-sized batches, rather than sweeping over a huge working set in one giant loop, should also help.
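As a rough sketch of what small data bags buy you (the struct and field names here are made up for illustration), simply ordering fields from largest to smallest removes alignment padding, so noticeably more records fit in each 64-byte cache line:

#include <stdio.h>
#include <stdint.h>

struct padded {            /* likely 24 bytes on a typical 64-bit ABI */
    uint8_t  flag;         /* 1 byte, then 7 bytes of padding before the double */
    double   value;        /* 8 bytes, must be 8-byte aligned */
    uint32_t id;           /* 4 bytes, then 4 bytes of tail padding */
};

struct reordered {         /* likely 16 bytes: largest fields first */
    double   value;
    uint32_t id;
    uint8_t  flag;         /* only 3 bytes of tail padding remain */
};

int main(void)
{
    printf("padded:    %zu bytes, %zu per 64-byte cache line\n",
           sizeof(struct padded), 64 / sizeof(struct padded));
    printf("reordered: %zu bytes, %zu per 64-byte cache line\n",
           sizeof(struct reordered), 64 / sizeof(struct reordered));
    return 0;
}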


Architecturally speaking, since each core caches both its instructions and its data in L1, you'd want to keep the same kind of work (looking at you, data access) on the same CPU, and therefore the same L1, rather than letting it bounce between cores and refill cold caches. So much to learn...
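On Linux you can make that explicit by pinning a thread to one core, so its working set stays in that core's L1/L2 instead of being re-warmed after every migration. A minimal sketch, assuming glibc's GNU-extension affinity API (compile with -pthread):

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);                        /* allow this thread on CPU 0 only */

    /* pthread_setaffinity_np is a GNU extension; other platforms have their
     * own affinity calls. It returns an error number, not -1/errno. */
    int err = pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
    if (err != 0) {
        fprintf(stderr, "pthread_setaffinity_np failed: %d\n", err);
        return 1;
    }

    printf("pinned to CPU %d\n", sched_getcpu());
    /* ...do the cache-sensitive work here... */
    return 0;
}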


Read about it in depth in Ulrich Drepper's "What Every Programmer Should Know About Memory" at https://lwn.net/Articles/250967/
