The traditional measure of an algorithm's performance is its asymptotic time complexity: the Big-O function that provides an upper bound on the number of instructions it needs to execute. But on modern computers the instruction count is only one of the factors we need to consider. In particular, the importance of caching in modern processor architectures means that a program which is sympathetic to the cache, for example by respecting data locality and minimising false sharing, can outperform one that executes far fewer instructions but uses the cache less efficiently.
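A crude way to see data locality at work is to traverse the same values stored contiguously and as a linked structure. The sketch below (not part of the talk, and using plain System.nanoTime() timing rather than a proper harness such as JMH, so the numbers are only indicative) sums the elements of an ArrayList and a LinkedList; both traversals are O(n), but their memory access patterns differ.

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class LocalityDemo {

    // Sum every element; iteration is O(n) for both implementations.
    static long sum(List<Integer> list) {
        long total = 0;
        for (int value : list) {
            total += value;
        }
        return total;
    }

    static void time(List<Integer> list) {
        long start = System.nanoTime();
        long result = sum(list);
        long elapsed = System.nanoTime() - start;
        // Print the result so the work cannot be optimised away.
        System.out.printf("%s: sum=%d, %.1f ms%n",
                list.getClass().getSimpleName(), result, elapsed / 1_000_000.0);
    }

    public static void main(String[] args) {
        final int N = 2_000_000;
        List<Integer> arrayList = new ArrayList<>(N);
        List<Integer> linkedList = new LinkedList<>();
        for (int i = 0; i < N; i++) {
            arrayList.add(i);
            linkedList.add(i);
        }
        // Warm-up passes to let the JIT compile the hot loop.
        for (int i = 0; i < 5; i++) {
            sum(arrayList);
            sum(linkedList);
        }
        // Both loops perform the same number of additions, but the ArrayList's
        // backing array of references is laid out contiguously, while each
        // LinkedList element lives in a separate node, so traversal chases an
        // extra pointer per element and touches more memory overall.
        time(arrayList);
        time(linkedList);
    }
}
```

On typical hardware the LinkedList traversal comes out noticeably slower, even though asymptotic analysis treats the two loops as equivalent.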
What does this mean for the Java programmer? The Java Collections Framework, used daily by every Java programmer, is interface-based: its design assumes that you will choose an interface (Set, List, or Map) according to the functional requirements of your application, and an implementation of that interface guided by the expected usage scenarios. But we can no longer use Big-O as the only, or sometimes even the main, way of choosing between collection implementations. In this talk we'll look at some common Java collections in the light of this new way of judging efficiency, and explore the practical implications for the working Java programmer.
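As a reminder of what that interface-based choice looks like in code, here is a minimal sketch (class and method names are illustrative, not from the talk): the report method depends only on the Set contract, while the call site picks HashSet, TreeSet, or LinkedHashSet according to whether lookup speed, sorted iteration, or insertion order matters most.

```java
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

public class InterfaceChoice {

    // Client code is written against the Set interface: it relies only on the
    // functional contract (no duplicates), not on any particular implementation.
    static void report(Set<String> names) {
        for (String name : names) {
            System.out.println(name);
        }
    }

    public static void main(String[] args) {
        // The implementation is chosen once, at construction time, according
        // to the expected usage scenario:
        Set<String> unordered = new HashSet<>();            // fast lookups, no ordering guarantee
        Set<String> sorted = new TreeSet<>();                // iteration in natural order, O(log n) operations
        Set<String> insertionOrder = new LinkedHashSet<>();  // predictable iteration order, extra memory per entry

        for (Set<String> set : List.of(unordered, sorted, insertionOrder)) {
            set.add("banana");
            set.add("apple");
            set.add("apple"); // duplicate, ignored by every Set implementation
            report(set);
            System.out.println("--");
        }
    }
}
```

The functional behaviour is identical in all three cases; what differs is the performance profile, and that is exactly the part of the decision where instruction counts alone no longer tell the whole story.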