Zhao Zhang breaks down computer memory research that changed the field
A professor explains:
When researchers present new work to their fellow scholars, they can only imagine whether it will have a significant impact. Twenty years later, you know. Research by Zhao Zhang and Zhichun Zhu of UIC electrical and computer engineering just won the “Test of Time” award for the highest-impact research presented at the ACM/IEEE MICRO conference back in 2000.
What was the paper about, and why was it so important? We asked Zhang to describe it in terms that even a non-memory expert could understand. His summary below draws on the explanatory power that he brings to students in his courses.
How would you explain the fundamental problem your paper sought to address?
There was an organizational problem in computer memory design at that time. Computer memory is divided into many “banks,” like shelves in a grocery store. For the best performance, memory traffic should spread out evenly among the banks. However, we found that, for some unknown reason, memory traffic tended to concentrate in a few memory banks during short time windows. That puzzled us, because most computer programs should have their data evenly distributed across the memory banks.
What was your solution?
We believed there must be a reason, and we tried hard to find it. Luckily, we did. There is another level of memory in the computer, called the cache, that buffers memory data for quick access. Memory traffic occurs when data are moved between the cache and memory. The cache is divided into “sets,” similar to banks, and data items stored in the same cache set tend to be moved together. The problem is that data items mapped to the same cache set are also likely to be located in the same memory bank. So, when those data items move in and out of the cache, they cause congestion in the bank where they are located.
The fundamental issue is a symmetry in how data items are mapped to cache sets and memory banks. Each data item has a memory address made of multiple binary bits. One subset of those address bits determines the memory bank, and another subset determines the cache set. The problem is that the two subsets share many of the same bits. They had to use those bits; otherwise, data would not be evenly distributed among memory banks or cache sets.
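The overlap can be seen with a short sketch. The bit-field positions below (a 256-set cache with 64-byte lines, four banks indexed by address bits 12–13) are hypothetical parameters chosen for illustration, not the exact configuration studied in the paper; the point is that the bank-index bits fall inside the set-index bits, so addresses that collide in one cache set also collide in one bank.

```python
# Hypothetical address layout for illustration:
#   bits 0..5   -> byte offset within a 64-byte cache line
#   bits 6..13  -> cache set index (256 sets)
#   bits 12..13 -> memory bank index (4 banks) -- overlaps the set index
LINE_BITS = 6
SET_BITS = 8
BANK_SHIFT = 12
BANK_BITS = 2

def cache_set(addr):
    return (addr >> LINE_BITS) & ((1 << SET_BITS) - 1)

def bank(addr):
    return (addr >> BANK_SHIFT) & ((1 << BANK_BITS) - 1)

# Addresses that differ only in the tag bits (bit 14 and up) share the
# same set-index bits, and therefore the same bank-index bits.
base = 0x1A40
conflicting = [base + (tag << 14) for tag in range(4)]
sets = {cache_set(a) for a in conflicting}
banks = {bank(a) for a in conflicting}
print(len(sets), len(banks))  # 1 1 -> same cache set AND same memory bank
```

All four addresses compete for one cache set, and every eviction or fill they cause is serviced by the same bank, which is exactly the congestion described above.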
To solve the problem, we proposed a simple but creative solution. There were other address bits used in neither the memory bank mapping nor the cache set mapping. We used XOR gates, which are common in computer design, to hash a subset of those bits with the bits used in the original memory bank mapping. The mappings of data items to memory banks and cache sets then become different, yet each remains an even distribution.
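A minimal sketch of the XOR idea, again with hypothetical bit positions rather than the paper’s exact scheme: the bank index is XORed with tag bits that participate in neither original mapping, which permutes the banks differently for each tag value while still giving every bank an equal share of addresses.

```python
from collections import Counter

# Hypothetical layout: bank index in bits 12..13, tag bits from bit 14 up.
BANK_SHIFT = 12
BANK_BITS = 2
TAG_SHIFT = 14

def bank_conventional(addr):
    return (addr >> BANK_SHIFT) & ((1 << BANK_BITS) - 1)

def bank_permuted(addr):
    # One XOR gate per bank-index bit: hash in bits used by neither
    # the set index nor the original bank index.
    tag = (addr >> TAG_SHIFT) & ((1 << BANK_BITS) - 1)
    return bank_conventional(addr) ^ tag

# Addresses that map to the same cache set (same bits 6..13, differing
# only in the tag) used to pile onto one bank; now they spread out.
base = 0x1A40
conflicting = [base + (tag << 14) for tag in range(4)]
print(sorted({bank_conventional(a) for a in conflicting}))  # [1]
print(sorted({bank_permuted(a) for a in conflicting}))      # [0, 1, 2, 3]

# The permutation stays balanced: sweeping all cache-line addresses in a
# region, each bank still receives exactly one quarter of them.
counts = Counter(bank_permuted(a << 6) for a in range(1 << 12))
print(sorted(counts.values()))  # [1024, 1024, 1024, 1024]
```

Because XOR with a fixed value is a bijection on the bank index, the even distribution is preserved; only the assignment of addresses to banks is shuffled, which breaks the harmful symmetry with the cache sets.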
What was the reaction to your paper back in 2000?
The paper was well received by the research community. A few months after its publication, microprocessor designers at Sun Microsystems, Inc., now part of Oracle Corp., contacted us. They verified our idea and then integrated it into Sun’s UltraSPARC microprocessors.
The idea has since been used in Intel, AMD, Sun UltraSPARC, and other microprocessors. One reason for its popularity is its simplicity. The XOR operation is very fast and the cost of an XOR gate is very low, so the solution has virtually no overhead, yet it is very effective at solving the problem.
How has this paper from 2000 shaped your work since then?
We moved on to study other research problems in computer hardware design, with a focus on computer memory. We have proposed more solutions to increase memory performance, improve memory energy efficiency, utilize new memory technologies, address security concerns of computer memory, and enhance the reliability of memory. The area is even more exciting than before.
How did you feel about receiving the Test of Time award?
It is a great honor, and we appreciate the research community giving us this honor.