Cache Performance of Indexing Data Structures
The speed of computer processors is growing rapidly relative to the speed of DRAM chips. The cost of a cache miss, measured in processor clock cycles, is therefore rising steadily, and it is quickly becoming a bottleneck for indexing in main memory. We study several indexing data structures on a simulated architecture and show that the relative advantage of cache-conscious indexing structures grows with memory latency. In addition, we show that top-down algorithms for maintaining these structures reduce the total instruction count, leading to a modest improvement in execution time over the corresponding bottom-up algorithms.
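To illustrate the idea behind cache-conscious indexing, the sketch below shows a B+-tree-style node whose layout is sized to fit a single 64-byte cache line, so that the search within a node touches only one line. This is a minimal illustration under assumed constants (64-byte lines, 4-byte keys); it is not the paper's actual structure or maintenance algorithm.

/* Illustrative sketch only: a cache-conscious index node sized to fit one
 * assumed 64-byte cache line. Constants and layout are assumptions for
 * illustration, not the structures evaluated in the paper. */
#include <stdio.h>
#include <stdint.h>
#include <assert.h>

#define CACHE_LINE 64                  /* assumed cache line size in bytes */
#define KEYS_PER_NODE 14               /* 14 * 4-byte keys + bookkeeping   */

typedef struct {
    uint16_t nkeys;                    /* number of keys in use            */
    uint16_t child_base;               /* index of the first child node    */
    int32_t  keys[KEYS_PER_NODE];      /* sorted separator keys            */
} cc_node;                             /* 2 + 2 + 14*4 = 60 bytes <= line  */

/* Return the child slot to descend into for `key` (binary search that
 * stays within a single cache line). */
static int cc_node_search(const cc_node *n, int32_t key)
{
    int lo = 0, hi = n->nkeys;
    while (lo < hi) {
        int mid = (lo + hi) / 2;
        if (key < n->keys[mid]) hi = mid; else lo = mid + 1;
    }
    return lo;                         /* child index = child_base + lo    */
}

int main(void)
{
    assert(sizeof(cc_node) <= CACHE_LINE);
    cc_node n = { .nkeys = 3, .child_base = 0, .keys = {10, 20, 30} };
    printf("key 25 descends into child slot %d\n", cc_node_search(&n, 25));
    return 0;
}

Packing an entire node into one cache line means each level of the search costs at most one cache miss, which is the property whose value grows as memory latency increases.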