Concurrent queue algorithms have been the subject of extensive research. However, any two published queue algorithms often share only minimal overlap in target hardware, evaluation methodology, and the set of competing algorithms considered. A meaningful comparison and an informed choice of queue algorithm based on published data is thus exceedingly difficult.
With the continuing trend towards increasingly heterogeneous systems, it is becoming ever more important not only to evaluate and compare novel and existing queue algorithms across a wide range of target architectures, but also to continuously re-evaluate queue algorithms in light of new architectures and capabilities.
To address this need, we present AnyQ, an evaluation framework for concurrent queue algorithms. We design a set of programming abstractions that enable mapping concurrent queue algorithms and benchmarks to a wide variety of target architectures, as well as a system for testing and benchmarking queue algorithms. We demonstrate the effectiveness of these abstractions by showing that a queue algorithm expressed in a portable, high-level manner can achieve performance comparable to hand-crafted implementations.
Using the developed framework, we investigate concurrent queue algorithm performance across a range of both CPU and GPU architectures. In the hope that it may serve the community as a starting point for building a common repository of concurrent queue algorithms and as a basis for future research, all code and data are made available as open source software at https://github.com/AnyDSL/anyq.
Please note that complete result sets are not available for every combination of queue algorithm and hardware platform. This might be due to one of the following reasons:
TODO: add links/references to algorithm papers
TODO: encourage community to amend more algorithms or provide benchmark results for additional hardware