DistCL Download Registration

DistCL: A Framework for the Distributed Execution of OpenCL Kernels

Abstract: GPUs are used to speed up many scientific computations; however, to use several networked GPUs concurrently, the programmer must explicitly partition work and transmit data between devices. We propose DistCL, a novel framework that distributes the execution of OpenCL kernels across a GPU cluster. DistCL makes multiple distributed compute devices appear to be a single compute device. DistCL abstracts and manages many of the challenges associated with distributing a kernel across multiple devices, including: (1) partitioning work into smaller parts, (2) scheduling these parts across the network, (3) partitioning memory so that each part of memory is written to by at most one device, and (4) tracking and transferring these parts of memory. Converting an OpenCL application to DistCL is straightforward and requires little programmer effort, making it a powerful and valuable tool for exploring the distributed execution of OpenCL kernels. We compare DistCL to SnuCL, which also facilitates the distribution of OpenCL kernels. We also offer some insights: distributed execution favors compute-bound problems and large, contiguous memory accesses. DistCL achieves a maximum speedup of 29.1 and an average speedup of 7.3 when distributing kernels among 32 peers over an InfiniBand cluster.
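
To give a sense of what distribution-friendly code looks like, here is a minimal sketch: an ordinary OpenCL vector-addition kernel of the kind DistCL targets, paired with a hypothetical host-side meta-function that reports which region of each buffer a contiguous subrange of the NDRange touches, so a runtime can ship only those regions to the peer executing that subrange. The kernel is standard OpenCL C; the meta-function's name and signature are illustrative assumptions, not DistCL's actual interface.

/* Ordinary OpenCL C kernel -- each work-item writes exactly one
 * element of c, so writes partition cleanly across devices. */
__kernel void vec_add(__global const float *a,
                      __global const float *b,
                      __global float *c)
{
    size_t i = get_global_id(0);
    c[i] = a[i] + b[i];
}

/* Hypothetical meta-function (illustrative, not DistCL's API):
 * for work-items [start, end) of the 1-D NDRange, report the byte
 * ranges of the buffers that those work-items read and write. */
void vec_add_access(size_t start, size_t end,
                    size_t *read_lo,  size_t *read_hi,
                    size_t *write_lo, size_t *write_hi)
{
    /* Work-item i reads a[i] and b[i] and writes c[i], so the
     * accessed bytes map one-to-one onto the subrange. */
    *read_lo  = start * sizeof(float);
    *read_hi  = end   * sizeof(float);
    *write_lo = start * sizeof(float);
    *write_hi = end   * sizeof(float);
}

Because each element of c is written by at most one device, partial results from different peers can be merged without conflict, which is the memory-partitioning property described in point (3) above.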

If you use DistCL in your work, please cite our MASCOTS 2013 paper:

Tahir Diop, Steven Gurfinkel, Jason Anderson, and Natalie Enright Jerger. "DistCL: A Framework for the Distributed Execution of OpenCL Kernels". In Proceedings of the International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems (MASCOTS). April 2013.

Please register below to access the DistCL source files. Note that your information will remain private.


Name:
Email:
Institution/Affiliation:
Optional Comments:
ALL SOFTWARE IS PROVIDED AS IS AND WITH NO WARRANTY WHETHER EXPRESSED OR IMPLIED.