Single Operation Multiple Data – Data Parallelism at Subroutine Level

Publication date: 1 June 2021
The future of computing relies on parallelism. The multi-core architecture has become the de facto standard for processor design, crossing the boundaries of servers and personal computers to hand-held devices, such as tablets and cellular phones. This status quo, however, clashes with the complexity of writing parallel code, an issue that has been driving a substantial amount of research.
This seminar addresses the particular topic of data parallelism. The efficient implementation of this domain decomposition technique is closely linked to the underlying hardware infrastructure. This ultimately transpires in the available programming models, which must be aware of the target architecture. For instance, data parallelism is usually explored at loop level in both distributed and shared memory environments. However, in the former it requires data distribution strategies that are too expensive, performance-wise, to be used in the latter. Moreover, in the growing field of GPGPU, the reference APIs require data parallelism to be expressed through computational kernels, rather than by annotating loops.
Our approach explores the concept of Single Operation Multiple Data to provide a uniform abstraction for data-parallel computing. The calling of a subroutine in this context spawns multiple execution flows, each operating on a distinct partition of the input dataset. Such computations can be expressed by simply annotating sequential subroutines with data distribution and reduction policies, delegating the lower-level details to dedicated compilers and runtime systems. The presentation will overview the key concepts of the model and how it can be used as a unified abstraction in the data-parallelism context. It will also provide details on some prototype implementations and their performance results.
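The idea can be illustrated with a minimal Python sketch. This is not the seminar's actual prototype or API; the `somd` decorator, `block_partition` helper, and their parameters are hypothetical, standing in for the model's data distribution and reduction annotations. A sequential subroutine is annotated once, and the wrapper partitions the input, spawns one execution flow per partition, and folds the partial results with the declared reduction:

```python
from concurrent.futures import ThreadPoolExecutor
from functools import reduce as fold

# Hypothetical SOMD-style annotation: partition + reduction policies.
def somd(partition, reduction, workers=4):
    """Run a sequential subroutine over partitions of its input in
    parallel, then combine the partial results with `reduction`."""
    def wrap(fn):
        def run(data):
            parts = partition(data, workers)          # data distribution
            with ThreadPoolExecutor(max_workers=workers) as pool:
                partials = list(pool.map(fn, parts))  # one flow per partition
            return fold(reduction, partials)          # reduction policy
        return run
    return wrap

def block_partition(xs, n):
    """Split a list into n contiguous blocks (the last may be shorter)."""
    size = max(1, -(-len(xs) // n))  # ceiling division
    return [xs[i:i + size] for i in range(0, len(xs), size)]

@somd(partition=block_partition, reduction=lambda a, b: a + b)
def total(xs):
    # Ordinary sequential code; the parallelism comes from the annotation.
    s = 0
    for x in xs:
        s += x
    return s

print(total(list(range(100))))  # 4950
```

The point of the sketch is the separation of concerns the model promises: `total` stays purely sequential, while distribution and reduction are declared once at the subroutine boundary and handled by the runtime.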
| Date | 31/10/2012 |
| --- | --- |
| State | Concluded |