The cost of moving data is becoming a dominant factor for performance and energy efficiency in high-performance computing systems. To minimize data movement, applications must consider initial data placement and optimize both vertical data movement within the memory hierarchy and horizontal data transfer between processing units.

While trends in computer architecture suggest that the number of computing cores per node will continue to increase, some long-held programmability assumptions, such as cache coherence across an entire compute node, are likely to no longer hold on future systems. At the same time, the inclusion of high-bandwidth memory and non-volatile storage will further complicate the programming of HPC systems. To address this situation, application developers need to be equipped with new techniques, tools, libraries, and programming abstractions that treat data locality as a first-class concern.

Topics of the DLMCS workshop include, but are not limited to:
Programming abstractions for data locality
Approaches for multi-level locality
Support for data locality in task-based programming models
Global address space approaches and data locality
Language extensions and domain-specific libraries for locality
On-chip networks and data locality
Hardware mechanisms for exploiting locality
Locality in large-scale HPC interconnect networks (inter-node locality)
Advances in cache coherence protocols and modern shared memory systems
Data locality and communication avoidance
Data locality and multi-tier memory systems
Approaches for processing in memory
Dataflow approaches
Abbreviation: DLMCS
City: Granada
Country: Spain
Deadline Paper:
Start Date:
End Date:
Abstract: