Abbreviation: APMM
City: Bologna
Country: Italy
Deadline Paper:
Start Date:
End Date:
Abstract:

With multi- and many-core systems, performance on the microprocessor side will continue to increase according to Moore's Law, at least in the near future. However, the existing performance limitations caused by slow memory access are expected to get worse with multiple cores on a chip, and complex cache hierarchies will make it hard for users to fully exploit the theoretically available performance. In addition, the increasingly hybrid and hierarchical design of compute clusters and high-end supercomputers, as well as the use of accelerator components (GPGPUs by AMD and NVIDIA, Intel Xeon Phi, Intel SCC, integrated GPUs, etc.), adds further challenges to efficient programming of HPC applications.

Therefore, compute- and data-intensive tasks can only exploit the hardware's full potential if both processor and architecture features are taken into account at all stages: from early algorithmic design, via appropriate programming models, to the final implementation.

The APMM Workshop topics of interest include (but are not limited to) the following:

- Hardware-aware, compute- and memory-intensive simulations of real-world problems in computational science and engineering (for example, from applications in electrical, mechanical, civil, or medical engineering).
- Manycore-aware approaches to large-scale parallel simulations in both implementation and algorithm design, including scalability studies.
- Parallelisation on HPC platforms, especially platforms with a hierarchical communication layout, multi-/many-core platforms, NUMA architectures, or accelerator components (Intel Xeon Phi, NVIDIA and AMD GPUs, Tilera, FPGAs, integrated GPUs such as AMD APUs or Intel Haswell/Ivy Bridge).
- Parallelisation with appropriate programming models and tool support for multi-core and hybrid platforms.
- Concepts for exploiting emerging vector extensions of instruction sets.
- Software engineering, code optimisation, and code generation strategies for parallel systems with multi-core processors.
- Tools for performance and cache behaviour analysis (including cache simulation) for parallel systems with multi-core processors.
- Performance modelling and performance engineering approaches for multi-threaded and multi-process applications.