Although parallel programming has been a concern for decades, the new generation of processors has fostered research in the area. Multicore chips have become the standard, yet no clear new solution for programming them has emerged beyond OpenMP, POSIX Threads, and MPI. Moreover, the expected increase in the number of cores per chip has already raised performance issues. For instance, the scalability of parallel programs running on multicore chips, with clear bottlenecks such as a single memory bus, is a concern. The treatment of Non-Uniform Memory Access in current parallel languages and libraries is another open issue that may become crucial as the number of cores grows. Finally, Graphics Processing Units (GPUs) are also emerging as a new source of parallel platforms.

Keywords:

 – Parallel Languages and Libraries.
 – Tools for parallel programming: debuggers, libraries and performance analyzers.
 – Compilers.
 – Scheduling.
 – Influence of the architectural design on the performance of a parallel program.
 – Models for Parallel Programming.
 – Programming of GPUs.
 – Performance Evaluation.
 – Parallel Applications.
Abbreviation
IMPAR
City
Sao Paulo
Country
Brazil
Paper Deadline
Start Date
End Date
Abstract