On the road to exascale, multi-core processors and many-core accelerators/coprocessors are increasingly becoming key building blocks of many computing platforms, including laptops, high-performance workstations, clusters, grids, and clouds. Optimization techniques such as heuristics are often used to improve the performance of these computing resources. On the other hand, many hard problems in a wide range of areas, including engineering design, telecommunications, logistics, and biology, are modeled and tackled using optimization approaches. These approaches fall into two major categories: meta-heuristics (evolutionary algorithms, particle swarm, ant or bee colonies, simulated annealing, Tabu search, etc.) and exact methods (Branch-and-X, dynamic programming, etc.). Nowadays, optimization problems are becoming increasingly large and complex, making parallel computing necessary for their efficient and effective resolution. The design and implementation of parallel optimization methods raise several issues related both to the characteristics of these methods and to those of the new hardware execution environments.

This workshop seeks to provide an opportunity for researchers to present their original contributions on the joint use of advanced (discrete or continuous, single- or multi-objective, static or dynamic, deterministic or stochastic, hybrid) optimization methods and distributed and/or parallel multi/many-core computing, and any related issues.

The POMCO workshop topics include (but are not limited to) the following:

- Parallel models (island, master-worker, multi-start, etc.) for optimization methods revisited for multi-core and/or many-core (MMC) environments
- Parallel mechanisms for the hybridization of optimization algorithms on MMC environments
- Implementation issues of parallel optimization methods on MMC workstations, MMC clusters, MMC grids/clouds, etc.
- Software frameworks for the design and implementation of parallel and/or distributed MMC optimization algorithms
- Computational/theoretical studies reporting results on solving challenging problems using MMC computing
- Energy-aware optimization for/with MMC parallel and/or distributed optimization methods
- Optimization techniques for efficient compiling, scheduling, etc. in MMC environments
Abbreviation: POMCO
City: Innsbruck
Country: Austria