Data-intensive workflows (a.k.a. scientific workflows) are routinely used in most scientific disciplines today, especially in the context of high-performance, parallel, and distributed computing. They provide a systematic way of describing a complex scientific process and rely on sophisticated workflow management systems to execute on a variety of parallel and distributed resources. With the dramatic increase in raw data volume in every domain, they play an even more critical role in helping scientists organize and process their data and leverage HPC or HTC resources, sitting at the interface between end users and computing infrastructures.

This workshop focuses on the many facets of data-intensive workflow management systems, ranging from actual execution to service management and the coordination and optimization of data, service, and job dependencies. The workshop covers a broad range of issues in the scientific workflow lifecycle, including: representation and enactment of data-intensive workflows; design of workflow composition interfaces; workflow mapping techniques that optimize execution on different infrastructures; workflow enactment engines that must cope with failures in the application and execution environment; and a number of computer science problems related to scientific workflows, such as semantic technologies, compiler methods, scheduling, and fault detection and tolerance.

The topics of the workshop include, but are not limited to:

– Big Data analytics workflows
– Data-driven workflow processing (including stream-based workflows)
– Workflow composition, tools, and languages
– Workflow execution in distributed environments (including HPC, clouds, and grids)
– Reproducible computational research using workflows
– Dynamic, data-dependent workflow system solutions
– Exascale computing with workflows
– In situ data analytics workflows
– Interactive workflows (including workflow steering)
– Workflow fault tolerance and recovery techniques
– Workflow user environments, including portals
– Workflow applications and their requirements
– Adaptive workflows
– Workflow optimizations (including scheduling and energy efficiency)
– Performance analysis of workflows
– Workflow debugging
– Workflow provenance
– Workflows in constrained environments (e.g., IoT and edge computing)
Abbreviation
WORKS
City
Dallas
Country
United States
Deadline Paper
Start Date
End Date
Abstract