In the research area of real-time and embedded systems, the comparison
among results achieved by different research efforts is often very
difficult or even impossible due to the lack of common tools, data
sets or methodologies upon which the comparison is based. For example,
different authors use different algorithms for generating random task
sets, different application traces when simulating dynamic real-time
systems, and different simulation engines when simulating scheduling
algorithms. To make the problem worse, different research communities
(e.g., real-time, networking, storage, parallel and distributed
computing, Service-Oriented, GRID and cloud computing) often consider
the same or very similar problems and scenarios (e.g., the performance
of multimedia applications) from different but complementary
perspectives, using very different abstraction models and simulation
engines, which makes it very difficult to build an integrated view and
to compare approaches with each other.

Research in the field of real-time and embedded systems (and not only)
would greatly benefit from the availability of well-engineered,
possibly open tools, simulation frameworks and data sets which may
constitute a common metric for evaluating simulation or experimental
results in the area. It would also be valuable to have a wide set of
reusable data sets or behavioural models coming from realistic
industrial use cases over which to evaluate the performance of novel
algorithms. The availability of such items would make it easier to
compare novel techniques against problems already tackled by others
from the multifaceted viewpoints of effectiveness, overhead,
performance, applicability, and others.

The ambitious goal of the International Workshop on Analysis Tools and
Methodologies for Embedded and Real-time Systems is to start creating
a common ground and a community to collect methodologies, software
tools, best practices, data sets, application models, benchmarks and
any other means of improving the comparability of results in the
current practice of research in real-time and embedded systems. People
from industry are also welcome to contribute realistic data sets or
methods coming from their own experience, which in the medium term may
serve as benchmarks for assessing real-time research efforts.

FOCUS OF THE 2013 EDITION
----------------------------------------------------------------------
The research literature on real-time and embedded systems often
emphasizes task scheduling, neglecting other practical aspects of the
system architecture that may strongly impact the performance of
distributed real-time applications, such as: the presence of shared
caches and of multiple memory controllers and paths to the main memory
in multi-core and multi-processor systems (and particularly in NUMA
architectures); network technologies and scheduling, beyond the
well-known and well-investigated CAN bus, such as point-to-point links
or standard TCP/IP; and disk access and scheduling, along with the
possibility to simulate different existing storage technologies (e.g.,
SSDs vs traditional HDDs) and architectures (e.g., NAS).
Furthermore, it is often very useful if the simulation accounts for
some critical elements of the run-time environment software
architecture, such as: the device driver architecture of the Operating
System; the presence of hypervisors and various virtualization
technologies, along with their architecture for handling interrupts
and communications; and probabilistic models of the impact of factors
that may be outside of the control of the system designer, such as
latency and bandwidth variability over open TCP/IP networks (e.g., the
Internet) or over wireless networks, the impact of virtualization
technologies, workload fluctuations at run time, and so on.

All of the above factors, and surely many others, have a
non-negligible impact on the responsiveness of distributed real-time
systems and applications, and they deserve to be accurately modelled
and simulated when evaluating novel mechanisms and comparing
approaches with each other, in order to achieve realistic results.

The focus of the 2013 edition of WATERS is on tools, benchmarks and
data sets that are useful for a comprehensive analysis and evaluation
of systems in which many of the above factors are considered in an
integrated way (e.g., including an integrated view of computing,
networking and storage aspects).

SCOPE
----------------------------------------------------------------------
The workshop seeks original contributions on methods and tools for
real-time and embedded systems analysis, simulation, modelling and
benchmarking. We look for papers describing well-engineered, highly
reusable, possibly open tools, methodologies, benchmarks and data
sets that can be used by other researchers.

Areas of interest include, but are not limited to:

* Simulation of real-time, distributed and embedded systems
* Simulation of multi-core, many-core and massively parallel and
  distributed systems
* Modelling, analysis and simulation of the various components of
  the run-time environment, including the Operating System, the
  hypervisor, or complex middleware components
* Tools and methodologies for real-time analysis
* Instrumentation, tracing methods and overhead analysis, including
  proper accounting of the overheads due to various virtualization
  technologies
* Power consumption models and experimental data for real-time
  power-aware systems
* Simulation, instrumentation and analysis of complex distributed
  system infrastructures, such as Service-Oriented, GRID and Cloud
  Computing infrastructures, when supporting real-time and QoS-aware
  applications
* Realistic case studies and reusable data sets
* Comparative evaluation of existing algorithms and techniques
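As noted in the introduction, even a step as basic as generating
random task sets is done differently across papers, which hinders
comparability. As one concrete illustration of the kind of small,
reusable tool this call solicits, below is a minimal Python sketch of
the well-known UUniFast algorithm by Bini and Buttazzo for drawing
unbiased random task utilizations; the log-uniform period range in the
usage example is an illustrative assumption, not part of the
algorithm.

    import random

    def uunifast(n, total_util):
        # UUniFast (Bini and Buttazzo): draw n task utilizations that
        # sum to total_util, uniformly distributed over the simplex of
        # valid utilization vectors.
        utilizations = []
        remaining = total_util
        for i in range(1, n):
            next_remaining = remaining * random.random() ** (1.0 / (n - i))
            utilizations.append(remaining - next_remaining)
            remaining = next_remaining
        utilizations.append(remaining)
        return utilizations

    # Usage: a random 5-task set with total utilization 0.8; periods
    # are drawn log-uniformly in [10, 1000] (an illustrative choice).
    for u in uunifast(5, 0.8):
        period = 10 ** random.uniform(1, 3)
        wcet = u * period
        print("U=%.3f  T=%8.2f  C=%8.2f" % (u, period, wcet))

UUniFast avoids the bias introduced by naive normalization of random
values and runs in O(n), which is why it is widely used as a common
baseline in schedulability experiments.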
Abbreviation: WATERS
City: Paris
Country: France