Technological background

Alongside theory and experiment, numerical simulation has become the third pillar of scientific insight. Cost- and time-intensive development processes in industrial and scientific environments can be significantly accelerated, and performed more economically, by simulations on powerful computing systems. Due to the increasing complexity of applications and the needs of modern research, the ever-growing demand for compute power can only be satisfied by continuous advances in computer systems. With technological and economic limits being reached in the design and fabrication of processors, the necessary improvements can no longer be achieved by simply increasing sequential compute capability. In previous processor generations, steady performance gains were driven by rising clock rates, so program developers could rely on automatic performance improvements; as a direct consequence, software efficiency was not a major concern. That situation has now changed completely. With the inevitable shift to multicore technologies, all further performance gains depend on a labor-intensive and tedious adaptation of software and algorithms to the changed conditions and on full exploitation of the available parallel potential.
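The kind of adaptation described above can be illustrated with a minimal, hypothetical sketch: a sequential loop over independent work items is restructured so that the items are distributed across CPU cores. The workload function and all names here are purely illustrative, not taken from any particular application.

```python
# Minimal sketch (hypothetical workload): restructuring a sequential loop
# so that independent iterations can run on separate CPU cores.
from multiprocessing import Pool

def kernel(x):
    """Stand-in for a compute-intensive, independent work item."""
    return x * x

def run_sequential(data):
    # Original form: one core processes every item in order.
    return [kernel(x) for x in data]

def run_parallel(data, workers=4):
    # Restructured form: the same items are distributed across processes.
    with Pool(processes=workers) as pool:
        return pool.map(kernel, data)

if __name__ == "__main__":
    data = list(range(1000))
    # The restructured version must produce identical results.
    assert run_sequential(data) == run_parallel(data)
```

This toy case parallelizes trivially because the iterations are independent; the text's point is precisely that most long-lived application codes do not decompose this cleanly and therefore require deep restructuring.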

For several years now, all major processor vendors have been offering multicore processors as a promising path toward sustained performance growth. Multicore technologies are expected to maintain a constant rate of growth in theoretically available compute capability while also curbing the growth of power requirements. These technologies are now prevalent. With the associated paradigm shift toward parallelism, major responsibility falls on software architects, programmers, and algorithm designers, and it is implicitly assumed that the required level of parallelism is intrinsic to the applications. Many application codes that have evolved over a long time are currently not able to exploit the available parallel potential exhaustively. Immense development work is necessary to restructure, parallelize, and re-implement applications in a hardware-aware manner, and in many cases it is still unclear whether the chosen path and the bet on parallelism will bring the expected improvements. In the current landscape, with a multiplicity of available technologies ranging from multicore CPUs, graphics processing units (GPUs), the STI Cell processor (known from the PlayStation 3), and Field Programmable Gate Arrays (FPGAs) to tiled manycore architectures and manifold coprocessor concepts (e.g. ClearSpeed accelerators), no unified approaches or standards are available. When it comes to programming models and environments, we find vendor-specific approaches and diverging concepts.

For efficient parallel implementations and optimal results, the underlying algorithms and mathematical solution methods have to be carefully adapted to architectural constraints such as fine-grained parallelism and memory or bandwidth limitations, which require additional communication and synchronization. Comprehensive knowledge of the underlying hardware is currently mandatory for application programmers. Hence, there is a strong need for virtualization concepts that free programmers from hardware details while maintaining best performance, and that enable deployment in heterogeneous and hybrid environments.

In the field of high performance computing (HPC) on huge, expensive, and centrally managed supercomputers, the development of parallel applications and adapted software has been a central research subject for more than four decades. However, the knowledge acquired there cannot be transferred to multicore technologies directly, because the emerging technologies are organized differently, with additional levels of parallelism and nested memory subsystems. Moreover, applications are no longer limited to specialized and well-chosen problems: multicore is affecting all areas of industry and science, and there is no way around it. Still, the experience gained so far allows conclusions about the expected complexity of the upcoming work. In the area of numerical simulation, many interesting and promising results have been obtained, but the problems considered all show characteristics similar to those examined in HPC. The proof of general applicability has yet to be given, in particular for desktop applications and embedded systems, where programmers face unfamiliar problems.

Studies conducted so far show that, with this sea change in technology, existing software stacks, libraries, and tools can only be reused with limitations. Besides known bottlenecks such as memory access and bandwidth, the main limiting factors are parallel software and the applied methodologies. Further challenges are the heterogeneity of hardware and the scalability of applications to thousands of cores. Future directions are unclear: only complete redevelopments of software are expected to bring the necessary boost, but time and cost, as well as unresolved problems, make a rapid solution unlikely in general. In nature, nearly all events and processes happen in a weakly or strongly coupled parallel manner, and modern hardware likewise provides comprehensive potential for parallelism. At first glance, only humans, with their sequential way of thinking and of formally describing things, seem to be opposed to parallelism. Perhaps a complete rethinking and new perspectives will yield new insights.

Conference goal

The complex solution process involves many different disciplines. Many applications originate in the engineering sciences or physics, e.g. problems from computational fluid dynamics. Modeling and implementing these problems requires sophisticated mathematical methods and numerical algorithms. Due to the technological complexity, tight collaboration between computer scientists and software developers is necessary to map the implementations to parallel platforms, while electrical engineers and hardware designers have to adapt hardware characteristics to application demands. Considerable expertise in scientific simulation and parallel projects is available in the HPC community and has to be transferred to systems of desktop size. For optimal utilization of resources and the best possible application throughput, the solution methods and the target platforms cannot be investigated separately. In contrast to past years, recent technologies require profound knowledge of the hardware and its structure; conversely, the applied algorithms have to be carefully adapted to the hardware characteristics. A unified approach can only be accomplished in an interdisciplinary context. The conference on “Facing the Multicore-Challenge” is dedicated to this comprehensive task. After a successful first conference in Heidelberg, and given the multitude of open topics, the conference continues in Karlsruhe in 2011.

This conference aims to advance young researchers by addressing their needs in particular. A broad spectrum of interdisciplinary topics shall be covered. The conference shall bring together young scientists doing research in computer science, high performance computing, applied mathematics, or the engineering disciplines, with the main focus on interdisciplinary and international exchange of ideas. Young researchers shall report on their recent experience and discuss current and future research activities. An important part is to gain insight into the problems, approaches, and ideas of other disciplines. As a central topic, persistent problems shall be analyzed and the future perspectives of the technologies evaluated. Based on the conference's broad approach, different angles of view shall be used to outline new solutions and develop new ideas. The presented investigations and studies shall lead to discussions on recent and future developments in multicore technologies. Furthermore, the conference shall serve as a platform for the kick-off of joint projects and research activities.

Conference outline

The conference starts with introductory tutorials on hardware, programming models, parallel applications, and tools. In three focus sessions (hardware and parallel programming; multicore applications; practice, experience, and results), urgent problems shall be elaborated in a modular manner and links between the different disciplines identified. Each focus session is opened by invited lectures that outline problems, side conditions, solution aspects, and objectives, and is closed by short talks from young scientists (graduate or Ph.D. students) reporting on their activities, results, and future research.

The combination of speakers with different subject backgrounds shall broaden the horizon of young researchers. The interaction between new ideas from junior researchers and the wide-ranging experience of experts from neighboring disciplines provides an optimal basis for discussing solutions to recent problems and open topics in a joint approach.

With the intended participation of industry partners engaged in research (hardware vendors and software developers), an interface between academic and industrial research shall be provided. The effects of further technological developments shall be examined from the viewpoints of both users and vendors.

An important part of the conference relies on interaction and open discussion between the audience and the presenters. A further goal of the conference is the integration of graduate students, Ph.D. students, postdoctoral researchers, and leaders of junior research groups.

Due to their complexity, all resulting problems need to be tackled with an interdisciplinary approach that yields a unified big picture, ranging from application-specific motivation, modeling, the design of algorithms and mathematical solution methods, and the corresponding mapping to parallel architectures, to efficient implementations. Only through a joint course of action can the multicore challenge be faced. The interaction of mathematics, computer science, high performance computing, and the engineering disciplines shall be highlighted in particular.

Facing the Multicore-Challenge I in 2010

The Conference for Young Scientists on Facing the Multicore-Challenge, March 17-19, 2010, was generously supported by the
Heidelberg Academy of Sciences / Heidelberger Akademie der Wissenschaften,
Karlstr. 4, 69117 Heidelberg.

The 2010 conference program is still available.