Plenary Session

Monday, September 21, 2020, 9:00-11:00 MSK

Conference Opening
Vladimir Voevodin, Moscow State University, Russia

On the Convergence of HPC and AI [PDF]
Michael Resch, University of Stuttgart, Germany
Supercomputing has seen tremendous development over the last decades. The field that is starting to replace supercomputing as the most visible and debated topic in IT seems to be Artificial Intelligence. In this talk we will present how AI can benefit from the use of supercomputers and how supercomputing centers can act as enablers and accelerators for AI research. We will also present how AI can help to boost the productivity and quality of supercomputing simulations.

Performance Evaluation of SX-Aurora TSUBASA and Its Quantum Annealing-Assisted Application Design [PDF]
Hiroaki Kobayashi, Tohoku University, Japan
In this talk, I will present an overview of our ongoing project entitled "A Quantum-Annealing-Assisted Next Generation HPC Infrastructure and its Applications." This project explores the design space of a new-generation high performance computing infrastructure that incorporates an emerging computing technology into the classical computing platform. SX-Aurora TSUBASA plays the main role in this infrastructure; however, it is assisted by a quantum-annealing machine to accelerate certain classes of application kernels such as optimization and clustering. First, I will show the performance of SX-Aurora TSUBASA using several representative kernels, and then discuss some applications to be implemented on a hybrid computing environment of SX-Aurora TSUBASA and the D-Wave machine.
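The optimization kernels mentioned in the abstract are typically cast as QUBO (quadratic unconstrained binary optimization) problems before being handed to an annealer such as the D-Wave machine. The sketch below is purely illustrative and makes no assumption about the project's actual software stack or data: it enumerates a tiny, made-up QUBO by brute force just to show the form of problem an annealer would sample.

```python
# Illustrative only: a tiny QUBO of the kind a quantum annealer samples.
# minimize  E(x) = sum_{i<=j} Q[i][j] * x_i * x_j,  with x_i in {0, 1}.
# The Q coefficients below are made up, not taken from the talk.
from itertools import product

Q = {
    (0, 0): -1.0, (1, 1): -1.0, (2, 2): -1.0,  # linear (diagonal) terms
    (0, 1):  2.0, (1, 2):  2.0,                # quadratic couplings
}

def energy(x):
    """Evaluate the QUBO objective for a binary assignment x."""
    return sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())

# Brute-force enumeration stands in for the annealer on this toy instance.
best = min(product((0, 1), repeat=3), key=energy)
print("best assignment:", best, "energy:", energy(best))
```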

The Tianhe’s approach to IO-500: Experience & Practice [PDF]
Ruibo Wang, National University of Defense Technology, China
After a brief introduction to the IO-500 list, this talk focuses on our I/O-enhanced system, Tianhe-2E. The brand-new system achieved the top bandwidth performance and took third position on the IO-500 list last year (2019). Our perspective on I/O-enhanced computer system design, and especially the lessons we have learned, is discussed toward the end of the talk.
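For context, the IO-500 ranking combines a bandwidth score (a geometric mean of the IOR phases, in GiB/s) and a metadata score (a geometric mean of the mdtest/find phases, in kIOP/s) into a single figure via their geometric mean. The snippet below only illustrates how that composite score is formed; the phase numbers are placeholders, not Tianhe-2E results.

```python
# Sketch of how an IO-500 composite score is formed.
# All phase results below are placeholders, not measurements from Tianhe-2E.
from math import prod

def geomean(values):
    """Geometric mean of a list of positive phase results."""
    return prod(values) ** (1.0 / len(values))

ior_phases_gib_s = [40.0, 3.5, 55.0, 8.0]                         # bandwidth phases (GiB/s), made up
md_phases_kiops  = [900.0, 120.0, 1500.0, 300.0, 250.0, 80.0]     # metadata phases (kIOP/s), made up

bw_score = geomean(ior_phases_gib_s)     # GiB/s
md_score = geomean(md_phases_kiops)      # kIOP/s
io500_score = (bw_score * md_score) ** 0.5

print(f"BW={bw_score:.2f} GiB/s  MD={md_score:.2f} kIOP/s  IO500 score={io500_score:.2f}")
```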

Less Moore, more Brain [PDF]
Thomas Ludwig, German Climate Computing Centre & Universität Hamburg, Germany
The current deceleration of progress in semiconductor technology, in particular transistor scaling, limits the future increase of computational performance. This becomes a challenge for high performance computing and leading-edge science. The talk will highlight trends of this development and illustrate ways to increase scientific productivity by using concepts of machine learning. As an example, we highlight the situation at the German Climate Computing Centre and its community of earth science researchers.


Monday, September 21, 2020, 11:30-13:30 MSK

Convergence, balance and standardization of systems and solutions development for HPC
Nikolay Mester, Intel

RSC technologies for the efficient solving of hard problems
Alexander Moskovsky, RSC

Data Centric Systems for the Exascale Era [PDF]
Mike Woodacre, CTO HPC, HPE Fellow
This presentation will cover HPE's approach to HPC/AI workflows, including the use of heterogeneous nodes, analytics, and storage, the latest update on the HPE Cray EX system, and specific examples used to help tackle the challenge of COVID-19 workflows combining HPC/AI/HPDA.

Development of the ARM ecosystem for artificial intelligence, cloud and high performance computing [PDF]
Valery Cherepennikov, Huawei

How to speed up your cluster 10x with no rework
Konstantin Mozgovoy, IBM East Europe / Asia

NVIDIA Platform for AI and HPC [PDF]
Anton Dzhoraev, NVIDIA

Dell Solutions for HPC [PDF]
Nikita Stepanov, Dell Technologies

Implementation of the SL-AV global atmosphere model with 10 km resolution [PDF]
Mikhail Tolstykh, Gordey Goyman, Rostislav Fadeev and Vladimir Shashkin


Tuesday, September 22, 2020, 17:00-19:00 MSK

HPC perspectives for Europe - Towards a European exascale ecosystem [PDF]
Thomas Lippert, Jülich Supercomputing Center, Germany
Almost ten years ago, the PRACE partners started with advanced HPC services dedicated to European science. PRACE is supported by the PRACE member states and by the EU through six plus one implementation projects so far, and it has been able to create a common European umbrella over national HPC ecosystems. Remarkably, PRACE now has over 70 partner institutions, 7 high-profile systems, 700 major projects, over 100 petaflop/s of accumulated peak performance and 12,000 trainees in PRACE training courses. Its scientific case from 2018/19 is considered a showcase roadmap of how science (and industry) will benefit from exascale supercomputers. With the EuroHPC Joint Undertaking (EuroHPC JU), the European Union created in 2018 a pan-European organization with 32 members, which will extend the development of a European supercomputing infrastructure at the top of the deployment pyramid, i.e. with systems owned by Europeans at the European level. So far, five petascale systems and three pre-exascale systems designed for the 200 petaflop/s range are in the procurement phase. In addition, EuroHPC is now the main organization for financing hardware and software development at the European level and is strongly committed to enabling the construction of exascale systems based on European technologies. This effort will continue towards the first exascale systems in Europe in 2023. Forschungszentrum Jülich is strongly involved in this process and is leading important European-funded projects towards exascale. The DEEP series of such projects has introduced a new computing paradigm, the Modular Supercomputing Architecture (MSA), which aims at the most cost-effective and efficient exascale computing, with a focus on the simulation of complex systems and AI-based simulations.

BSC: The future HPC will be open [PDF]
Mateo Valero, Barcelona Supercomputing Center, Spain
The combination of technology trends and the exponential growth of data and compute have ushered in an era of software/hardware co-design to meet the major KPIs for the system. Open hardware is required to participate in this new era, and open ISAs like RISC-V enable this capability. Open ISAs provide the final ingredient to produce an open ecosystem for HPC, from software all the way down to the chips. With the momentum behind RISC-V, we believe this ecosystem will dominate the open stack, and we are using HPC as the pathfinder to define this new open world. BSC is leading RISC-V projects across two major thrusts that reflect the major compute components in an HPC system: accelerators and CPUs. BSC is leading EPI Stream 3, a collection of RISC-V accelerators, including a vector accelerator based on the new RISC-V vector extension. This design will evolve into two accelerator chiplets (a vector accelerator and an ML/stencil accelerator) sharing a common I/O and memory subsystem in EPI Pilot2 (submitted proposal). This pilot will produce chiplets that are coherent, scalable and independent, with only European IP, targeting a small-geometry European fab. In addition, BSC is building infrastructure to support future accelerator and CPU designs with the large-scale FPGA emulation testbed called MEEP. We see an integrated future of chiplets and HBM memory. We can leverage the HBM memory in the FPGA, as well as other hard macros, to emulate these systems at scale. This testbed is also defining the generation of vector accelerators beyond EPI, as an example of the capabilities of MEEP as a Software Development Vehicle and pre-silicon validation platform. BSC is also leading several CPU projects to build up the expertise and know-how for the full CPU design cycle, from specification to chip fabrication, the other pillar of general-purpose processing. Finally, we are targeting a high-performance 2-way out-of-order processor design with on- and off-chip coherence in the eProcessor project. These projects focus not only on the hardware design but also on the entire software stack, to enable the entire open HPC ecosystem.

HPC: Where We Are Today and a Look into the Future [PDF]
Jack Dongarra, University of Tennessee, Oak Ridge National Laboratory, and University of Manchester, USA

Conference Close
Vladimir Voevodin, Moscow State University, Russia