
PRACE

Partnership for Advanced Computing in Europe

https://tess.oerc.ox.ac.uk/content_providers/prace
Found 0 materials.
Showing 30 upcoming events out of 34. Found 650 past events.
  • Introduction to PETSc @ MdlS/Idris

    2 - 3 July 2020

    Introduction to PETSc @ MdlS/Idris https://tess.oerc.ox.ac.uk/events/introduction-to-petsc-mdls-idris-66ee6086-a743-480f-8316-152692bae0de

    The Portable, Extensible Toolkit for Scientific Computation (PETSc) is a suite of data structures and routines for the scalable (parallel) solution of scientific applications modeled by partial differential equations (www.mcs.anl.gov/petsc/). It enables researchers to delegate the linear algebra part of their applications to a specialized team, and to test various solution methods. The course will provide the necessary basis to get started with PETSc and give an overview of its possibilities. Presentations will alternate with hands-on sessions (in C or Fortran).

    Learning outcomes: On completion of this course, participants should
    - be able to build and solve simple PDE examples;
    - use and compare different solvers on these examples;
    - be familiar with using the online documentation;
    - be able to easily explore other PETSc possibilities relevant to their application.

    Prerequisites: C or Fortran programming. Notions of linear algebra, as well as notions of MPI, would be an asset.

    https://events.prace-ri.eu/event/891/ 2020-07-02 07:30 UTC - 2020-07-03 15:00 UTC
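As a taste of the "build and solve simple PDE examples" outcome, here is a plain-Python sketch (not PETSc itself — PETSc would manage the distributed matrix and let you swap solvers at runtime) of the 1D Poisson problem -u'' = 1 on (0, 1), discretized with central differences and solved directly with the Thomas algorithm:

```python
# Solve -u''(x) = 1 on (0, 1) with u(0) = u(1) = 0 on a uniform grid.
# Exact solution: u(x) = x (1 - x) / 2, which the scheme reproduces exactly.

def solve_poisson_1d(n):
    """Thomas algorithm for the tridiagonal system of the 1D Poisson problem."""
    h = 1.0 / (n + 1)
    a = [-1.0] * n          # sub-diagonal
    b = [2.0] * n           # diagonal
    c = [-1.0] * n          # super-diagonal
    d = [h * h] * n         # right-hand side f = 1, scaled by h^2
    # Forward elimination
    for i in range(1, n):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    # Back substitution
    u = [0.0] * n
    u[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return u

u = solve_poisson_1d(99)                 # grid point 49 sits at x = 0.5
print(abs(u[49] - 0.125))                # error vs exact u(0.5) = 0.125
```

In PETSc the same problem would be a few calls (assemble a `Mat`, a `Vec`, hand them to a `KSP`), and the solver and preconditioner become runtime options rather than code.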
  • Introduction to Hybrid Programming in HPC @ LRZ

    20 - 21 April 2020

    Introduction to Hybrid Programming in HPC @ LRZ https://tess.oerc.ox.ac.uk/events/introduction-to-hybrid-programming-in-hpc-lrz-a1756d98-1a5f-4840-bf4e-eae5cda4de1b

    Overview: Most HPC systems are clusters of shared memory nodes. Such SMP nodes range from small multi-core CPUs up to large many-core CPUs. Parallel programming may combine the distributed memory parallelization on the node interconnect (e.g., with MPI) with the shared memory parallelization inside each node (e.g., with OpenMP or MPI-3.0 shared memory). This course analyzes the strengths and weaknesses of several parallel programming models on clusters of SMP nodes. Multi-socket multi-core systems in highly parallel environments are given special consideration. MPI-3.0 introduced a new shared memory programming interface, which can be combined with inter-node MPI communication. It can be used for direct neighbor accesses similar to OpenMP or for direct halo copies, and enables new hybrid programming models. These models are compared with various hybrid MPI+OpenMP approaches and pure MPI. Numerous case studies and micro-benchmarks demonstrate the performance-related aspects of hybrid programming. Hands-on sessions are included on both days. Tools for hybrid programming such as thread/process placement support and performance analysis are presented in a "how-to" section. This course provides scientific training in Computational Science and, in addition, scientific exchange among the participants. The course is a PRACE training event, organized by LRZ in cooperation with HLRS, RRZE, and VSC (Vienna Scientific Cluster).

    Agenda & content (preliminary):
    1st day
    09:30 Registration
    10:00 Welcome
    10:05 Motivation
    10:15 Introduction
    10:45 Programming Models - Pure MPI
    11:05 Coffee Break
    11:25 - Topology Optimization
    12:05 Practical (application-aware Cartesian topology)
    12:45 - Topology Optimization (wrap-up)
    13:00 Lunch
    14:00 - MPI + MPI-3.0 Shared Memory
    14:30 Practical (replicated data)
    15:00 Coffee Break
    15:20 - MPI Memory Models and Synchronization
    16:00 Practical (substituting pt-to-pt by shared memory)
    16:45 Coffee Break
    17:00 Practical (substituting barrier synchronization by pt-to-pt)
    18:00 End
    19:00 Social Event at Gasthof Neuwirt (self-paying)
    2nd day
    09:00 Programming Models (continued) - MPI + OpenMP
    10:30 Coffee Break
    10:50 Practical (how to compile and start)
    11:30 Practical (hybrid through OpenMP parallelization)
    13:00 Lunch
    14:00 - Overlapping Communication and Computation
    14:20 Practical (taskloops)
    15:00 Coffee Break
    15:20 - MPI + OpenMP Conclusions
    15:30 - MPI + Accelerators
    15:45 Tools
    16:00 Conclusions
    16:15 Q&A
    16:30 End

    https://events.prace-ri.eu/event/902/ 2020-04-20 07:30 UTC - 2020-04-21 14:30 UTC
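The "direct halo copies" idea mentioned above can be sketched in plain Python, without real MPI (a single-process stand-in for illustration only, not the course's MPI-3.0 shared-memory API): each "rank" owns a slice of a global array plus one ghost cell per side, and a halo exchange fills the ghosts from the neighbours' boundary values.

```python
# Toy 1D domain decomposition with ghost (halo) cells, single process.
# halo_exchange copies boundary values between neighbours, which is what
# MPI point-to-point messages (or MPI-3.0 shared-memory loads) do for real.

def split(global_data, nranks):
    n = len(global_data) // nranks
    # local array = [left ghost] + owned cells + [right ghost]
    return [[0.0] + global_data[r * n:(r + 1) * n] + [0.0] for r in range(nranks)]

def halo_exchange(local_arrays):
    for r in range(len(local_arrays)):
        if r > 0:                            # fill from left neighbour's last owned cell
            local_arrays[r][0] = local_arrays[r - 1][-2]
        if r < len(local_arrays) - 1:        # fill from right neighbour's first owned cell
            local_arrays[r][-1] = local_arrays[r + 1][1]

ranks = split([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], nranks=3)
halo_exchange(ranks)
print(ranks[1])  # [2.0, 3.0, 4.0, 5.0] : ghosts filled from ranks 0 and 2
```

With MPI-3.0 shared memory, ranks on the same node can read neighbour data directly instead of sending messages — the pattern above stays the same, only the copy mechanism changes.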
  • Introduction to ScaLAPACK and MAGMA libraries @ MdlS/Idris

    8 - 9 June 2020

    Introduction to ScaLAPACK and MAGMA libraries @ MdlS/Idris https://tess.oerc.ox.ac.uk/events/introduction-to-scalapack-and-magma-libraries-mdls-idris

    The aim of this course is to introduce the basic usage of the ScaLAPACK and MAGMA libraries.

    ScaLAPACK: ScaLAPACK (Scalable Linear Algebra PACKage) is a library for high-performance dense linear algebra on distributed-memory message-passing computers. It is mostly based on a subset of LAPACK (Linear Algebra PACKage) and BLAS (Basic Linear Algebra Subprograms) routines redesigned for distributed-memory MIMD parallel computers, where all the MPI communications are handled by routines provided by the BLACS (Basic Linear Algebra Communication Subprograms) library. The lecture will mostly cover how to use the PBLAS (Parallel BLAS) and ScaLAPACK libraries for linear algebra problems in HPC:
    - general introduction to the PBLAS and ScaLAPACK libraries;
    - main ideas for decomposing linear algebra problems in parallel programming;
    - examples of basic operations with PBLAS: vector-vector, vector-matrix and matrix-matrix operations;
    - examples of basic operations with ScaLAPACK: inversion and diagonalization;
    - a main problem based on calculating an exponentiation of a matrix.

    MAGMA: In the second part of the course, we present MAGMA (Matrix Algebra on GPU and Multicore Architectures), a dense linear algebra library similar to LAPACK but for hybrid/heterogeneous architectures. We start by presenting basic concepts of GPU architecture and giving an overview of communication schemes between CPUs and GPUs. Then we briefly present hybrid CPU/GPU programming models using the CUDA language. Finally, we present MAGMA and how it can be used to easily and efficiently accelerate scientific codes, particularly those already using BLAS and LAPACK.

    Trainers: Donfack Simplice (MAGMA), Hasnaoui Karim (ScaLAPACK)

    Prerequisites: C or C++ and Fortran programming. Notions of linear algebra, as well as notions of MPI, would be an asset.

    https://events.prace-ri.eu/event/919/ 2020-06-08 07:30 UTC - 2020-06-09 15:00 UTC
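A key concept behind ScaLAPACK's decomposition of linear algebra problems is the block-cyclic data distribution. A minimal plain-Python sketch of the 1D index mapping (illustrative arithmetic only, not the actual BLACS/ScaLAPACK routines, though zero-based block-cyclic mapping works like this per matrix dimension):

```python
# 1D block-cyclic data distribution, as used (per dimension) by ScaLAPACK.
# A global index maps to (owning process, local index) for block size nb
# over nprocs processes; blocks are dealt out round-robin for load balance.

def block_cyclic(global_idx, nb, nprocs):
    block = global_idx // nb            # which block the index falls in
    proc = block % nprocs               # blocks are assigned cyclically
    local_block = block // nprocs       # how many earlier blocks this process holds
    local_idx = local_block * nb + global_idx % nb
    return proc, local_idx

# Distribute 8 elements with block size 2 over 2 processes:
# blocks [0,1] [2,3] [4,5] [6,7] go to processes 0, 1, 0, 1.
for g in range(8):
    print(g, block_cyclic(g, nb=2, nprocs=2))
```

The cyclic dealing is what keeps work balanced in factorizations, where the active part of the matrix shrinks as the algorithm proceeds.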
  • Spring School in Computational Chemistry 2020 @ CSC

    10 - 13 March 2020

    Spring School in Computational Chemistry 2020 @ CSC https://tess.oerc.ox.ac.uk/events/spring-school-in-computational-chemistry-2020-csc

    Description: The Spring School provides a comprehensive, tutorial-style, hands-on, introductory and intermediate-level treatment of the essential ingredients for molecular modeling and computational chemistry using modern supercomputers. The School program is being prepared, but the main content will be similar to last year's and consists of:
    - classical molecular dynamics, intro + hands-on (1 day);
    - electronic structure theory, intro + hands-on (1 day);
    - machine learning in chemistry, intro + hands-on;
    - special topics: e.g. visualization, enhanced sampling techniques, etc.

    The school is a must for graduate students in the field, providing an overview of "what can be calculated and how it should be done", without forgetting the important aspect of network building. Watch a short video of one of our favourite lecturers contemplating this in relation to the 2019 School. To get an idea of the depth in which the topics are covered, take a look at the materials from the 2019 School. The School is already fully booked, but we still accept a few registrations to the waiting list. We will notify participants as early as possible if seats become available.

    Learning outcome: Gain an overview of the two main methods in computational chemistry — molecular dynamics and electronic structure calculations — in connection with related HPC software packages and other useful skills of the trade. The workshop is also suited as an intensive crash course (the first two days) in computational modelling and is expected to be useful for students and researchers also in physics, materials sciences and biosciences. The "Special topics" then build on this foundation.

    Prerequisites: Working knowledge and some work experience in some branch of computational chemistry will be useful, as will basic Linux skills for the hands-on exercises and elementary Python for the machine learning hands-on. A more detailed description of prerequisites and links for self-study are available.

    Programme: The timetable can be seen in the left menu, and materials (uploaded after the School) can be accessed at the bottom of the page. For an overview of the previous event, read the summary blog of the 2019 School. In 2021 the School will likely be organized in mid-March - stay tuned!

    Software used in the School: TBA

    Lecturers:
    - Dr. Filippo Federici Canova, Aalto University, Finland
    - Dr. Mikael Johansson, University of Helsinki, Finland
    - Dr. Luca Monticelli, IBCP (CNRS), Lyon, France
    - Dr. Michael Patzschke, Helmholtz-Zentrum Dresden-Rossendorf, Germany
    - Prof. Patrick Rinke, Aalto University, Finland
    - Dr. Martti Louhivuori, CSC - IT Center for Science, Finland
    - Dr. Atte Sillanpää, CSC - IT Center for Science, Finland
    - Dr. Nino Runeberg, CSC - IT Center for Science, Finland (TBC)

    Language: English
    Price: free of charge

    https://events.prace-ri.eu/event/942/ 2020-03-10 07:00 UTC - 2020-03-13 12:00 UTC
  • Uncertainty quantification @MdlS

    11 - 13 May 2020

    Uncertainty quantification @MdlS https://tess.oerc.ox.ac.uk/events/uncertainty-quantification-mdls-081c2005-af76-434f-95f4-3c40dfdaf8bc

    Keywords: uncertainty in computer simulations, deterministic and probabilistic methods for quantifying uncertainty, OpenTURNS software, Uranie software.

    Content: Uncertainty quantification takes into account the fact that most inputs to a simulation code are only known imperfectly. It seeks to propagate this uncertainty in the data to the results of the simulation. This training will introduce the main methods and techniques by which this uncertainty propagation can be handled without resorting to an exhaustive exploration of the data space. HPC plays an important role in the subject, as it provides the computing power made necessary by the large number of simulations needed. The course will present the most important theoretical tools for probability and statistical analysis, and will illustrate the concepts using the OpenTURNS software.

    Course outline:
    Day 1: Methodology of Uncertainty Treatment - Basics of Probability and Statistics
    - General Uncertainty Methodology (30'): A. Dutfoy
    - Probability and Statistics: Basics (45'): G. Blondet
    - General introduction to OpenTURNS and Uranie (2 x 30'): G. Blondet, J.B. Blanchard
    - Introduction to Python and Jupyter (45'): practical work on distribution manipulations
    Lunch
    - Uncertainty Quantification (45'): J.B. Blanchard
    - OpenTURNS - Uranie practical work, sections 1-2 (1h): G. Blondet, J.B. Blanchard, A. Dutfoy
    - Central tendency and sensitivity analysis (1h): A. Dutfoy
    Day 2: Quantification, Propagation and Ranking of Uncertainties
    - Application to OpenTURNS and Uranie, section 3 (1h): M. Baudin, G. Blondet, F. Gaudier, J.B. Blanchard
    - Estimation of probability of rare events (1h): G. Blondet
    - Application to OpenTURNS and Uranie (1h): M. Baudin, G. Blondet, F. Gaudier, J.B. Blanchard
    Lunch
    - Distributed computing (1h): Uranie (15', F. Gaudier, J.B. Blanchard), OpenTURNS (15', G. Blondet), Salome and OpenTURNS (30', O. Mircescu)
    - Optimisation and calibration (1h): J.B. Blanchard, M. Baudin
    - Application to OpenTURNS and Uranie (1h): J.B. Blanchard, M. Baudin
    Day 3: HPC Aspects - Meta-models
    - HPC aspects specific to uncertainty treatment (1h): K. Delamotte
    - Introduction to meta-models (validation, over-fitting) - polynomial chaos expansion (1h): J.B. Blanchard, C. Mai
    - Kriging meta-model (1h): C. Mai
    Lunch
    - Application to OpenTURNS and Uranie (2h): C. Mai, G. Blondet, J.B. Blanchard
    - Discussion / participants' projects

    Learning outcomes: Learn to recognize when uncertainty quantification can bring new insight to simulations. Know the main tools and techniques to investigate uncertainty propagation. Gain familiarity with modern tools for actually carrying out the computations in an HPC context.

    Prerequisites: Basic knowledge of probability will be useful, as will basic familiarity with Linux.

    https://events.prace-ri.eu/event/931/ 2020-05-11 07:30 UTC - 2020-05-13 15:00 UTC
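The core idea of uncertainty propagation can be sketched in a few lines of plain Python (a toy Monte Carlo example, not OpenTURNS or Uranie; the model and its input distributions are made up for illustration): draw the uncertain inputs from their distributions, run the model on each draw, and summarize the resulting output distribution.

```python
import random
import statistics

# Toy "simulation code": a response that divides an uncertain load by an
# uncertain stiffness. Real codes are far costlier, hence the need for HPC.
def model(load, stiffness):
    return load / stiffness

random.seed(42)

# Uncertain inputs: load ~ N(1000, 50), stiffness ~ N(200, 10).
samples = []
for _ in range(10_000):
    load = random.gauss(1000.0, 50.0)
    stiffness = random.gauss(200.0, 10.0)
    samples.append(model(load, stiffness))

mean = statistics.fmean(samples)
sd = statistics.stdev(samples)
print(f"output mean ~ {mean:.2f}, std dev ~ {sd:.2f}")
# Each model run is independent: this embarrassingly parallel loop is
# exactly the step that maps well onto HPC resources.
```

The course's more refined techniques (polynomial chaos, kriging meta-models, rare-event estimation) exist largely to reduce how many such model runs are needed.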
  • Systems Workshop: Programming MareNostrum 4 @ BSC

    26 - 27 February 2020

    Systems Workshop: Programming MareNostrum 4 @ BSC https://tess.oerc.ox.ac.uk/events/systems-workshop-programming-marenostrum-4-bsc-79e95f4b-055f-4b64-a716-5250d4731892

    The registration to this course is now open. Please bring your own laptop. All the PATC courses at BSC are free of charge.

    Course convener: David Vicente

    Lecturers: David Vicente, Javier Bartolomé, Jorge Rodríguez, Carlos Tripiana, Oscar Hernandez, Félix Ramos, Cristian Morales, Francisco González, Ricard Zarco, Helena Gómez, Pablo Ródenas, Gaurav Saxena and Maicon Faria.

    Objectives: The objective of this course is to present to potential users the new configuration of MareNostrum and an introduction on how to use the new system (batch system, compilers, hardware, MPI, etc.). It will also provide an introduction to the RES and PRACE infrastructures and how to get access to the supercomputing resources available.

    Learning outcomes: Students who finish this course will know the internal architecture of the new MareNostrum, how it works, the ways to get access to this infrastructure, and also some information about optimization techniques for its architecture.

    Level: INTERMEDIATE - for trainees with some theoretical and practical knowledge; those who finished the beginners course.

    Prerequisites: Any potential user of an HPC infrastructure will be welcome.

    Agenda:
    DAY 1 (Feb. 26) 09:00 - 17:00
    Session 1 / 09:00 - 13:00 (2:45h lectures, 0:45h practical)
    09:00 - 09:30 Introduction to BSC, PRACE PATC and this training (David Vicente)
    09:30 - 10:30 MareNostrum 4 - the view from the System Administration group (Javier Bartolomé)
    10:30 - 11:00 COFFEE BREAK
    11:00 - 11:45 How to use MN4 - Basics: batch system, file systems, compilers, modules, DT, BSC commands (Félix Ramos, Francisco González, Ricard Zarco, Helena Gómez)
    11:45 - 12:30 Hands-on I (Félix Ramos, Francisco González, Ricard Zarco, Helena Gómez)
    12:30 - 13:00 Deep Learning and Big Data tools on MN4 (Carlos Tripiana)
    13:00 - 14:15 LUNCH (not hosted)
    Session 2 / 14:15 - 17:00 (2:15h)
    14:15 - 15:15 How to use MN4 - Parallel programming: OpenMP, Hands-on II (Jorge Rodríguez, Maicon Saul Faria)
    15:15 - 16:00 How to use MN4 - Parallel programming: MPI (Pablo Ródenas, Gaurav Saxena)
    16:00 - 16:30 COFFEE BREAK
    16:30 - 17:00 How to use MN4 - Parallel programming: MPI Hands-on III (Pablo Ródenas, Gaurav Saxena)
    DAY 2 (Feb. 27) 09:00 - 13:00
    Session 3 / 09:00 - 13:00 (2:00h lectures, 1:30h practical)
    09:00 - 09:30 How can I get resources from you? - RES (David Vicente)
    09:30 - 10:00 How can I get resources from you? - PRACE (Cristian Morales)
    10:00 - 10:30 HPC Architectures (David Vicente)
    10:30 - 11:00 COFFEE BREAK
    11:00 - 12:00 Containers on HPC (Óscar Hernández)
    12:00 - 13:00 Debugging on MareNostrum, from GDB to DDT (Óscar Hernández, Cristian Morales)
    END OF COURSE

    https://events.prace-ri.eu/event/943/ 2020-02-26 08:00 UTC - 2020-02-27 12:00 UTC
  • Introduction for Simulation Environments for Life Sciences @ BSC

    11 - 12 March 2020

    Introduction for Simulation Environments for Life Sciences @ BSC https://tess.oerc.ox.ac.uk/events/introduction-for-simulation-environments-for-life-sciences-bsc-ec25f9cc-4359-48dd-aee9-fb93506889fa

    The registration to this course is now open. Please bring your own laptop. All the PATC courses at BSC are free of charge.

    Objectives: The course will familiarize attendees with simulation technologies used in the Life Sciences and their specific adaptation to HPC environments.

    Course convener: Josep Gelpi

    Detailed outline:
    - Introduction to biomolecular simulation
    - Coarse-grained and atomistic simulation strategies
    - Automated setup for simulation
    - HPC specifics: large-scale parallelization, use of GPUs
    - Storage and strategies for large-scale trajectory analysis

    Learning outcomes: Set up, execute, and analyze standard simulations in an HPC environment.

    Level: (All courses are designed for specialists with at least a 1st-cycle degree or similar background experience.) INTERMEDIATE: for trainees with some theoretical and practical knowledge.

    https://events.prace-ri.eu/event/954/ 2020-03-11 08:00 UTC - 2020-03-12 17:00 UTC
  • Petaflop System Administration; MareNostrum 4 @ BSC

    18 - 19 March 2020

    Petaflop System Administration; MareNostrum 4 @ BSC https://tess.oerc.ox.ac.uk/events/petaflop-system-administration-marenostrum-4-bsc-3be57377-8110-481b-ab23-2130e24bae44

    The registration to this course is now open. Please bring your own laptop. All the PATC courses at BSC are free of charge.

    Course convener: Javier Bartolome, Systems Group Manager, Operations - System Administration, BSC

    Objectives: Explain the different components of which MareNostrum 4 is composed, which design decisions were taken and why, and how system administration is carried out on this petaflop system.

    Learning outcomes: Students will learn how MareNostrum 4 is organized and how it works. This can provide insights and ideas about how to manage clusters of thousands of nodes in an HPC or non-HPC environment.

    Level: INTERMEDIATE, for trainees with some theoretical and practical knowledge; those who finished the beginners course.

    Prerequisites: Experience in Linux system administration is required.

    Lecturer: Javier Bartolome, Systems Group Manager, Operations - System Administration, BSC

    https://events.prace-ri.eu/event/953/ 2020-03-18 08:00 UTC - 2020-03-19 12:00 UTC
  • Introduction to CUDA Programming @ BSC

    30 March - 2 April 2020

    Introduction to CUDA Programming @ BSC https://tess.oerc.ox.ac.uk/events/introduction-to-cuda-programming-bsc-a6735828-29e4-4301-afd2-4b51d1a9134f

    All PATC courses at BSC are free of charge. PLEASE BRING YOUR OWN LAPTOP.

    Local web page: This course provides a very good introduction to the PUMPS Summer School run jointly with NVIDIA, also at Campus Nord, Barcelona. For further information visit the school website, as this school has an attendee selection process. You may also be interested in our Introduction to OpenACC course.

    Convener: Antonio Peña, Computer Sciences Senior Researcher, Accelerators and Communications for High Performance Computing, BSC

    Objectives: The aim of this course is to provide students with knowledge and hands-on experience in developing application software for processors with massively parallel computing resources. In general, we refer to a processor as massively parallel if it can complete more than 64 arithmetic operations per clock cycle. Many commercial offerings from NVIDIA, AMD, and Intel already offer such levels of concurrency. Effectively programming these processors requires in-depth knowledge of parallel programming principles, as well as the parallelism models, communication models, and resource limitations of these processors. The target audience of the course is students who want to develop exciting applications for these processors, as well as those who want to develop programming tools and future implementations for these processors.

    Level:
    INTERMEDIATE: for trainees with some theoretical and practical knowledge; those who finished the beginners course.
    ADVANCED: for trainees able to work independently and requiring guidance for solving complex problems.

    Prerequisites: Basics of C programming and concepts of parallel processing will help, but are not critical to follow the lectures.

    https://events.prace-ri.eu/event/955/ 2020-03-30 07:00 UTC - 2020-04-02 16:00 UTC
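The massively parallel execution model the course teaches can be mimicked in plain Python (illustrative only; a real CUDA kernel runs one lightweight GPU thread per element): the kernel body is a function of a thread index, and the "launch" applies it to every index of the classic SAXPY operation y = a*x + y.

```python
# SAXPY (y = a*x + y) written in a CUDA-like style: a per-element kernel
# plus a "grid launch". On a GPU, each index i would be its own hardware
# thread; here the launch is a sequential stand-in for that grid.

def saxpy_kernel(i, a, x, y):
    # Kernel body for thread i: one independent fused multiply-add.
    y[i] = a * x[i] + y[i]

def launch(kernel, n, *args):
    # Stand-in for a CUDA grid launch: iterate over all thread indices.
    for i in range(n):
        kernel(i, *args)

x = [1.0, 2.0, 3.0, 4.0]
y = [10.0, 20.0, 30.0, 40.0]
launch(saxpy_kernel, len(x), 2.0, x, y)
print(y)  # [12.0, 24.0, 36.0, 48.0]
```

Because no iteration depends on another, all of them may run at the same time — which is precisely what lets a >64-operations-per-cycle processor be kept busy.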
  • Introduction to OpenACC @ BSC

    3 April 2020

    Introduction to OpenACC @ BSC https://tess.oerc.ox.ac.uk/events/introduction-to-openacc-bsc-88807e54-88f1-4947-a6f8-dd4e2c66b6f0

    All PATC courses at BSC are free of charge. PLEASE BRING YOUR OWN LAPTOP.

    Local web page: This is an expansion of the topic "OpenACC and other approaches to GPU computing" covered in this year's and last year's editions of the Introduction to CUDA Programming. This course provides a very good introduction to the PUMPS Summer School run jointly with NVIDIA, also at Campus Nord, Barcelona. For further information visit the school website.

    Convener: Antonio Peña, Computer Sciences Senior Researcher, Accelerators and Communications for High Performance Computing, BSC

    Objectives: As an NVIDIA GPU Center of Excellence, BSC and UPC are deeply involved in research and outreach activities around GPU computing. OpenACC is a high-level, directive-based programming model for GPU computing. It is a very convenient language to leverage GPU power with minimal code modifications, making it the preferred option for non computer scientists. This course will cover the topics needed to get started with GPU programming in OpenACC, as well as some advanced topics. The target audience of the course is students who want to develop exciting applications for these processors, as well as those who want to develop programming tools and future implementations for these processors.

    Level: BEGINNERS: for trainees from different backgrounds or with very little knowledge.

    https://events.prace-ri.eu/event/956/ 2020-04-03 07:00 UTC - 2020-04-03 15:00 UTC
  • Heterogeneous Programming on GPUs with MPI + OmpSs @ BSC

    4 - 5 March 2020

    Heterogeneous Programming on GPUs with MPI + OmpSs @ BSC https://tess.oerc.ox.ac.uk/events/heterogeneous-programming-on-gpus-with-mpi-ompss-bsc-41b0ab3c-13c9-4b4e-926c-8256af6b7e0c

    The registration to this course is now open. Please bring your own laptop. All the PATC courses at BSC are free of charge.

    Objectives: The tutorial will motivate the audience on the need for portable, efficient programming models that put less pressure on program developers while still achieving good performance for clusters and clusters with GPUs. More specifically, the tutorial will:
    - introduce the hybrid MPI/OmpSs parallel programming model for future exascale systems;
    - demonstrate how to use MPI/OmpSs to incrementally parallelize/optimize MPI applications on clusters of SMPs, and to leverage CUDA kernels with OmpSs on clusters of GPUs.

    Level:
    INTERMEDIATE: for trainees with some theoretical and practical knowledge; those who finished the beginners course.
    ADVANCED: for trainees able to work independently and requiring guidance for solving complex problems.

    Requirements: Good knowledge of C/C++. Basic knowledge of CUDA/OpenCL. Basic knowledge of Paraver/Extrae.

    Learning outcomes: Students who finish this course will be able to develop benchmarks and simple applications with the MPI/OmpSs programming model to be executed on clusters of GPUs.

    https://events.prace-ri.eu/event/951/ 2020-03-04 08:30 UTC - 2020-03-05 16:30 UTC
  • Introduction to Heterogeneous Memory Usage @ BSC

    25 February 2020

    Introduction to Heterogeneous Memory Usage @ BSC https://tess.oerc.ox.ac.uk/events/introduction-to-heterogeneous-memory-usage-bsc

    The registration to this course is now open. All PATC courses at BSC are free of charge. PLEASE BRING YOUR OWN LAPTOP.

    Convener: Antonio Peña, Computer Sciences Senior Researcher, Accelerators and Communications for High Performance Computing, BSC

    Objectives: The objective of this course is to learn how to use systems with more than one memory subsystem. We will see the different options for using Intel's KNL memory subsystems and systems equipped with Intel's Optane technology.

    Learning outcomes: Students who finish this course will be able to leverage applications using multiple memory subsystems.

    Level: INTERMEDIATE: for trainees with some theoretical and practical knowledge; those who finished the beginners course.

    Prerequisites: Basic skills in C programming.

    Agenda:
    09:00 - 09:30 Registration
    09:30 - 10:30 Introduction to Memory Technologies (Petar Radojkovic)
    10:30 - 11:00 Coffee Break
    11:00 - 12:30 Use of Heterogeneous Memories (Antonio J. Peña)
    12:30 - 13:00 Hands-on: Environment Setup (Marc Jordà)
    13:00 - 14:30 Lunch
    14:30 - 18:00 Hands-on (Marc Jordà)

    https://events.prace-ri.eu/event/913/ 2020-02-25 08:00 UTC - 2020-02-25 17:00 UTC
  • Performance portability for GPU application using high-level programming approaches with Kokkos @ MdlS/Idris

    6 - 7 April 2020

    Performance portability for GPU application using high-level programming approaches with Kokkos @ MdlS/Idris https://tess.oerc.ox.ac.uk/events/performance-portability-for-gpu-application-using-high-level-programming-approaches-with-kokkos-mdls-idris

    When developing a numerical simulation code with high performance and efficiency in mind, one is often compelled to accept a trade-off between using a native-hardware programming model (like CUDA or OpenCL), which has become tremendously challenging, and losing some cross-platform portability. Porting a large existing legacy code to a modern HPC platform and developing a new simulation code are two different tasks that may both benefit from a high-level programming model that abstracts the low-level hardware details. This training presents existing high-level programming solutions that preserve, as far as possible, performance, maintainability and portability across the vast diversity of modern hardware architectures (multicore CPU, manycore, GPU, ARM, ...) together with software development productivity. We will provide an introduction to the high-level C++ programming model Kokkos (https://github.com/kokkos) and show basic code examples to illustrate the following concepts through hands-on sessions:
    - hardware portability: design an algorithm once and let the Kokkos back-end (OpenMP, CUDA, ...) derive an efficient low-level implementation;
    - efficient architecture-aware memory containers: what is a Kokkos::View;
    - revisiting fundamental parallel patterns with Kokkos: parallel for, reduce, scan, ...;
    - exploring some mini-applications.

    Several detailed examples in C/C++/Fortran will be used in hands-on sessions on the high-end hardware platform Jean Zay (http://www.idris.fr/jean-zay/), equipped with Nvidia Tesla V100 GPUs.

    Prerequisites: Some basic knowledge of the CUDA programming model and of C++.

    https://events.prace-ri.eu/event/892/ 2020-04-06 07:30 UTC - 2020-04-07 15:00 UTC
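The three fundamental parallel patterns listed above (parallel for, reduce, scan) have simple sequential analogues. A plain-Python sketch of their semantics (not Kokkos syntax; in Kokkos each would be a C++ `Kokkos::parallel_for` / `parallel_reduce` / `parallel_scan` taking a lambda, dispatched to the chosen back-end):

```python
from functools import reduce
from itertools import accumulate

data = [1, 2, 3, 4, 5]

# parallel for: apply an independent body to every index (here: squaring).
squared = [x * x for x in data]          # iterations carry no dependencies

# parallel reduce: combine all elements with an associative operation;
# associativity is what lets a runtime split the work across threads.
total = reduce(lambda a, b: a + b, squared, 0)

# parallel scan: prefix sums, i.e. every partial result of the reduction.
prefix = list(accumulate(squared))

print(squared)  # [1, 4, 9, 16, 25]
print(total)    # 55
print(prefix)   # [1, 5, 14, 30, 55]
```

The portability promise is that the same pattern expression runs on OpenMP threads or CUDA blocks without the algorithm being rewritten.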
  • Tools and techniques to quickly improve performances of HPC applications in Solid Earth@Cineca

    27 - 29 May 2020

    Tools and techniques to quickly improve performances of HPC applications in Solid Earth@Cineca https://tess.oerc.ox.ac.uk/events/tools-and-techniques-to-quickly-improve-performances-of-hpc-applications-in-solid-earth-cineca

    This course shows how to improve the overall performance of a Solid Earth code currently in use at the ChEESE Center of Excellence (an H2020 project). First, parallel performance profiling tools will be applied to the initial version of the code to find the so-called performance bottlenecks. Starting from the profiling analysis, the course will show how and where to intervene with respect to the hardware characteristics of the HPC machine used for the investigation. We will also show how debugging tools are useful in the development/optimization phase to eliminate any bugs introduced when writing (or redesigning) new parts of the code. Finally, it will be shown how to improve the overall performance of the code with respect to other important factors such as I/O, vectorization, etc.

    Skills: At the end of the course the student will be able to:
    - use a concrete methodology to improve the performance of a Solid Earth code already in use in the context of the ChEESE project;
    - find and solve the main bottlenecks of an application with respect to appropriate computational metrics and the machine used;
    - use appropriate debugging tools to eliminate any bugs that may arise during the development/optimization phase.

    Target audience: Researchers in Solid Earth interested in learning and using the techniques and related tools that may allow them to improve the performance of their code on current HPC architectures in the shortest possible time.

    Prerequisites:
    - Basic knowledge of Linux/UNIX.
    - Knowledge of C, Fortran, MPI or OpenMP is recommended.
    - Notions of parallel computing techniques and algorithms for Solid Earth applications.

    Grant: Lunch for the three days will be offered to all participants, and some grants are available. The only requirement to be eligible is to not be funded by your institution to attend the course and to work or live in an institute outside the Roma area. The grant will be 300 euros for students working and living outside Italy and 150 euros for students working and living in Italy (outside Roma). Some documentation will be required, and the grant will be paid only after a certified attendance of at least 80% of the lectures. Further information about how to request the grant will be provided at the confirmation of the course, about 3 weeks before the starting date.

    Coordinating teacher: Dr. P. Lanucara

    https://events.prace-ri.eu/event/973/ 2020-05-27 07:00 UTC - 2020-05-29 16:00 UTC
  • High Performance Molecular Dynamics@CINECA

    6 - 8 April 2020

    High Performance Molecular Dynamics@CINECA https://tess.oerc.ox.ac.uk/events/high-performance-molecular-dynamics-cineca-97caa064-4e86-4140-b421-eaa0180632b1

    Description: This course is designed for users who wish to run classical molecular dynamics programs on modern supercomputers. By better understanding HPC infrastructures and the algorithms used to exploit them, the course aims to give researchers the tools to run simulations as efficiently as possible on current and future supercomputers. During the course, students will be guided through preparing and running simulations on Cineca's HPC systems with GROMACS, one of the most common and efficient molecular dynamics applications available.

    Skills: By the end of the course each student should be able to:
    - comprehend the basic principles of classical molecular dynamics (MD)
    - understand the common algorithms for the optimization and parallelization of MD applications, and the factors limiting their performance and parallel scaling
    - run and optimize MD simulations on advanced, multicore architectures equipped with both conventional processors and accelerators such as NVIDIA GPUs
    - design a simulation project for a computing resource application

    Target audience: Scientists with research interests in classical molecular dynamics in computational biology, chemistry or biophysics.

    Prerequisites: Research interest in classical molecular dynamics with a focus on the simulation of biomolecular systems. Basic knowledge of UNIX and concepts of parallel computing.

    Grant: A grant of 300 EUR (for foreign students) or 150 EUR (for Italian students) will be available for participants not funded by their institution and not working in the Bologna area. Some documentation will be required, and the grant will be paid only after certified attendance of at least 80% of the lessons, about 1 month after the course ends. For further information on how to apply for the grant, please wait for the confirmation email, sent about 3 weeks before the lessons begin, stating that you have been accepted to the course. Lunch on the 3 days will be provided by Cineca.

    Coordinating Teacher: Dr. Andrew Emerson https://events.prace-ri.eu/event/974/
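    As a taste of the "basic principles of classical MD" listed above, here is a minimal velocity-Verlet integrator for a 1D harmonic oscillator in pure Python. This is an illustrative sketch only, not GROMACS code; the function and parameter names are made up for this example:

    ```python
    def velocity_verlet(x, v, force, dt, steps, m=1.0):
        """Integrate Newton's equations of motion with the velocity-Verlet
        scheme, the same family of integrators used by MD engines."""
        f = force(x)
        for _ in range(steps):
            x += v * dt + 0.5 * (f / m) * dt * dt    # position update
            f_new = force(x)
            v += 0.5 * (f + f_new) / m * dt          # velocity update (averaged force)
            f = f_new
        return x, v

    # Harmonic oscillator: F = -k x with k = 1, so the period is 2*pi.
    k = 1.0
    x, v = velocity_verlet(1.0, 0.0, lambda x: -k * x, dt=0.001, steps=6283)

    # After one full period the trajectory returns close to (x=1, v=0),
    # and the total energy E = v^2/2 + k*x^2/2 stays near its initial value 0.5.
    energy = 0.5 * v * v + 0.5 * k * x * x
    ```

    The point the course makes at scale holds even here: the integrator is simple, and the cost lives in the force evaluation, which is what HPC parallelization targets.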
  • 16th Advanced School on Parallel Computing @ Cineca

    24 - 28 February 2020

    16th Advanced School on Parallel Computing @ Cineca https://tess.oerc.ox.ac.uk/events/16th-advanced-school-on-parallel-computing-cineca

    Application deadline: January 24th, 2020

    Description: Heterogeneous architectures, with nodes featuring accelerator cards or sockets, are taking an important share of the HPC market, given their superiority in terms of flops/watt with respect to conventional CISC and RISC architectures. To be effective on heterogeneous architectures, applications usually require substantial refactoring and adaptation. Many programming paradigms are available, some vendor-specific and some defined by an open standard, but with no clear winner yet (unlike message passing, where MPI is available for all network technologies). This school focuses on software development techniques for implementing new HPC applications, and refactoring existing ones, in the era of heterogeneous, energy-efficient, massively parallel architectures on the road to exascale, with theoretical lectures and hands-on sessions on the most promising programming techniques and paradigms for accelerated computing. Software engineering techniques and high-productivity languages will complement lectures on parallel programming and porting to new architectures, to allow the implementation of applications that can be maintained across complex and fast-evolving HPC architectures.

    Topics:
    - Heterogeneous architectures
    - Elements of software engineering
    - Parallel programming techniques for accelerated computing, including CUDA, OpenMP, OpenACC, SYCL
    - Parallel programming techniques for massively parallel applications
    - Models for applications integrating the MPI, OpenMP, OpenACC, CUDA and CUDA Fortran paradigms

    Target audience: The school is aimed at PRACE users, final-year master students, PhD students, and young researchers in computational sciences and engineering, with different backgrounds, interested in applying emerging high performance computing technologies to their research.

    Pre-requisites: Good knowledge of parallel programming with MPI and/or OpenMP, knowledge of the Fortran and C languages. Basic knowledge of parallel computer architectures.

    Admitted students: Attendance is free. A grant of 300 EUR (for students working abroad) or 150 EUR (for students working in Italy) will be available for participants not funded by their institution and not working or living in the Bologna area. Documentation will be required. Lunches for the 5 days will be provided by Cineca. Each student will be given two months' access to Cineca's supercomputing resources. The number of participants is limited to 25 students. Applicants will be selected according to their experience, qualifications and scientific interest, based on what is written in the registration form. For privacy reasons, both admitted and non-admitted students will be contacted via email on Friday, January 31st. If you applied and do not receive the email, please write to corsi.hpc@cineca.it.

    Acknowledgement: The support of CINI for the software engineering module is gratefully acknowledged. https://events.prace-ri.eu/event/976/
  • Modern Scientific C++ @ MdlS/Idris

    20 - 23 April 2020

    Modern Scientific C++ @ MdlS/Idris https://tess.oerc.ox.ac.uk/events/modern-scientific-c-mdls-idris

    In recent years, the C++ language has evolved. Sticking to the 1998/2003 standard means missing many new features that make modern C++ more robust, more powerful, and often more readable. This training introduces the syntactic novelties that ease the writing of code, the modernized best practices that avoid the language's traps, and a programming style that is easier to parallelize. It is intended for scientific programmers who want to discover "modern" C++ (the 2011 to 2020 standards) and adapt their programming practices accordingly.

    Detailed program:

    Day 1 (Victor ALESSANDRINI): Review of some basic C++ concepts; overview of C++ as a software development environment, with two major software engineering strategies: object-oriented programming and generic programming. Object-oriented programming: the virtual function mechanism enabling late binding at execution time (software modules calling newly written routines without recompilation), with examples of the power and relevance of virtual functions. Function objects as extended pointers to functions, with examples. Introduction to generic programming: function templates, with examples.

    Day 2 (Victor ALESSANDRINI): Generic programming: class templates, with examples. The core of generic programming: using function and class templates to parameterize behavior rather than just object types. Overview of the Standard Template Library (STL): strategies, containers, iterators, algorithms. Concurrency in the standard C++ library: overview of the thread class, and discussion of the new threading interfaces (futures, promises) enabling easy synchronization of simple concurrency patterns.

    Day 3 (David CHAMONT), modern C++ syntax: Basic features: type inference, stronger typing, user-defined literals, uniform initialization, rvalue references, move semantics. Object features: member variable initialization, delegated and inherited constructors, explicit deletion and overriding of member functions. Generic features: static assertions, variable templates and type aliasing, constant expressions, variadic templates, perfect forwarding. Functional features: lambda functions.

    Day 4 (David CHAMONT), modern C++ library: Basic tools: smart pointers (unique_ptr, shared_ptr), new collections (array, unordered maps), views (span, string_view), wrapper types (function, ref). Generic tools: type traits, SFINAE, concepts. Functional tools: algebraic types (tuple, variant), monadic types (optional, future), ranges. Optimization and parallelization: beyond double, random numbers, chrono, execution policies, structures of arrays, co-routines.

    Prerequisites: Knowledge of classical (pre-11) C++ syntax, and basic experience in programming with C++ objects. Participants should be familiar with the following concepts:
    - namespaces
    - references, and how they differ from pointers
    - basic memory allocation (new and delete)
    - properties of C++ functions (default values, overloading)
    - basic understanding of error handling (exceptions)
    - C++ classes, programming with objects, public derivation of classes
    - basic knowledge of templates
    https://events.prace-ri.eu/event/977/
  • Performance Analysis and Tools @ BSC

    2 - 3 March 2020

    Performance Analysis and Tools @ BSC https://tess.oerc.ox.ac.uk/events/performance-analysis-and-tools-bsc-0184c0d3-9502-459b-9b25-7aab3b1ce61b

    Registration for this course is now open. Please bring your own laptop. All the PATC courses at BSC are free of charge.

    Course convener: Judit Gimenez, Tools Group Manager, Computer Sciences - Performance Tools, BSC

    Objectives: The objective of this course is to learn how the Paraver and Dimemas tools can be used to analyze the performance of parallel applications, to become familiar with their usage, and to learn how to instrument applications with Extrae.

    Learning outcomes: Students who finish this course will have a basic knowledge of the BSC performance tools. They will be able to apply the same methodology to their own applications, identifying potential bottlenecks and getting hints on how to improve application performance.

    Level: INTERMEDIATE - for trainees with some theoretical and practical knowledge. (All courses are designed for specialists with at least a finished 1st-cycle degree.)

    Prerequisites: Good knowledge of C/C++; basic knowledge of CUDA/OpenCL; basic knowledge of MPI, OpenMP https://events.prace-ri.eu/event/950/
  • Fortran for Scientific Computing @ HLRS

    20 - 24 April 2020

    Fortran for Scientific Computing @ HLRS https://tess.oerc.ox.ac.uk/events/fortran-for-scientific-computing-hlrs-1cb217e6-3571-40ec-8a2e-9706c485c94d

    Overview: This course is aimed at scientists and students who want to learn (sequential) programming of scientific applications with Fortran. The course teaches the newest Fortran standards. Hands-on sessions will allow participants to immediately test and understand the language constructs. This workshop provides scientific training in Computational Science and, in addition, fosters the scientific exchange of the participants among themselves. Only the last three days of this course are sponsored by the PATC project. For further information and registration please visit the HLRS course page. https://events.prace-ri.eu/event/979/
  • Programming paradigms for GPU devices@Cineca

    1 - 3 April 2020

    Programming paradigms for GPU devices@Cineca https://tess.oerc.ox.ac.uk/events/programming-paradigms-for-gpu-devices-cineca-70ddbc51-7d5b-4797-a8ed-740aa6a908d4

    This course gives an overview of the most relevant GPGPU computing techniques for accelerating computationally demanding tasks on GPU-based heterogeneous HPC architectures. It will start with an architectural overview of modern GPU-based heterogeneous systems, focusing on computing power versus data movement needs. The course covers both a high-level (pragma-based) programming approach with OpenACC, for a fast porting start-up, and lower-level approaches based on the NVIDIA CUDA and OpenCL programming languages for finer-grained, computationally intensive tasks. Particular attention will be given to performance tuning and to techniques for overcoming common data movement bottlenecks and patterns.

    Skills: By the end of the course, students will be able to:
    - understand the strengths and weaknesses of GPUs as accelerators
    - program GPU-accelerated applications using both higher- and lower-level programming approaches
    - overcome problems and bottlenecks regarding data movement between host and device memories
    - make best use of independent execution queues for concurrent compute/data-movement operations

    Target audience: Researchers and programmers interested in porting scientific applications, or in using efficient post-processing and data-analysis techniques, on modern heterogeneous HPC architectures.

    Prerequisites: Basic knowledge of C or Fortran programming and of Linux or Unix is mandatory. Basic knowledge of any parallel programming technique/paradigm is recommended.

    Grant: Lunch on the three days will be offered to all participants, and some grants are available. The only eligibility requirements are that you are not funded by your institution to attend the course and that you work or live at an institute outside the Roma area. The grant will be 300 euros for students working and living outside Italy and 150 euros for students working and living in Italy (outside Roma). Some documentation will be required, and the grant will be paid only after certified attendance of at least 80% of the lectures. Further information on how to request the grant will be provided with the course confirmation, about 3 weeks before the starting date.

    Coordinating Teacher: Dr. L. Ferraro https://events.prace-ri.eu/event/972/
  • Interactive High-Performance Computing with Jupyter @ JSC

    21 - 22 April 2020

    Interactive High-Performance Computing with Jupyter @ JSC https://tess.oerc.ox.ac.uk/events/interactive-high-performance-computing-jsc-f66aaeb4-2add-4e74-8b25-619aaf06fe16

    Interactive exploration and analysis of large amounts of data from scientific simulations, in-situ visualization and application control are convincing scenarios for the explorative sciences. Based on the open-source software Jupyter (or JupyterLab), an approach has been available for some time that combines interactive with reproducible computing while meeting the challenges of supporting a wide range of different software workflows. Even on supercomputers, it enables the creation of documents that combine live code with narrative text, mathematical equations, visualizations, interactive controls, and other rich output. However, a number of challenges must be mastered to make existing workflows ready for interactive high-performance computing. With so many possibilities, it is easy to lose sight of the big picture. This course provides a detailed introduction to interactive high-performance computing, covering the following topics:
    - Introduction to Jupyter
    - Parallel computing using Jupyter
    - Coupling and control of simulations
    - Interactive & in-situ visualization
    - Simulation dashboards

    Prerequisites: Experience in Python

    Application: Registrations are only considered until 20 March 2020; due to available space, the maximum number of participants is limited. Applicants will be notified whether they are accepted for participation.

    Instructors: Jens Henrik Göbbert, Alice Grosch, JSC

    Contact: For any questions concerning the course please send an e-mail to j.goebbert@fz-juelich.de https://events.prace-ri.eu/event/980/
  • Introduction to Deep Learning Models @ JSC

    12 - 14 May 2020

    Introduction to Deep Learning Models @ JSC https://tess.oerc.ox.ac.uk/events/introduction-to-deep-learning-models-jsc

    This course focuses on deep learning, a recent machine learning method that has emerged as a promising, disruptive approach enabling knowledge discovery from large datasets with unprecedented effectiveness and efficiency. It is particularly relevant in research areas that are not accessible through the modelling and simulation often performed in HPC. Traditional learning, which was introduced in the 1950s and became a data-driven paradigm in the 90s, is usually based on an iterative process of feature engineering, learning, and modelling. Although successful on many tasks, the resulting models are often hard to transfer to other datasets and research areas. This course provides an introduction to deep learning and its inherent ability to derive optimal and often quite generic problem representations from the data (aka ‘feature learning’). Concrete architectures such as Convolutional Neural Networks (CNNs) will be applied to real datasets using well-known deep learning frameworks such as Tensorflow, Keras, or Torch. As the learning process with CNNs is extremely computationally intensive, the course will cover how parallel computing can be leveraged to speed up learning using general-purpose computing on graphics processing units (GPGPUs). Hands-on exercises allow the participants to immediately put the newly acquired skills into practice. After this course participants will have a general understanding of which problems CNN learning architectures are useful for, and of how parallel and scalable computing facilitates the learning process when facing big datasets.

    Prerequisites: Participants should be able to work on the Unix/Linux command line, have a basic understanding of the batch scripts required for HPC application submission, and have minimal knowledge of probability, statistics, and linear algebra. Participants should bring their own notebooks (with an ssh client).

    Application: Applicants will be notified one month before the course starts whether they are accepted for participation.

    Instructors: Prof. Dr. Morris Riedel, Dr. Gabriele Cavallaro, Dr. Jenia Jitsev, Jülich Supercomputing Centre

    Contact: For any questions concerning the course please send an e-mail to g.cavallaro@fz-juelich.de. https://events.prace-ri.eu/event/983/
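    The core CNN operation mentioned above, sliding a small learned filter across an input, can be sketched in a few lines of plain Python. This is purely illustrative; the course itself uses frameworks such as Tensorflow, Keras or Torch, and the example image and kernel below are made up:

    ```python
    def conv2d(image, kernel):
        """Valid 2D convolution (strictly, cross-correlation, as in most deep
        learning frameworks): slide the kernel over the image, taking dot products."""
        ih, iw = len(image), len(image[0])
        kh, kw = len(kernel), len(kernel[0])
        out = []
        for i in range(ih - kh + 1):
            row = []
            for j in range(iw - kw + 1):
                s = sum(image[i + di][j + dj] * kernel[di][dj]
                        for di in range(kh) for dj in range(kw))
                row.append(s)
            out.append(row)
        return out

    # A vertical edge detector applied to a 4x4 image with a sharp left/right edge:
    image = [[0, 0, 1, 1]] * 4
    kernel = [[-1, 1],
              [-1, 1]]
    feature_map = conv2d(image, kernel)
    # Each output row is [0, 2, 0]: the filter responds only at the edge.
    ```

    Training a CNN means learning the kernel values from data; the computational intensity the course mentions comes from applying many such filters over large images, millions of times, which is what GPGPUs accelerate.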
  • High-performance scientific computing in C++ @ JSC

    15 - 17 June 2020

    High-performance scientific computing in C++ @ JSC https://tess.oerc.ox.ac.uk/events/high-performance-scientific-computing-in-c-jsc-d364b0d6-9ab6-47f7-91df-001a0ae8b495

    Modern C++, with its support for procedural, object-oriented, generic and functional programming styles, offers many powerful abstraction mechanisms to express complexity at a high level while remaining very efficient. It is therefore the language of choice for many scientific projects. However, achieving high performance by today's standards requires understanding and exploiting multiple levels of parallelism, as well as understanding C++ code from a performance-centric viewpoint. In this course, the participants will learn how to write C++ programs that better utilize typical HPC hardware resources of the present day. The course is geared towards scientists and engineers already familiar with C++17 (at the very least C++14) who wish to develop maintainable and fast applications. They will learn techniques to better utilize CPU caches, instruction pipelines, SIMD functionality and multi-threading. Shared-memory parallel programming on multiple CPU cores will be introduced using the parallel STL of C++17 and Intel(R) Threading Building Blocks. The participants will also learn basic GPGPU programming in C++ using NVIDIA CUDA and Thrust.

    Prerequisites: Good working knowledge of C++, especially the C++14 standard. Please check with these questions whether your C++ knowledge fulfills the requirements.

    Application: Registrations are only considered until 15 May 2020; due to available space, the maximum number of participants is limited. Applicants will be notified whether they are accepted for participation.

    Instructor: Dr. Sandipan Mohanty, JSC

    Contact: For any questions concerning the course please send an e-mail to s.mohanty@fz-juelich.de https://events.prace-ri.eu/event/984/
  • GPU Programming with CUDA @ JSC

    4 - 6 May 2020

    GPU Programming with CUDA @ JSC https://tess.oerc.ox.ac.uk/events/gpu-programming-with-cuda-jsc-d88b62e7-f316-4f89-b8bb-1ea0fdb1850a

    GPU-accelerated computing drives current scientific research. Writing fast numeric algorithms for GPUs offers high application performance by offloading compute-intensive portions of the code to an NVIDIA GPU. The course will cover basic aspects of GPU architectures and programming. The focus is on the parallel programming language CUDA C, which allows maximum control of NVIDIA GPU hardware. Examples of increasing complexity will be used to demonstrate optimization and tuning of scientific applications.

    Topics covered will include:
    - Introduction to GPU/parallel computing
    - The CUDA programming model
    - GPU libraries such as cuBLAS and cuFFT
    - Tools for debugging and profiling
    - Performance optimizations

    Prerequisites: Some knowledge of Linux (e.g. make, a command-line editor, the Linux shell) and experience in C/C++

    Application: Registrations are only considered until 31 March 2020; due to available space, the maximum number of participants is limited. Applicants will be notified whether they are accepted for participation.

    Instructors: Dr. Jan Meinke, Jochen Kreutz, Dr. Andreas Herten, JSC; Jiri Kraus, NVIDIA

    Contact: For any questions concerning the course please send an e-mail to j.meinke@fz-juelich.de https://events.prace-ri.eu/event/981/
  • Advanced topics in scientific visualization with Blender: geometry, scripts, animation, action! @SURFsara

    15 April 2020

    Advanced topics in scientific visualization with Blender: geometry, scripts, animation, action! @SURFsara https://tess.oerc.ox.ac.uk/events/advanced-topics-in-scientific-visualization-with-blender-geometry-scripts-animation-action-surfsara

    DESCRIPTION: This is a follow-up to the basics course "Data, lights, camera, action! Scientific visualization done beautifully using Blender". It is aimed at researchers who are already familiar with working in Blender but want to dive deeper into its possibilities. The following subjects, in relation to scientific visualization with Blender, are treated in the course:
    - Mesh editing
    - Python scripting in Blender
    - Advanced import of data
    - Animation
    - Node-based materials and shading

    This is a full-day course with hands-on assignments. The goal is to build further practical skills in using Blender for scientific visualization. We encourage participants to bring along the data they normally work with, or a sample thereof, to which they would like to apply the course knowledge. Note that there will be another edition of the basics course later in 2020. https://events.prace-ri.eu/event/992/
  • Heterogeneous Programming on FPGAs with OmpSs@FPGA @BSC

    6 March 2020

    Heterogeneous Programming on FPGAs with OmpSs@FPGA @BSC https://tess.oerc.ox.ac.uk/events/heterogeneous-programming-on-fpgas-with-ompss-fpga-bsc

    Registration for this course is now open. Please bring your own laptop. All the PATC courses at BSC are free of charge.

    Course convener: Xavier Martorell

    Objectives: This tutorial will introduce the audience to the BSC tools for heterogeneous programming on FPGA devices. It describes OmpSs@FPGA as a productive programming environment for compute systems with FPGAs. More specifically, the tutorial will:
    - introduce the OmpSs@FPGA programming model, and how to write, compile and execute applications on FPGAs
    - show the "implements" feature to exploit parallelism across cores and IP cores
    - demonstrate how to analyze applications to determine which portions can be executed on FPGAs, and how to use OmpSs@FPGA to parallelize/optimize them

    Learning outcomes: Students who finish this course will be able to develop benchmarks and simple applications with the OmpSs@FPGA programming model, to be executed on FPGA boards such as the Zedboard or Xilinx ZCU102.

    Level: INTERMEDIATE - for trainees with some theoretical and practical knowledge, or those who finished the beginners course. ADVANCED - for trainees able to work independently and requiring guidance for solving complex problems.

    Requirements: Good knowledge of C/C++; basic knowledge of acceleration architectures and offloading models; basic knowledge of Paraver/Extrae https://events.prace-ri.eu/event/952/
  • PRACE & E-CAM Tutorial on Machine Learning and Simulations @ICHEC

    10 - 13 March 2020

    PRACE & E-CAM Tutorial on Machine Learning and Simulations @ICHEC https://tess.oerc.ox.ac.uk/events/prace-e-cam-tutorial-on-machine-learning-and-simulations-ichec

    Overview: This 4-day school will provide participants with a concise introduction to key machine and deep learning (ML & DL) concepts and their practical applications, with relevant examples from the domains of molecular dynamics (MD), rare-event sampling and electronic structure calculations (ESC). ML is increasingly being used to make sense of the enormous amount of data generated every day by MD and ESC simulations running on supercomputers. It can be used to obtain mechanistic understanding in terms of low-dimensional models that capture the crucial features of the processes under study, or to assist in the identification of relevant order parameters for rare-event sampling. ML is also being used to train neural-network-based potentials from ESC, which can then be used in MD engines such as LAMMPS, allowing orders-of-magnitude increases in the dimensionality and time scales that can be explored with ESC accuracy. So while the first half of the school covers the fundamentals of ML and DL, the second half is dedicated to relevant examples of how these techniques are applied in the domains of MD and ESC.

    Learning outcomes: By the end of the school, participants are expected to:
    - gain an understanding of the fundamental concepts of ML and DL, including how neural networks function, different types of topologies, common pitfalls, etc.
    - be able to implement basic deep learning workflows using Python
    - leverage an existing framework to discover molecular mechanisms from MD simulations
    - utilise the PANNA toolkit to create neural network models for atomistic systems and generate results that can be integrated with MD packages

    Prerequisites: Participants are expected to have a working knowledge of Python (i.e. familiar with the basic syntax and constructs, and having used Python for at least a few months) and a basic understanding of the fundamental physics behind molecular dynamics simulations and electronic structure calculations. All participants are expected to bring their own laptop to the school for the hands-on exercises.

    Registration: There is no registration charge for accepted participants. However, all participants must register, and due to limited space, in the event of high demand participants will be selected according to the expressions of interest provided. Non-academic participants are welcome to register but should notify the organisers, in order to pre-empt issues with third-party copyright material that will be used for parts of the school. https://events.prace-ri.eu/event/995/
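    The iterative "learn from data" loop underlying both the ML fundamentals and the fitting of neural-network potentials can be previewed with a toy gradient-descent example in pure Python. This is an illustrative sketch only; `fit_line` and its data are invented for this example and are unrelated to the school's materials:

    ```python
    def fit_line(xs, ys, lr=0.01, epochs=2000):
        """Fit y = w*x + b by gradient descent on the mean squared error --
        the same optimization loop, in miniature, that trains neural networks."""
        w, b = 0.0, 0.0
        n = len(xs)
        for _ in range(epochs):
            # Gradients of MSE = (1/n) * sum((w*x + b - y)^2) w.r.t. w and b
            grad_w = (2.0 / n) * sum((w * x + b - y) * x for x, y in zip(xs, ys))
            grad_b = (2.0 / n) * sum((w * x + b - y) for x, y in zip(xs, ys))
            w -= lr * grad_w
            b -= lr * grad_b
        return w, b

    # Noise-free data generated from y = 2x + 1: gradient descent recovers w and b.
    xs = [0.0, 1.0, 2.0, 3.0, 4.0]
    ys = [2 * x + 1 for x in xs]
    w, b = fit_line(xs, ys)
    ```

    A neural-network potential is the same idea at scale: many parameters instead of two, and energies/forces from ESC data in place of the toy `ys`.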
  • Advanced Parallel Programming @ CSC

    25 - 27 March 2020

    Advanced Parallel Programming @ CSC https://tess.oerc.ox.ac.uk/events/advanced-parallel-programming-csc-5cbcf98e-1e6f-4b59-b809-a49f5a878c77

    Description: This course addresses hybrid programming combining OpenMP and MPI, as well as more advanced topics in MPI. Parallel I/O is also discussed and exemplified. The course consists of lectures and hands-on exercises.

    Learning outcome: After the course the participants should have an idea of more advanced techniques and best practices in parallel programming, and of how to scale up parallel applications and optimize them for different platforms.

    Prerequisites: The PTC course Introduction to Parallel Programming, or similar background knowledge, together with fluency in the Fortran and/or C programming languages, will be assumed.

    Agenda (tentative):

    Day 1: Wednesday, March 25
    09.00-09.45 Course intro, MPI & OpenMP recap
    09.45-10.00 Coffee break
    10.00-11.00 Exercises
    11.00-11.30 Hybrid MPI + OpenMP programming I
    11.30-12.00 Exercises
    12.00-13.00 Lunch break
    13.00-13.45 Hybrid MPI + OpenMP programming II
    13.45-14.30 Exercises
    14.30-14.45 Coffee break
    14.45-15.15 Advanced MPI I: Communication topologies
    15.15-16.15 Exercises
    16.15-16.30 Summary of Day 1

    Day 2: Thursday, March 26
    09.00-09.45 Advanced MPI II: User-defined datatypes
    09.45-10.00 Coffee break
    10.00-11.15 Exercises
    11.15-12.00 Advanced MPI III: One-sided communication
    12.00-13.00 Lunch break
    13.00-14.30 Exercises
    14.30-14.45 Coffee break
    14.45-15.15 Parallel I/O with POSIX
    15.15-16.15 Exercises
    16.15-16.30 Summary of Day 2

    Day 3: Friday, March 27
    09.00-09.45 Parallel I/O with MPI
    09.45-10.00 Coffee break
    10.00-11.15 Exercises
    11.15-12.00 Parallel I/O with MPI cont'd
    12.00-13.00 Lunch break
    13.00-14.15 Exercises
    14.15-14.30 Coffee break
    14.30-15.15 Parallel I/O with HDF5
    15.15-16.15 Exercises
    16.15-16.30 Summary of Day 3

    Lecturers: Sami Ilvonen (CSC), Martti Louhivuori (CSC)
    Language: English
    Price: Free of charge https://events.prace-ri.eu/event/993/
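    The parallel I/O pattern covered in this course, where each MPI rank writes its own block of a shared file at a computed offset, can be mimicked serially with plain Python file offsets. This is an analogy only, not mpi4py or MPI-IO code; with MPI-IO the ranks would write concurrently, and `write_block` and its parameters are invented for this sketch:

    ```python
    import os
    import tempfile

    def write_block(path, rank, block, nbytes_per_rank):
        """What each MPI rank would do in parallel: seek to its own
        offset in the shared file and write only its block."""
        with open(path, "r+b") as f:
            f.seek(rank * nbytes_per_rank)
            f.write(block)

    nranks, nbytes = 4, 8
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.truncate(nranks * nbytes)   # pre-size the shared file
        path = f.name

    # Here the "ranks" run one after another; under MPI-IO they run concurrently,
    # and the non-overlapping offsets are what make that safe.
    for rank in range(nranks):
        write_block(path, rank, bytes([rank]) * nbytes, nbytes)

    data = open(path, "rb").read()
    os.unlink(path)
    # data is the four 8-byte blocks laid out contiguously, in rank order.
    ```

    The same offset arithmetic generalizes to MPI-IO file views and to HDF5 hyperslabs, which is why the course treats POSIX, MPI and HDF5 I/O side by side.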
  • Deep Learning and GPU Programming Workshop @ CSC

    22 - 24 April 2020

    Deep Learning and GPU Programming Workshop @ CSC https://tess.oerc.ox.ac.uk/events/deep-learning-and-gpu-programming-workshop-csc Overview NVIDIA Deep Learning Institute (DLI) offers hands-on training for developers, data scientists, and researchers looking to solve challenging problems with deep learning. The workshop combines lectures about fundamentals of Deep Learning for Computer Vision with lectures about Accelerated Computing with CUDA C/C++ and OpenACC. The lectures are interleaved with many hands-on sessions using Jupyter Notebooks. The exercises will be done on a fully configured GPU-accelerated workstation in the cloud. This workshop is co-organized by PRACE, Nvidia, CSC (Finland) and LRZ (Germany). Lecturers:  Dr. Momme Allalen (Germany), PD Dr. Juan Durillo Barrionuevo (Germany), Dr. Volker Weinberg (Germany) / All instructors are NVIDIA certified University Ambassadors. Language:  English Price:           Free of charge (3 training days / including daily 2 coffee breaks and lunch) Prerequisites and content level Please note, that the workshop is exclusively for verifiable students, staff, and researchers from any academic institution (for industrial participants, please contact NVIDIA for industrial specific training). Technical background, basic understanding of machine learning concepts, basic C/C++ or Fortran programming skills. Basics in Python will be helpful. Since Python 2.7 is used, the following tutorial can be used to learn the syntax: docs.python.org/2.7/tutorial/index.html. The content level of the course is broken down as: beginner's - 5,9 h (30%), intermediate - 13,7 h (70%). Important After you are accepted, please create an account under courses.nvidia.com/join using the same email address as for event registration, since lab access is given based on the event registration list. 
Please be aware that for adminstrative reasons, after you register, Nvidia will use your email address to contact you for the final feedback of the workshop. On the first day of the workshop, please also bring your student/academia ID. For entering CSC's facilities, please show your identification with a photo at the LSCK reception to get your Visitor QR-badge. Your ID can be: a passport / driver licence / ID-card or a valid student card with a photo. AGENDA Day 1: Fundamentals of Deep Learning for Computer Vision Description and learning outcomes Explore the fundamentals of deep learning by training neural networks and using results to improve performance and capabilities. During this day, you’ll learn the basics of deep learning by training and deploying neural networks. You’ll learn how to: Implement common deep learning workflows, such as image classification and object detection Experiment with data, training parameters, network structure, and other strategies to increase performance and capability Deploy your neural networks to start solving real-world problems Upon completion, you’ll be able to start solving problems on your own with deep learning. Day 2: Fundamentals of Accelerated Computing with CUDA C/C++ Description and learning outcomes The CUDA computing platform enables the acceleration of CPU-only applications to run on the world’s fastest massively parallel GPUs. On the 2nd day you experience C/C++ application acceleration by: Accelerating CPU-only applications to run their latent parallelism on GPUs Utilizing essential CUDA memory management techniques to optimize accelerated applications Exposing accelerated application potential for concurrency and exploiting it with CUDA streams Leveraging command line and visual profiling to guide and check your work Upon completion, you’ll be able to accelerate and optimize existing C/C++ CPU-only applications using the most essential CUDA tools and techniques. 
You’ll understand an iterative style of CUDA development that will allow you to ship accelerated applications fast. Day 3: Fundamentals of Accelerated Computing with OpenACC Description and learning outcomes On the third day you will learn the basics of OpenACC, a directive-based programming model for GPUs. Discover how to accelerate the performance of your applications beyond the limits of CPU-only programming with simple pragmas. You’ll learn: How to profile and optimize your CPU-only applications to identify hot spots for acceleration How to use OpenACC directives to GPU-accelerate your codebase How to optimize data movement between the CPU and the GPU accelerator Upon completion, you'll be ready to use OpenACC to GPU-accelerate CPU-only applications. https://events.prace-ri.eu/event/998/ 2020-04-22 06:00:00 UTC 2020-04-24 14:00:00 UTC [] [] [] workshops_and_courses [] []
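The Day 1 workflow described above (train a network, evaluate it, adjust training parameters to improve performance) can be sketched in miniature with plain NumPy. This is an illustrative example only, not material from the NVIDIA DLI labs; all names and hyperparameters in it are invented, and a tiny logistic-regression "network" stands in for the real image-classification models used in the workshop:

```python
import numpy as np

# Toy "train and evaluate" loop: a single-neuron classifier on synthetic
# 2-D data, trained with full-batch gradient descent (NumPy only, no GPU).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # linearly separable labels

w = np.zeros(2)   # weights
b = 0.0           # bias
lr = 0.5          # learning rate (illustrative choice)

def forward(X, w, b):
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid activation

def loss(p, y):
    eps = 1e-9   # avoid log(0)
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

losses = []
for _ in range(100):
    p = forward(X, w, b)
    losses.append(loss(p, y))
    grad = p - y                      # dL/dz for sigmoid + cross-entropy
    w -= lr * (X.T @ grad) / len(y)   # gradient step on the weights
    b -= lr * grad.mean()

accuracy = np.mean((forward(X, w, b) > 0.5) == y)
```

The course's "experiment with training parameters" exercise amounts to varying `lr`, the step count, or the data and watching how `losses` and `accuracy` respond.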
  • High-performance computing with Python @ JSC

    8 - 10 June 2020

    High-performance computing with Python @ JSC https://tess.oerc.ox.ac.uk/events/high-performance-computing-with-python-jsc-cc893f0a-b492-4ad8-bb73-05f407f0d701 Python is increasingly used in high-performance computing projects. It can be used either as a high-level interface to existing HPC applications and libraries, as an embedded interpreter, or directly. This course combines lectures and hands-on sessions. We will show how Python can be used on parallel architectures and how to optimize critical parts of the code using various tools. The following topics will be covered: Interactive parallel programming with IPython Profiling and optimization High-performance NumPy Just-in-time compilation with numba Distributed-memory parallel programming with Python and MPI Bindings to other programming languages and HPC libraries Interfaces to GPUs This course is aimed at scientists who wish to explore the productivity gains made possible by Python for HPC. Prerequisites: Good working knowledge of Python and NumPy Application Registrations are considered only until 7 May 2020; due to the available space, the maximum number of participants is limited. Applicants will be notified whether they are accepted for participation. Instructors: Dr. Jan Meinke, Dr. Olav Zimmermann, JSC Contact For any questions concerning the course please send an e-mail to j.meinke@fz-juelich.de https://events.prace-ri.eu/event/982/ 2020-06-08 07:00:00 UTC 2020-06-10 14:30:00 UTC [] [] [] workshops_and_courses [] []
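The "High-performance NumPy" topic above can be previewed with a small sketch, not taken from the course material, that compares an interpreted Python loop against the equivalent vectorized NumPy expression (the function names and array sizes are illustrative):

```python
import time

import numpy as np

# The same SAXPY-style computation written two ways: an element-by-element
# Python loop versus a single vectorized NumPy expression.
n = 100_000
a = np.arange(n, dtype=np.float64)
b = np.arange(n, dtype=np.float64)

def saxpy_loop(a, b, alpha=2.0):
    out = np.empty_like(a)
    for i in range(len(a)):       # interpreted loop: bytecode dispatch per element
        out[i] = alpha * a[i] + b[i]
    return out

def saxpy_vec(a, b, alpha=2.0):
    return alpha * a + b          # one call into NumPy's compiled kernels

t0 = time.perf_counter()
y_loop = saxpy_loop(a, b)
t1 = time.perf_counter()
y_vec = saxpy_vec(a, b)
t2 = time.perf_counter()
speedup = (t1 - t0) / max(t2 - t1, 1e-9)
```

Both versions compute identical results, but the vectorized form is typically one to two orders of magnitude faster on arrays of this size; the course's profiling and numba sessions address the cases where no such vectorized form exists.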
  • Machine Learning in HPC @ GRNET

    9 - 11 March 2020

    Machine Learning in HPC @ GRNET https://tess.oerc.ox.ac.uk/events/machine-learning-in-hpc-grnet Machine Learning in HPC 10 - 11 March 2020 Description After the course the participants should be able to: understand basic principles of machine learning and apply basic machine learning methods; use HPC infrastructures efficiently to get the best performance out of different machine learning tools; work with machine learning frameworks such as TensorFlow, PyTorch, Keras and Horovod in hands-on sessions; use multiple GPUs to significantly shorten the time required to train on large amounts of data, making it feasible to solve complex problems; and follow best practices that avoid common mistakes, use the HPC infrastructure efficiently, and overcome scalability challenges when using parallel computing techniques. Prerequisites The course addresses participants who are familiar with the Python programming language and have working experience with the Linux operating system and the use of the command line. Experience with parallel programming or GPU programming is not required. Knowledge of mathematical basics in linear algebra, as well as notions of machine learning, will be helpful. Bring your own laptop in order to participate in the hands-on training. Hands-on work will be done in pairs, so if you don’t have a laptop you can work with a colleague. The course language is English. Registration The maximum number of participants is 30. Registrations will be evaluated on a first-come, first-served basis. GRNET is responsible for the selection of the participants on the basis of the training requirements and the technical skills of the candidates. GRNET will also seek to guarantee the maximum possible geographical coverage with the participation of candidates from many countries. Venue GRNET headquarters Address: 2nd Floor, 7, Kifisias Av. 
GR 115 23 Athens Information on how to reach GRNET headquarters is available on the GRNET website: https://grnet.gr/en/contact-us/ Accommodation options near GRNET can be found at: https://grnet.gr/wp-content/uploads/sites/13/2015/11/Hotels-near-GRNET-en.pdf ARIS - System Information ARIS is the name of the Greek supercomputer, deployed and operated by GRNET (Greek Research and Technology Network) in Athens. ARIS consists of 532 computational nodes separated into five “islands”, as listed here: 426 thin nodes: regular compute nodes without accelerators. 44 gpu nodes: nodes accelerated with 2 x NVIDIA Tesla K40m. 18 phi nodes: nodes accelerated with 2 x Intel Xeon Phi 7120P. 44 fat nodes: fat compute nodes with a larger number of cores and more memory per core than a thin node. 1 ml node: a machine learning node consisting of one server with 2 Intel E5-2698v4 processors, 512 GB of main memory and 8 NVIDIA V100 GPU cards. All the nodes are connected via an InfiniBand network and share 2 PB of GPFS storage. The infrastructure also has an IBM TS3500 tape library with a maximum storage capacity of about 6 PB. Access to the system is provided by two login nodes. About Tutors Dr. Dellis (Male) holds a B.Sc. in Chemistry (1990) and a PhD in Computational Chemistry (1995) from the National and Kapodistrian University of Athens, Greece. He has extensive HPC and grid computing experience. He used HPC systems in computational chemistry research projects on fz-juelich machines (2003-2005). He received an HPC-Europa grant on BSC (2009). In the EGEE/EGI projects he acted as application support and VO software manager for the SEE VO, grid site administrator (HG-02, GR-06), and NGI_GRNET support staff (2008-2014). In PRACE 1IP/2IP/3IP/4IP/5IP/6IP he was involved in benchmarking tasks either as a group member or as BCO (2010-2020). Currently he holds the position of “HPC Team Leader” at GRNET S.A. 
where he is responsible for activities related to user consultations, porting, optimization and running HPC applications on national and international resources. Panos Louridas (Male) is an Associate Professor at the Department of Management Science and Technology of the Athens University of Economics and Business. His research interests include software systems, practical cryptography, business analytics, data science, and software analysis and design. He is the author of the well-received book “Real-World Algorithms: A Beginner’s Guide”, published by the MIT Press and translated into Russian, Korean and Chinese. Panos Louridas has published widely in software engineering and data science; he is an active data scientist and a seasoned software practitioner with over 25 years of professional practice. As a practitioner, he has been in charge of the Okeanos cloud computing platform (https://okeanos.grnet.gr) and the Zeus e-voting system (https://zeus.grnet.gr), used by thousands of users in production. He is a member of the ACM, the IEEE, Usenix, and the AAAS. He holds a PhD and an MSc in Software Engineering from the University of Manchester, and a Diploma in Computer Science from the University of Athens. Vasiliki Kougia (Female) is currently a research assistant at the Athens University of Economics and Business (AUEB) and a member of the Natural Language Processing group of AUEB. She received her M.Sc. degree in Computer Science from AUEB (2018-2019) and graduated from the Department of Informatics of the same university (2012-2018). She is a teaching assistant in the Practical Data Science and Text Analytics courses of the M.Sc. in Data Science and in the Natural Language Processing course of the M.Sc. in Computer Science at AUEB (2019-2020). Her main research interest is artificial intelligence, especially machine learning and deep learning methods for Natural Language Processing and Computer Vision. 
Konstantina Dritsa (Female) is a PhD candidate in the Business Analytics Laboratory of the Athens University of Economics & Business. Her research interests include all aspects of machine learning, with a focus on applications for predicting source code properties. She holds a Bachelor's degree from the Department of Management Science & Technology and an MSc in Information Systems, both from the Athens University of Economics and Business. She is a member of the Hellenic IT Museum, where she serves as administrative assistant to the Board of Advisors. She has previously worked in the travel industry as a Python developer and content editor. About GRNET GRNET, the National Infrastructures for Research and Technology, is the national network, cloud computing and IT e-Infrastructure and services provider. It supports hundreds of thousands of users in the key areas of Research, Education, Health and Culture. GRNET provides an integrated environment of cutting-edge technologies, combining a country-wide dark fiber network, data centers and a high-performance computing system with Internet, cloud computing, authentication and authorization services, security services, as well as audio, voice and video services. GRNET's scientific and advisory duties address the areas of information technology, digital technologies, communications, e-government, new technologies and their applications, research and development, and education, as well as the promotion of Digital Transformation. Through international partnerships and the coordination of EC co-funded projects, it creates opportunities for know-how development and exploitation, and contributes decisively to the development of Research and Science in Greece and abroad. 
National Infrastructures for Research and Technology – Networking Research and Education www.grnet.gr, hpc.grnet.gr   https://events.prace-ri.eu/event/994/ 2020-03-09 22:30:00 UTC 2020-03-11 20:50:00 UTC [] [] [] workshops_and_courses [] []
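The multi-GPU training promised in this course (with Horovod and similar tools) follows a data-parallel pattern: each worker computes a gradient on its own shard of the data, and an allreduce averages the gradients before every update of the shared weights. The sketch below simulates that pattern serially in one process with NumPy; it only illustrates the idea, and none of it is Horovod's actual API:

```python
import numpy as np

# Data-parallel gradient descent on a linear-regression problem, with the
# "allreduce" step simulated as an in-process average over worker shards.
rng = np.random.default_rng(42)
X = rng.normal(size=(400, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w                      # noiseless targets for a clean check

n_workers = 4                       # pretend each shard lives on its own GPU
shards = np.array_split(np.arange(len(X)), n_workers)

w = np.zeros(3)
lr = 0.1
for step in range(200):
    # Each "worker" computes a local MSE gradient on its own shard.
    local_grads = []
    for idx in shards:
        Xi, yi = X[idx], y[idx]
        local_grads.append(2 * Xi.T @ (Xi @ w - yi) / len(idx))
    # Allreduce: average the gradients across workers, then update.
    g = np.mean(local_grads, axis=0)
    w -= lr * g

error = np.linalg.norm(w - true_w)
```

With equal shard sizes the averaged gradient equals the full-batch gradient, which is why frameworks like Horovod can scale batch processing across GPUs without changing the optimization result; the course covers the practical caveats (communication cost, large-batch effects) that this toy version hides.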
  • Introduction to new accelerated partition of Marconi, for users and developers @ CINECA

    15 - 17 June 2020

    Introduction to new accelerated partition of Marconi, for users and developers @ CINECA https://tess.oerc.ox.ac.uk/events/introduction-to-new-accelerated-partition-of-marconi-for-users-and-developers-cineca Description: This course intends to help the scientific community efficiently exploit the architecture of the new accelerated partition of the Marconi system. More precisely, the course aims at providing a full description of its configuration, with special emphasis on the aspects most crucial for users and application developers. For instance, details about compilation, debugging and optimization procedures will be provided, together with an overview of the libraries, tools and applications available on the system. Examples of job submission will be discussed, together with scheduler commands and queue definitions. Skills: By the end of the course each student should be able to: •    compile code on this architecture in a performant way •    run code that takes advantage of the accelerated resources •    move easily within the configured HPC environment Target Audience: Researchers and programmers who want to use this new accelerated partition of Marconi. Pre-requisites: None. Grant: A grant of 200 EUR (for foreign students) or 100 EUR (for Italian students) will be available for participants who are not funded by their institution and do not work in the Bologna area. Some documentation will be required, and the grant will be paid only after certified attendance of at least 80% of the lessons, about one month after the end of the course. For further information about how to apply for the grant, please wait for the confirmation email of your acceptance to the course, sent about three weeks before the lessons begin. Lunch for the 2 days will be provided by Cineca. Coordinating Teacher: Dr. S.Giuliani https://events.prace-ri.eu/event/975/ 2020-06-15 07:00:00 UTC 2020-06-17 16:00:00 UTC [] [] [] workshops_and_courses [] []
  • Parallel Programming with Python @ BSC

    6 - 8 July 2020

    Parallel Programming with Python @ BSC https://tess.oerc.ox.ac.uk/events/parallel-programming-with-python-bsc Please bring your own laptop. All the PATC courses at BSC are free of charge. Course Convener: Xavier Martorell, CS/Programming Models Course Lecturers: Rosa M. Badia, CS/Workflows and Distributed Computing, Bryan Jiménez (University of Utrecht), Joan Verdaguer-Codina (COEIC) LOCATION: UPC Campus Nord premises. Vertex Building, Room VS208 Level: BASIC: for students with little previous experience with Python Prerequisites: Basic Python programming; all examples of the course will be presented in Python. Objectives: The objectives of this course are to understand the basic concepts of programming with Python and its support for parallelism. Learning Outcomes: The students who finish this course will be able to develop simple parallel benchmarks with Python, analyze their execution, and tune their behaviour on parallel architectures. Agenda: Day 1 (Monday, July 6th, 2020) Session 1 / 9:30 am – 1:00 pm (2 h lectures, 1 h practical) 1. Introduction to parallel programming and Python 11:00 Coffee break 2. Practical: How to compile and run Python applications   Session 2 / 2:00 pm – 5:30 pm (2 h lectures, 1 h practical) 1. Scientific Python: NumPy, SciPy, Matplotlib, Bokeh 16:00 Coffee break 2. Practical: Simple Python programs and optimizations     Day 2 (Tuesday, July 7th, 2020) Session 1 / 9:30 am - 1:00 pm (1.5 h lectures, 1.5 h practical) 1. Parallelism in Python. Shared memory 2. Introduction to performance analysis. Paraver: a tool to analyze and understand performance 3. Python pools 11:00 Coffee break 4. Practical: Examples of Python parallelism   Session 2 / 2:00 pm - 5:30 pm (1.5 h lectures, 1.5 h practical) 1. Distributed memory. Visualizing distributed environments with Paraver 2. Python queues 16:00 Coffee break 3. Practical: Trace generation and trace analysis 4. 
Practical: environment on RPi   Day 3 (Wednesday, July 8th, 2020) Session 1 / 9:30 am - 1:00 pm (1 h lecture, 2 h practical) 1. Introduction to PyCOMPSs 2. PyCOMPSs syntax 11:00 Coffee break 3. PyCOMPSs hands-on   Session 2 / 2:00 pm - 5:30 pm (2 h lectures, 1 h practical) 1. PyCUDA and support for accelerators 2. Debugging 16:00 Coffee break 3. Hands-on with PyCUDA   END of COURSE https://events.prace-ri.eu/event/999/ 2020-07-06 07:30:00 UTC 2020-07-08 15:30:00 UTC [] [] [] workshops_and_courses [] []
  • Efficient Parallel Programming with GASPI @ HLRS

    18 - 19 June 2020

    Efficient Parallel Programming with GASPI @ HLRS https://tess.oerc.ox.ac.uk/events/efficient-parallel-programming-with-gaspi-hlrs-5b736fed-b9e8-41b9-a229-d03d53f7969d Overview In this tutorial we present an asynchronous dataflow programming model for Partitioned Global Address Spaces (PGAS) as an alternative to the programming model of MPI. GASPI, which stands for Global Address Space Programming Interface, is a PGAS API. It is designed as a C/C++/Fortran library and focuses on three key objectives: scalability, flexibility and fault tolerance. To achieve its much-improved scaling behaviour, GASPI relies on asynchronous dataflow with remote completion rather than on bulk-synchronous message exchanges. GASPI follows a single/multiple program multiple data (SPMD/MPMD) approach and offers a small yet powerful API (see also http://www.gaspi.de and http://www.gpi-site.com). GASPI is successfully used in academic and industrial simulation applications. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the basic constructs of GASPI. This course provides scientific training in Computational Science and, in addition, fosters scientific exchange among the participants. For further information and registration please visit the HLRS course page. https://events.prace-ri.eu/event/997/ 2020-06-18 07:00:00 UTC 2020-06-19 13:30:00 UTC [] [] [] workshops_and_courses [] []
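GASPI itself is a C/C++/Fortran API, but its central idiom (a one-sided write into a partitioned global segment, paired with a notification so the target waits only for the remote completion of that one transfer rather than for a bulk-synchronous exchange) can be illustrated conceptually. The Python sketch below simulates two "ranks" with threads; every name in it is invented, and it only mirrors the shape of calls such as GASPI's write-notify and notify-waitsome, not their actual signatures:

```python
import threading

# Conceptual simulation of GASPI-style notified one-sided communication.
class Segment:
    """Toy stand-in for one rank's partition of the global address space."""
    def __init__(self, size):
        self.data = [0.0] * size
        self.notifications = {}          # notification id -> Event
        self.lock = threading.Lock()

    def event(self, nid):
        with self.lock:
            return self.notifications.setdefault(nid, threading.Event())

def write_notify(target, offset, values, nid):
    """One-sided put plus notification on the target segment."""
    target.data[offset:offset + len(values)] = values
    target.event(nid).set()              # remote completion becomes visible

def notify_waitsome(segment, nid, timeout=5.0):
    """Block until the given notification arrives; no global barrier."""
    return segment.event(nid).wait(timeout)

# "Rank 0" writes a halo into "rank 1"'s segment; rank 1 waits only for
# that notification and can overlap other work meanwhile.
remote = Segment(8)
halo = [1.0, 2.0, 3.0]
t = threading.Thread(target=write_notify, args=(remote, 0, halo, 7))
t.start()
arrived = notify_waitsome(remote, 7)
t.join()
```

The point of the pattern, and of GASPI's design, is that the receiver synchronizes on individual completed transfers instead of on collective message exchanges, which is what enables the asynchronous dataflow and overlap the course description emphasizes.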