
PRACE

Partnership for Advanced Computing in Europe

Showing 27 upcoming events. Found 616 past events.
  • Efficient Use of HPC Systems @GRNET

    11 - 12 December 2019

    https://tess.oerc.ox.ac.uk/events/efficient-use-of-hpc-systems-grnet-04a013f3-a76e-49c0-bb47-bde2d165f929

    Description: The purpose of this course is to give existing and potential users of PRACE HPC systems an introduction to the efficient use of these systems: their typical tools, software environment, compilers, libraries, MPI/OpenMP, batch system, etc. The trainees will learn what the HPC systems offer, how they work and how to apply for access to these infrastructures, both PRACE Tier-1 and Tier-0.

    Prerequisites: The course is addressed to any potential user of an HPC infrastructure. Background in modules, compilers, MPI/OpenMP/CUDA, batch systems and running time-consuming applications is desirable. Bring your own laptop in order to take part in the hands-on training; hands-on work will be done in pairs, so if you don't have a laptop you can work with a colleague. The course language is English.

    Registration: The maximum number of participants is 25. Registrations will be evaluated on a first-come, first-served basis. GRNET is responsible for the selection of the participants on the basis of the training requirements and the technical skills of the candidates. GRNET will also seek to guarantee the maximum possible geographical coverage with the participation of candidates from many countries.

    Venue: GRNET headquarters, 2nd floor, 7 Kifisias Av., GR 115 23 Athens. Information on how to reach GRNET headquarters is available on the GRNET website: https://grnet.gr/en/contact-us/ Accommodation options near GRNET can be found at: https://grnet.gr/wp-content/uploads/sites/13/2015/11/Hotels-near-GRNET-en.pdf

    ARIS - System Information: ARIS is the Greek supercomputer, deployed and operated by GRNET (Greek Research and Technology Network) in Athens. ARIS consists of 532 computational nodes separated into four "islands": 426 thin nodes (regular compute nodes without accelerator), 44 GPU nodes (2 x NVIDIA Tesla K40m per node), 18 Phi nodes (2 x Intel Xeon Phi 7120P per node) and 44 fat nodes (with a larger number of cores and more memory per core than a thin node), plus 1 machine-learning node (a single server with 2 Intel E5-2698v4 processors, 512 GB of main memory and 8 NVIDIA V100 GPU cards). All the nodes are connected via an InfiniBand network and share 2 PB of GPFS storage. The infrastructure also includes an IBM TS3500 tape library with a maximum storage capacity of about 6 PB. Access to the system is provided by two login nodes.

    About the tutors: Dr. Dellis holds a B.Sc. in Chemistry (1990) and a PhD in Computational Chemistry (1995) from the National and Kapodistrian University of Athens, Greece. He has extensive HPC and grid computing experience. He used HPC systems in computational chemistry research projects on fz-juelich machines (2003-2005) and received an HPC-Europa grant at BSC (2009). In the EGEE/EGI projects he acted as application support and VO software manager for the SEE VO, grid site administrator (HG-02, GR-06) and NGI_GRNET support staff (2008-2014). In PRACE 1IP/2IP/3IP/4IP/5IP he was involved in benchmarking tasks either as a group member or as BCO (2010-2018). He is currently leader of the HPC team at GRNET S.A. Kyriakos Ginis received his Diploma in Electrical and Computer Engineering in 2003 from the National Technical University of Athens, Greece. Between 2004 and 2014 he participated in the European projects EGEE I/II/III and EGI as a grid site administrator of the HellasGrid sites HG-01-GRNET, HG-06-EKT and HG-08-Okeanos. Since 2014 he has worked at GRNET as a system administrator of the ARIS HPC system, primarily responsible for hardware, operating systems and file/storage systems. He continues to maintain the HellasGrid sites HG-06 and HG-08, and supports other GRNET services such as the unique and persistent identifiers (PID) service, also part of the EUDAT project. Nikolaos Nikoloutsakos holds a diploma in Computer Engineering and Informatics (2014) from the University of Patras, Greece. Since 2015 he has worked as a software engineer at GRNET S.A., where he is part of the user application support team for the ARIS HPC system. He has been involved in major national and European projects, such as PRACE and EUDAT. His main research interests include parallel programming models and co-processor programming using GPUs and Intel Xeon Phis.

    About GRNET: GRNET - National Infrastructures for Research and Technology is the national network, cloud computing and IT e-Infrastructure and services provider. It supports hundreds of thousands of users in the key areas of Research, Education, Health and Culture. GRNET provides an integrated environment of cutting-edge technologies, combining a country-wide dark-fibre network, data centres, a high-performance computing system, and Internet, cloud, authentication and authorisation, security, and audio/voice/video services. GRNET's scientific and advisory duties address the areas of information technology, digital technologies, communications, e-government, new technologies and their applications, research and development, education, and the promotion of Digital Transformation. Through international partnerships and the coordination of EC co-funded projects, it creates opportunities for know-how development and exploitation and contributes decisively to the development of Research and Science in Greece and abroad. National Infrastructures for Research and Technology - Networking Research and Education, www.grnet.gr, hpc.grnet.gr

    https://events.prace-ri.eu/event/945/ (2019-12-11 08:00 UTC – 2019-12-12 15:00 UTC)
  • OpenMP Programming Workshop @ LRZ

    11 - 13 February 2020

    https://tess.oerc.ox.ac.uk/events/openmp-programming-workshop-lrz

    With the increasing prevalence of multicore processors, shared-memory programming models are essential. OpenMP is a popular, portable, widely supported and easy-to-use shared-memory model. Since its advent in 1997, the OpenMP programming model has proved to be a key driver behind parallel programming for shared-memory architectures. Its powerful and flexible programming model has allowed researchers from various domains to enable parallelism in their applications. Over the more than two decades of its existence, OpenMP has tracked the evolution of hardware and the complexities of software to ensure that it stays as relevant to today's high-performance computing community as it was in 1997. This workshop will cover a wide range of topics, ranging from the basics of OpenMP programming using the Parallelware tools to advanced topics.

    Day 1: The first day will cover basic parallel programming with OpenMP on CPUs and GPUs using the Parallelware Trainer software by Appentra Solutions (https://www.appentra.com/products/parallelware-trainer/). Appentra's Parallelware tools are based on over 10 years of research by co-founder and CEO Dr. Manuel Arenaz, who will be the lecturer of the first day. Parallelware enables the identification of opportunities for parallelization and the provision of appropriate parallelization methods using state-of-the-art industrial standards. Parallelware Trainer was developed specifically to improve the experience of HPC training, providing an interactive learning environment that uses examples that are the same as, or similar to, real codes. Parallelware Trainer supports OpenMP (including multi-threading, offloading and tasking) and OpenACC (for offloading), giving users the opportunity to use GPU services with either OpenMP or OpenACC.

    Days 2 and 3: Days 2 and 3 will cover advanced topics such as (partly still to be confirmed): mastering tasking with OpenMP; host performance; NUMA-aware programming and thread affinity; vectorization / SIMD; tool support for performance and correctness; OpenMP for heterogeneous computing; OpenMP 5.0 features and future roadmap. Developers usually find OpenMP easy to learn. However, they are often disappointed with the performance and scalability of the resulting code. This disappointment stems not from shortcomings of OpenMP but rather from the lack of depth with which it is employed. The lectures on days 2 and 3 will address this critical need by exploring the implications of possible OpenMP parallelization strategies, both in terms of correctness and performance. We cover tasking with OpenMP and host performance, putting a focus on performance aspects such as data and thread locality on NUMA architectures, false sharing, and exploitation of vector units. Tools for performance and correctness will also be presented. Current trends in hardware bring co-processors such as GPUs into the fold. A modern platform is often a heterogeneous system with CPU cores, GPU cores and other specialized accelerators. OpenMP has responded by adding directives that map code and data onto a device, the target directives. We will also explore these directives as they apply to programming GPUs (a minimal sketch of tasking and target offload follows this entry). Finally, OpenMP 5.0 features will be highlighted and the future roadmap of OpenMP will be presented. All topics are accompanied by extensive case studies, and we discuss the corresponding language features in depth.

    A detailed agenda of the course will be provided later. Topics may still be subject to change. For the hands-on sessions participants need to bring their own laptops with an SSH client installed. The course is organized as a PRACE training event by LRZ in collaboration with Appentra Solutions, Intel and RWTH Aachen.

    Lecturers: Dr. Manuel Arenaz is CEO at Appentra Solutions and professor of computer science at the University of A Coruña (Spain). He holds a PhD on advanced compiler techniques for automatic parallelization of scientific codes. After more than 10 years teaching parallel programming at undergraduate and PhD levels, he strongly believes that the next generation of STEM engineers needs to be educated in HPC technologies to address the digital-revolution challenge. He recently co-founded Appentra Solutions to commercialize products and services that take advantage of Parallware, a new technology for semantic analysis of scientific HPC codes. Dr. Reinhold Bader studied physics and mathematics at the Ludwig-Maximilians University in Munich, completing his studies with a PhD in theoretical solid-state physics in 1998. Since the beginning of 1999 he has worked at the Leibniz Supercomputing Centre (LRZ) as a member of the scientific staff. He is currently group leader of the HPC services group at LRZ, which is responsible for the operation of all HPC-related systems and system software packages at LRZ. Reinhold also participates in the standardisation activities of the Fortran programming language in the international workgroup WG5. Dr. Michael Klemm holds an M.Sc. and a Doctor of Engineering degree from the Friedrich-Alexander University Erlangen-Nuremberg, Germany. He is a Principal Engineer in the Compute Ecosystem Engineering organization of the Intel Architecture, Graphics, and Software group at Intel in Germany. His areas of interest include compiler construction, design of programming languages, parallel programming, and performance analysis and tuning. Michael Klemm joined the OpenMP organization in 2009 and was appointed CEO of the OpenMP ARB in 2016. Dr. Christian Terboven is a senior scientist and leads the HPC group at RWTH Aachen University. His research interests centre around parallel programming and related software engineering aspects. Dr. Terboven has been involved in the analysis, tuning and parallelization of several large-scale simulation codes for various architectures. He is responsible for several research projects in the area of programming models and approaches to improve the productivity and efficiency of modern HPC systems. Dr. Volker Weinberg studied physics at the Ludwig Maximilian University of Munich and later worked at the research centre DESY. He received his PhD from the Free University of Berlin for his studies in the field of lattice QCD. Since 2008 he has been working in the HPC group at the Leibniz Supercomputing Centre, where he is education and training coordinator. Since 2019 he has been the LRZ representative in the OpenMP ARB and language committee.

    https://events.prace-ri.eu/event/947/ (2020-02-11 08:00 UTC – 2020-02-13 15:00 UTC)
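
    Two of the topics named above, tasking and the target directives for GPU offloading, are illustrated by the following stand-alone sketch. It is our illustration, not course material, and assumes any recent OpenMP-capable C++ compiler (e.g. g++ -fopenmp); an offload-capable toolchain and the matching flags are needed for the target region to actually execute on a GPU.

        // Minimal sketch: OpenMP tasking on the host plus a target offload
        // region of the kind covered on days 2 and 3 (illustration only).
        #include <cstdio>
        #include <vector>

        // Recursive Fibonacci parallelized with OpenMP tasks.
        long fib(int n) {
            if (n < 2) return n;
            long x, y;
            #pragma omp task shared(x)
            x = fib(n - 1);
            #pragma omp task shared(y)
            y = fib(n - 2);
            #pragma omp taskwait
            return x + y;
        }

        int main() {
            long f = 0;
            // Host tasking: spawn the first task from a single thread
            // inside a parallel region.
            #pragma omp parallel
            #pragma omp single
            f = fib(20);

            // Device offload: map the arrays to the target device (e.g. a GPU)
            // and distribute the loop across teams and threads.
            const int n = 1 << 20;
            std::vector<double> a(n, 1.0), b(n, 2.0);
            double* pa = a.data();
            double* pb = b.data();
            #pragma omp target teams distribute parallel for \
                    map(tofrom: pa[0:n]) map(to: pb[0:n])
            for (int i = 0; i < n; ++i)
                pa[i] += 2.0 * pb[i];

            std::printf("fib(20) = %ld, a[0] = %.1f\n", f, pa[0]);
            return 0;
        }

    Without an offload-capable toolchain the target region simply falls back to the host, so the example remains runnable on a laptop.
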
  • Introduction to PETSc @ MdlS/Idris

    2 - 3 July 2020

    https://tess.oerc.ox.ac.uk/events/introduction-to-petsc-mdls-idris-66ee6086-a743-480f-8316-152692bae0de

    The Portable, Extensible Toolkit for Scientific Computation (PETSc) is a suite of data structures and routines for the scalable (parallel) solution of scientific applications modelled by partial differential equations (www.mcs.anl.gov/petsc/). It enables researchers to delegate the linear algebra part of their applications to a specialized team and to test various solution methods. The course will provide the necessary basis to get started with PETSc and give an overview of its possibilities. Presentations will alternate with hands-on sessions (in C or Fortran); a minimal solver sketch follows this entry.

    Learning outcomes: On completion of this course, the participant should be able to build and solve simple PDE examples, use and compare different solvers on these examples, be familiar with the on-line documentation, and be able to easily explore other PETSc possibilities relevant to his/her application.

    Prerequisites: C or Fortran programming. Notions of linear algebra, as well as notions of MPI, would be an asset.

    https://events.prace-ri.eu/event/891/ (2020-07-02 07:30 UTC – 2020-07-03 15:00 UTC)
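
    As a taste of what "building and solving a simple PDE example" looks like in practice, here is a minimal sketch (ours, not course material, and assuming a working PETSc installation): it assembles a 1D Laplacian with the Mat interface and solves A x = b with a KSP Krylov solver, whose method and preconditioner can then be switched from the command line (e.g. -ksp_type cg -pc_type jacobi) without touching the code.

        // Minimal PETSc sketch: assemble a tridiagonal (1D Laplacian) matrix
        // and solve A x = b with the default Krylov solver. Error checking is
        // omitted for brevity.
        #include <petscksp.h>

        int main(int argc, char** argv) {
            PetscInitialize(&argc, &argv, NULL, NULL);

            const PetscInt n = 100;
            Mat A;
            Vec x, b;

            // Sparse matrix distributed over the processes of PETSC_COMM_WORLD.
            MatCreate(PETSC_COMM_WORLD, &A);
            MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);
            MatSetFromOptions(A);
            MatSetUp(A);

            PetscInt istart, iend;
            MatGetOwnershipRange(A, &istart, &iend);
            for (PetscInt i = istart; i < iend; ++i) {
                if (i > 0)     MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES);
                if (i < n - 1) MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES);
                MatSetValue(A, i, i, 2.0, INSERT_VALUES);
            }
            MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
            MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

            MatCreateVecs(A, &x, &b);
            VecSet(b, 1.0);

            // The KSP object picks up -ksp_type, -pc_type, ... at run time.
            KSP ksp;
            KSPCreate(PETSC_COMM_WORLD, &ksp);
            KSPSetOperators(ksp, A, A);
            KSPSetFromOptions(ksp);
            KSPSolve(ksp, b, x);

            KSPDestroy(&ksp);
            MatDestroy(&A);
            VecDestroy(&x);
            VecDestroy(&b);
            PetscFinalize();
            return 0;
        }
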
  • Programming Distributed Computing Platforms with COMPSs @ BSC

    28 - 29 January 2020

    https://tess.oerc.ox.ac.uk/events/programming-distributed-computing-platforms-with-compss-bsc-b8b2498f-12b2-4c00-ae3f-a1eadeb9be0c

    Please bring your own laptop. All the PATC courses at BSC are free of charge.

    Course convener: Rosa Badia, Workflows and Distributed Computing Group Manager, Computer Sciences - Workflows and Distributed Computing Department.

    Lecturers: Rosa M. Badia, Workflows and Distributed Computing Group Manager; Javier Conejero, Senior Researcher; Jorge Ejarque, Researcher; Daniele Lezzi, Senior Researcher (all Computer Sciences - Workflows and Distributed Computing Department, BSC).

    Objectives: The objective of this course is to give an overview of the COMPSs programming model, which is able to exploit the inherent concurrency of sequential applications and execute them on distributed computing platforms in a manner that is transparent to the application developer. This is achieved by annotating part of the code as tasks and building, at execution time, a task-dependence graph based on the data actually consumed/produced by the tasks. The COMPSs runtime schedules the tasks on the computing nodes, taking into account factors such as data locality and the different nature of the computing nodes in the case of heterogeneous platforms. Additionally, COMPSs has recently been enhanced with the possibility of coordinating Web Services as part of the applications. COMPSs supports Java, C/C++ and Python as programming languages.

    Learning outcomes: The course will present the COMPSs syntax, programming methodology and an overview of the runtime internals. The attendees will get a first lesson on programming with COMPSs that will enable them to start programming with this framework. A hands-on session with simple introductory exercises will also be carried out. The students who finish this course will be able to develop simple COMPSs applications and to run them both on a local resource and on a distributed platform (initially a private cloud). The exercises will be delivered in Python and Java. In the case of Python, Jupyter notebooks will be used in some of the exercises.

    Level: for trainees with some theoretical and practical knowledge. INTERMEDIATE: for trainees with some theoretical and practical knowledge; those who finished the beginners course. ADVANCED: for trainees able to work independently and requiring guidance for solving complex problems.

    Prerequisites: Programming skills in Java and Python.

    Agenda:
    Day 1 (January 28): 9:30 – 10:00 Roundtable, presentation and background of participants. 10:00 – 10:30 Introduction to COMPSs: motivation, setup of the tutorial environment. 10:30 – 13:00 PyCOMPSs: writing Python applications (11:00 – 11:30 coffee break), Python hands-on using Jupyter notebooks. 13:00 – 14:30 Lunch break. 14:30 – 15:15 How to debug COMPSs applications. 15:15 – 16:30 Python practical session (bring your own code). 16:30 Adjourn.
    Day 2 (January 29): 9:30 – 11:00 COMPSs & Java: writing Java applications, Java hands-on. 11:00 – 11:30 Coffee break. 11:30 – 12:30 COMPSs advanced features: using binaries and MPI code, COMPSs execution environment, integration with OmpSs. 13:30 – 14:30 Lunch break. 14:30 – 15:30 Cluster hands-on (MareNostrum). 15:30 – 16:30 Practical session (bring your own code), COMPSs installation & final notes. END of COURSE.

    https://events.prace-ri.eu/event/907/ (2020-01-28 08:30 UTC – 2020-01-29 15:30 UTC)
  • Introduction to Hybrid Programming in HPC @ LRZ

    20 - 21 April 2020

    https://tess.oerc.ox.ac.uk/events/introduction-to-hybrid-programming-in-hpc-lrz-a1756d98-1a5f-4840-bf4e-eae5cda4de1b

    Overview: Most HPC systems are clusters of shared-memory nodes. Such SMP nodes range from small multi-core CPUs to large many-core CPUs. Parallel programming may combine distributed-memory parallelization across the node interconnect (e.g., with MPI) with shared-memory parallelization inside each node (e.g., with OpenMP or MPI-3.0 shared memory). This course analyzes the strengths and weaknesses of several parallel programming models on clusters of SMP nodes. Multi-socket multi-core systems in highly parallel environments are given special consideration. MPI-3.0 introduced a new shared-memory programming interface, which can be combined with inter-node MPI communication. It can be used for direct neighbour accesses similar to OpenMP or for direct halo copies, and it enables new hybrid programming models. These models are compared with various hybrid MPI+OpenMP approaches and pure MPI (a minimal MPI+OpenMP sketch follows this entry). Numerous case studies and micro-benchmarks demonstrate the performance-related aspects of hybrid programming. Hands-on sessions are included on both days. Tools for hybrid programming, such as thread/process placement support and performance analysis, are presented in a "how-to" section. This course provides scientific training in Computational Science and, in addition, scientific exchange among the participants. The course is a PRACE training event, organized by LRZ in cooperation with HLRS, RRZE and VSC (Vienna Scientific Cluster).

    Agenda & content (preliminary):
    1st day: 09:30 Registration. 10:00 Welcome. 10:05 Motivation. 10:15 Introduction. 10:45 Programming models - pure MPI. 11:05 Coffee break. 11:25 Topology optimization. 12:05 Practical (application-aware Cartesian topology). 12:45 Topology optimization (wrap-up). 13:00 Lunch. 14:00 MPI + MPI-3.0 shared memory. 14:30 Practical (replicated data). 15:00 Coffee break. 15:20 MPI memory models and synchronization. 16:00 Practical (substituting pt-to-pt by shared memory). 16:45 Coffee break. 17:00 Practical (substituting barrier synchronization by pt-to-pt). 18:00 End. 19:00 Social event at Gasthof Neuwirt (self-paying).
    2nd day: 09:00 Programming models (continued) - MPI + OpenMP. 10:30 Coffee break. 10:50 Practical (how to compile and start). 11:30 Practical (hybrid through OpenMP parallelization). 13:00 Lunch. 14:00 Overlapping communication and computation. 14:20 Practical (taskloops). 15:00 Coffee break. 15:20 MPI + OpenMP conclusions. 15:30 MPI + accelerators. 15:45 Tools. 16:00 Conclusions. 16:15 Q&A. 16:30 End.

    https://events.prace-ri.eu/event/902/ (2020-04-20 07:30 UTC – 2020-04-21 14:30 UTC)
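
    The core idea of the hybrid model compared above (MPI between nodes, OpenMP threads within a node) fits in a few lines. The sketch below is our illustration rather than course material; it requests MPI_THREAD_FUNNELED because only the master thread makes MPI calls, computes a partial sum with OpenMP threads on each rank, and combines the partial results with MPI_Reduce. Build with, e.g., mpicxx -fopenmp and launch one MPI rank per node or socket, letting OMP_NUM_THREADS fill the cores.

        // Minimal MPI + OpenMP hybrid sketch (illustration only).
        #include <mpi.h>
        #include <omp.h>
        #include <cstdio>

        int main(int argc, char** argv) {
            // MPI_THREAD_FUNNELED: only the thread that called MPI_Init_thread
            // will perform MPI calls.
            int provided;
            MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

            int rank, size;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            // Each rank sums its share of the terms with its OpenMP threads ...
            const long n = 100000000;
            double local = 0.0;
            #pragma omp parallel for reduction(+ : local)
            for (long i = rank; i < n; i += size)
                local += 1.0 / (1.0 + static_cast<double>(i));

            // ... and the per-rank partial sums are combined across ranks.
            double global = 0.0;
            MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

            if (rank == 0)
                std::printf("%d ranks x %d threads per rank, sum = %.6f\n",
                            size, omp_get_max_threads(), global);

            MPI_Finalize();
            return 0;
        }
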
  • Performance portability for GPU application using high-level programming approaches with Kokkos @ MdlS/Idris

    6 - 7 April 2020

    https://tess.oerc.ox.ac.uk/events/performance-portability-for-gpu-application-using-high-level-programming-approaches-with-kokkos-mdls-idris

    When developing a numerical simulation code with high performance and efficiency in mind, one is often compelled to accept a trade-off between using a native hardware programming model (like CUDA or OpenCL), which has become tremendously challenging, and losing some cross-platform portability. Porting a large existing legacy code to a modern HPC platform and developing a new simulation code are two different tasks that may both benefit from a high-level programming model which abstracts the low-level hardware details. This training presents existing high-level programming solutions that preserve, as far as possible, performance, maintainability and portability across the vast diversity of modern hardware architectures (multicore CPU, manycore, GPU, ARM, ...), as well as software development productivity. We will provide an introduction to the high-level C++ programming model Kokkos (https://github.com/kokkos) and show basic code examples to illustrate the following concepts through hands-on sessions: hardware portability - design an algorithm once and let the Kokkos back-end (OpenMP, CUDA, ...) derive an efficient low-level implementation; efficient architecture-aware memory containers - what is a Kokkos::View; revisiting fundamental parallel patterns with Kokkos - parallel for, reduce, scan, ...; exploring some mini-applications. A minimal sketch of these patterns follows this entry. Several detailed examples in C/C++/Fortran will be used in the hands-on sessions on the high-end hardware platform Ouessant (http://www.idris.fr/ouessant/), equipped with NVIDIA Pascal GPUs.

    Prerequisites: Some basic knowledge of the CUDA programming model and of C++.

    https://events.prace-ri.eu/event/892/ (2020-04-06 07:30 UTC – 2020-04-07 15:00 UTC)
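
    To make the concepts listed above concrete, here is a minimal sketch (ours, assuming an installed Kokkos; it is not taken from the training material) showing a Kokkos::View and the parallel_for / parallel_reduce patterns. The same source runs on the OpenMP, CUDA or other back-end chosen when Kokkos was configured.

        // Minimal Kokkos sketch: a View container plus the "for" and "reduce"
        // parallel patterns (illustration only).
        #include <Kokkos_Core.hpp>
        #include <cstdio>

        int main(int argc, char** argv) {
            Kokkos::initialize(argc, argv);
            {
                const int n = 1 << 20;

                // Kokkos::View: architecture-aware array living in the default
                // memory space of the selected back-end (host or device).
                Kokkos::View<double*> x("x", n), y("y", n);

                // parallel_for: the fundamental "for" pattern.
                Kokkos::parallel_for("init", n, KOKKOS_LAMBDA(const int i) {
                    x(i) = 1.0;
                    y(i) = 2.0 * i;
                });

                // parallel_reduce: the "reduce" pattern, here a dot product.
                double dot = 0.0;
                Kokkos::parallel_reduce("dot", n,
                    KOKKOS_LAMBDA(const int i, double& partial) {
                        partial += x(i) * y(i);
                    }, dot);

                std::printf("dot = %e\n", dot);
            }  // Views go out of scope before finalize.
            Kokkos::finalize();
            return 0;
        }
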
  • HPC and natural hazards: modelling tsunamis and volcanic plumes using European flagship codes @ BSC

    2 - 5 December 2019

    https://tess.oerc.ox.ac.uk/events/hpc-and-natural-hazards-modelling-tsunamis-and-volcanic-plumes-using-european-flagship-codes-bsc

    The registration to this course is now open. Please bring your own laptop. All the PATC courses at BSC are free of charge.

    Course convener: Arnau Folch. Course lecturers: Jorge Macías (Malaga University), Matteo Cerminara (INGV Pisa), Leonardo Mingari (CASE Department, BSC).

    Objectives: This course focuses on modelling two of the highest-impact natural hazards, volcanic eruptions and tsunamis. The objective is to give a succinct theoretical overview and then introduce students to the use of different HPC flagship codes included in the Center of Excellence for Exascale in Solid Earth (ChEESE). ASHEE is a volcanic plume and PDC simulator based on a multiphase fluid dynamic model conceived for compressible mixtures composed of gaseous components and solid particle phases. FALL3D is an Eulerian model for the atmospheric transport and ground deposition of volcanic tephra (ash), used in operational volcanic ash dispersal forecasts routinely employed to prevent aircraft encounters with volcanic ash clouds and to perform re-routings avoiding contaminated airspace. T-HySEA solves the 2D shallow-water equations in hydrostatic and dispersive versions; it is based on a high-order finite-volume (FV) discretisation (hydrostatic), with finite differences (FD) for the dispersive version, on two-way structured nested meshes in spherical coordinates. Together with hands-on sessions, the course will also tackle post-processing strategies based on Python. In recent years, the Python programming language has become one of the most popular choices for geoscientists. Python is a modern, interpreted, object-oriented, open-source language that is easy to learn, easy to read and fast to write. The proliferation of open-source projects and libraries has facilitated rapid scientific development in the geoscience community. In addition, the modern data structures and object-oriented nature of the language, along with an elegant syntax, enable Earth scientists to write more robust and less buggy code.

    Learning outcomes: Participants will learn and gain experience in installing Solid Earth codes and related utilities and libraries, running numerical simulations, monitoring the execution of supercomputing jobs, and analyzing and visualizing model results.

    Level: (All courses are designed for specialists with at least a 1st-cycle degree or similar background experience.) INTERMEDIATE: for trainees with some theoretical and practical knowledge; those who finished the beginners course.

    Prerequisites: At least a university degree in progress in Earth Sciences, Computer Sciences or a related area. Basic knowledge of Linux. Knowledge of C, Fortran, MPI or OpenMP is recommended. Knowledge of Earth Sciences data formats (GRIB, netCDF, HDF, ...) is recommended. Basic knowledge of Python.

    Agenda:
    Day 1 - Session 1 / 10:00am – 1:30pm (3 h lectures): 10:00-11:30 Volcanic clouds and plumes: introduction to the physical problem. 11:30-11:50 Coffee break. 11:50-13:30 Introduction to FALL3D. 13:30-14:30 Lunch break. Session 2 / 2:30pm – 6:00pm (1:30 h lectures, 2 h practical): 14:30-16:00 Introduction to ASHEE. 16:00-16:20 Coffee break. 16:20-18:00 Installation and compilation of FALL3D and ASHEE.
    Day 2 - Session 1 / 10:00am – 1:30pm (3 h hands-on): 10:00-11:30 FALL3D hands-on I. 11:30-11:50 Coffee break. 11:50-13:30 FALL3D hands-on II. 13:30-14:30 Lunch break. Session 2 / 2:30pm – 6:00pm (1:30 h lectures, 2 h practical): 14:30-16:00 ASHEE hands-on I. 16:00-16:20 Coffee break. 16:20-18:00 ASHEE hands-on II.
    Day 3 - Session 1 / 10:00am – 1:30pm (1:30 h lectures, 1:40 h practical): 10:00-11:30 Introduction to tsunami modelling and the Tsunami-HySEA code. 11:30-11:50 Coffee break. 11:50-13:30 Tsunami-HySEA: from simple to complex simulations. 13:30-14:30 Lunch break. Session 2 / 2:30pm – 6:00pm (3 h hands-on): 14:30-16:00 Tsunami-HySEA hands-on I. 16:00-16:20 Coffee break. 16:20-18:00 Tsunami-HySEA hands-on II.
    Day 4 - Session 1 / 10:00am – 1:30pm (3 h lectures): 10:00-11:30 A brief introduction to the Python language and object-oriented programming. 11:30-11:50 Coffee break. 11:50-13:30 Scientific computing tools, reading files and accessing remote data. 13:30-14:30 Lunch break. Topics covered: a brief introduction to the Python language (installing packages); object-oriented programming (examples on classes and motivation, how to make a class, method objects, example: manipulating dates and times); scientific computing tools (vectors and arrays: basic operations and manipulations, references and copies of arrays, vectorization, statistics tools, data analysis with Pandas); reading files and accessing remote data (reading and writing multi-column data files; file formats used in geosciences: netCDF, HDF5, HDF-EOS 2, and GRIB 1 and 2; data access services: OPeNDAP, NetCDF Subset Service, etc.; example: reading data from OPeNDAP). Session 2 / 2:30pm – 6:00pm (3 h hands-on): 14:30-16:00 Visualization. 16:00-16:20 Coffee break. 16:20-18:00 Examples and exercises. Visualization topics: simple line plots, adjusting the plot, visualization of geographic data, 3D scientific data visualization. Examples and exercises: FALL3D pre- and post-processing tools. End of course.

    https://events.prace-ri.eu/event/906/ (2019-12-02 09:00 UTC – 2019-12-05 17:00 UTC)
  • Node-Level Performance Engineering @ LRZ

    3 - 4 December 2019

    https://tess.oerc.ox.ac.uk/events/node-level-performance-engineering-lrz-3d8b9f51-5dc5-4c71-b637-121de55a1a92

    This course covers performance engineering approaches on the compute-node level. Even application developers who are fluent in OpenMP and MPI often lack a good grasp of how much performance could at best be achieved by their code. This is because parallelism takes us only half the way to good performance. Even worse, slow serial code tends to scale very well, hiding the fact that resources are wasted. This course conveys the required knowledge to develop a thorough understanding of the interactions between software and hardware. This process must start at the core, socket and node level, where the code gets executed that does the actual computational work. We introduce the basic architectural features and bottlenecks of modern processors and compute nodes. Pipelining, SIMD, superscalarity, caches, memory interfaces, ccNUMA, etc. are covered. A cornerstone of node-level performance analysis is the Roofline model, which is introduced in due detail and applied to various examples from computational science (the basic form of the model is sketched after this entry). We also show how simple software tools can be used to acquire knowledge about the system, run code in a reproducible way, and validate hypotheses about resource consumption. Finally, once the architectural requirements of a code are understood and correlated with performance measurements, the potential benefit of code changes can often be predicted, replacing hope-for-the-best optimizations by a scientific process. The course is a PRACE training event.

    Topics: Introduction: our approach to performance engineering; basic architecture of multicore systems: threads, cores, caches, sockets, memory; the important role of system topology. Tools: topology & affinity in multicore environments; overview of likwid-topology and likwid-pin. Microbenchmarking for architectural exploration: properties of data paths in the memory hierarchy; bottlenecks; OpenMP barrier overhead. Roofline model basics: model assumptions and construction; simple examples; limitations of the Roofline model. Pattern-based performance engineering. Optimal use of parallel resources: Single Instruction Multiple Data (SIMD); cache-coherent Non-Uniform Memory Architecture (ccNUMA); Simultaneous Multi-Threading (SMT). Tools - hardware performance counters: why hardware performance counters?; likwid-perfctr; validating performance models. Roofline case studies: dense matrix-vector multiplication; sparse matrix-vector multiplication; Jacobi (stencil) smoother. Optional: the ECM performance model.

    https://events.prace-ri.eu/event/901/ (2019-12-03 08:00 UTC – 2019-12-04 17:00 UTC)
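
    For reference, the naive Roofline model that the course builds on bounds the attainable performance of a loop by the slower of the compute and memory data paths; the notation below is ours.

        % Roofline model (standard formulation; symbol names are ours)
        %   P      : attainable performance             [flop/s]
        %   P_peak : peak arithmetic performance        [flop/s]
        %   I      : arithmetic intensity of the loop   [flop/byte]
        %   b_s    : achievable memory bandwidth        [byte/s]
        P = \min\bigl(P_{\mathrm{peak}},\; I \cdot b_s\bigr)

    For example, a double-precision vector update a(i) = b(i) + s*c(i) performs 2 flops per 24 bytes of traffic (32 bytes with write-allocate), so I is roughly 0.06-0.08 flop/byte and the loop sits far below the ridge point of current CPUs, i.e. it is memory bound.
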
  • Managing distributed data with Hecuba and dataClay @ BSC

    30 January 2020

    https://tess.oerc.ox.ac.uk/events/managing-distributed-data-with-hecuba-and-dataclay-bsc

    Please bring your own laptop. All the PATC courses at BSC are free of charge.

    Course conveners (Computer Sciences - Workflows and Distributed Computing): Yolanda Becerra, Data-driven Scientific Computing research line, Senior Researcher; Anna Queralt, Distributed Object Management research line, Senior Researcher.

    Course lecturers (Computer Sciences - Workflows and Distributed Computing): Alex Barceló, Distributed Object Management research line, Researcher; Yolanda Becerra, Data-driven Scientific Computing research line, Senior Researcher; Adrián Espejo, Data-driven Scientific Computing research line, Junior Research Engineer; Daniel Gasull, Distributed Object Management research line, Research Engineer; Pol Santamaria, Data-driven Scientific Computing research line, Junior Developer; Anna Queralt, Distributed Object Management research line, Senior Researcher.

    Objectives: The objective of this course is to give an overview of BSC's storage solutions, Hecuba and dataClay. These two platforms make it easy to store and manipulate distributed data from object-oriented applications, enabling programmers to handle object persistence using the same classes they use in their programs and thus avoiding time-consuming transformations between persistent and non-persistent data models. Hecuba and dataClay also enable programmers to manage distributed data transparently, without worrying about its location. This is achieved by adding a minimal set of annotations to the classes. Both Hecuba and dataClay can work independently or integrated with the COMPSs programming model and runtime to facilitate the parallelization of applications that handle persistent data, thus providing a comprehensive mechanism that enables the efficient use of persistent storage solutions from distributed programming environments. Both platforms offer a common interface to the application developer that makes it easy to use one solution or the other depending on the needs, without changing the application code. Each of them also has additional features that allow the programmer to take advantage of its particularities.

    Learning outcomes: The course will present the Hecuba and dataClay syntax, programming methodology and an overview of their internals. An overview of COMPSs at user level will also be provided in order to take advantage of the distribution of data with both platforms. The attendees will get a first lesson on programming with the common storage interface that will enable them to start programming with both frameworks. A hands-on session with simple introductory exercises will also be carried out for each platform, with and without COMPSs to distribute the computation. The students who finish this course will be able to develop simple Hecuba and dataClay applications and to run them both on a local resource and on a distributed platform (initially a private cloud).

    Prerequisites: Basic programming skills in Python and Java. Previous attendance of the PATC course on programming distributed systems with COMPSs is recommended.

    Agenda:
    Day 1 (Jan 30) - Session 1 / 9:30 – 13:00: 9:30-10:00 Round table, presentation and background of participants. 10:00-11:00 Motivation, introduction and syntax of the BSC storage platforms. 11:00-11:30 Coffee break. 11:30-12:15 Hands-on with the storage API. 12:15-13:00 COMPSs overview and how to parallelize a sequential application. 13:00-14:30 Lunch break. Session 2 / 14:30 – 18:00: 14:30-16:00 Hecuba specifics and hands-on. 16:00-16:30 Break. 16:30-18:00 dataClay specifics and hands-on. END of COURSE.

    https://events.prace-ri.eu/event/909/ (2020-01-30 08:30 UTC – 2020-01-30 17:00 UTC)
  • Short course on HPC-based Computational Bio-Medicine @ BSC

    11 - 13 February 2020

    https://tess.oerc.ox.ac.uk/events/short-course-on-hpc-based-computational-bio-medicine-bsc-414d1420-ae8c-42dd-844e-45c5ba53ff14

    The registration to this course is now open. Please bring your own laptop. All the PATC courses at BSC are free of charge.

    Course convener: Mariano Vázquez. Lecturers: Mariano Vázquez (BSC), Marco Verdicchio (SURFsara), Okba Hamitou (Bull), Gábor Závodszky (UvA), João Damas (ACELLERA), Adrià Perez (UPF), Phil Tooley (USFD), Ricard Borrell (BSC), Jazmín Aguado-Sierra (BSC), Dr Alexander Heifetz (EVOTEC), Andrea Townsend-Nicholson (UCL), Guillermo Marín (BSC) and Paul Melis (SURFsara).

    Objectives: The objective of this course is to give a panorama of the use of HPC-based computational mechanics through the projects BSC is carrying out. This panorama includes the basics of what is behind the main tools: computational mechanics and parallelization. The training is delivered in collaboration with the centre of excellence CompBioMed.

    Learning outcomes: The course gives a wide perspective on, and the latest trends in, how HPC helps in industrial, clinical and research applications, allowing more realistic multiphysics simulations. In addition, students have the opportunity to run jobs on the MareNostrum supercomputer.

    Level: INTERMEDIATE - for trainees with some theoretical and practical knowledge.

    Agenda:
    Day 1 (Feb. 11) - Session 1 / 9:00am – 1:00pm (4 h lectures): 9:00-9:15 Welcome (Mariano Vázquez, BSC). 9:15-10:50 Introduction to HPC in computational modelling (Marco Verdicchio, SURFsara). 10:50-11:10 Coffee break. 11:10-13:00 Predicting the risk of fall - a computational modelling approach using CT2S (Dr Alessandro Melis, USFD). 13:00-14:00 Lunch break. Session 2 / 2:00pm – 4:00pm: 14:00-15:30 Computational hemodynamics on HPC (Gábor Závodszky, UvA). 16:00-18:00 Visit to MareNostrum.
    Day 2 (Feb. 12) - Session 3 / 9:00am – 1:00pm (4 h lectures): 9:00-10:50 Data visualization for researchers crash course (Guillermo Marín, BSC). 10:50-11:10 Coffee break. 11:10-13:00 Parallel algorithms for computational mechanics (Guillaume Houzeaux, Ricard Borrell, BSC). 13:00-14:00 Lunch break. Session 4 / 2:00pm – 6:00pm (2 h lectures, 2 h practical): 14:00-15:00 Introduction to computer-aided drug design (CADD) and GPCR modelling (Dr Alexander Heifetz, EVOTEC). 15:00-16:00 Innovations in HPC training for medical, science and engineering students (Andrea Townsend-Nicholson, UCL). 16:00-16:15 Coffee break. 16:15-18:00 Molecular medicine: hands-on (Andrea Townsend-Nicholson, UCL).
    Day 3 (Feb. 13) - Session 5 / 9:00am – 1:00pm (4 h lectures): 9:00-10:00 Fluid-structure interaction methods for biomechanics (C. Samaniego, D. Oks, A. Santiago). 10:00-12:00 Hands-on on FSI modelling. 12:00-13:00 Cardiac modelling (J. Aguado-Sierra). 13:00-14:00 Lunch break. Session 6 / 2:00pm – 4:00pm: 14:00-15:30 Compilation and optimization in the HPC environment (O. Hamitou).

    https://events.prace-ri.eu/event/912/ (2020-02-11 08:00 UTC – 2020-02-13 17:00 UTC)
  • Big Data Analytics @ BSC

    3 - 7 February 2020

    https://tess.oerc.ox.ac.uk/events/big-data-analytics-bsc-7f199aa0-835c-44a2-8111-f6c92f8700e1

    Registration is now open. Please bring your own laptop. All the PATC courses at BSC are free of charge.

    Course convener: Maria-Ribera Sancho.

    Objectives: The course brings together key information technologies used in manipulating, storing and analysing data, including: the basic tools for statistical analysis, techniques for parallel processing, tools for access to unstructured data, and storage solutions.

    Learning outcomes: Students will be introduced to systems that can accept, store and analyse large volumes of unstructured data. The learned skills can be used in data-intensive application areas.

    Level: For trainees with some theoretical and practical knowledge.

    Agenda:
    Day 1 (Feb 3): 9:30 – 11:00 Introduction to Big Data. 11:00 – 11:30 Coffee break. 11:30 – 13:00 Introduction to Big Data. 13:00 – 14:00 Lunch break. 14:00 – 16:00 Practical data analytics for solving real-world problems (José Carlos Carrasco Jiménez, Researcher, BSC): data analytics has changed the way we make decisions; we see the benefits and the advances in many fields, from financial to medical and industrial applications, due to the integration of advanced data analytics. In this course we will propose practical tips gained through our experience at BSC in big data analytics projects, and discover how to overcome some of the most challenging tasks in practical data analytics. 16:00 – 16:30 Coffee break. 16:30 – 18:00 Hands-on (José Carlos Carrasco Jiménez, Researcher, BSC): this session will focus on several key methods and algorithms (both serial and parallel) that make it possible to discover global properties of data while dealing with Big Data: network science, multi-constrained and multi-objective optimization, examples using the above approaches and some hands-on exercises.
    Day 2 (Feb 4): 9:30 – 13:00 Big Data management (Albert Abelló, UPC, inLab FIB): Big Data has many definitions and facets; we'll pay attention to the problems we have to face to store it and how we can process it. More specifically, we'll focus on the Apache Hadoop ecosystem and two of its basic components, namely HBase and the MapReduce engine. 11:00 – 11:30 Coffee break. Hands-on exercise. 13:00 – 14:00 Lunch break. 14:00 – 16:00 NoSQL databases (Oscar Romero, Dept. of Service and Information System Engineering, UPC-BarcelonaTech): the relational model has dominated data storage systems since the mid-1970s; however, the changing storage needs over the past decade have given rise to new models for storing data, collectively known as NoSQL. In this presentation we will focus on two of the most common types of NoSQL databases, document-oriented databases and graph databases, and explain the use cases suitable for each of them. 16:00 – 16:30 Coffee break. 16:30 – 18:00 Multidisciplinary research and data analytics: smart cities (Maria Cristina Marinescu, Computer Applications in Science & Engineering, BSC).
    Day 3 (Feb 5): 9:30 – 13:00 Data analytics with Apache Spark (Josep Lluis Berral, Computer Sciences - Data Centric Computing, BSC): Apache Spark has become a consolidated technology for large-scale processing in a fast and general way, with programmer-friendly interfaces, official bindings for many of the most used languages (Java, Scala, Python and R), extensive documentation and development tools. This course introduces Apache Spark as well as some of its core libraries for data manipulation, machine learning, data streams and graph analytics. 11:00 – 11:30 Coffee break. 13:00 – 14:00 Lunch break. 14:00 – 15:30 Data analytics with Apache Spark, part 2 (Josep Lluis Berral, Computer Sciences - Data Centric Computing, BSC). 16:00 – 16:15 Coffee break. 15:30 – 17:00 European project on Big Data.
    Day 4 (Feb 6): 9:30 – 11:00 Introduction to deep learning. 11:00 – 11:30 Coffee break. 11:30 – 13:00 Introduction to deep learning. 13:00 – 14:00 Lunch break. 14:00 – 16:00 Business intelligence (Tomàs Aluja, UPC - BarcelonaTech). 16:00 – 16:30 Coffee break. 16:30 – 18:00 Data analytics in societal-challenge modelling: smart mobility and other related fields (Dra. Mari Paz Linares and Jamie Arjona, UPC, inLab FIB): the Internet of Things, Big Data, smart cities and Industry 4.0 are concepts that have arisen in recent years with promises of solving daily human issues. In this session we will present how a combination of the Internet of Things and Big Data can attack certain challenges and alleviate them.
    Day 5 (Feb 7): 9:30 – 13:00 Data visualization theory (Luz Calvo, User Experience and Interaction Designer, BSC): theory, basic concepts, human perception, design, colour, audience / validation / bad practices, visualisation design process. 11:00 – 11:30 Coffee break. Tools for data visualization: Tableau, Data Wrapper, RawGraphs, Flourish, Carto; data visualisation with d3.js. END of COURSE.

    https://events.prace-ri.eu/event/910/ (2020-02-03 08:30 UTC – 2020-02-07 15:30 UTC)
  • Big Data Analysis with Apache Spark @ CSC

    27 - 28 November 2019

    https://tess.oerc.ox.ac.uk/events/big-data-analysis-with-apache-spark-csc

    Description: Data is everywhere, and with the rapid growth in the volume of data used in analysis tasks, it gets more and more challenging to process it using standard methods. One typically runs into several problems: low memory or CPU, waiting forever for a job to complete, or starting all over again if a job fails. Enter Spark, a high-performance distributed computing framework, which allows us to tackle big-data problems by distributing the workload across a cluster of machines. Say goodbye to all those painful workloads forever. The two-day course addresses the technical architecture and use cases of Spark, writing Spark code using Python, and using Spark's machine learning library to perform ML-based tasks. We will then look at methods for running a Spark cluster on CSC's container cloud Rahti, along with ways to manage and fine-tune your cluster. The course will also demonstrate how to work with real-time data. The first day includes the overview, architectural concepts, programming with Spark's fundamental data structure (RDD) and Spark's machine learning library. The second day focuses on the analysis of data by running SQL queries in Spark, working with real-time data streams, and how to set up and manage a Spark cluster. Please note: this is not a regular programming course; participants will be expected to learn emerging concepts in the field of big data / distributed processing, which might be completely different from the concepts of a general programming language.

    Learning outcome: After the course the participants should be able to write simple to intermediate programs in Spark using RDDs and dataframes.

    Intended audience and prerequisites: The course is intended for researchers, students and professionals with programming skills, preferably in Python, as the exercises are in Python. Some knowledge of SQL is also recommended. IMPORTANT: this is a beginners course for Spark. If you are already familiar with it, please have a look at the agenda or email us to find out whether the course content suits you.

    Agenda:
    Day 1, Wednesday 27.11: 09.00 – 09.45 Overview and architecture of Spark. 09.45 – 10.30 Basics of RDDs and demo. 10.30 – 10.45 Coffee break. 10.45 – 11.30 RDD: transformations and actions. 11.30 – 12.00 Exercises. 12.00 – 13.00 Lunch. 13.00 – 13.30 Word count example. 13.30 – 14.00 Exercises. 14.00 – 14.30 Short overview of the machine learning library of Spark. 14.30 – 14.45 Coffee break. 14.45 – 15.30 Exercises. 15.30 – 15.45 Wrap-up and further topics. 15.45 – 16.00 Summary of the first day & exercise walk-through.
    Day 2, Thursday 28.11: 09.00 – 09.30 Spark dataframes and SQL overview. 09.30 – 10.15 Exercises. 10.15 – 10.30 Coffee break. 10.30 – 10.45 Dataframes and SQL (contd.). 10.45 – 12.00 Exercises. 12.00 – 13.00 Lunch. 13.00 – 14.00 Setting up a Spark cluster. 14.00 – 14.30 Exercises. 14.00 – 14.30 Best practices and other useful material. 14.30 – 14.45 Coffee break. 14.45 – 15.00 Brief overview of Spark Streaming. 15.00 – 15.15 Demo: processing live Twitter stream data. 15.15 – 16.00 Summary of the course & exercise walk-through.

    Lecturers: Apurva Nandan (CSC, lecturer), Anni Pyysing (CSC, teaching assistant). Language: English. Price: free of charge.

    https://events.prace-ri.eu/event/930/ (2019-11-27 07:00 UTC – 2019-11-28 14:00 UTC)
  • Introduction to ScaLAPACK and MAGMA libraries @ MdlS/Idris

    8 - 9 June 2020

    https://tess.oerc.ox.ac.uk/events/introduction-to-scalapack-and-magma-libraries-mdls-idris

    The aim of this course is to introduce the basic usage of the ScaLAPACK and MAGMA libraries.

    ScaLAPACK: ScaLAPACK (Scalable Linear Algebra PACKage) is a library for high-performance dense linear algebra based on routines for distributed-memory message-passing computers. It is mostly based on a subset of LAPACK (Linear Algebra PACKage) and BLAS (Basic Linear Algebra Subprograms) routines redesigned for distributed-memory MIMD parallel computers, where all the MPI communications are handled by routines provided by the BLACS (Basic Linear Algebra Communication Subprograms) library. The lecture will mostly cover how to use the PBLAS (Parallel BLAS) and ScaLAPACK libraries for linear algebra problems in HPC: a general introduction to the PBLAS and ScaLAPACK libraries; the main ideas of how to decompose linear algebra problems in parallel programming (a small sketch of the block-cyclic data distribution follows this entry); examples of basic operations with PBLAS: vector-vector, vector-matrix and matrix-matrix operations; examples of basic operations with ScaLAPACK: inversion and diagonalization; a main problem based on calculating the exponentiation of a matrix.

    MAGMA: In the second part of the course, we present MAGMA (Matrix Algebra on GPU and Multicore Architectures), a dense linear algebra library similar to LAPACK but for hybrid/heterogeneous architectures. We start by presenting basic concepts of GPU architecture and giving an overview of communication schemes between CPUs and GPUs. Then, we briefly present hybrid CPU/GPU programming models using the CUDA language. Finally, we present MAGMA and how it can be used to easily and efficiently accelerate scientific codes, particularly those already using BLAS and LAPACK.

    Trainers: Donfack Simplice (MAGMA), Hasnaoui Karim (ScaLAPACK).

    Prerequisites: C or C++ and Fortran programming. Notions of linear algebra, as well as notions of MPI, would be an asset.

    https://events.prace-ri.eu/event/919/ (2020-06-08 07:30 UTC – 2020-06-09 15:00 UTC)
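
    The data decomposition behind PBLAS and ScaLAPACK is the 2D block-cyclic distribution over a P x Q process grid. The short sketch below (our illustration, not course material) shows the index arithmetic only: for a global row or column index it computes which process coordinate owns it and where it lands in that process's local array (0-based, first block on process 0).

        // Minimal sketch of the 2D block-cyclic mapping used by ScaLAPACK.
        #include <cstdio>

        struct Owner { int proc; int local; };

        // Map a global index onto (owning process, local index) for a
        // block-cyclic distribution with the given block size.
        Owner blockCyclic(int globalIdx, int blockSize, int nprocs) {
            int block  = globalIdx / blockSize;   // block containing the index
            int proc   = block % nprocs;          // blocks are dealt out cyclically
            int lblock = block / nprocs;          // blocks already owned locally
            int local  = lblock * blockSize + globalIdx % blockSize;
            return {proc, local};
        }

        int main() {
            // Example: 2x2 blocks on a 2x2 process grid.
            const int MB = 2, NB = 2, P = 2, Q = 2;
            const int i = 7, j = 4;               // global (row, column), 0-based
            Owner r = blockCyclic(i, MB, P);      // row mapping
            Owner c = blockCyclic(j, NB, Q);      // column mapping
            std::printf("A(%d,%d) -> process (%d,%d), local (%d,%d)\n",
                        i, j, r.proc, c.proc, r.local, c.local);
            return 0;
        }

    In ScaLAPACK itself this bookkeeping is handled by the array descriptors (descinit) and helper routines such as numroc, but the arithmetic above is what they implement.
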
  • Introduction to machine learning in Python with Scikit-learn @ MdlS/ICM

    18 December 2019

    https://tess.oerc.ox.ac.uk/events/introduction-to-machine-learning-in-python-with-scikit-learn-mdls-icm

    The rapid growth of artificial intelligence and data science has made scikit-learn one of the most popular Python libraries. The tutorial will present the main components of scikit-learn, covering aspects such as standard classifiers and regressors, cross-validation, and pipeline construction, with examples from various fields of application. Hands-on sessions will focus on medical applications, such as classification for computer-aided diagnosis or regression for the prediction of clinical scores.

    Learning outcomes: Ability to solve a real-world machine learning problem with scikit-learn.

    Prerequisites: Basic knowledge of Python (pandas, numpy); notions of machine learning. No prior medical knowledge is required.

    https://events.prace-ri.eu/event/933/ (2019-12-18 08:30 UTC – 2019-12-18 17:00 UTC)
  • Spring School in Computational Chemistry 2020 @ CSC

    10 - 13 March 2020

    https://tess.oerc.ox.ac.uk/events/spring-school-in-computational-chemistry-2020-csc

    Description: The Spring School provides a comprehensive, tutorial-style, hands-on, introductory- and intermediate-level treatment of the essential ingredients of molecular modelling and computational chemistry using modern supercomputers. The school programme is being prepared, but the main content will be similar to previous years and consists of: classical molecular dynamics, intro + hands-on (1 day); electronic structure theory, intro + hands-on (1 day); machine learning in chemistry, intro + hands-on; special topics, e.g. visualization, enhanced sampling techniques, etc. The school is a must for graduate students in the field, providing an overview of "what can be calculated and how it should be done", without forgetting the important aspect of network building. Watch a short video of one of our favourite lecturers contemplating this in connection with the 2019 school. To get an idea of the depth in which the topics are covered, take a look at the materials from the 2019 school.

    Learning outcome: The learning outcome is to gain an overview of the two main methods in computational chemistry, molecular dynamics and electronic structure calculations, in connection with related HPC software packages and other useful skills of the trade. The workshop is also suited as an intensive crash course (the first two days) in computational modelling and is expected to be useful for students and researchers also in physics, materials sciences and biosciences. The "special topics" then build on this foundation.

    Prerequisites: Working knowledge of, and some work experience in, some branch of computational chemistry will be useful. Basic Linux skills for the hands-on exercises and elementary Python for the machine learning hands-on. A more detailed description of prerequisites and links for self-study are available.

    Programme: The timetable can be seen in the left menu and the materials (uploaded after the school) can be accessed at the bottom of the page. For an overview of the previous event, read a summary blog of the 2019 school. In 2021 the school will likely be organized in mid-March - stay tuned! Software used in the school: TBA.

    Lecturers: Dr. Filippo Federici Canova, Aalto University, Finland; Dr. Mikael Johansson, University of Helsinki, Finland; Dr. Luca Monticelli, IBCP (CNRS), Lyon, France; Dr. Michael Patzschke, Helmholtz-Zentrum Dresden-Rossendorf, Germany; Prof. Patrick Rinke, Aalto University, Finland; Dr. Martti Louhivuori, CSC - IT Center for Science, Finland; Dr. Atte Sillanpää, CSC - IT Center for Science, Finland; Dr. Nino Runeberg, CSC - IT Center for Science, Finland; TBC.

    Language: English. Price: free of charge.

    https://events.prace-ri.eu/event/942/ (2020-03-10 07:00 UTC – 2020-03-13 12:00 UTC)
  • Introduction to Heterogeneous Memory Usage @ BSC

    25 February 2020

    https://tess.oerc.ox.ac.uk/events/introduction-to-heterogeneous-memory-usage-bsc

    The registration to this course is now open. All PATC courses at BSC are free of charge. Please bring your own laptop.

    Convener: Antonio Peña, Computer Sciences Senior Researcher, Accelerators and Communications for High Performance Computing, BSC.

    Objectives: The objective of this course is to learn how to use systems with more than one memory subsystem. We will see the different options for using Intel's KNL memory subsystems and systems equipped with Intel's Optane technology (an illustrative allocation sketch follows this entry).

    Learning outcomes: The students who finish this course will be able to make applications leverage multiple memory subsystems.

    Level: INTERMEDIATE - for trainees with some theoretical and practical knowledge; those who finished the beginners course.

    Prerequisites: Basic skills in C programming.

    Agenda: 9:00-9:30 Registration. 9:30-10:30 Introduction to memory technologies (Petar Radojkovic). 10:30-11:00 Coffee break. 11:00-12:30 Use of heterogeneous memories (Antonio J. Peña). 12:30-13:00 Hands-on: environment setup. 13:00-14:30 Lunch. 14:30-18:00 Hands-on.

    https://events.prace-ri.eu/event/913/ (2020-02-25 08:00 UTC – 2020-02-25 17:00 UTC)
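
    One common way to address more than one memory subsystem from C/C++ is the memkind library, which exposes high-bandwidth memory (e.g. KNL's MCDRAM) and other memory kinds through malloc-style calls; whether the course itself uses memkind or another interface is an assumption on our part. A minimal sketch:

        // Minimal memkind sketch (assumption: memkind is the interface used;
        // the course may rely on other tools). MEMKIND_HBW_PREFERRED falls
        // back to regular DDR when no high-bandwidth memory is present.
        #include <memkind.h>
        #include <cstdio>
        #include <cstring>

        int main() {
            const size_t n = 1 << 20;

            // Ordinary DDR allocation.
            double* ddr = static_cast<double*>(
                memkind_malloc(MEMKIND_DEFAULT, n * sizeof(double)));

            // Allocation preferably placed in high-bandwidth memory.
            double* hbw = static_cast<double*>(
                memkind_malloc(MEMKIND_HBW_PREFERRED, n * sizeof(double)));

            if (!ddr || !hbw) {
                std::fprintf(stderr, "allocation failed\n");
                return 1;
            }

            std::memset(ddr, 0, n * sizeof(double));
            std::memset(hbw, 0, n * sizeof(double));
            std::printf("allocated %zu doubles in DDR and (preferably) HBM\n", n);

            memkind_free(MEMKIND_DEFAULT, ddr);
            memkind_free(MEMKIND_HBW_PREFERRED, hbw);
            return 0;
        }
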
  • Data, lights, camera, action! Scientific visualization done beautifully using Blender @SURFsara

    3 December 2019

    https://tess.oerc.ox.ac.uk/events/data-lights-camera-action-scientific-visualization-done-beautifully-using-blender-surfsara

    Description (basics course): Would you like to make 3D visualisations that are visually more attractive than what ParaView or VisIt can provide? Do you need an image for a grant application that needs to look spectacular? Would you like to create a cool animation of your simulation data? Then this course may be for you! The goal of this course is to provide you with hands-on knowledge to produce great images and basic animations from 3D scientific data. We will be using the open-source package Blender 2.8 (http://www.blender.org), which provides good basic functionality while also being usable for advanced work and general editing of 3D data. It is also a lot of fun to work with (once you get used to its graphical interface). Examples of relevant scientific data are 3D cell-based simulations, 3D models from photogrammetry, (isosurfaces of) 3D medical scans, molecular models and earth sciences data. Note that we don't focus on information visualization of abstract data such as graphs (although you could convert those into a 3D model first and then use them in Blender). We encourage participants to bring along the data they normally work with, or a sample thereof, to which they would like to apply the course knowledge.

    Topics covered: Blender UI and workflow, scene structure; basic importing of data; simple 3D mesh editing with modifiers; basic animation; rendering, lighting and materials.

    Note from the trainers: This course was previously given in a single day, but we have now split it into a Basics and an Advanced part, each a full day. The course described above is the Basics part. The follow-up Advanced course, with more in-depth information and a few extra topics, will be held in Q1 2020.

    Important information - waiting list: If the course gets fully booked, no more registrations are accepted through this website. However, you can be included in the waiting list: for that, please send an email to training@surfsara.nl and you will be informed when a place becomes available.

    https://events.prace-ri.eu/event/934/ (2019-12-03 08:00 UTC – 2019-12-03 16:40 UTC)
  • Introduction to Deep Learning and Tensorflow@Cineca

    18 - 20 November 2019

    Introduction to Deep Learning and Tensorflow@Cineca https://tess.oerc.ox.ac.uk/events/introduction-to-deep-learning-and-tensorflow-cineca This course is an introduction to deep learning, currently the most promising field of machine learning. We will illustrate the basic concepts of machine learning and the new trends, together with a discussion of past unfruitful approaches. Our aim is to enable students to get acquainted with and take advantage of deep learning methodologies, and ultimately to be able to design tasks to be run on cluster machines. The course will also focus on practical sessions dedicated to the introduction of the widely used TensorFlow framework. Skills: By the end of the course each student should be able to: understand the key features of deep learning; understand some use cases using the basics of TensorFlow. Target Audience: Researchers and programmers interested in using deep learning. Pre-requisites: Knowledge of the basic fundamentals of machine learning and the Python language is useful but not necessary. Grant: Lunch for the three days will be offered to all participants and some grants are available. The only requirement to be eligible is not to be funded by your institution to attend the course and to work or live in an institute outside the Rome area. The grant will be 300 euros for students working and living outside Italy and 150 euros for students working and living in Italy (outside Rome). Some documentation will be required and the grant will be paid only after a certified presence of at least 80% of the lectures. Further information about how to request the grant will be provided at the confirmation of the course, about 3 weeks before the starting date. Coordinating Teacher: Dr. S. Tagliaventi https://events.prace-ri.eu/event/936/ 2019-11-18 08:00:00 UTC 2019-11-20 17:00:00 UTC
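As a taste of the kind of TensorFlow/Keras code typically covered in such practical sessions, here is a minimal sketch (an illustration, not course material) of defining and compiling a small classifier; x_train and y_train are assumed to be a NumPy feature matrix and label vector.

```python
import tensorflow as tf  # TensorFlow 2.x

# A small fully connected classifier for 784-dimensional inputs
# (e.g. flattened 28x28 greyscale images)
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(x_train, y_train, epochs=5, batch_size=32)  # assuming suitable training data
```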
  • High Performance Bioinformatics@CINECA

    9 - 11 December 2019

    High Performance Bioinformatics@CINECA https://tess.oerc.ox.ac.uk/events/high-performance-bioinformatics-cineca-90ebcfd9-1680-4d59-883c-114e20f175ff This course focuses on the development and execution of bioinformatics pipelines and on their optimization with regard to computing time and disk space. In an era where the data produced per analysis is on the order of terabytes, simple serial bioinformatics pipelines are no longer feasible. Hence the need for scalable, high-performance parallelization and analysis tools which can easily cope with large-scale datasets. To this end we will study the common performance bottlenecks emerging from everyday bioinformatics pipelines and see how to cut execution times for effective data analysis on current and future supercomputers. As a case study, two different bioinformatics pipelines (whole-exome and transcriptome analysis) will be presented and re-implemented on the supercomputers of Cineca through ad hoc hands-on sessions aimed at applying the concepts explained in the course. Skills: By the end of the course each student should be able to: Manage the transfer of big data files and/or large numbers of files from the local computer to the Cineca platforms and vice versa; Prepare the environment to analyse large amounts of biological data on a supercomputer; Run single parallel bioinformatics programs on a supercomputer; Combine bioinformatics applications into pipelines on a supercomputer. Target audience: Biologists, bioinformaticians and computer scientists interested in approaching large-scale NGS-data analysis for the first time. Pre-requisites: Basic knowledge of Python and the shell command line. A very basic knowledge of biology is recommended but not required. Grant: The course is FREE of charge. Lunch for the three days will be offered to all participants and some grants are available. The only requirement to be eligible is not to be funded by your institution to attend the course and to work or live in an institute outside the Rome area. The grant will be 300 euros for students working and living outside Italy and 150 euros for students working and living in Italy (outside the Rome area). Some documentation will be required and the grant will be paid only after a certified presence of at least 80% of the lectures. Further information about how to request the grant will be provided at the confirmation of the course, about 3 weeks before the starting date. https://events.prace-ri.eu/event/939/ 2019-12-09 08:00:00 UTC 2019-12-11 16:00:00 UTC
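To illustrate the kind of node-level parallelism such pipelines exploit, here is a hedged sketch (not course material) that fans a per-chromosome step out over the cores of a single node with Python's multiprocessing; my_variant_caller is a hypothetical command standing in for a real tool.

```python
from multiprocessing import Pool
import subprocess

CHROMOSOMES = [f"chr{i}" for i in range(1, 23)]

def call_variants(chrom):
    # Hypothetical per-chromosome step; a real pipeline would invoke an actual
    # variant caller here. Chromosomes are independent, so they can be
    # processed concurrently on the cores of one compute node.
    subprocess.run(["my_variant_caller", "--region", chrom, "--out", f"{chrom}.vcf"],
                   check=True)
    return chrom

if __name__ == "__main__":
    with Pool(processes=8) as pool:
        for done in pool.imap_unordered(call_variants, CHROMOSOMES):
            print(f"finished {done}")
```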
  • Data science with R @ Cineca

    25 - 27 November 2019

    Data science with R @ Cineca https://tess.oerc.ox.ac.uk/events/data-science-with-r-cineca-3b67ba61-eba8-4468-8f81-1a7226f3572a The purpose of this course is to present researchers and scientists with the R implementation of Machine Learning methods. The first part of the course will consist of introductory lectures on popular Machine Learning algorithms, including unsupervised methods (Clustering, Association Rules) and supervised ones (Decision Trees, Naive Bayes, Random Forests and Deep Neural Networks). Basic Machine Learning concepts such as training set, test set, validation set, overfitting, bagging and boosting will be introduced, as well as performance evaluation for supervised and unsupervised methods. The second part will consist of practical exercises such as reading data, using packages and building machine learning applications. Different options for parallel programming will be shown using specific R packages (parallel, h2o, ...). For Deep Learning applications the Keras package will be presented. The examples will cover the analysis of large datasets and image datasets. Participants will use R on Cineca HPC facilities for practical assignments. Skills: At the end of the course, the student will be expected to have acquired: • the ability to perform basic operations on matrices and dataframes • the ability to manage packages • the ability to navigate in the RStudio interface • a general knowledge of Machine and Deep Learning methods • a general knowledge of the most popular packages for Machine and Deep Learning • a basic knowledge of different parallel programming techniques • the ability to build machine learning applications with large datasets and image datasets Target audience: Students and researchers with different backgrounds, looking for technologies and methods to analyze large amounts of data. Pre-requisites: Participants must have basic knowledge of statistics. Participants must also be familiar with basic Linux and the R language. Grant: The course is FREE of charge. Lunch for the three days will be offered to all participants and some grants are available. The only requirement to be eligible is not to be funded by your institution to attend the course and to work or live in an institute outside the Rome area. The grant will be 300 euros for students working and living outside Italy and 150 euros for students working and living in Italy (outside Rome). Some documentation will be required and the grant will be paid only after a certified presence of at least 80% of the lectures. Further information about how to request the grant will be provided at the confirmation of the course, about 3 weeks before the starting date. https://events.prace-ri.eu/event/938/ 2019-11-25 08:30:00 UTC 2019-11-27 16:30:00 UTC
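The course itself is taught in R; purely as a language-neutral illustration of the train/test/evaluate workflow it covers (and not of the R packages used), here is a short Python/scikit-learn sketch of fitting and scoring a random forest on a toy dataset.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Split a toy dataset into training and test sets, fit a random forest, evaluate it
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```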
  • Practical Deep Learning @ CSC

    12 - 13 December 2019

    Practical Deep Learning @ CSC https://tess.oerc.ox.ac.uk/events/practical-deep-learning-csc-81fda6c5-67bf-4a60-a9bb-80b8f8fcd401 Description This course gives a practical introduction to deep learning, convolutional and recurrent neural networks, GPU computing, and tools to train and apply deep neural networks for natural language processing, images, and other applications. The course consists of lectures and hands-on exercises. TensorFlow 2, Keras, and PyTorch will be used in the exercise sessions. CSC's Notebooks environment will be used on the first day of the course, and the new Puhti-AI partition on the second day. Learning outcome After the course the participants should have the skills and knowledge needed to begin applying deep learning to different tasks and utilizing the GPU resources available at CSC for training and deploying their own neural networks. Prerequisites The participants are assumed to have working knowledge of Python and a suitable background in data analysis, machine learning, or a related field. Previous experience in deep learning is not required, but the fundamentals of machine learning are not covered in this course. Basic knowledge of a Linux/Unix environment will be assumed. Agenda (tentative) Day 1, Thursday 12.12 09.00 – 11.00 Introduction to deep learning and to Notebooks 11.00 – 12.00 Multi-layer perceptrons 12.00 – 13.00 Lunch 13.00 – 14.30 Image data and convolutional neural networks 14.30 – 16.00 Text data, recurrent neural networks, and attention Day 2, Friday 13.12 09.00 – 10.30 Deep learning frameworks, GPUs, batch jobs 10.30 – 12.00 Image classification exercises 12.00 – 13.00 Lunch 13.00 – 14.00 Text categorization exercises 14.00 – 16.00 Cloud, using multiple GPUs Coffee will be served in both the morning and afternoon sessions. Lecturers: Markus Koskela (CSC), Mats Sjöberg (CSC) Language: English Price: Free of charge https://events.prace-ri.eu/event/941/ 2019-12-12 07:00:00 UTC 2019-12-13 14:00:00 UTC
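As an indication of the style of exercise, here is a minimal PyTorch sketch (an illustrative assumption, not the actual course material) of a small convolutional network that runs on a GPU when one is available.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """One convolutional block followed by a linear classifier for 28x28 greyscale images."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 14 * 14, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

device = "cuda" if torch.cuda.is_available() else "cpu"
model = SmallCNN().to(device)
logits = model(torch.randn(8, 1, 28, 28, device=device))  # a dummy batch of 8 images
print(logits.shape)  # torch.Size([8, 10])
```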
  • Uncertainty quantification @MdlS

    11 - 13 May 2020

    Uncertainty quantification @MdlS https://tess.oerc.ox.ac.uk/events/uncertainty-quantification-mdls-081c2005-af76-434f-95f4-3c40dfdaf8bc Uncertainty in computer simulations, deterministic and probabilistic methods for quantifying uncertainty, OpenTURNS software, Uranie software Content Uncertainty quantification takes into account the fact that most inputs to a simulation code are only known imperfectly. It seeks to propagate this uncertainty in the input data to the results of the simulation. This training will introduce the main methods and techniques by which this uncertainty propagation can be handled without resorting to an exhaustive exploration of the data space. HPC plays an important role in the subject, as it provides the computing power required by the large number of simulations needed. The course will present the most important theoretical tools for probability and statistical analysis, and will illustrate the concepts using the OpenTURNS software. Course Outline Day 1: Methodology of Uncertainty Treatment – Basics of Probability and Statistics • General Uncertainty Methodology (30'): A. Dutfoy • Probability and Statistics: Basics (45'): G. Blondet • General introduction to OpenTURNS and Uranie (2 * 30'): G. Blondet, J.B. Blanchard • Introduction to Python and Jupyter (45'): practical work on manipulating distributions Lunch • Uncertainty Quantification (45'): J.B. Blanchard • OpenTURNS – Uranie practical work: sections 1, 2 (1h): G. Blondet, J.B. Blanchard, A. Dutfoy • Central tendency and Sensitivity analysis (1h): A. Dutfoy Day 2: Quantification, Propagation and Ranking of Uncertainties • Application to OpenTURNS and Uranie (1h): section 3, M. Baudin, G. Blondet, F. Gaudier, J.B. Blanchard • Estimation of probability of rare events (1h): G. Blondet • Application to OpenTURNS and Uranie (1h): M. Baudin, G. Blondet, F. Gaudier, J.B. Blanchard Lunch • Distributed computing (1h): Uranie (15', F. Gaudier, J.B. Blanchard), OpenTURNS (15', G. Blondet), Salome and OpenTURNS (30', O. Mircescu) • Optimisation and Calibration (1h): J.B. Blanchard, M. Baudin • Application to OpenTURNS and Uranie (1h): J.B. Blanchard, M. Baudin Day 3: HPC aspects – Meta model • HPC aspects specific to the Uncertainty treatment (1h): K. Delamotte • Introduction to Meta models (validation, over-fitting) – Polynomial chaos expansion (1h): J.B. Blanchard, C. Mai • Kriging meta model (1h): C. Mai Lunch • Application to OpenTURNS and Uranie (2h): C. Mai, G. Blondet, J.B. Blanchard • Discussion / Participants' projects Learning outcomes Learn to recognize when uncertainty quantification can bring new insight to simulations. Know the main tools and techniques to investigate uncertainty propagation. Gain familiarity with modern tools for actually carrying out the computations in an HPC context. Prerequisites Basic knowledge of probability will be useful, as will basic familiarity with Linux. https://events.prace-ri.eu/event/931/ 2020-05-11 07:30:00 UTC 2020-05-13 15:00:00 UTC
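To give a flavour of the OpenTURNS Python API used in the practical sessions, here is a minimal Monte Carlo uncertainty-propagation sketch; the distributions and model formula are illustrative assumptions, not course material.

```python
import openturns as ot

# Uncertain inputs: two independent random variables
x1 = ot.Normal(0.0, 1.0)
x2 = ot.Uniform(0.0, 2.0)
inputs = ot.ComposedDistribution([x1, x2])

# A toy simulation model y = sin(x1) + x2^2
model = ot.SymbolicFunction(["x1", "x2"], ["sin(x1) + x2^2"])

# Propagate the input uncertainty with plain Monte Carlo sampling
sample = inputs.getSample(10000)
outputs = model(sample)
print("mean:", outputs.computeMean())
print("variance:", outputs.computeVariance())
```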
  • HPC Carpentry @ EPCC Edinburgh

    9 - 10 December 2019

    HPC Carpentry @ EPCC Edinburgh https://tess.oerc.ox.ac.uk/events/hpc-carpentry-epcc-edinburgh HPC Carpentries Course page All the information on this course can be found on the HPC Carpentry page for this workshop at: https://archer-cse.github.io/2019-12-09-epcc-hpcshell/ Details This course introduces accessing remote advanced computing facilities via the command line and High Performance Computing (HPC). After completing this course, participants will: Understand motivations for using HPC in research; Understand how HPC systems are put together to achieve performance and how they differ from desktops/laptops; Know how to connect to remote HPC systems and transfer data; Be able to use the Bash command line on remote systems; Know how to use a scheduler to work on a shared system; Be able to use software modules to access different HPC software; Be able to work effectively on a remote shared resource. Full details, including the course timetable, will be available soon. This course is being run with support from the ARCHER National Supercomputing Service and PRACE. This course is free to all. Pre-requisites There are no prerequisites for this workshop. Requirements Participants must bring a laptop with a Mac, Linux, or Windows operating system (not a tablet, Chromebook, etc.) that they have administrative privileges on. They should have a few specific software packages installed, as detailed on the ARCHER Software setup page. They are also required to abide by the ARCHER Training Code of Conduct. Accessibility We are committed to making this workshop accessible to everybody. The workshop organisers have checked that: The room is wheelchair / scooter accessible; Accessible restrooms are available; Materials will be provided in advance of the workshop and large-print handouts are available if needed by notifying the organizers in advance. If we can help make learning easier for you (e.g. sign-language interpreters, lactation facilities), please get in touch and we will attempt to provide them. Course Materials Course page including slides and exercise material. Trainer Andy Turner Andy Turner leads the application support teams for the UK national HPC services ARCHER and Cirrus. He is also heavily involved in advanced computing training at EPCC. Andy has a particular interest in enabling new user communities to make use of HPC and in the use of novel user engagement to improve the HPC user experience. He has been involved in the HPC Carpentry initiative for the past two years. https://events.prace-ri.eu/event/924/ 2019-12-09 10:00:00 UTC 2019-12-10 16:00:00 UTC
  • Systems Workshop: Programming MareNostrum 4 @ BSC

    26 - 27 February 2020

    Systems Workshop: Programming MareNostrum 4 @ BSC https://tess.oerc.ox.ac.uk/events/systems-workshop-programming-marenostrum-4-bsc-79e95f4b-055f-4b64-a716-5250d4731892 The registration to this course is now open. Please bring your own laptop. All PATC courses at BSC are free of charge. Course convener: David Vicente Lecturers: David Vicente, Javier Bartolomé, Carlos Tripiana, Oscar Hernandez, Rubén Ramos, Félix Ramos, Pablo Ródenas, Jorge Rodríguez, Marta Renato, Cristian Morales Objectives: The objective of this course is to present to potential users the new configuration of MareNostrum and an introduction on how to use the new system (batch system, compilers, hardware, MPI, etc.). It will also provide an introduction to the RES and PRACE infrastructures and how to get access to the supercomputing resources available. Learning Outcomes: The students who finish this course will know the internal architecture of the new MareNostrum, how it works, the ways to get access to this infrastructure, and also some information about optimization techniques for its architecture. Level: INTERMEDIATE: for trainees with some theoretical and practical knowledge; those who finished the beginners course. Prerequisites: Any potential user of an HPC infrastructure is welcome. Agenda: DAY 1 (Feb. 26) 9am - 5pm Session 1 / 9:00am – 1:00pm (2:45h lectures, 45' practical) 09:00h - 09:30h Introduction to BSC, PRACE PATC and this training (David Vicente) 09:30h - 10:30h MareNostrum 4 – the view from the System Administration group (Javier Bartolomé) 10:30h – 11:00h COFFEE BREAK 11:00h - 11:30h Deep Learning and Big Data tools on MN4 (Carlos Tripiana) 11:30h - 12:15h How to use MN4 – Basics: batch system, file systems, compilers, modules, DT, DL, BSC commands (Félix Ramos, Francisco González, Ricard Zarco, Helena Gómez) 12:15h - 13:00h Hands-on I (Oscar Hernandez, Félix Ramos, Francisco González, Ricard Zarco, Helena Gómez) 13:00h - 14:30h LUNCH (not hosted) Session 2 / 2:30pm – 5:15pm (3:30h practical) 14:30h - 15:00h How to use MN4 – HPC architectures (David Vicente) 15:00h - 16:00h How to use MN4 – Parallel programming: OpenMP, Hands-on II (Guillermo Oyarzún, Jorge Rodríguez) 16:00h - 16:15h COFFEE BREAK 16:15h - 17:00h How to use MN4 – Parallel programming: MPI, Hands-on II (Guillermo Oyarzún, Jorge Rodríguez) 17:00h - Adjourn DAY 2 (Feb. 27) 9am - 1pm Session 3 / 9:00am – 1:00pm (2h lectures, 1:30h practical) 09:00h - 09:30h Optional: Doubts + Continue previous hands-on + Tuning your app (David Vicente, Jorge Rodríguez) 09:30h - 10:00h How can I get resources from you? - RES (David Vicente) 10:00h - 10:30h How can I get resources from you? - PRACE (Cristian Morales) 10:30h - 11:00h COFFEE BREAK 11:00h - 11:30h Debugging on MareNostrum, from GDB to DDT (Oscar Hernandez, Cristian Morales) 11:30h - 12:30h Hands-on III – Debugging your application (Oscar Hernandez, Cristian Morales) 12:30h - 13:00h Wrap-up: UserPortal. Can we help you with your porting? How? When? (Carlos Tripiana, David Vicente) 13:00h - Adjourn END of COURSE https://events.prace-ri.eu/event/943/ 2020-02-26 08:00:00 UTC 2020-02-27 12:00:00 UTC
  • Parallel and GPU Programming in Python @SURFsara

    3 - 4 February 2020

    Parallel and GPU Programming in Python @SURFsara https://tess.oerc.ox.ac.uk/events/parallel-and-gpu-programming-in-python-surfsara-97372898-1260-433c-a3e4-021a5a82ab57 Scope of the course The Python programming language has become more and more popular among researchers for its simplicity and the availability of specific programming libraries. At the same time, the correct exploitation of heterogeneous architectures presents challenges for the development of parallel applications. In order to bring these two topics together, this course focuses on the use of Python on CPU and GPU platforms for scientific computing. General description The basic concepts of good programming practices in Python and of general parallel programming will be introduced, and then GPU computing will be explained, combining the essential theory with hands-on sessions. The proposed exercises will be tested in the supercomputing facilities provided by SURFsara using Python with different programming libraries: numba, PyCUDA and mpi4py. IMPORTANT INFORMATION: WAITING LIST If the course gets fully booked, no more registrations are accepted through this website. However, you can be included in the waiting list: for that, please send an email to training@surfsara.nl and you'll be informed when a place becomes available. https://events.prace-ri.eu/event/946/ 2020-02-03 08:00:00 UTC 2020-02-04 16:30:00 UTC
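As a flavour of the libraries listed above, here is a minimal numba CUDA kernel for element-wise vector addition, a sketch under the assumption that an NVIDIA GPU and the numba package are available.

```python
import numpy as np
from numba import cuda

@cuda.jit
def vector_add(a, b, out):
    i = cuda.grid(1)          # global thread index
    if i < out.size:          # guard against out-of-range threads
        out[i] = a[i] + b[i]

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
out = np.zeros_like(a)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
vector_add[blocks, threads_per_block](a, b, out)  # numba handles the host/device copies
assert np.allclose(out, a + b)
```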
  • School on Numerical Methods for Parallel CFD @ Cineca

    2 - 6 December 2019

    School on Numerical Methods for Parallel CFD @ Cineca https://tess.oerc.ox.ac.uk/events/school-on-numerical-methods-for-parallel-cfd Description: The aim of this workshop is to deliver a "training on the job" school based on a class of selected numerical methods for parallel Computational Fluid Dynamics (CFD). The workshop aims to share the methodologies, numerical methods and the implementations used by state-of-the-art numerical codes on High Performance Computing (HPC) clusters. The lectures will present the challenges of numerically solving Partial Differential Equations (PDEs) in problems related to fluid dynamics using massively parallel clusters. The lectures will give a step-by-step walk through the numerical methods and their parallel aspects, starting from a serial code and building up to scalability on clusters, including strategies for parallelization (MPI, OpenMP, use of accelerators, plug-in of numerical libraries, ...), with hands-on during the lectures. Profiling and optimization techniques on standard and heterogeneous clusters will be shown during the school. Further information will be made available to participants upon confirmation of the speakers. Skills: At the end of the course, the student will possess and know how to use the following skills: Numerical analysis; Algorithms for PDE solution; Parallel computing (MPI, OpenMP, Accelerators); HPC architecture; Strategies for massive parallelization of numerical methods; Numerical libraries for HPC. Target audience: MSc/PhD students, post-docs, academic and industrial researchers, and software developers who use, plan to use, or develop a code for CFD. Pre-requisites: Previous course(s) on parallel computing, numerical analysis and algorithms for PDE solution. Admitted students: Attendance is free. The number of participants is limited to 40 students. Applicants will be selected according to their experience, qualifications and scientific interest, BASED ON WHAT IS WRITTEN IN THE REGISTRATION FORM. Please use the field "Reason for participation" to specify skills that match the requested pre-requisites for the school. DEADLINE FOR REGISTRATION: Monday, Nov 4th, 2019. THE STUDENTS ADMITTED AND NOT ADMITTED WERE CONTACTED VIA EMAIL ON MONDAY, NOVEMBER 11TH. IF YOU SUBMITTED AND DID NOT RECEIVE ANY EMAIL, PLEASE WRITE TO corsi.hpc@cineca.it. https://events.prace-ri.eu/event/929/ 2019-12-02 08:00:00 UTC 2019-12-06 17:00:00 UTC
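The "serial code first, parallel later" progression described above can be pictured with a toy example; here is a hedged sketch (not school material) of an explicit finite-difference solver for the 1D heat equation, the kind of serial starting point that would then be parallelised with MPI, OpenMP or accelerators.

```python
import numpy as np

# Explicit finite differences for the 1D heat equation u_t = alpha * u_xx
nx, alpha = 101, 1.0
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2 / alpha          # below the stability limit 0.5 * dx^2 / alpha
u = np.zeros(nx)
u[nx // 2] = 1.0                  # initial heat spike in the middle of the domain

for _ in range(1000):
    # Update interior points; boundaries stay fixed at zero
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])

print("peak temperature after diffusion:", u.max())
```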
  • GPU Programming with CUDA @ EPCC University of Edinburgh

    9 - 10 January 2020

    GPU Programming with CUDA @ EPCC University of Edinburgh https://tess.oerc.ox.ac.uk/events/gpu-programming-with-cuda-epcc-university-of-edinburgh GPU Programming with CUDA Graphics Processing Units (GPUs) were originally developed for computer gaming and other graphical tasks, but for many years have been exploited for general-purpose computing in a number of areas. They offer advantages over traditional CPUs because they have greater computational capability and use high-bandwidth memory systems (memory bandwidth is the main bottleneck for many scientific applications). Trainer Kevin Stratford Kevin has a background in computational physics and joined EPCC in 2001. He teaches on courses including 'Scientific Programming with Python' and 'GPU Programming with CUDA'. Rupert Nash Rupert is an experienced trainer who works with CFD, C++ and GPUs, and who teaches courses including 'Modern C++' and 'GPU Programming with CUDA'. Details This introductory course will describe GPUs and the advantages they offer. It will teach participants how to start programming GPUs, which cannot be used in isolation but are usually used in conjunction with CPUs. Important issues affecting performance will be covered. The course focuses on NVIDIA GPUs and the CUDA programming language (an extension to C/C++ or Fortran). Please note the course is aimed at application programmers; it does not cover machine learning or any of the packages available in the machine learning arena. Hands-on practical sessions are included. You will require your laptop and your institutional credentials to connect to eduroam. The practical exercises will be run on a web-based system, so all you will need is a relatively recent web browser (Firefox, Chrome and Safari are known to work). This course is free to attend. Timetable Provisional timetable based on a previous run - may be subject to change. Day 1 10:00 Introduction 10:20 GPU Concepts/Architectures 11:00 Break 11:20 CUDA Programming 12:00 A first CUDA exercise 13:00 Lunch 14:00 CUDA Optimisations 14:20 Optimisation Exercise 15:00 Break 15:20 Constant and Shared Memory 16:00 Exercise 17:00 Close Day 2 10:00 Recap 10:30 OpenCL and OpenACC directives 11:00 Break 11:20 OpenCL and/or Directives Exercises 12:00 Guest Lecture: Alan Gray (NVIDIA), Overview of NVIDIA Volta 13:00 Lunch 14:00 Performance portability and Kokkos 14:30 Exercise: Getting started with Kokkos patterns 15:00 Break 15:10 Kokkos memory management 15:30 Memory management exercises 16:00 Close Course Materials Slides and exercise material for this course will be available soon. Materials from a previous run can be seen here. Location The course will be held at EPCC, University of Edinburgh. Registration Please use the registration page to register for this course. Questions? If you have any questions please contact the ARCHER Helpdesk. https://events.prace-ri.eu/event/935/ 2020-01-09 09:00:00 UTC 2020-01-10 17:00:00 UTC
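The course teaches CUDA as a C/C++/Fortran extension; purely to illustrate the thread-indexing pattern it introduces, here is a sketch using PyCUDA, whose embedded kernel string shows the same CUDA constructs (it assumes an NVIDIA GPU, the CUDA toolkit and the pycuda package are available).

```python
import numpy as np
import pycuda.autoinit            # creates a CUDA context on the default GPU
import pycuda.driver as drv
from pycuda.compiler import SourceModule

# A CUDA C kernel: each thread scales one array element
mod = SourceModule("""
__global__ void scale(float *x, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= factor;
}
""")
scale = mod.get_function("scale")

x = np.arange(1024, dtype=np.float32)
scale(drv.InOut(x), np.float32(2.0), np.int32(x.size),
      block=(256, 1, 1), grid=(4, 1))   # 4 blocks of 256 threads cover 1024 elements
print(x[:4])  # -> [0. 2. 4. 6.]
```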