Message-passing Programming with MPI @ EPCC

Start: Wednesday, 12 July 2017 @ 08:00

End: Friday, 14 July 2017 @ 16:30

Description:

The world’s largest supercomputers are used almost exclusively to run applications which are parallelised using Message Passing. The course covers all the basic knowledge required to write parallel programs using this programming model, and is directly applicable to almost every parallel computer architecture.

Parallel programming by definition involves co-operation between processes to solve a common task. The programmer has to define the tasks that will be executed by the processors, and also how these tasks are to synchronise and exchange data with one another. In the message-passing model the tasks are separate processes that communicate and synchronise by explicitly sending each other messages. All these parallel operations are performed via calls to some message-passing interface that is entirely responsible for interfacing with the physical communication network linking the actual processors together. This course uses the de facto standard for message passing, the Message Passing Interface (MPI). It covers point-to-point communication, non-blocking operations, derived datatypes, virtual topologies, collective communication and general design issues.
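
To make the model concrete, the following is a minimal point-to-point example in C (an illustrative sketch under our own assumptions, not part of the course materials): one process explicitly sends an integer to another, which blocks until the matching message arrives.

/* Illustrative sketch (not course material): two processes exchange a
 * message using MPI point-to-point calls.  Compile with an MPI wrapper,
 * e.g. "mpicc example.c", and run with at least two processes,
 * e.g. "mpirun -n 2 ./a.out". */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which process am I?  */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many processes?  */

    if (rank == 0 && size > 1) {
        int payload = 42;
        /* Explicitly send one integer to process 1 with tag 0 */
        MPI_Send(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int payload;
        /* Block until the matching message from process 0 arrives */
        MPI_Recv(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("Rank 1 received %d from rank 0\n", payload);
    }

    MPI_Finalize();
    return 0;
}

All communication goes through the MPI library; the programme never touches the underlying network directly.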

The course is normally delivered in an intensive three-day format using EPCC’s dedicated training facilities. It is taught using a variety of methods including formal lectures, practical exercises, programming examples and informal tutorial discussions. This enables lecture material to be supported by the tutored practical sessions in order to reinforce the key concepts.

This course is free to all academics. 

Intended Learning Outcomes

On completion of this course students should be able to:
  • Understand the message-passing model in detail.
  • Implement standard message-passing algorithms in MPI.
  • Debug simple MPI codes.
  • Measure and comment on the performance of MPI codes.
  • Design and implement efficient parallel programs to solve regular-grid problems.

Pre-requisite Programming Languages:

Fortran, C or C++. It is not possible to do the exercises in Java.

Timetable

Day 1

09:30  Message-Passing Concepts
10:15  Practical: Parallel Traffic Modelling
11:00  Break
11:30  MPI Programs
12:00  MPI on ARCHER
12:15  Practical: Hello World
13:00  Lunch
14:00  Point-to-Point Communication
14:30  Practical: Pi
15:30  Break
16:00  Communicators, Tags and Modes
16:45  Practical: Ping-Pong (see the ping-pong sketch below)
17:30  Finish
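
To give a flavour of the Day 1 "Ping-Pong" practical, here is a hedged sketch (an assumption about the style of exercise, not EPCC's handout): ranks 0 and 1 bounce a message back and forth and time the round trips with MPI_Wtime.

/* Hypothetical ping-pong sketch: ranks 0 and 1 bounce a buffer back and
 * forth NREPS times and report the average round-trip time. */
#include <mpi.h>
#include <stdio.h>

#define NREPS 1000

int main(int argc, char **argv)
{
    int rank, buf = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double t0 = MPI_Wtime();
    for (int i = 0; i < NREPS; i++) {
        if (rank == 0) {
            MPI_Send(&buf, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&buf, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(&buf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&buf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("Average round-trip time: %g s\n", (t1 - t0) / NREPS);

    MPI_Finalize();
    return 0;
}

Varying the message size in such a benchmark is a common way to measure the latency and bandwidth of the interconnect.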

Day 2

09:30  Non-Blocking Communication
10:00  Practical: Message Round a Ring (see the ring sketch below)
11:00  Break
11:30  Collective Communication
12:00  Practical: Collective Communication
13:00  Lunch
14:00  Virtual Topologies
14:30  Practical: Message Round a Ring (cont.)
15:30  Break
16:00  Derived Data Types
16:45  Practical: Message Round a Ring (cont.)
17:30  Finish
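
The "Message Round a Ring" practical can be illustrated with a hedged sketch like the one below (our own assumption about the exercise, not EPCC's solution): each process passes a value around a ring using a non-blocking send, so that every process can post its send and receive without deadlocking, and sums the values it sees.

/* Illustrative ring sketch: each rank passes its rank number round a
 * ring of size processes; after size steps every rank has summed all
 * the rank numbers.  The non-blocking MPI_Issend avoids deadlock. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int right = (rank + 1) % size;          /* neighbour to send to      */
    int left  = (rank - 1 + size) % size;   /* neighbour to receive from */

    int sendval = rank, recvval, sum = 0;

    for (int step = 0; step < size; step++) {
        MPI_Request req;
        MPI_Issend(&sendval, 1, MPI_INT, right, 0, MPI_COMM_WORLD, &req);
        MPI_Recv(&recvval, 1, MPI_INT, left, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        MPI_Wait(&req, MPI_STATUS_IGNORE);   /* send buffer is now safe to reuse */

        sum += recvval;      /* accumulate the value just received */
        sendval = recvval;   /* and pass it on at the next step    */
    }

    /* Every rank should end up with 0 + 1 + ... + (size - 1) */
    printf("Rank %d: sum of ranks = %d\n", rank, sum);

    MPI_Finalize();
    return 0;
}

The same result could of course be obtained with a single collective call such as MPI_Allreduce, which is the point of the Day 2 collective communication session.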

Day 3

09:30  Introduction to the Case Study
10:00  Practical: Case Study
11:00  Break
11:30  Practical: Case Study (cont.)
13:00  Lunch
14:00  Designing MPI Programs (see the grid-decomposition sketch below)
15:00  Individual Consultancy Session
16:00  Finish
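
Regular-grid problems of the kind targeted by the case study are typically decomposed across processes with a Cartesian virtual topology. The sketch below is an illustration of that idea under our own assumptions, not the actual case-study code: MPI arranges the processes in a 2D grid and each process asks for its neighbours.

/* Hedged sketch: arrange the processes in a 2D Cartesian virtual
 * topology and find each process's neighbours, as a starting point for
 * a regular-grid (halo-exchange) decomposition. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int size, rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int dims[2] = {0, 0};            /* let MPI choose a 2D factorisation */
    MPI_Dims_create(size, 2, dims);

    int periods[2] = {0, 0};         /* non-periodic grid boundaries */
    MPI_Comm cart;
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 1, &cart);

    MPI_Comm_rank(cart, &rank);

    int coords[2];
    MPI_Cart_coords(cart, rank, 2, coords);

    int up, down, left, right;
    MPI_Cart_shift(cart, 0, 1, &up, &down);      /* neighbours in dimension 0 */
    MPI_Cart_shift(cart, 1, 1, &left, &right);   /* neighbours in dimension 1 */

    printf("Rank %d at (%d,%d): up=%d down=%d left=%d right=%d\n",
           rank, coords[0], coords[1], up, down, left, right);

    /* A halo exchange with these neighbours (MPI_PROC_NULL at the edges)
     * would follow here, e.g. using MPI_Sendrecv. */

    MPI_Comm_free(&cart);
    MPI_Finalize();
    return 0;
}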

https://events.prace-ri.eu/event/616/

Event type:
  • Workshops and courses

