Partnership for Advanced Computing in Europe

Showing 11 upcoming events. Found 389 past events.
  • Systems Workshop: Programming MareNostrum IV @ BSC

    26 - 27 Sep 2017

    Systems Workshop: Programming MareNostrum IV @ BSC

    Registration for this course will open in October. Please bring your own laptop. All PATC courses at BSC are free of charge.

    Course convener: David Vicente

    Objectives: The objective of this course is to present the new configuration of MareNostrum to potential users and to give an introduction on how to use the new system (batch system, compilers, hardware, MPI, etc.). It will also provide an introduction to the RES and PRACE infrastructures and how to get access to the available supercomputing resources.

    Learning outcomes: Students who finish this course will know the internal architecture of the new MareNostrum, how it works, how to get access to this infrastructure, and some optimisation techniques for its architecture.

    Level: INTERMEDIATE - for trainees with some theoretical and practical knowledge; those who finished the beginners' course.

    Prerequisites: Any potential user of an HPC infrastructure is welcome.

    Agenda:

    Day 1
    09:00 Introduction to BSC, PRACE PATC and this training - David Vicente
    09:30 MareNostrum IV - the view from the system administration group - Javier Bartolomé
    10:30 Coffee break
    11:00 Visualization at BSC - Carlos Tripiana
    11:30 How to use MareNostrum IV: basics, batch system, filesystems, compilers, modules, DT, DL, BSC commands - Miguel Bernabeu, Borja Arias
    12:15 Hands-on I - Miguel Bernabeu, Borja Arias
    13:00 Lunch (not hosted)
    14:30 How to use MN4 - Advanced I: MPI implementations (PR), MPI IO (PR), tuning MPI values for different applications (DV) - Pablo Ródenas, Janko Strassburg, David Vicente
    15:15 Hands-on II - Pablo Ródenas, David Vicente
    16:00 Coffee break
    16:15 How to use MN4 - Advanced II: GREASY (PR), MIC (JR), mathematical libraries, MKL (JR) - Pablo Ródenas, Jorge Rodríguez
    17:00 End of the first day

    Day 2
    09:00 You choose! MareNostrum IV visit (in the chapel) - Doubts + hands-on + tuning your app (in the classroom) - David Vicente, Jorge Rodríguez
    09:30 How can I get resources from you? - RES - Jorge Rodríguez
    10:00 How can I get resources from you? - PRACE - Janko Strassburg
    10:30 Coffee break
    11:00 Use case: Genomics - Miguel Bernabeu
    11:25 Tuning applications: BSC performance tools (Extrae and Paraver) - BSC Tools Team
    12:00 Hands-on III - Performance tools and tuning your application - BSC Tools Team
    13:00 Wrap-up: Can we help you with your porting? How? When? - David Vicente
    13:30 End of course
  • 3rd School on Scientific Data Analytics and Visualization @ CINECA

    12 - 16 Jun 2017

    3rd School on Scientific Data Analytics and Visualization @ CINECA

    Description: The increasing amount of scientific data collected through sensors or computational simulations can benefit from new analytics techniques that extract new meaning from raw data. The purpose of this one-week school is to present researchers and scientists with methods, tools and techniques for the mining, analysis and visualization of large data sets using Cineca resources. The school is an introductory set of lectures aimed at training beginner participants in the application of relevant statistical and machine learning algorithms to extract new insights from data, in the adoption of data visualization techniques and tools to graph relevant information, and in the proper use of Cineca resources to execute processing jobs. It will consist of introductory lectures held by guest data-analyst experts, and hands-on sessions.

    Topics: Data mining, data visualization.

    Target audience: Young students, PhD students, and researchers in computational sciences and scientific areas with different backgrounds, looking for new technologies and methods to process and analyse large amounts of data.

    Prerequisites: Participants must have basic knowledge of statistics, of the fundamentals of computer programming with Python, and of using GNU/Linux-based systems.

    Grant: Lunch for the five days will be offered to all participants, and some grants are available. To be eligible you must not be funded by your institution to attend the course, and you must work or live in an institute outside the Milan area. The grant will be 300 euros for students working and living outside Italy and 150 euros for students working and living in Italy. Some documentation will be required, and the grant will be paid only after a certified presence at a minimum of 80% of the lectures. Further information on how to request the grant will be provided at the confirmation of the course, about 3 weeks before the starting date.

    The number of participants is limited to 20 students. Applicants will be selected according to their experience, qualifications and scientific interest, based on what is written in the "Reason for participation" field of the registration form.

    Application deadline: May 8th, 2017. Students will be notified of their admission by email on Tuesday, May 16th. Attendance is FREE.
  • Efficient Parallel Programming with GASPI @ HLRS

    3 - 4 Jul 2017

    Efficient Parallel Programming with GASPI @ HLRS

    Overview: In this tutorial we present an asynchronous data-flow programming model for Partitioned Global Address Spaces (PGAS) as an alternative to the programming model of MPI. GASPI, which stands for Global Address Space Programming Interface, is a PGAS API. It is designed as a C/C++/Fortran library and focused on three key objectives: scalability, flexibility and fault tolerance. To achieve its much-improved scaling behaviour, GASPI aims at asynchronous data flow with remote completion rather than bulk-synchronous message exchanges. GASPI follows a single/multiple program, multiple data (SPMD/MPMD) approach and offers a small yet powerful API. GASPI is successfully used in academic and industrial simulation applications. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the basic constructs of GASPI. This course provides scientific training in Computational Science and, in addition, scientific exchange among the participants. For further information and registration please visit the HLRS course page.
  • High-performance computing with Python @ JSC

    12 - 13 Jun 2017

    High-performance computing with Python @ JSC

    Python is increasingly used in high-performance computing projects. It can be used as a high-level interface to existing HPC applications and libraries, as an embedded interpreter, or directly. This course combines lectures and hands-on sessions. We will show how Python can be used on parallel architectures and how performance-critical parts of a program can be optimized using various tools. To use Python productively for parallel computing, these topics will be covered:
    - Interactive parallel programming with IPython
    - Profiling and optimization
    - High-performance NumPy and SciPy, numba
    - Distributed-memory parallel programming with Python and MPI
    - Bindings to other programming languages and HPC libraries
    - Interfaces to GPUs

    This course is aimed at scientists who wish to explore the productivity gains made possible by Python for HPC.

    Prerequisites: Experience with Python and NumPy.

    Application: Registrations are only considered until 31 May 2017. Due to the available space, the maximal number of participants is limited; applicants will be notified whether they are accepted for participation.

    Instructors: Dr. Jan Meinke, Dr. Olav Zimmermann, JSC

    Contact: For any questions concerning the course please send an e-mail to
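As an illustrative taste of the "Profiling and optimization" topic above, here is a standard-library-only sketch of our own (not JSC course material; the function names are invented): profile a performance-critical kernel before deciding whether to reach for NumPy or numba.

```python
# Illustrative sketch: profile a hot Python kernel with the stdlib profiler,
# then compare against a version that pushes the loop into C-level builtins.
import cProfile
import io
import math
import pstats

def naive_norm(values):
    """Straightforward Python loop: easy to read, slow for large inputs."""
    total = 0.0
    for v in values:
        total += v * v
    return math.sqrt(total)

def optimised_norm(values):
    """Same result via sum() over a generator, avoiding interpreter overhead."""
    return math.sqrt(sum(v * v for v in values))

data = [0.5] * 100_000

profiler = cProfile.Profile()
profiler.enable()
result = naive_norm(data)
profiler.disable()

# The profile report shows where time is spent; in real work this guides
# which kernels to rewrite with NumPy, numba or MPI-parallel code.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)

assert abs(result - optimised_norm(data)) < 1e-9
```

The same workflow (measure first, optimise the hot spot) is what the course's profiling tools support at scale.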
  • Intel MIC Programming Workshop @ LRZ

    26 - 28 Jun 2017

    Intel MIC Programming Workshop @ LRZ

    The course discusses Intel's Many Integrated Core (MIC) architecture and programming models for Intel Xeon Phi co-/processors in order to enable programmers to achieve good performance with their applications. The course will mainly concentrate on techniques relevant for Knights Landing (KNL) based systems, like the future KNL cluster CoolMUC3 to be installed at LRZ soon. The workshop covers a wide range of topics, from a description of the hardware of the Intel Xeon Phi co-/processors, through the basic programming models as well as vectorisation and MCDRAM usage, up to tools and strategies to analyse and improve the performance of applications. The workshop will include both theoretical and practical hands-on sessions. There will also be a session with invited talks by speakers from Intel, IPCC@LRZ, IPCC@TUM, IPCC@IT4Innovations, IPP, RRZE and the University of Regensburg on Intel Xeon Phi - especially KNL - experience and best-practice recommendations. The course is developed within the joint German-Czech project CzeBaCCA. A workshop on "HPC for natural hazard assessment and disaster mitigation" of this project will take place at LRZ directly after this course. Please bring your own laptop (with an ssh client installed) for the hands-on sessions!

    [Figure: Participants of the MIC Workshop 2016]

    Preliminary schedule:

    Monday, June 26, 2017, 09:00-17:00, Kursraum 2, H.U.010
    - Introduction to the MIC architecture
    - Overview of KNC and KNL co-/processors
    - Basic programming techniques

    Tuesday, June 27, 2017, 09:00-17:00, Kursraum 2, H.U.010
    - Cluster modes of KNL
    - MCDRAM & memory modes of KNL
    - Vectorisation
    - Optimisation on KNL
    - Tools

    Wednesday, June 28, 2017, 09:00-12:00, Hörsaal, H.E.009 (Lecture Hall)
    09:00-10:30 Advanced OpenMP for KNL (Michael Klemm, Intel)
    10:30-10:45 Coffee break
    10:45-12:00 Advanced KNL programming techniques (intrinsics, assembler, AVX-512, ...) (Jan Eitzinger, RRZE)

    Wednesday, June 28, 2017, 13:00-18:00, Hörsaal, H.E.009 (Lecture Hall)
    Plenum session with invited talks on MIC experience and best-practice recommendations (joint public session with the scientific workshop "HPC for natural hazard assessment and disaster mitigation"):
    13:00-13:30 Luigi Iapichino, IPCC@LRZ: "Performance Optimization of Smoothed Particle Hydrodynamics and Experiences on Many-Core Architectures"
    13:30-14:00 Michael Bader/Carsten Uphoff, IPCC@TUM
    14:00-14:30 N.N., IPCC@IT4I
    14:30-15:00 Michael Klemm, Intel
    15:00-15:30 Coffee break
    15:30-16:00 Jan Eitzinger, RRZE
    16:00-16:30 Piotr Korcyl, University of Regensburg: "Lattice Quantum Chromodynamics on the MIC architectures"
    16:30-17:00 Nils Moschüring, IPP: "The experience of the HLST on Europe's biggest KNL cluster"
    17:00-17:30 Andreas Marek, Max Planck Computing and Data Facility (MPCDF): "Porting the ELPA library to the KNL architecture"
    17:30-18:00 Q&A, wrap-up

    The course material is developed within PRACE and the joint German-Czech project CzeBaCCA. The course is a PRACE Advanced Training Centre event. A social event for participant and instructor networking is planned for the evening of Tuesday, 27 June.

    About the tutors:

    Dr. Momme Allalen received his Ph.D. in theoretical physics from the University of Osnabrück in 2006. He worked in the field of molecular magnetism through modelling techniques such as the exact numerical diagonalisation of the Heisenberg model. He joined the Leibniz Supercomputing Centre (LRZ) in 2007, working in the High Performance Computing group. His tasks include user support, optimisation and parallelisation of scientific application codes, and benchmarking for characterising and evaluating the performance of high-end supercomputers. His research interests are various aspects of parallel computing and new programming languages and paradigms.

    Dr. Fabio Baruffa is an HPC application specialist at LRZ and a member of the Intel Parallel Computing Center (IPCC). He worked as an HPC researcher at Max Planck (MPCDF), Jülich Research Centre and Cineca, where he was involved in HPC software development. His main research interests are in the area of computational methods and optimizations for HPC systems. He holds a PhD in physics from the University of Regensburg for his research in the area of spintronics.

    Dr.-Ing. Jan Eitzinger (RRZE) (formerly Treibig) holds a PhD in computer science from the University of Erlangen. He is now a postdoctoral researcher in the HPC Services group at the Erlangen Regional Computing Center (RRZE). His current research revolves around architecture-specific and low-level optimization for current processor architectures, performance modeling on processor and system levels, and programming tools. He is the developer of LIKWID, a collection of lightweight performance tools. In his daily work he is involved in all aspects of user support in High Performance Computing: training, code parallelization, profiling and optimization, and the evaluation of novel computer architectures.

    Dr.-Ing. Michael Klemm (Intel Corp.) is part of Intel's Software and Services Group, Developer Relations Division. His focus is on high-performance and throughput computing. He obtained an M.Sc. in computer science in 2003 and received a Doctor of Engineering degree (Dr.-Ing.) in computer science from the Friedrich-Alexander-University Erlangen-Nuremberg, Germany, in 2008. His research focus was on compilers and runtime optimisations for distributed systems. His areas of interest include compiler construction, design of programming languages, parallel programming, and performance analysis and tuning. Michael is the Intel representative in the OpenMP Language Committee and leads the efforts to develop error-handling features for OpenMP. He is also the maintainer of the pyMIC offload infrastructure for Python.

    Dr. Volker Weinberg studied physics at the Ludwig Maximilian University of Munich and later worked at the research centre DESY. He received his PhD from the Free University of Berlin for his studies in the field of lattice QCD. Since 2008 he has been working in the HPC group at the Leibniz Supercomputing Centre, where he is responsible for HPC and PATC (PRACE Advanced Training Centre) courses at LRZ, new programming languages and the Intel Xeon Phi based system SuperMIC. Within PRACE-4IP he took over the leadership of creating Best Practice Guides for new architectures and systems.
  • 8th Programming and Tuning Massively Parallel Systems summer school (PUMPS)@BSC - UPC

    26 - 30 Jun 2017

    8th Programming and Tuning Massively Parallel Systems summer school (PUMPS) @ BSC - UPC

    The Barcelona Supercomputing Center (BSC), in association with the Universitat Politecnica de Catalunya (UPC), has been named a GPU Center of Excellence by NVIDIA. BSC and UPC currently offer a number of courses covering the CUDA architecture and programming languages for parallel computing. Please contact us for possible collaborations.

    The 8th edition of the Programming and Tuning Massively Parallel Systems summer school (PUMPS) is aimed at enriching the skills of researchers, graduate students and teachers with cutting-edge techniques and hands-on experience in developing applications for many-core processors with massively parallel computing resources, such as GPU accelerators.

    Summer school co-directors: Mateo Valero (BSC and UPC) and Wen-mei Hwu (University of Illinois at Urbana-Champaign)
    Local organizers: Antonio J. Peña (BSC), Victor Garcia (BSC and UPC), and Pau Farre (BSC)

    Dates:
    Applications due: April 30, 2017. Due to space limitations, early application is strongly recommended. You may also be asked to attend an online prerequisite training on basic CUDA programming before joining PUMPS.
    Notification of acceptance: May 15, 2017
    Summer school: June 26-30, 2017

    Organized by: Barcelona Supercomputing Center (BSC), University of Illinois at Urbana-Champaign, Universitat Politecnica de Catalunya (UPC), and the HiPEAC Network of Excellence. PUMPS is part of this year's PRACE Advanced Training Centre programme.

    The following is a list of some of the topics that will be covered during the course:
    - CUDA algorithmic optimization strategies
    - Dealing with sparse and dynamic data
    - Efficiency in large data traversal
    - Reducing output interference
    - Controlling load imbalance and divergence
    - Acceleration of collective operations
    - Dynamic parallelism and Hyper-Q
    - Debugging and profiling CUDA code
    - Multi-GPU execution
    - Architecture trends and implications
    - Introduction to OmpSs and to the Paraver analysis tool
    - OmpSs: leveraging GPU/CUDA programming
    - Hands-on labs: CUDA optimizations on scientific codes; OmpSs programming and tuning

    Instructors:
    Distinguished lecturers: Wen-mei Hwu (University of Illinois at Urbana-Champaign) and David Kirk (NVIDIA Corporation)
    Invited lecturer: Juan Gómez-Luna (Universidad de Córdoba)
    BSC / UPC lecturers: Xavier Martorell and Xavier Teruel
    Teaching assistants: Abdul Dakkak, Carl Pearson, Simon Garcia de Gonzalo, Marc Jorda, Pau Farre, Javier Bueno, Aimar Rodriguez

    Prerequisites for the course:
    - Basic CUDA knowledge is required to attend. Applicants who cannot certify their experience in CUDA programming will be asked to take a short online course covering the necessary introductory topics.
    - C, C++, Java, or equivalent programming knowledge. Skills in parallel programming will be helpful.

    Registration for the course is free. We expect our sponsors will cover academic applicants' marginal expenses such as meals. Please note that travel and lodging are not covered. Applicants from non-academic institutions (companies), please contact us for sponsorship possibilities.

    By the end of the summer school, participants will:
    - Be able to design algorithms that are suitable for accelerators.
    - Understand the most important architectural performance considerations for developing parallel applications.
    - Be exposed to computational-thinking skills for accelerating applications in science and engineering.
    - Engage computing accelerators on science and engineering breakthroughs.

    Programming languages: CUDA, MPI, OmpSs, OpenCL

    Hands-on labs: Afternoon labs with teaching assistants for each audience/level. Participants are expected to bring their own laptops to access the servers with GPU accelerators. The afternoon lab sessions will provide hands-on experience with the various languages and tools covered in the lectures, and will comprise a brief introduction to the programming assignments, followed by independent work periods. Teaching assistants will be available in person and on the web to help with assignments.
  • Hands-on Introduction to HPC @ EPCC

    10 - 11 Jul 2017

    Hands-on Introduction to HPC @ EPCC

    ARCHER, the UK's national supercomputing service, offers training in software development and high-performance computing to scientists and researchers across the UK. As part of our training service we will be running a two-day "Hands-on Introduction to High Performance Computing" training session. This course provides a general introduction to High Performance Computing (HPC), using ARCHER as the platform for the exercises. On completion of the course, we expect that attendees will be in a position to undertake the ARCHER Driving Test and potentially qualify for an account and CPU time on ARCHER. Familiarity with desktop computers is presumed, but no programming, Linux or HPC experience is required. Programmers can, however, gain extra benefit from the course, as source code for all the practicals will be provided.

    Details: High-performance computing (HPC) is a fundamental technology used in solving scientific problems. Many of the grand challenges of science depend on simulations and models run on HPC facilities to make progress, for example: protein folding, the search for the Higgs boson, and developing nuclear fusion. The course will run for two days. The first day covers the basic concepts underlying the drivers for HPC development, HPC hardware, software, programming models and applications. The second day will provide an opportunity for more practical experience, information on performance, and the future of HPC. This foundation will give you the ability to appreciate the relevance of HPC in your field and equip you with the tools to start making effective use of HPC facilities yourself. The course is delivered using a mixture of lectures and practical sessions and has a very practical focus. During the practical sessions you will get the chance to use ARCHER, with HPC experts on hand to answer your questions and provide insight. This course is free to all academics.

    Intended learning outcomes: On completion of this course students should be able to explain:
    - Why HPC? What are the drivers and motivation? Who uses it?
    - HPC hardware: building blocks and architectures
    - Parallel computing: programming models and implementations
    - Using HPC systems: access, compilers, resource allocation and performance
    - The future of HPC
    and undertake the ARCHER Driving Test.

    Pre-requisites: Attendees are expected to have experience of using desktop computers, but no programming, Linux or HPC experience is necessary.

    Timetable:

    Day 1
    09:30  Welcome, overview and syllabus
    09:45  LECTURE: Why learn about HPC?
    10:15  LECTURE: Image sharpening
    10:30  PRACTICAL: Sharpen example
    11:00  BREAK: Coffee
    11:30  LECTURE: Parallel programming
    12:15  PRACTICAL: Sharpen (cont.)
    13:00  BREAK: Lunch
    14:00  LECTURE: Building blocks (CPU/memory/accelerators)
    14:30  LECTURE: Building blocks (OS/processes/threads)
    15:00  LECTURE: Fractals
    15:10  PRACTICAL: Fractal example
    15:30  BREAK: Tea
    16:00  LECTURE: Parallel programming models
    16:45  PRACTICAL: Fractals (cont.)
    17:30  Finish

    Day 2
    09:30  LECTURE: HPC architectures
    10:15  LECTURE: Batch systems
    10:45  PRACTICAL: Computational Fluid Dynamics (CFD)
    11:00  BREAK: Coffee
    11:30  PRACTICAL: CFD (cont.)
    12:30  LECTURE: Compilers
    13:00  BREAK: Lunch
    14:00  PRACTICAL: Compilers (CFD cont.)
    14:30  LECTURE: Parallel libraries
    15:00  LECTURE: Future of HPC
    15:30  BREAK: Tea
    16:00  LECTURE: Summary
    16:15  PRACTICAL: Finish exercises
    17:00  Finish
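The "Image sharpening" practical listed in the timetable is built on kernel convolution. The following standard-library sketch (our own illustration, not the course's provided source code; the names are invented) shows the core idea on a tiny greyscale grid:

```python
# Illustrative sketch of image sharpening by convolution: a 3x3 kernel that
# boosts each pixel relative to its four neighbours.
SHARPEN = [[ 0, -1,  0],
           [-1,  5, -1],
           [ 0, -1,  0]]

def convolve(image, kernel):
    """Apply a 3x3 kernel to interior pixels; edge pixels are left unchanged."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for ky in range(3):
                for kx in range(3):
                    acc += kernel[ky][kx] * image[y + ky - 1][x + kx - 1]
            out[y][x] = acc
    return out

# A flat region is unchanged (5*10 - 4*10 = 10), while a pixel brighter than
# its neighbours is amplified: that contrast boost is what "sharpening" means.
flat = [[10] * 3 for _ in range(3)]
assert convolve(flat, SHARPEN)[1][1] == 10
```

Each pixel is independent of the others, which is why this practical is a natural first exercise in parallel decomposition.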
  • Message-passing Programming with MPI @ EPCC

    12 - 14 Jul 2017

    Message-passing Programming with MPI @ EPCC

    The world's largest supercomputers are used almost exclusively to run applications which are parallelised using message passing. The course covers all the basic knowledge required to write parallel programs using this programming model and is directly applicable to almost every parallel computer architecture. Parallel programming by definition involves cooperation between processes to solve a common task. The programmer has to define the tasks that will be executed by the processors, and also how these tasks are to synchronise and exchange data with one another. In the message-passing model the tasks are separate processes that communicate and synchronise by explicitly sending each other messages. All these parallel operations are performed via calls to some message-passing interface that is entirely responsible for interfacing with the physical communication network linking the actual processors together. This course uses the de facto standard for message passing, the Message Passing Interface (MPI). It covers point-to-point communication, non-blocking operations, derived datatypes, virtual topologies, collective communication and general design issues. The course is normally delivered in an intensive three-day format using EPCC's dedicated training facilities. It is taught using a variety of methods including formal lectures, practical exercises, programming examples and informal tutorial discussions. This enables lecture material to be supported by tutored practical sessions that reinforce the key concepts. This course is free to all academics.

    Intended learning outcomes: On completion of this course students should be able to:
    - Understand the message-passing model in detail.
    - Implement standard message-passing algorithms in MPI.
    - Debug simple MPI codes.
    - Measure and comment on the performance of MPI codes.
    - Design and implement efficient parallel programs to solve regular-grid problems.

    Pre-requisite programming languages: Fortran, C or C++. It is not possible to do the exercises in Java.

    Timetable:

    Day 1
    09:30  Message-passing concepts
    10:15  Practical: Parallel traffic modelling
    11:00  Break
    11:30  MPI programs
    12:00  MPI on ARCHER
    12:15  Practical: Hello World
    13:00  Lunch
    14:00  Point-to-point communication
    14:30  Practical: Pi
    15:30  Break
    16:00  Communicators, tags and modes
    16:45  Practical: Ping-pong
    17:30  Finish

    Day 2
    09:30  Non-blocking communication
    10:00  Practical: Message round a ring
    11:00  Break
    11:30  Collective communication
    12:00  Practical: Collective communication
    13:00  Lunch
    14:00  Virtual topologies
    14:30  Practical: Message round a ring (cont.)
    15:30  Break
    16:00  Derived data types
    16:45  Practical: Message round a ring (cont.)
    17:30  Finish

    Day 3
    09:30  Introduction to the case study
    10:00  Practical: Case study
    11:00  Break
    11:30  Practical: Case study (cont.)
    13:00  Lunch
    14:00  Designing MPI programs
    15:00  Individual consultancy session
    16:00  Finish
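The message-passing model described above can be sketched conceptually without MPI at all. This is our own standard-library analogue of the "Message round a ring" practical, using threads and queues in place of real MPI processes (the course itself uses MPI from Fortran, C or C++; the helper names are invented):

```python
# Conceptual sketch of message passing: each "rank" owns a mailbox and
# cooperates only by explicit send/receive, never via shared variables.
import queue
import threading

NUM_RANKS = 4
mailboxes = [queue.Queue() for _ in range(NUM_RANKS)]
results = [None] * NUM_RANKS

def send(dest, msg):
    mailboxes[dest].put(msg)          # analogous to MPI_Send

def recv(rank):
    return mailboxes[rank].get()      # analogous to MPI_Recv (blocking)

def worker(rank):
    # Pass a counter once around a ring: rank r sends to (r + 1) % NUM_RANKS.
    if rank == 0:
        send(1, 0)                    # rank 0 injects the token...
        results[0] = recv(0)          # ...and receives it back at the end
    else:
        token = recv(rank)
        results[rank] = token
        send((rank + 1) % NUM_RANKS, token + 1)

threads = [threading.Thread(target=worker, args=(r,)) for r in range(NUM_RANKS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Each rank records the token value it saw as the message travelled the ring.
assert results == [3, 0, 1, 2]
```

Real MPI adds what this sketch glosses over: distributed memory across nodes, communicators, tags, non-blocking operations and collectives, which are exactly the topics of the three days above.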
  • Data Analytics with HPC @ EPCC at Portsmouth

    29 - 30 Jun 2017

    Data Analytics with HPC @ EPCC at Portsmouth

    This course will take place at the University of Portsmouth. Data analytics, data science and big data are just a few of the many terms used in business and academic research. These refer to the manipulation, processing and analysis of data, and are concerned with the extraction of knowledge from data, whether for competitive advantage or to provide scientific insight. In recent years this area has undergone a revolution in which HPC has been a key driver. This course provides an overview of data science and the analytical techniques that form its basis, as well as exploring how HPC provides the power that has driven their adoption. The course will cover: key data analytical techniques, such as classification, optimisation and unsupervised learning; and key parallel patterns, such as MapReduce, for implementing analytical techniques. Attendees should be familiar with basic Linux bash shell commands and have some previous experience with Python programming. Attendees will be given temporary access to the Data Analytics Cluster on ARCHER, so they will not need Python installed on their laptops, but they will need to be able to make an ssh connection (using e.g. a terminal on Mac/Linux or PuTTY on Windows).

    Timetable:

    Thursday 29th June 2017
    09:00 - 09:30 Arrival/set-up/welcome
    09:30 - 10:30 What are data analytics, big data and data science?
    10:30 - 11:00 COFFEE
    11:00 - 12:00 Data cleaning
    12:00 - 13:00 Practical: Data cleaning
    13:00 - 14:00 LUNCH
    14:00 - 14:45 Supervised learning, feature selection, trees, forests
    14:45 - 15:30 Naïve Bayes
    15:30 - 16:00 COFFEE
    16:00 - 17:00 Naïve Bayes practical
    17:00 CLOSE OF DAY

    Friday 30th June 2017
    09:00 - 10:30 MapReduce/Hadoop
    10:30 - 11:00 COFFEE
    11:00 - 11:30 Hadoop demonstrations
    11:30 - 12:30 Unsupervised learning
    12:30 - 13:30 LUNCH
    13:30 - 14:15 Spark
    14:15 - 15:00 Data streaming
    15:00 - 15:30 COFFEE
    15:30 - 16:00 Spark and data streaming demonstrations
    16:00 CLOSE OF COURSE
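The MapReduce pattern covered on the Friday morning can be sketched in plain Python as an explicit map, shuffle and reduce over word counts. This is an illustration of our own, not course material; real deployments use Hadoop or Spark, and the function names below are invented:

```python
# Word count, the canonical MapReduce example, with the three phases spelled
# out: map emits (key, value) pairs, shuffle groups them by key, reduce
# combines each group. In a real cluster, map and reduce tasks run in
# parallel across machines and the framework performs the shuffle.
from collections import defaultdict
from itertools import chain

def map_phase(document):
    """Emit a (word, 1) pair per word; each document is handled independently."""
    return [(word, 1) for word in document.split()]

def shuffle(pairs):
    """Group values by key, as the framework does between map and reduce."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    """Combine each key's values; for word count this is a simple sum."""
    return {key: sum(values) for key, values in grouped.items()}

documents = ["big data big compute", "big data analytics"]
pairs = chain.from_iterable(map_phase(d) for d in documents)
counts = reduce_phase(shuffle(pairs))
assert counts == {"big": 3, "data": 2, "compute": 1, "analytics": 1}
```

Because the map calls share nothing and the reduce is associative, the same program scales from this toy loop to a Hadoop cluster.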
  • Introduction to Unified Parallel C (UPC) and Co-array Fortran (CAF) @ HLRS

    29 - 30 Jun 2017

    Introduction to Unified Parallel C (UPC) and Co-array Fortran (CAF) @ HLRS

    Overview: Partitioned Global Address Space (PGAS) is a new model for parallel programming. Unified Parallel C (UPC) and Co-array Fortran (CAF) are PGAS language extensions to C and Fortran: parallelism is part of the language. PGAS languages allow any processor to directly address memory/data on any other processor, so parallelism can be expressed more easily than with library-based approaches such as MPI. This course gives an introduction to this novel approach to expressing parallelism. Hands-on sessions (in UPC and/or CAF) will allow users to immediately test and understand the basic constructs of PGAS languages. This course provides scientific training in Computational Science and, in addition, scientific exchange among the participants. For further information and registration please visit the HLRS course page.
  • Workshop HPC Methods for Engineering@Cineca

    19 - 21 Jun 2017

    Workshop HPC Methods for Engineering @ Cineca

    Description: The event is intended to collect a set of experiences from today's professionals, from industry as well as from research centres and academia, to show the off-the-shelf technologies and methodologies available for using Computer-Aided Engineering (CAE) applications in an HPC environment. More specifically, we would like to show how close the relationship between HPC infrastructures and the large-scale applications involved in engineering product design and manufacturing can be today. The interplay between computational platforms and engineering applications involves, as by-products, parallel benchmarking and performance ranking, data-intensive I/O applications, co-design of applications integrated with system-ware development, new solvers for massively parallel applications, and remote and parallel large-scale visualization.

    This three-day event is divided into these main topics:
    - Pre-processing: CAD import and cleaning, meshing using commercial and open-source applications
    - Computing: CFD, FEM, multi-physics, optimization
    - Post-processing: data analysis and visualization using desktop and remote visualization facilities

    AGENDA

    Topics: Industrial and academic large-scale applications and use cases; data analytics applications for engineering and manufacturing.

    Target audience: Academic and industrial researchers and managers who use or plan to use HPC systems for CAE applications. This is not a training course on how to use specific CAE software.

    Grant: Lunch for the three days will be offered to all participants, and some grants are available. To be eligible you must not be funded by your institution to attend the course, and you must work or live in an institute outside the Milan area. The grant will be 300 euros for students working and living outside Italy and 150 euros for students working and living in Italy. Some documentation will be required, and the grant will be paid only after a certified presence at a minimum of 80% of the lectures. Further information on how to request the grant will be provided at the confirmation of the course, about 3 weeks before the starting date.
Record created by the PRACE scraper at 2017-01-23 14:52:56 UTC.