
Programme

A more detailed description of the course content is to be confirmed; the outline below summarises each day.


Day 1 Basic tools: familiarisation with the environment; language fundamentals; editing, makefiles, compilation and execution; use of batch systems; emphasis on principles rather than specific packages. By the end of Day 1, students should be able to edit a script, submit and run supplied serial and parallel applications, and check the output.
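
The section above does not name a specific batch system; purely as an illustration, a minimal SLURM-style submission script for one of the supplied parallel applications might look like the sketch below. The job name, partition and resource requests are hypothetical placeholders, not course specifics.

    #!/bin/bash
    #SBATCH --job-name=hello_mpi     # hypothetical job name
    #SBATCH --partition=standard     # hypothetical partition/queue name
    #SBATCH --nodes=1                # request one node
    #SBATCH --ntasks=4               # run four parallel processes
    #SBATCH --time=00:10:00          # ten-minute wall-clock limit

    # Launch the supplied application and redirect its output for checking.
    srun ./hello_mpi > hello_mpi.out

Submission and monitoring would then use the scheduler's own commands (sbatch and squeue in the SLURM case); the same principles carry over to other batch systems.
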
Day 2-3 Programming: parallel streams in Fortran and C; students will choose one stream. Teaching in days 4-10 will illustrate examples in both languages. Includes a basic introduction to using pre-installed libraries. Some additional concepts will be introduced at this stage as a flavour of what is to come. More advanced students will be advised to learn the language they are not already familiar with. All exercises will have options to stretch the more able students.
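
As a flavour of the level of the C stream, a minimal program that calls a routine from a pre-installed library (here the standard maths library, typically linked with -lm) might look like this sketch; the variable names are illustrative only.

    #include <stdio.h>
    #include <math.h>     /* routine supplied by a pre-installed library (libm) */

    int main(void)
    {
        /* Call a library routine and print the result. */
        double x = 2.0;
        printf("sqrt(%.1f) = %f\n", x, sqrt(x));
        return 0;
    }
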
Day 4 Performance Programming: designed to teach students to think about and explore factors that affect the performance of their code. Relevant factors include compiler, operating system, hardware architecture, and the interplay between them. Emphasis will again be on the general principles of performance measurement, not the specific packages being used. Includes familiarisation with basic serial debugging and profiling tools.
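
As an example of the general principle of performance measurement (time only the region of interest, and use the result so the compiler cannot discard the work), a simple wall-clock timer in C using the POSIX clock_gettime interface might look like the sketch below; the loop being timed is a hypothetical placeholder.

    #include <stdio.h>
    #include <time.h>

    #define N 10000000

    int main(void)
    {
        static double a[N];
        struct timespec start, end;

        /* Time only the loop, not program start-up or I/O. */
        clock_gettime(CLOCK_MONOTONIC, &start);
        for (int i = 0; i < N; i++) {
            a[i] = 0.5 * (double) i;
        }
        clock_gettime(CLOCK_MONOTONIC, &end);

        double elapsed = (double) (end.tv_sec - start.tv_sec)
                       + (double) (end.tv_nsec - start.tv_nsec) * 1.0e-9;

        /* Print one array element so the compiler cannot remove the loop. */
        printf("Loop time: %f s (a[%d] = %f)\n", elapsed, N - 1, a[N - 1]);
        return 0;
    }
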
Day 5 Parallel Architectures: shared-memory and distributed-memory HPC architectures, programming models and algorithms; basic parallel performance measurement and Amdahl's law. Introduces the concept of communicating parallel processes: synchronous/asynchronous; blocking/non-blocking. Introduces basic concepts regarding data dependencies and data sharing between threads and processes using pseudocode exercises and thought experiments, not real coding.
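
Amdahl's law, mentioned above, bounds the speedup obtainable when only a fraction f of a program's work can be parallelised across P processors:

    S(P) = \frac{1}{(1 - f) + f / P}

For example, with f = 0.9 and P = 16 the speedup is 1 / (0.1 + 0.9/16) = 6.4, and no number of processors can raise it above 1 / (1 - f) = 10.
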
Day 6-7 Shared Variables Parallelism: OpenMP model, initialisation, parallel regions and parallel loops; shared and private data; loop scheduling and synchronisation. By the end, students should be able to modify and run example programs on a multi-core system, and understand the performance characteristics of different loop scheduling options.
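
For illustration, a minimal OpenMP parallel loop of the kind covered here might look like the sketch below; the array size, loop body and the GCC-style -fopenmp compile flag are assumptions, not the course's actual exercise.

    #include <stdio.h>
    #include <omp.h>

    #define N 1000000

    int main(void)
    {
        static double a[N];
        double sum = 0.0;

        /* a[] is shared, the loop index is private, and sum is combined
           safely across threads by the reduction clause.  Try
           schedule(dynamic) or schedule(guided) in place of
           schedule(static) to compare loop-scheduling behaviour. */
        #pragma omp parallel for schedule(static) reduction(+:sum)
        for (int i = 0; i < N; i++) {
            a[i] = (double) i;
            sum += a[i];
        }

        printf("sum = %f using up to %d threads\n", sum, omp_get_max_threads());
        return 0;
    }
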
Day 8-9 Message Passing Parallelism: MPI basics, point-to-point, synchronous and asynchronous modes; non-blocking forms and collective communications; data types. The course will illustrate how standard communications patterns can be implemented in MPI and used to parallelise simple array-based computations. By the end, students should be able to modify and run an example program on a distributed-memory system and understand its basic performance characteristics.
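
Again as illustration only, a small MPI program showing both point-to-point and collective communication (compiled with an MPI wrapper such as mpicc and launched with mpirun or the local batch system) might look like this sketch; it is a generic example, not the course's own program.

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Point-to-point: every non-zero rank sends its rank to rank 0. */
        if (rank == 0) {
            for (int src = 1; src < size; src++) {
                int value;
                MPI_Recv(&value, 1, MPI_INT, src, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                printf("Rank 0 received %d from rank %d\n", value, src);
            }
        } else {
            MPI_Send(&rank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        }

        /* Collective: sum the ranks across all processes onto rank 0. */
        int total = 0;
        MPI_Reduce(&rank, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0) {
            printf("Sum of all ranks = %d\n", total);
        }

        MPI_Finalize();
        return 0;
    }
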
Day 10 Practical Parallel Programming: The final day will be used as an opportunity to review the material from the entire course, compare and contrast different programming approaches and place the course in the wider context of computational science as a research discipline. We will also outline other important areas in parallel software design and development that are beyond the scope of this initial academy. The day will include: comparison of parallel models and their suitability for different architectures; basics of parallel program design; use of packages and libraries; the HPC ecosystem in the UK, Europe and worldwide.