Introduction to MPI¶
Overview¶
Questions¶
What is MPI?
Why should I run my simulations in parallel?
How can I execute scripts in parallel?
Objectives¶
Describe MPI.
Explain how MPI can provide faster performance on HPC systems.
Show how to write a single program that can execute in serial or parallel.
Demonstrate how to execute that program with mpirun.
Explain how strong scaling enables higher performance at the cost of some efficiency.
Introduction¶
MPI (Message Passing Interface) is a library that enables programs to execute in parallel. MPI is commonly available on HPC (high performance computing) clusters. HOOMD-blue uses MPI to execute on many CPUs and/or GPUs. Using more resources in parallel can provide higher performance than the same simulation run on one CPU core or one GPU.
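When you launch a script with MPI, every rank runs the same Python code. As a minimal sketch (not part of the tutorial script, and assuming HOOMD-blue's hoomd.communicator.Communicator attributes rank and num_ranks), the following shows how a script can query the ranks it runs on:

import hoomd

# Sketch: each MPI rank executes this same script. The device's communicator
# reports the total number of ranks and this rank's index.
device = hoomd.device.CPU()
communicator = device.communicator
print(f'This is rank {communicator.rank} of {communicator.num_ranks}')

Launched with mpirun, this prints one line per rank.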
The Simulation Script¶
This tutorial executes the Lennard-Jones particle simulation from a previous tutorial. See Introducing Molecular Dynamics for a complete description of this code.
You can also run HPMC and other types of simulations in HOOMD-blue using MPI. All operations in HOOMD-blue support MPI unless otherwise noted in their documentation.
[1]:
%pycat lj_performance.py
import hoomd
# Initialize the simulation.
device = hoomd.device.CPU()
sim = hoomd.Simulation(device=device, seed=1)
sim.create_state_from_gsd(filename='random.gsd')
# Set the operations for a Lennard-Jones particle simulation.
integrator = hoomd.md.Integrator(dt=0.005)
cell = hoomd.md.nlist.Cell(buffer=0.4)
lj = hoomd.md.pair.LJ(nlist=cell)
lj.params[('A', 'A')] = dict(epsilon=1, sigma=1)
lj.r_cut[('A', 'A')] = 2.5
integrator.forces.append(lj)
nvt = hoomd.md.methods.ConstantVolume(
    filter=hoomd.filter.All(),
    thermostat=hoomd.md.methods.thermostats.Bussi(kT=1.5))
integrator.methods.append(nvt)
sim.operations.integrator = integrator
# Run a short time before measuring performance.
sim.run(100)
# Run the simulation and print the performance.
sim.run(1000)
device.notice(f'{sim.tps}')
lj_performance.py is a file in the same directory as this notebook. Due to the way MPI launches parallel jobs, the code must be in a file instead of a notebook cell. %pycat is an IPython magic command that displays the contents of a Python file with syntax highlighting.
Compare this script to the one used in Introducing Molecular Dynamics. The only difference is the addition of device.notice(f'{sim.tps}'), which prints the performance in time steps per second. The same script can be run in serial or in parallel on different numbers of CPU cores.
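If you also want the elapsed wall-clock time, a small variation is possible. This is a sketch, not part of the tutorial script, and assumes the Simulation.walltime property alongside Simulation.tps:

# Sketch: report both time steps per second and elapsed wall-clock time.
device.notice(f'TPS: {sim.tps}, walltime: {sim.walltime} seconds')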
Run the Simulation with MPI¶
Use the MPI launcher mpirun to execute this script in parallel on any number of CPU cores given by the -n option.
Your HPC cluster may use a different launcher, such as srun, mpiexec, ibrun, or jsrun, which may take different arguments. See your cluster's documentation to find the right launcher to use.
In Jupyter, the “!” magic command is equivalent to typing the given command in a shell.
[2]:
!mpirun -n 1 python3 lj_performance.py
336.00885988161735
[3]:
!mpirun -n 2 python3 lj_performance.py
notice(2): Using domain decomposition: n_x = 1 n_y = 1 n_z = 2.
604.2365441053402
[4]:
!mpirun -n 4 python3 lj_performance.py
notice(2): Using domain decomposition: n_x = 1 n_y = 2 n_z = 2.
1086.799408346402
The simulation runs faster on more CPU cores. This sequence of simulations demonstrates strong scaling, where a simulation of a fixed number of particles executes in less time on more parallel resources. Compute the strong scaling efficiency by dividing the performance on n cores by n times the performance on one core. Strong scaling rarely reaches 100% efficiency, so a parallel run uses more total resources than executing with -n 1. However, a -n 1 simulation may take months to complete while a -n 64 simulation takes days, making your research much more productive even if it uses a moderately larger amount of HPC resources.
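For example, the short calculation below is a sketch that applies this definition to the time steps per second printed by the cells above (values rounded from this notebook's output):

# Strong scaling efficiency: TPS on n cores divided by (n * TPS on 1 core),
# using the measurements printed by the cells above.
tps = {1: 336.0, 2: 604.2, 4: 1086.8}
for n in (2, 4):
    efficiency = tps[n] / (n * tps[1])
    print(f'{n} cores: {efficiency:.0%} efficient')

This gives roughly 90% efficiency on 2 cores and 81% on 4 cores for this particular system and hardware.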
Summary¶
In this section, you have executed a HOOMD-blue simulation script on 1, 2, and 4 CPU cores using MPI and observed the performance. The next section of this tutorial explains how HOOMD-blue splits the domain of the simulation and how you should structure your scripts.