# Running Multiple Simulations With Partitions

## Overview

### Questions

• How can I partition an MPI communicator to run many independent simulations?

• When are partitions useful?

### Objectives

• Show how to define a custom Communicator with more than one partition.

• Demonstrate how to use this to run many independent simulations with one mpirun.

• Explain how this is useful to aggregate jobs on HPC systems.

## Partitioning communicators

So far in this tutorial, you have seen how executing mpirun -n 4 python3 script.py uses domain decomposition to run one simulation split across 4 ranks. What if you wanted to run 2 different simulations, each on 2 ranks, with this command? Or 4 different simulations, each on 1 rank? Splitting the ranks this way is called partitioning the MPI communicator.

In HOOMD-blue, you can do this by defining a non-default Communicator and specifying the ranks_per_partition argument. Then, use communicator.partition in your script to select different input parameters for each partition.

[2]:

%pycat hello_partition.py

import hoomd

communicator = hoomd.communicator.Communicator(ranks_per_partition=2)
print(f'Hello from partition {communicator.partition} rank {communicator.rank}')


[3]:

!mpirun -n 4 python3 hello_partition.py

Hello from partition 0 rank 0
Hello from partition 0 rank 1
Hello from partition 1 rank 0
Hello from partition 1 rank 1
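The output above follows from simple arithmetic: the number of partitions is the total rank count divided by ranks_per_partition. A minimal sketch of that calculation, assuming the mpirun -n 4 invocation above (HOOMD-blue expects the total rank count to divide evenly; an uneven split is an error):

```python
# Sketch: the partition count implied by the rank count.
# total_ranks and ranks_per_partition mirror the mpirun -n 4 example above.
total_ranks = 4
ranks_per_partition = 2

# The total rank count must divide evenly into partitions.
assert total_ranks % ranks_per_partition == 0
num_partitions = total_ranks // ranks_per_partition
print(num_partitions)  # 2
```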


In partitioned simulations, all ranks within a given partition must use the same input file, operations, and parameters, and must otherwise follow the single-program guidelines outlined in the previous sections of this tutorial. However, the input file, operations, and/or their parameters can vary from one partition to the next. For example, you could run many simulations with different temperatures, different initial configurations, or different random number seeds. The following example shows how to set different temperatures and random number seeds based on the partition index.
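When you vary more than one parameter at a time, you can map each partition index to one combination of parameters. A minimal sketch of that pattern (the parameter values and the parameters_for helper are hypothetical; in a real script the index would come from communicator.partition):

```python
# Sketch: map one partition index to a (kT, seed) combination.
# kT_values, seeds, and parameters_for are hypothetical examples.
import itertools

kT_values = [1.5, 2.0]
seeds = [1, 2]

# All combinations, in a fixed order shared by every partition.
combinations = list(itertools.product(kT_values, seeds))

def parameters_for(partition):
    """Return the (kT, seed) pair assigned to a partition index."""
    return combinations[partition]

# With 4 partitions, partition 3 runs the last combination.
print(parameters_for(3))  # (2.0, 2)
```

Because every partition builds the same combinations list, each one can look up its own parameters independently without any communication.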

You MUST pass the partitioned Communicator object to the device constructor: hoomd.device.CPU(communicator=communicator) or hoomd.device.GPU(communicator=communicator).

[4]:

%pycat lj_partition.py

import hoomd

# kT values to execute at:
kT_values = [1.5, 2.0]

# Instantiate a Communicator with 2 ranks in each partition.
communicator = hoomd.communicator.Communicator(ranks_per_partition=2)

# Pass the communicator to the device.
device = hoomd.device.CPU(communicator=communicator)

# Initialize the simulation.
sim = hoomd.Simulation(device=device)
sim.create_state_from_gsd(filename='random.gsd')

# Choose system parameters based on the partition.
sim.seed = communicator.partition
kT = kT_values[communicator.partition]

# Set the operations for a Lennard-Jones particle simulation.
integrator = hoomd.md.Integrator(dt=0.005)
cell = hoomd.md.nlist.Cell()
lj = hoomd.md.pair.LJ(nlist=cell)
lj.params[('A', 'A')] = dict(epsilon=1, sigma=1)
lj.r_cut[('A', 'A')] = 2.5
integrator.forces.append(lj)
langevin = hoomd.md.methods.Langevin(kT=kT, filter=hoomd.filter.All())
integrator.methods.append(langevin)
sim.operations.integrator = integrator

# Use the partition id in the output file name.
gsd_writer = hoomd.write.GSD(filename=f'trajectory{communicator.partition}.gsd',
                             trigger=hoomd.trigger.Periodic(1000),
                             mode='xb')
sim.operations.writers.append(gsd_writer)

# Run the simulation.
sim.run(1000)


[5]:

!mpirun -n 4 python3 lj_partition.py

notice(2): Using domain decomposition: n_x = 1 n_y = 1 n_z = 2.
notice(2): Using domain decomposition: n_x = 1 n_y = 1 n_z = 2.


Each partition is a separate Simulation that produces its own trajectory. You must choose filenames or directories so that different partitions produce different output files. The above example places the partition index in the filename:
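An alternative to encoding the partition index in the filename is to give each partition its own output directory. A minimal sketch, where partition stands in for communicator.partition and the directory naming scheme is a hypothetical choice:

```python
# Sketch: one output directory per partition.
# `partition` stands in for communicator.partition; the
# 'partition{N}' directory name is a hypothetical convention.
import os

def output_path(partition, filename='trajectory.gsd'):
    """Create (if needed) and return a per-partition output path."""
    directory = f'partition{partition}'
    os.makedirs(directory, exist_ok=True)
    return os.path.join(directory, filename)

print(output_path(0))  # partition0/trajectory.gsd
```

Per-partition directories are convenient when each simulation writes several files (trajectories, logs, restart files) that would otherwise all need the index in their names.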

[6]:

!ls trajectory?.gsd

trajectory0.gsd  trajectory1.gsd


## Motivation

Why use partitions when you could write one script that takes parameters and execute it more than once? Some HPC systems only allow jobs that use whole nodes, and/or their policies favor fewer large jobs over many small jobs in the scheduler. On these systems, you can obtain better throughput for your research by using partitions to aggregate many independent simulations into one scheduler job.

While the details are beyond the scope of this tutorial, signac-flow can automate the aggregation process.

## Summary

In this section, you have learned how to run many simulations with different parameters using a single mpirun invocation. This is the end of the MPI tutorial.