hoomd.communicator

Overview

Communicator

MPI communicator.

Details

MPI communicator.

When compiled without MPI support, Communicator acts as if there is one MPI rank and one partition. To use MPI, compile HOOMD-blue with the option ENABLE_MPI=on and use the appropriate MPI launcher to launch Python. Then the Communicator class will configure and query MPI ranks and partitions. By default, Communicator starts with the MPI_COMM_WORLD MPI communicator, and the communicator is not available for user scripts.

Communicator also accepts MPI communicators from mpi4py. Use this to implement workflows with multiple simulations that communicate using mpi4py calls in user code (e.g. genetic algorithms, umbrella sampling).
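For example, here is a minimal sketch of this workflow, assuming mpi4py is installed and the script is launched with an MPI launcher. The split of MPI_COMM_WORLD into two halves and the use of a CPU device are illustrative choices, not requirements:

    from mpi4py import MPI

    import hoomd

    # Split the world communicator into two halves (illustrative scheme);
    # each half runs its own independent simulation.
    world = MPI.COMM_WORLD
    color = world.Get_rank() % 2
    local_comm = world.Split(color=color, key=world.Get_rank())

    # Pass the mpi4py communicator to HOOMD.
    communicator = hoomd.communicator.Communicator(mpi_comm=local_comm)
    device = hoomd.device.CPU(communicator=communicator)
    simulation = hoomd.Simulation(device=device)

Outside of HOOMD calls, user code can continue to use local_comm (or world) with mpi4py to exchange data between the simulations.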

class hoomd.communicator.Communicator(mpi_comm=None, ranks_per_partition=None)

MPI communicator.

Parameters
  • mpi_comm – Accepts an mpi4py communicator. Use this argument to perform many independent HOOMD simulations where you communicate between those simulations using mpi4py.

  • ranks_per_partition (int) – (MPI) Number of ranks to include in a partition.

The Communicator class initializes MPI communications for a hoomd.Simulation and exposes rank and partition information to the user as properties. To use MPI, launch your Python script with an MPI launcher (e.g. mpirun or mpiexec). By default, Communicator uses all ranks provided by the launcher (num_launch_ranks) for a single hoomd.Simulation object, which decomposes the state onto that many domains.

Set ranks_per_partition to an integer to partition launched ranks into num_launch_ranks / ranks_per_partition communicators, each with their own partition index. Use this to perform many simulations in parallel, for example by using partition as an index into an array of state points to execute.
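A minimal sketch of this pattern follows; the values ranks_per_partition=2 and the kT list are placeholders chosen for illustration:

    import hoomd

    # With, e.g., 8 launched ranks and ranks_per_partition=2, this creates
    # 4 partitions of 2 ranks each, each running an independent simulation.
    communicator = hoomd.communicator.Communicator(ranks_per_partition=2)
    device = hoomd.device.CPU(communicator=communicator)
    simulation = hoomd.Simulation(device=device)

    # Use the partition index to select an independent state point
    # (these kT values are placeholders).
    kT_values = [1.0, 1.5, 2.0, 2.5]
    kT = kT_values[communicator.partition]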

barrier()

Perform a barrier synchronization across all ranks in the partition.

Note

Does nothing in builds with ENABLE_MPI=off.
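For example, one common pattern (a sketch; the file name and its contents are hypothetical) is to let rank 0 perform a single-rank task while the other ranks wait at the barrier:

    import hoomd

    communicator = hoomd.communicator.Communicator()

    # Only rank 0 writes the file; the other ranks wait at the barrier
    # until it finishes.
    if communicator.rank == 0:
        with open("run_metadata.txt", "w") as f:
            f.write("parameters go here\n")
    communicator.barrier()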

barrier_all()

Perform an MPI barrier synchronization across all ranks.

Note

Does nothing in builds with ENABLE_MPI=off.

localize_abort()

Localize MPI_Abort to this partition.

HOOMD calls MPI_Abort to tear down all running MPI processes whenever there is an uncaught exception. By default, this will abort the entire MPI execution. When using partitions, an uncaught exception on one partition will therefore abort all of them.

Use the return value of localize_abort() as a context manager to tell HOOMD that all operations within the context will use only that MPI communicator so that an uncaught exception in one partition will only abort that partition and leave the others running.
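For example, a sketch of running each partition's simulation inside the localized context (state creation and operations are omitted, and ranks_per_partition=2 is illustrative):

    import hoomd

    communicator = hoomd.communicator.Communicator(ranks_per_partition=2)
    device = hoomd.device.CPU(communicator=communicator)
    simulation = hoomd.Simulation(device=device)
    # ... create the state and add operations here ...

    # An uncaught exception inside this block aborts only this partition;
    # the other partitions keep running.
    with communicator.localize_abort():
        simulation.run(1000)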

property num_partitions

The number of partitions in this execution.

Create partitions with the ranks_per_partition argument on initialization. Then, the number of partitions is num_launch_ranks / ranks_per_partition.

Note

Returns 1 in builds with ENABLE_MPI=off.

Type

int

property num_ranks

The number of ranks in this partition.

When initialized with ranks_per_partition=None, num_ranks is equal to the num_launch_ranks set by the MPI launcher. When using partitions, num_ranks is equal to ranks_per_partition.

Note

Returns 1 in builds with ENABLE_MPI=off.

Type

int

property partition

The current partition.

Note

Returns 0 in builds with ENABLE_MPI=off.

Type

int

property rank

The current rank within the partition.

Note

Returns 0 in builds with ENABLE_MPI=off.

Type

int

property walltime

Wall clock time since creating the Communicator [seconds].

walltime returns the same value on each rank in the current partition.
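For example, a sketch that reports the elapsed wall time from one rank of the partition:

    import hoomd

    communicator = hoomd.communicator.Communicator()
    # ... perform work ...
    if communicator.rank == 0:
        print(f"Elapsed wall clock time: {communicator.walltime:.2f} s")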