hoomd.device

Overview

CPU

Select the CPU to execute simulations.

Device

Base class device object.

GPU

Select a GPU or GPU(s) to execute simulations.

auto_select

Automatically select the hardware device.

Details

Devices.

Use a Device class to choose which hardware device(s) should execute the simulation. Device also sets where to write log messages and how verbose the message output should be. Pass a Device object to hoomd.Simulation on instantiation to set the options for that simulation.

User scripts may instantiate multiple Device objects and use each with a different hoomd.Simulation object. One Device object may also be shared with many hoomd.Simulation objects.
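For example, one CPU device can drive two independent simulations (a minimal sketch; the seed values are arbitrary):

import hoomd

# A single device instance shared by two simulations.
device = hoomd.device.CPU()
simulation_a = hoomd.Simulation(device=device, seed=1)
simulation_b = hoomd.Simulation(device=device, seed=2)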

Tip

Reuse Device objects when possible. There is a non-negligible overhead to creating each Device, especially on the GPU.

See also

hoomd.Simulation

class hoomd.device.CPU(num_cpu_threads=None, communicator=None, msg_file=None, notice_level=2)

Bases: Device

Select the CPU to execute simulations.

Parameters
  • num_cpu_threads (int) – Number of TBB threads. Set to None to auto-select.

  • communicator (hoomd.communicator.Communicator) – MPI communicator object. When None, create a default communicator that uses all MPI ranks.

  • msg_file (str) – Filename to write messages to. When None, use sys.stdout and sys.stderr. Messages from multiple MPI ranks are collected into this file.

  • notice_level (int) – Minimum level of messages to print.

MPI

In MPI execution environments, create a CPU device on every rank.
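For example, a minimal sketch of a script intended for MPI execution: every rank runs the same lines and constructs its own device, and HOOMD decomposes the simulation across the ranks of the default communicator.

import hoomd

# Launched with, e.g.: mpirun -n 4 python3 script.py
# Each rank constructs an identical CPU device.
device = hoomd.device.CPU()
simulation = hoomd.Simulation(device=device)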

class hoomd.device.Device(communicator, notice_level, msg_file)

Bases: object

Base class device object.

Provides methods and properties common to CPU and GPU.

Warning

Device cannot be used directly. Instantiate a CPU or GPU object.

TBB threads

Set num_cpu_threads to None and TBB will auto-select the number of CPU threads to execute. If the environment variable OMP_NUM_THREADS is set, HOOMD will use this value. You can also set num_cpu_threads explicitly.

Note

At this time very few features use TBB for threading. Most users should employ MPI for parallel simulations. See Features for more information.
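For example, both selection modes (a minimal sketch; the thread count 4 is arbitrary):

import hoomd

# Let TBB auto-select the thread count (uses OMP_NUM_THREADS when set).
auto_device = hoomd.device.CPU(num_cpu_threads=None)

# Request an explicit thread count.
pinned_device = hoomd.device.CPU(num_cpu_threads=4)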

property communicator

The MPI Communicator [read only].

Type

hoomd.communicator.Communicator

property devices

Descriptions of the active hardware devices.

Type

list[str]

property msg_file

Filename to write messages to.

By default, HOOMD prints all messages and errors to Python’s sys.stdout and sys.stderr (or the system’s stdout and stderr when running in an MPI environment).

Set msg_file to a filename to redirect these messages to that file.

Set msg_file to None to use the system’s stdout and stderr.

Note

All MPI ranks within a given partition must open the same file. To ensure this, the given file name on rank 0 is broadcast to the other ranks. Different partitions may open separate files. For example:

import hoomd

communicator = hoomd.communicator.Communicator(
    ranks_per_partition=2)
filename = f'messages.{communicator.partition}'
device = hoomd.device.GPU(communicator=communicator,
                          msg_file=filename)
Type

str

property notice_level

Minimum level of messages to print.

notice_level controls the verbosity of messages printed by HOOMD. The default level of 2 shows messages that the developers expect most users will want to see. Set the level lower to reduce verbosity or as high as 10 to get extremely verbose debugging messages.
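For example (a minimal sketch; device is an existing CPU or GPU instance):

device.notice_level = 10  # extremely verbose debugging output
device.notice_level = 2   # restore the default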

Type

int

property num_cpu_threads

Number of TBB threads to use.

Type

int

class hoomd.device.GPU(gpu_ids=None, num_cpu_threads=None, communicator=None, msg_file=None, notice_level=2)

Bases: Device

Select a GPU or GPU(s) to execute simulations.

Parameters
  • gpu_ids (list[int]) – List of GPU ids to use. Set to None to let the driver auto-select a GPU.

  • num_cpu_threads (int) – Number of TBB threads. Set to None to auto-select.

  • communicator (hoomd.communicator.Communicator) – MPI communicator object. When None, create a default communicator that uses all MPI ranks.

  • msg_file (str) – Filename to write messages to. When None, use sys.stdout and sys.stderr. Messages from multiple MPI ranks are collected into this file.

  • notice_level (int) – Minimum level of messages to print.

Tip

Call GPU.get_available_devices to get a human-readable list of devices. gpu_ids = [0] selects the first device in this list, [1] the second, and so on.

The ordering of the devices is determined by the GPU driver and runtime.
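For example (a minimal sketch; the index 0 is only meaningful when the list is non-empty):

import hoomd

# Inspect the device descriptions, then select the first entry explicitly.
print(hoomd.device.GPU.get_available_devices())
device = hoomd.device.GPU(gpu_ids=[0])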

Device auto-selection

When gpu_ids is None, HOOMD will ask the GPU driver to auto-select a GPU. In most cases, this will select device 0. When all devices are set to a compute exclusive mode, the driver will choose a free GPU.

MPI

In MPI execution environments, create a GPU device on every rank. When gpu_ids is left None, HOOMD will attempt to detect the MPI local rank environment and choose an appropriate GPU with id = local_rank % num_capable_gpus. Set notice_level to 3 to see status messages from this process. Override this auto-selection by providing appropriate device ids on each rank.
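For example, a sketch of overriding the auto-selection by hand. It assumes the Open MPI launcher, which exports the local rank in OMPI_COMM_WORLD_LOCAL_RANK; other launchers use different variable names.

import os

import hoomd

# Assumption: Open MPI provides the local rank in this environment variable.
local_rank = int(os.environ.get('OMPI_COMM_WORLD_LOCAL_RANK', '0'))
num_gpus = len(hoomd.device.GPU.get_available_devices())

# One GPU per rank, wrapping around when ranks outnumber GPUs.
device = hoomd.device.GPU(gpu_ids=[local_rank % max(num_gpus, 1)])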

Multiple GPUs

Pass a list of GPU ids as gpu_ids to activate the single-process multi-GPU code path.
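For example (a minimal sketch; adjust the ids to the devices present on your system):

import hoomd

# Run one simulation across the first two GPUs in a single process.
device = hoomd.device.GPU(gpu_ids=[0, 1])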

Note

Not all features are optimized to use this code path, and it requires that all GPUs support concurrent managed memory access and have high bandwidth interconnects.

property compute_capability

Compute capability of the device.

The tuple includes the major and minor versions of the CUDA compute capability: (major, minor).

Type

tuple(int, int)

enable_profiling()

Enable GPU profiling.

When using GPU profiling tools on HOOMD, select the option to disable profiling on start. Initialize and run a simulation long enough that all autotuners have completed, then open enable_profiling() as a context manager and continue the simulation for a time. Profiling stops when the context manager closes.

Example:

with device.enable_profiling():
    sim.run(1000)

static get_available_devices()

Get the available GPU devices.

Returns

Descriptions of the available devices (if any).

Return type

list[str]

static get_unavailable_device_reasons()

Get messages describing the reasons why devices are unavailable.

Returns

Messages indicating why some devices are unavailable (if any).

Return type

list[str]

property gpu_error_checking

Whether to check for GPU error conditions after every call.

When False (the default), error messages from the GPU may not be noticed immediately. Set to True to increase the accuracy of the GPU error messages at the cost of significantly reduced performance.
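For example (a minimal sketch; device is an existing GPU instance):

device.gpu_error_checking = True  # debugging: report errors at the failing call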

Type

bool

static is_available()

Test if the GPU device is available.

Returns

True if this build of HOOMD supports GPUs, False if not.

Return type

bool
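For example, fall back to the CPU when this build lacks GPU support (a minimal sketch; auto_select performs an equivalent check automatically):

import hoomd

if hoomd.device.GPU.is_available():
    device = hoomd.device.GPU()
else:
    device = hoomd.device.CPU()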

property memory_traceback

Whether GPU memory tracebacks should be enabled.

Memory tracebacks are useful for developers when debugging GPU code.

Type

bool

hoomd.device.auto_select(communicator=None, msg_file=None, notice_level=2)

Automatically select the hardware device.

Parameters
  • communicator (hoomd.communicator.Communicator) – MPI communicator object. When None, create a default communicator that uses all MPI ranks.

  • msg_file (str) – Filename to write messages to. When None, use sys.stdout and sys.stderr. Messages from multiple MPI ranks are collected into this file.

  • notice_level (int) – Minimum level of messages to print.

Returns

Instance of GPU if available, otherwise CPU.
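For example (a minimal sketch):

import hoomd

device = hoomd.device.auto_select()
print(device.devices)  # describe the selected hardware
simulation = hoomd.Simulation(device=device)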