hoomd.device#

Overview

CPU

Select the CPU to execute simulations.

Device

Base class device object.

GPU

Select a GPU or GPU(s) to execute simulations.

NoticeFile

A file-like object that writes to a Device notice stream.

auto_select

Automatically select the hardware device.

Details

Devices.

Use a Device class to choose which hardware device(s) should execute the simulation. Device also sets where to write log messages and how verbose the message output should be. Pass a Device object to hoomd.Simulation on instantiation to set the options for that simulation.

User scripts may instantiate multiple Device objects and use each with a different hoomd.Simulation object. One Device object may also be shared with many hoomd.Simulation objects.

Examples:

device = hoomd.device.CPU()
device = hoomd.device.GPU()

Tip

Reuse Device objects when possible. There is a non-negligible overhead to creating each Device, especially on the GPU.

See also

hoomd.Simulation

class hoomd.device.CPU(num_cpu_threads=None, communicator=None, message_filename=None, notice_level=2)#

Bases: Device

Select the CPU to execute simulations.

Parameters:
  • num_cpu_threads (int) – Number of TBB threads. Set to None to auto-select.

  • communicator (hoomd.communicator.Communicator) – MPI communicator object. When None, create a default communicator that uses all MPI ranks.

  • message_filename (str) – Filename to write messages to. When None, use sys.stdout and sys.stderr. Messages from multiple MPI ranks are collected into this file.

  • notice_level (int) – Minimum level of messages to print.

MPI

In MPI execution environments, create a CPU device on every rank.

Example:

cpu = hoomd.device.CPU()
class hoomd.device.Device(communicator, notice_level, message_filename)#

Bases: object

Base class device object.

Provides methods and properties common to CPU and GPU, including those that control where status messages are stored (message_filename), how many status messages HOOMD-blue prints (notice_level), and a method for user-provided status messages (notice).

Warning

Device cannot be used directly. Instantiate a CPU or GPU object.

TBB threads

When num_cpu_threads is None, TBB auto-selects the number of CPU threads to execute. If the environment variable OMP_NUM_THREADS is set, HOOMD uses that value. Set num_cpu_threads explicitly to override both.

Note

At this time very few features use TBB for threading. Most users should employ MPI for parallel simulations. See Features for more information.

property communicator#

The MPI Communicator [read only].

Type:

hoomd.communicator.Communicator

property device#

Descriptions of the active hardware device.

Type:

str

property devices#

Descriptions of the active hardware devices.

Deprecated since version 4.5.0: Use device.

Type:

list[str]

property message_filename#

Filename to write messages to.

By default, HOOMD prints all messages and errors to Python’s sys.stdout and sys.stderr (or the system’s stdout and stderr when running in an MPI environment).

Set message_filename to a filename to redirect these messages to that file.

Set message_filename to None to use the system’s stdout and stderr.

Examples:

device.message_filename = str(path / 'messages.log')
device.message_filename = None

Note

All MPI ranks within a given partition must open the same file. To ensure this, the given file name on rank 0 is broadcast to the other ranks. Different partitions may open separate files. For example:

communicator = hoomd.communicator.Communicator(
    ranks_per_partition=2)
filename = f'messages.{communicator.partition}'
device = hoomd.device.CPU(communicator=communicator,
                          message_filename=filename)
Type:

str

notice(message, level=1)#

Write a notice message.

Parameters:
  • message (str) – Message to write.

  • level (int) – Message notice level.

Write the given message string to the output defined by message_filename on MPI rank 0 when notice_level >= level.

Example:

device.notice('Message')

Hint

Use notice instead of print to write status messages so that your scripts work well in parallel MPI jobs: notice writes message only on rank 0. Combine it with a rank-specific message_filename to troubleshoot issues with specific partitions.
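The gating that notice applies can be sketched in plain Python. This is an illustrative model of the documented behavior (write only on rank 0 when notice_level >= level), not HOOMD's implementation; emit_notice, rank, and out are hypothetical names:

```python
def emit_notice(message, level, notice_level, rank, out):
    """Model of notice gating: record only on rank 0 when notice_level >= level."""
    if rank == 0 and notice_level >= level:
        out.append(message)

out = []
emit_notice('starting run', level=1, notice_level=2, rank=0, out=out)  # recorded
emit_notice('debug detail', level=4, notice_level=2, rank=0, out=out)  # below notice_level
emit_notice('starting run', level=1, notice_level=2, rank=1, out=out)  # not rank 0
print(out)  # ['starting run']
```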

property notice_level#

Minimum level of messages to print.

notice_level controls the verbosity of messages printed by HOOMD. The default level of 2 shows messages that the developers expect most users will want to see. Set the level lower to reduce verbosity or as high as 10 to get extremely verbose debugging messages.

Example:

device.notice_level = 4
Type:

int

property num_cpu_threads#

Number of TBB threads to use.

Type:

int

class hoomd.device.GPU(gpu_ids=None, num_cpu_threads=None, communicator=None, message_filename=None, notice_level=2, gpu_id=None)#

Bases: Device

Select a GPU or GPU(s) to execute simulations.

Parameters:
  • gpu_ids (list[int]) –

    List of GPU ids to use. Set to None to let the driver auto-select a GPU.

    Deprecated since version 4.5.0: Use gpu_id.

  • num_cpu_threads (int) – Number of TBB threads. Set to None to auto-select.

  • communicator (hoomd.communicator.Communicator) – MPI communicator object. When None, create a default communicator that uses all MPI ranks.

  • message_filename (str) – Filename to write messages to. When None, use sys.stdout and sys.stderr. Messages from multiple MPI ranks are collected into this file.

  • notice_level (int) – Minimum level of messages to print.

  • gpu_id (int) – GPU id to use. Set to None to let the driver auto-select a GPU.

Tip

Call GPU.get_available_devices to get a human-readable list of devices. gpu_id = 0 selects the first device in this list, 1 the second, and so on.

The ordering of the devices is determined by the GPU driver and runtime.

Device auto-selection

When gpu_id is None, HOOMD will ask the GPU driver to auto-select a GPU. In most cases, this will select device 0. When all devices are set to compute-exclusive mode, the driver will choose a free GPU.

MPI

In MPI execution environments, create a GPU device on every rank. When gpu_id is left None, HOOMD will attempt to detect the MPI local rank environment and choose an appropriate GPU with id = local_rank % num_capable_gpus. Set notice_level to 3 to see status messages from this process. Override this auto-selection by providing appropriate device ids on each rank.
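The selection rule above can be sketched in plain Python. Here local_rank and num_capable_gpus are illustrative stand-ins for values HOOMD reads from the MPI environment and the GPU runtime:

```python
def select_gpu_id(local_rank, num_capable_gpus):
    """Mirror the documented auto-selection: id = local_rank % num_capable_gpus."""
    if num_capable_gpus == 0:
        raise RuntimeError('no GPU-capable devices found')
    return local_rank % num_capable_gpus

# With 4 ranks on a node that has 2 GPUs, ranks 0 and 2 share GPU 0
# while ranks 1 and 3 share GPU 1.
assignments = [select_gpu_id(rank, 2) for rank in range(4)]
print(assignments)  # [0, 1, 0, 1]
```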

Multiple GPUs

Pass a list of GPU ids as gpu_ids to activate the single-process multi-GPU code path.

Deprecated since version 4.5.0: Use MPI.

Note

Not all features are optimized to use this code path, and it requires that all GPUs support concurrent managed memory access and have high bandwidth interconnects.

Example:

gpu = hoomd.device.GPU()
property compute_capability#

Compute capability of the device.

The tuple includes the major and minor versions of the CUDA compute capability: (major, minor).

Type:

tuple(int, int)

enable_profiling()#

Enable GPU profiling.

When using GPU profiling tools with HOOMD, select the option to disable profiling on start. Initialize and run a simulation long enough that all autotuners have completed, then enter enable_profiling() as a context manager and continue the simulation for a time. Profiling stops when the context manager exits.

Example:

simulation = hoomd.util.make_example_simulation(device=gpu)
with gpu.enable_profiling():
    simulation.run(1000)
static get_available_devices()#

Get the available GPU devices.

Returns:

Descriptions of the available devices (if any).

Return type:

list[str]

static get_unavailable_device_reasons()#

Get messages describing the reasons why devices are unavailable.

Returns:

Messages indicating why some devices are unavailable (if any).

Return type:

list[str]

property gpu_error_checking#

Whether to check for GPU error conditions after every call.

When False (the default), error messages from the GPU may not be noticed immediately. Set to True to increase the accuracy of the GPU error messages at the cost of significantly reduced performance.

Example:

gpu.gpu_error_checking = True
Type:

bool

static is_available()#

Test if the GPU device is available.

Returns:

True if this build of HOOMD supports GPUs, False if not.

Return type:

bool

class hoomd.device.NoticeFile(device, level=1)#

Bases: object

A file-like object that writes to a Device notice stream.

Parameters:
  • device (Device) – The Device object.

  • level (int) – Message notice level. Default value is 1.

Example:

notice_file = hoomd.device.NoticeFile(device=device)

Note

Use this in combination with Device.message_filename to combine notice messages with output from code that expects file-like objects (such as hoomd.write.Table).
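As a rough illustration of the file-like protocol involved (not HOOMD's actual implementation), a writer that buffers text and forwards complete lines to a notice callback might look like the following; LineNoticeWriter is a hypothetical stand-in for NoticeFile, and notice is any callable such as Device.notice:

```python
class LineNoticeWriter:
    """Minimal file-like object: buffers writes, emits one notice per line.

    Illustrative stand-in for hoomd.device.NoticeFile.
    """

    def __init__(self, notice, level=1):
        self._notice = notice
        self._level = level
        self._buffer = ''

    def write(self, message):
        self._buffer += message
        # Emit every complete line; keep any trailing partial line buffered.
        *lines, self._buffer = self._buffer.split('\n')
        for line in lines:
            self._notice(line, self._level)

    def flush(self):
        if self._buffer:
            self._notice(self._buffer, self._level)
            self._buffer = ''

    def writable(self):
        return True

# Collect notices in a list (instead of a real Device) for demonstration.
messages = []
writer = LineNoticeWriter(lambda message, level: messages.append(message))
writer.write('hello\nwor')
writer.write('ld\n')
print(messages)  # ['hello', 'world']
```

Because it implements write, flush, and writable, such an object can be handed to any code that expects a writable file, which is the role NoticeFile plays for writers like hoomd.write.Table.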

flush()#

Flush the output.

writable()#

Provide the file-like API method writable.

write(message)#

Write data to the associated device's notice stream.

Parameters:

message (str) – Message to write.

Example:

notice_file.write('Message\n')
hoomd.device.auto_select(communicator=None, message_filename=None, notice_level=2)#

Automatically select the hardware device.

Parameters:
  • communicator (hoomd.communicator.Communicator) – MPI communicator object. When None, create a default communicator that uses all MPI ranks.

  • message_filename (str) – Filename to write messages to. When None, use sys.stdout and sys.stderr. Messages from multiple MPI ranks are collected into this file.

  • notice_level (int) – Minimum level of messages to print.

Returns:

Instance of GPU if available, otherwise CPU.

Example:

device = hoomd.device.auto_select()