GPU¶
- class hoomd.device.GPU(communicator=None, message_filename=None, notice_level=2, gpu_id=None)¶
Bases:
Device
Select a GPU to execute simulations.
- Parameters:
communicator (hoomd.communicator.Communicator) – MPI communicator object. When None, create a default communicator that uses all MPI ranks.
message_filename (str) – Filename to write messages to. When None, use sys.stdout and sys.stderr. Messages from multiple MPI ranks are collected into this file.
notice_level (int) – Minimum level of messages to print.
gpu_id (int) – GPU id to use. Set to None to let the driver auto-select a GPU.
Tip
Call GPU.get_available_devices to get a human-readable list of devices. gpu_id = 0 will select the first device in this list, 1 will select the second, and so on. The ordering of the devices is determined by the GPU driver and runtime.
Device auto-selection
When gpu_id is None, HOOMD will ask the GPU driver to auto-select a GPU. In most cases, this will select device 0. When all devices are set to a compute-exclusive mode, the driver will choose a free GPU.
MPI
In MPI execution environments, create a GPU device on every rank. When gpu_id is left None, HOOMD will attempt to detect the MPI local rank environment and choose an appropriate GPU with id = local_rank % num_capable_gpus. Set notice_level to 3 to see status messages from this process. Override this auto-selection by providing appropriate device ids on each rank.
Note
Not all features are optimized to use this code path, and it requires that all GPUs support concurrent managed memory access and have high bandwidth interconnects.
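The round-robin mapping described above can be sketched in plain Python. This is an illustration of the id = local_rank % num_capable_gpus formula only, not HOOMD code; the function name is an assumption for the example:

```python
# Illustrative sketch of HOOMD's MPI GPU auto-selection formula:
# each MPI local rank receives the GPU with id = local_rank % num_capable_gpus.
def auto_select_gpu(local_rank, num_capable_gpus):
    """Return the GPU id a rank would receive under round-robin mapping."""
    return local_rank % num_capable_gpus

# Four ranks sharing two GPUs: ranks 0 and 2 map to GPU 0, ranks 1 and 3 to GPU 1.
assignments = [auto_select_gpu(rank, 2) for rank in range(4)]
print(assignments)  # [0, 1, 0, 1]
```

With more ranks than GPUs, ranks wrap around and share devices, which is why the note above cautions that this path relies on concurrent managed memory access and fast interconnects.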
Example:
gpu = hoomd.device.GPU()
Members inherited from Device:
- communicator¶
The MPI Communicator.
- property notice_level¶
Minimum level of messages to print.
- property message_filename¶
Filename to write messages to.
- property device¶
Descriptions of the active hardware device.
- notice()¶
Write a notice message.
Members defined in GPU:
- property compute_capability¶
Compute capability of the device.
The tuple includes the major and minor versions of the CUDA compute capability: (major, minor).
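Because the value is a (major, minor) tuple, Python's lexicographic tuple comparison can gate code on a minimum compute capability. A minimal sketch; the tuple value here is an assumed example, not read from a real device:

```python
# compute_capability is a (major, minor) tuple, e.g. (7, 5) for a Turing GPU.
# Tuple comparison is lexicographic: major is compared first, then minor.
compute_capability = (7, 5)  # assumed example value for illustration

if compute_capability >= (7, 0):
    print("Device meets the minimum compute capability (7, 0)")
```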
- enable_profiling()¶
Enable GPU profiling.
When using GPU profiling tools on HOOMD, select the option to disable profiling on start. Initialize and run a simulation long enough that all autotuners have completed, then open enable_profiling() as a context manager and continue the simulation for a time. Profiling stops when the context manager closes.
Example:
simulation = hoomd.util.make_example_simulation(device=gpu)
with gpu.enable_profiling():
    simulation.run(1000)
- static get_available_devices()¶
Get the available GPU devices.
- static get_unavailable_device_reasons()¶
Get messages describing the reasons why devices are unavailable.
- property gpu_error_checking¶
Whether to check for GPU error conditions after every call.
When False (the default), error messages from the GPU may not be noticed immediately. Set to True to increase the accuracy of the GPU error messages at the cost of significantly reduced performance.
Example:
gpu.gpu_error_checking = True
- Type:
bool