hoomd.tune¶
Overview

CustomTuner - User-defined tuner.
GradientDescent - Solves equations of \(\min_x f(x)\) using gradient descent.
GridOptimizer - Optimize by iteratively narrowing the range that contains the extremum.
LoadBalancer - Adjusts the boundaries of the domain decomposition.
ManualTuneDefinition - Class for defining y = f(x) relationships for tuning x for a set y target.
Optimizer - Abstract base class for optimizing \(f(x)\).
ParticleSorter - Order particles in memory to improve performance.
RootSolver - Abstract base class for finding x such that \(f(x) = 0\).
ScaleSolver - Solves equations of f(x) = y using a ratio of y with the target.
SecantSolver - Solves equations of f(x) = y using the secant method.
SolverStep - Abstract base class for the various solver types.
Details
Tuners.
Tuner
operations make changes to the parameters of other operations (or the
simulation state) that adjust the performance of the simulation without changing
the correctness of the outcome. Every new hoomd.Simulation
object includes a
ParticleSorter
in its operations by default. ParticleSorter
rearranges the
order of particles in memory to improve cache-coherency.
This package also defines the CustomTuner
class and a number of helper
classes. Use these to implement custom tuner operations in Python code.
Solver
Most tuners explicitly involve solving some sort of mathematical problem (e.g. root-finding or optimization). HOOMD provides infrastructure for solving these problems as they appear in our provided hoomd.operation.Tuner subclasses. All tuners that involve iteratively solving a problem compose a SolverStep subclass instance. The SolverStep class implements the boilerplate to do iterative solving given a simulation where a single call to the “function” being solved can mean running potentially thousands of timesteps.
Every solver, regardless of type, has a solve method which accepts a list of tunable quantities. The method returns a Boolean indicating whether all quantities are considered tuned. Unless otherwise documented, tuners indicate they are tuned when two successive calls to SolverStep.solve return True.
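For example, a minimal sketch of manual iteration might look like the following, where solver, tunables, and simulation are assumed to already exist:

# Run steps between solver calls so each y measurement reflects the latest x;
# stop once two successive calls to solve return True.
tuned_streak = 0
while tuned_streak < 2:
    simulation.run(1000)
    tuned_streak = tuned_streak + 1 if solver.solve(tunables) else 0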
Custom solvers can be created by inheriting from the base class of one of the problem types (RootSolver and Optimizer) or from SolverStep if solving a different problem type. All that is required is to implement the SolverStep.solve_one method; the solver can then be used by any HOOMD tuner that expects a solver.
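For illustration, a sketch of a custom root solver is shown below. The class name BisectionSolver and its bisection strategy are not part of HOOMD; the sketch assumes a monotonically increasing f(x) and a tunable with a finite domain:

import hoomd


class BisectionSolver(hoomd.tune.RootSolver):
    """Sketch: bisection on a single tunable (hypothetical, not part of HOOMD)."""

    def __init__(self, tol=1e-5):
        super().__init__()
        self.tol = tol
        self._bracket = None  # current (low, high) bracket around the solution

    def reset(self):
        self._bracket = None

    def solve_one(self, tunable):
        y = tunable.y
        if y is None:  # measurement unavailable this call
            return False
        if abs(y - tunable.target) <= self.tol:
            return True
        if self._bracket is None:
            self._bracket = list(tunable.domain)  # assumes a finite domain
        # Discard the half of the bracket that cannot contain the solution.
        if y < tunable.target:
            self._bracket[0] = tunable.x
        else:
            self._bracket[1] = tunable.x
        tunable.x = 0.5 * (self._bracket[0] + self._bracket[1])
        return False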
Custom Tuners
Using SolverStep subclasses together with ManualTuneDefinition, most tuning problems can be solved with a CustomTuner. To create a tuner, define a ManualTuneDefinition interface for each tunable quantity and plug the tunables into a solver inside a CustomTuner, as in the sketch below.
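The following sketch assumes an existing simulation with a neighbor list nlist whose buffer attribute is tuned to maximize Simulation.tps; the class name TPSBufferTuner and the chosen domain, trigger period, and alpha are illustrative assumptions, not HOOMD API:

import hoomd


class TPSBufferTuner(hoomd.custom.Action):
    """Sketch: tune nlist.buffer to maximize simulation.tps (hypothetical)."""

    def __init__(self, simulation, nlist, solver):
        self.solver = solver
        self.tunable = hoomd.tune.ManualTuneDefinition(
            get_y=lambda: simulation.tps,    # measured quantity to maximize
            target=0,                        # unused by optimizers
            get_x=lambda: nlist.buffer,      # quantity being adjusted
            set_x=lambda value: setattr(nlist, 'buffer', value),
            domain=(0.0, 1.0))               # assumed reasonable buffer range

    def act(self, timestep):
        # One solver iteration per trigger; the steps run between triggers
        # provide a fresh TPS measurement for the updated buffer.
        self.solver.solve([self.tunable])


# Wiring the action into a CustomTuner (simulation and nlist assumed to exist).
tuner = hoomd.tune.CustomTuner(
    trigger=hoomd.trigger.Periodic(5000),
    action=TPSBufferTuner(simulation, nlist,
                          hoomd.tune.GradientDescent(alpha=0.05)))
simulation.operations.tuners.append(tuner)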
- class hoomd.tune.ManualTuneDefinition(get_y, target, get_x, set_x, domain=None)¶
Class for defining y = f(x) relationships for tuning x for a set y target.
This class is made to be used with hoomd.tune.SolverStep subclasses. Here y represents a dependent variable of x. In general, x and y should be of type float, but specific hoomd.tune.SolverStep subclasses may accept other types.
A special case for the return type of y is None. If the value is currently inaccessible or would be invalid, a ManualTuneDefinition object can return a y of None to indicate this. hoomd.tune.SolverStep objects handle this automatically. Since None is checked for internally in hoomd.tune.SolverStep objects, a ManualTuneDefinition object's y property should be consistent when called multiple times within a timestep.
When setting x, the value is clamped within the given domain via
\[\begin{split}x &= x_{max}, \text{ if } x_n > x_{max},\\ x &= x_{min}, \text{ if } x_n < x_{min},\\ x &= x_n, \text{ otherwise}\end{split}\]
- Parameters:
get_y (callable) – A callable that gets the current value for y.
target (any) – The target y value to approach.
get_x (callable) – A callable that gets the current value for x.
set_x (callable) – A callable that sets the current value for x.
domain (tuple[any, any], optional) – A tuple pair of the minimum and maximum accepted values of x, defaults to None. When the domain is None, any value of x is accepted. Either the minimum or maximum can be set to None as well, which means there is no maximum or minimum. The domain is used to clamp values within the specified domain when setting x.
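For example, a minimal construction sketch, where obj is a hypothetical object with a measured rate attribute and an adjustable period attribute:

import hoomd

tunable = hoomd.tune.ManualTuneDefinition(
    get_y=lambda: obj.rate,      # dependent quantity y
    target=100.0,                # desired value of y
    get_x=lambda: obj.period,    # independent quantity x
    set_x=lambda value: setattr(obj, 'period', value),
    domain=(1, None))            # x must stay >= 1; no upper bound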
Note
Placing domain restrictions on x can make the target y value impossible to reach. In that case, the hoomd.tune.SolverStep object given this tunable will never finish solving, regardless of whether all other ManualTuneDefinition objects have converged.
- __eq__(other)¶
Test for equality.
- __hash__()¶
Compute a hash of the tune definition.
- clamp_into_domain(value)¶
Return the closest value within the domain.
- Parameters:
value (any) – A value of the same type as x.
- Returns:
The value clamped into the domain of x. Clamping here means returning the value itself, or the minimum or maximum of the domain when the value lies outside it.
- property domain¶
A tuple pair of the minimum and maximum accepted values of x.
When the domain is None, any value of x is accepted. Either the minimum or maximum can be set to None as well, which means there is no maximum or minimum. The domain is used to clamp values within the specified domain when setting x.
- Type:
tuple[any, any]
- in_domain(value)¶
Check whether a value is in the domain.
- Parameters:
value (any) – A value that can be compared to the minimum and maximum of the domain.
- Returns:
Whether the value is in the domain of x.
- Return type:
bool
- property max_x¶
Maximum allowed x value.
- property min_x¶
Minimum allowed x value.
- property target¶
The targeted y value; can be set.
- property x¶
The independent variable.
Can be set. When set, the value is clamped within the provided domain. See
clamp_into_domain
for further explanation.
- property y¶
The dependent variable; it cannot be set.
- class hoomd.tune.CustomTuner(trigger, action)¶
Bases:
CustomOperation, Tuner
User-defined tuner.
- Parameters:
action (hoomd.custom.Action) – The action to call.
trigger (hoomd.trigger.trigger_like) – Select the timesteps to call the action.
CustomTuner is a hoomd.operation.Tuner that wraps a user-defined hoomd.custom.Action object so the action can be added to a hoomd.Operations instance for use with hoomd.Simulation objects.
Tuners modify the parameters of other operations to improve performance. Tuners may read the system state, but not modify it.
Example:
custom_tuner = hoomd.tune.CustomTuner(
    action=custom_action,
    trigger=hoomd.trigger.Periodic(1000))
simulation.operations.tuners.append(custom_tuner)
See also
The base class hoomd.custom.CustomOperation.
- class hoomd.tune.GradientDescent(alpha: float = 0.1, kappa: ndarray | None = None, tol: float = 1e-05, maximize: bool = True, max_delta: float | None = None)¶
Bases:
Optimizer
Solves equations of \(\min_x f(x)\) using gradient descent.
Derivatives are computed using the forward difference.
The solver updates x each step via
\[x_n = x_{n-1} - \alpha \left[ (1 - \kappa) \nabla f + \kappa \Delta_{n-1} \right]\]
where \(\Delta_{n-1}\) is the last step size. This gives the optimizer a sense of momentum, which for noisy (stochastic) optimization can lead to smoother optimization. Because two values are needed to compute a derivative, the first time this is called it makes a slight jump higher or lower to start the method.
The solver will stop updating when a maximum is detected (i.e. the step size is smaller than tol).
- Parameters:
alpha (hoomd.variant.variant_like, optional) – Either a number between 0 and 1 used to dampen the rate of change in x, or a variant that varies not by timestep but by the number of times solve has been called (i.e. the number of steps taken) (defaults to 0.1). alpha scales the corrections to x each iteration. Larger values of alpha lead to larger changes, while an alpha of 0 leads to no change in x at all.
kappa (numpy.ndarray, optional) – Real number array that determines how much of the previous steps' directions to use (defaults to None, which does no averaging over past step directions). The array values are the weights given to the \(N\) last steps when they are combined with the current step. The current step is weighted by \(1 - \sum_{i=1}^{N} \kappa_i\).
tol (float, optional) – The absolute tolerance for convergence of y (defaults to 1e-5).
maximize (bool, optional) – Whether or not to maximize the function (defaults to True).
max_delta (float, optional) – The maximum step size to allow (defaults to None, which allows a step size of any length).
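A construction sketch; the values here are illustrative, not recommendations:

import numpy
import hoomd

# Gradient descent with momentum over the two previous step directions;
# the current step then carries the remaining weight of 0.6.
solver = hoomd.tune.GradientDescent(
    alpha=0.1,
    kappa=numpy.array([0.3, 0.1]),
    tol=1e-3,
    maximize=True,
    max_delta=0.5)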
- kappa¶
Real number array that determines how much of the previous steps' directions to use. The array values are the weights given to the \(N\) last steps when they are combined with the current step. The current step is weighted by \(1 - \sum_{i=1}^{N} \kappa_i\).
- Type:
numpy.ndarray
- __eq__(other)¶
Test for equality.
- property alpha¶
Number between 0 and 1 that dampens the rate of change in x.
Larger values of alpha lead to larger changes, while an alpha of 0 leads to no change in x at all. The property returns the current alpha given the current number of steps.
The property can be set as in the constructor.
- Type:
float
- reset()¶
Reset all solving internals.
- solve(tunables)¶
Iterates towards a solution for a list of tunables.
If a y for one of the tunables is None then we skip that tunable. Skipping implies that the quantity is not tuned and solve will return False.
- Parameters:
tunables (list[hoomd.tune.ManualTuneDefinition]) – A list of tunable objects that represent a relationship f(x) = y.
- Returns:
Whether or not all tunables were considered tuned by the object.
- Return type:
bool
- solve_one(tunable)¶
Solve one step.
- class hoomd.tune.GridOptimizer(n_bins: int = 5, n_rounds: int = 1, maximize: bool = True)¶
Bases:
Optimizer
Optimize by iteratively narrowing the range that contains the extremum.
The algorithm is as follows. Given a domain \(d = [a, b]\), \(d\) is broken up into n_bins equal bins. For the next n_bins calls, the optimizer tests the function value at each bin center. The following call then does one of two things. If the number of rounds has reached n_rounds, the optimization is done and the center of the best bin is the solution. Otherwise, another round is performed in which the best bin's extent becomes the new domain.
Warning
Changing a tunable's domain while a GridOptimizer is in use results in incorrect behavior.
- Parameters:
n_bins (int, optional) – The number of bins to divide the domain into each round (defaults to 5).
n_rounds (int, optional) – The number of rounds of successive narrowing to perform (defaults to 1).
maximize (bool, optional) – Whether to maximize or minimize the function (defaults to True).
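A usage sketch, assuming tunable is a ManualTuneDefinition with a finite domain:

import hoomd

# Search the tunable's domain in 8 bins, then refine within the best bin once more.
solver = hoomd.tune.GridOptimizer(n_bins=8, n_rounds=2, maximize=True)
tuned = solver.solve([tunable])  # call repeatedly, running simulation steps in between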
- __eq__(other)¶
Test for equality.
- reset()¶
Reset all solving internals.
- solve_one(tunable)¶
Perform one step of optimization protocol.
- class hoomd.tune.LoadBalancer(trigger, x=True, y=True, z=True, tolerance=1.02, max_iterations=1)¶
Bases:
Tuner
Adjusts the boundaries of the domain decomposition.
- Parameters:
trigger (hoomd.trigger.trigger_like) – Select the timesteps on which to perform load balancing.
x (bool) – Balance the domain boundaries along the x dimension when True (defaults to True).
y (bool) – Balance the domain boundaries along the y dimension when True (defaults to True).
z (bool) – Balance the domain boundaries along the z dimension when True (defaults to True).
tolerance (float) – Load imbalance tolerance.
max_iterations (int) – Maximum number of iterations to attempt in a single step.
LoadBalancer adjusts the boundaries of the MPI domains to distribute the particle load nearly evenly between them. The load imbalance is defined as the number of particles owned by a rank divided by the average number of particles per rank if the particles had a uniform distribution:
\[I = \frac{N_i}{N / P}\]
where \(N_i\) is the number of particles on rank \(i\), \(N\) is the total number of particles, and \(P\) is the number of ranks.
In order to adjust the load imbalance, LoadBalancer scales by the inverse of the imbalance factor. To reduce oscillations and communication overhead, it does not move a domain more than 5% of its current size in a single rebalancing step, and not more than half the distance to its neighbors.
Simulations with interfaces (so that there is a particle density gradient) or clustering should benefit from load balancing. The potential speedup is \(I-1.0\), so that if the largest imbalance is 1.4, then the user can expect a 40% speedup in the simulation. This is of course an estimate that assumes that all algorithms are linear in \(N\), all GPUs are fully occupied, and the simulation is limited by the speed of the slowest processor. If you have a simulation where, for example, some particles have significantly more pair force neighbors than others, this estimate of the load imbalance may not produce the optimal results.
A load balancing adjustment is only performed when the maximum load imbalance exceeds the tolerance. The ideal load balance is 1.0, so setting tolerance less than 1.0 will force an adjustment every time the tuner triggers. The load balancer can attempt multiple iterations of balancing each time it runs, up to max_iterations attempts. The optimal values of the trigger period and max_iterations will depend on your simulation.
Load balancing can be performed independently and sequentially for each dimension of the simulation box. A small performance increase may be obtained by disabling load balancing along dimensions that are known to be homogeneous. For example, if there is a planar vapor-liquid interface normal to the \(z\) axis, then it may be advantageous to disable balancing along \(x\) and \(y\).
In systems that are well behaved, there is minimal overhead of balancing with a short trigger period. However, if the system is not capable of being balanced (for example, due to the density distribution or minimum domain size), a short trigger period and a high max_iterations may lead to a large performance loss. In such systems, it is currently best to either balance infrequently or to balance once in a short test run and then set the decomposition statically in a separate initialization.
Balancing is ignored if there is no domain decomposition available (MPI is not built or is running on a single rank).
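Example (a sketch; the trigger period and disabled z balancing are illustrative choices for a system assumed homogeneous along z):

import hoomd

load_balancer = hoomd.tune.LoadBalancer(
    trigger=hoomd.trigger.Periodic(10000),
    x=True, y=True, z=False,
    tolerance=1.02,
    max_iterations=1)
simulation.operations.tuners.append(load_balancer)  # simulation assumed to exist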
- trigger¶
Select the timesteps on which to perform load balancing.
- Type:
hoomd.trigger.Trigger
- class hoomd.tune.Optimizer¶
Bases:
SolverStep
Abstract base class for optimizing \(f(x)\).
- class hoomd.tune.ParticleSorter(trigger=200, grid=None)¶
Bases:
Tuner
Order particles in memory to improve performance.
- Parameters:
trigger (hoomd.trigger.trigger_like) – Select the timesteps on which to sort. Defaults to a hoomd.trigger.Periodic(200) trigger.
grid (int) – Resolution of the grid to use when sorting. The default value of None sets grid=4096 in 2D simulations and grid=256 in 3D simulations.
ParticleSorter improves simulation performance by sorting the particles in memory along a space-filling curve. This takes particles that are close in space and places them close in memory, leading to a higher rate of cache hits when computing pair potentials.
Note
New hoomd.Operations instances include a ParticleSorter constructed with default parameters.
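A construction sketch for a non-default sorter (the period and grid are illustrative):

import hoomd

# Sort every 500 steps on a finer grid than the 3D default of 256.
sorter = hoomd.tune.ParticleSorter(trigger=hoomd.trigger.Periodic(500), grid=512)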
- trigger¶
Select the timesteps on which to sort.
- Type:
hoomd.trigger.Trigger
- class hoomd.tune.RootSolver¶
Bases:
SolverStep
Abstract base class for finding x such that \(f(x) = 0\).
For solving for a non-zero value, \(f(x) - y_t = 0\) is solved.
- class hoomd.tune.ScaleSolver(max_scale=2.0, gamma=2.0, correlation='positive', tol=1e-05)¶
Bases:
RootSolver
Solves equations of f(x) = y using a ratio of y with the target.
Each time this solver is called, it updates x according to the following equation if the correlation is positive:
\[x_n = \min{\left(\frac{\gamma + t}{y + \gamma}, s_{max}\right)} \cdot x\]
and
\[x_n = \min{\left(\frac{y + \gamma}{\gamma + t}, s_{max}\right)} \cdot x\]
if the correlation is negative, where \(t\) is the target and \(x_n\) is the new x.
The solver will stop updating when \(\lvert y - t \rvert \le tol\).
- Parameters:
max_scale (float, optional) – The maximum amount to scale the current x value by, defaults to 2.0.
gamma (float, optional) – Nonnegative real number used to dampen or increase the rate of change in x. gamma is added to the numerator and denominator of the y / target ratio. Larger values of gamma lead to smaller changes, while a gamma of 0 leads to scaling x by exactly the y / target ratio.
correlation (str, optional) – Defines whether the relationship between x and y is a positive or negative correlation, defaults to 'positive'. This determines which direction to scale x in for a given y.
tol (float, optional) – The absolute tolerance for convergence of y, defaults to 1e-5.
Note
This solver is only usable when quantities are strictly positive.
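A usage sketch, assuming tunable wraps strictly positive, positively correlated x and y, and that simulation exists:

import hoomd

solver = hoomd.tune.ScaleSolver(max_scale=2.0, gamma=2.0,
                                correlation='positive', tol=1e-4)
while not solver.solve([tunable]):
    simulation.run(1000)  # let y respond to the rescaled x before re-solving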
- __eq__(other)¶
Test for equality.
- reset()¶
Reset all solving internals.
- solve_one(tunable)¶
Solve one step.
- class hoomd.tune.SecantSolver(gamma=0.9, tol=1e-05)¶
Bases:
RootSolver
Solves equations of f(x) = y using the secant method.
The solver updates x each step via
\[x_n = x - \gamma \cdot (y - t) \cdot \frac{x - x_{o}}{y - y_{o}}\]
where the subscript \(o\) represents the old values, \(n\) the new, and \(t\) the target. Because a previous value is needed, the first time this is called it makes a slight jump higher or lower to start the method.
The solver will stop updating when \(\lvert y - t \rvert \le tol\).
- Parameters:
gamma (float, optional) – Real number between 0 and 1 used to dampen the rate of change in x. gamma scales the corrections to x each iteration. Larger values of gamma lead to larger changes, while a gamma of 0 leads to no change in x at all.
tol (float, optional) – The absolute tolerance for convergence of y, defaults to 1e-5.
Note
Tempering the solver with a gamma value smaller than 1 is crucial for numerical stability. If instability is found, then lowering gamma accordingly should help.
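A construction sketch (tunable assumed to be a ManualTuneDefinition; the damping value is illustrative):

import hoomd

solver = hoomd.tune.SecantSolver(gamma=0.8, tol=1e-5)
converged = solver.solve_one(tunable)  # one damped secant step toward the target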
- __eq__(other)¶
Test for equality.
- reset()¶
Reset all solving internals.
- solve_one(tunable)¶
Solve one step.
- class hoomd.tune.SolverStep¶
Bases:
object
Abstract base class for the various solver types.
Requires a single method solve_one that steps forward one iteration in solving the given variable relationship. Users can use subclasses of this with hoomd.tune.ManualTuneDefinition to tune attributes with a functional relation.
Note
A SolverStep object requires manual iteration to converge. This is to support the use case of measuring quantities that require running the simulation for some amount of time after one iteration before remeasuring the dependent variable (i.e. the y). SolverStep objects can be used in hoomd.custom.Action subclasses for user-defined tuners and updaters.
- abstract reset()¶
Reset all solving internals.
This should put the solver in its initial state as if it has not seen any tunables or done any solving yet.
- solve(tunables)¶
Iterates towards a solution for a list of tunables.
If a y for one of the tunables is None then we skip that tunable. Skipping implies that the quantity is not tuned and solve will return False.
- Parameters:
tunables (list[hoomd.tune.ManualTuneDefinition]) – A list of tunable objects that represent a relationship f(x) = y.
- Returns:
Whether or not all tunables were considered tuned by the object.
- Return type:
bool
- abstract solve_one(tunable)¶
Takes in a tunable object and attempts to solve x for a specified y.
- Parameters:
tunable (hoomd.tune.ManualTuneDefinition) – A tunable object that represents a relationship of f(x) = y.
- Returns:
Whether or not the tunable converged to the target.
- Return type:
bool