torchani.nn#
Classes that represent atomic (and groups of element-specific) neural networks.
The most important classes in this module are AtomicNetwork, which represents a callable that computes scalars from local atomic features, and ANINetworks and Ensemble, which collect groups of element-specific neural networks and perform different reduction operations over them.
The module also contains useful factory methods to instantiate neural networks for different elements.
Inference-optimized versions of Ensemble and AtomicNetwork, recommended for calculations on single molecules, molecular dynamics, and geometry optimizations, are also provided.
Classes
AtomicOneHot | Embed a sequence of atoms into one-hot vectors
AtomicEmbedding | Embed a sequence of atoms into a continuous vector space
AtomicContainer | Base class for ANI modules that contain Atomic Neural Networks
ANINetworks | Predict molecular or atomic scalars from a set of element-specific networks
ANISharedNetworks | Predict molecular or atomic scalars from (possibly partially shared) networks
SingleNN | Predict molecular or atomic scalars from fully shared networks
Ensemble | Calculate output scalars by averaging over many containers of networks
SpeciesConverter | Convert atomic numbers into internal ANI element indices
BmmLinear | Batched Linear layer that fuses multiple Linear layers with the same architecture
BmmEnsemble | The inference-optimized analogue of a torchani.nn.Ensemble
BmmAtomicNetwork | The inference-optimized analogue of an AtomicNetwork
TightCELU | CELU activation function with alpha=0.1
Sequential | Create a pipeline of modules, like torch.nn.Sequential
- class torchani.nn.AtomicOneHot(symbols)[source]#
Embed a sequence of atoms into one-hot vectors
Padding atoms are set to zeros. As an example:

    symbols = ("H", "C", "N")
    one_hot = AtomicOneHot(symbols)
    encoded = one_hot(torch.tensor([1, 0, 2, -1]))
    # encoded == torch.tensor([[0, 1, 0], [1, 0, 0], [0, 0, 1], [0, 0, 0]])
- forward(elem_idxs)[source]#
Encode a batch of element indices into one-hot vectors; padding atoms (index -1) are encoded as zeros.
Note
As with any torch.nn.Module, call the module instance rather than this method directly: the instance call runs the registered hooks, while calling forward() silently ignores them.
- class torchani.nn.AtomicEmbedding(symbols, dim=10)[source]#
Embed a sequence of atoms into a continuous vector space
This module is a thin wrapper over torch.nn.Embedding. Padding atoms are set to zero. As an example:

    symbols = ("H", "C", "N")
    embed = AtomicEmbedding(symbols, 2)
    encoded = embed(torch.tensor([1, 0, 2, -1]))
    # `encoded` depends on the random init, but it could be for instance:
    # torch.tensor([[1.2, .1], [-.5, .8], [.3, -.4], [0, 0]])
- forward(elem_idxs)[source]#
Embed a batch of element indices into the continuous vector space; padding atoms are embedded as zeros.
- class torchani.nn.AtomicContainer(*args, **kwargs)[source]#
Base class for ANI modules that contain Atomic Neural Networks
- forward(elem_idxs, aevs=None, atomic=False, ensemble_values=False)[source]#
Compute molecular or atomic scalars from element indices and local atomic features. Subclasses must override this method.
- class torchani.nn.AtomicNetwork(layer_dims, activation='gelu', bias=False)[source]#
A callable that computes atomic scalars from local atomic features (a simple multi-layer perceptron).
- forward(features)[source]#
Compute atomic scalars from a batch of local atomic feature vectors.
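A minimal standalone sketch (the layer widths, and the assumption that layer_dims runs from the input feature size down to a single output scalar, are illustrative, not prescribed by the API):

    import torch
    import torchani

    # Assumed: 384 input features, one scalar output per atom
    net = torchani.nn.AtomicNetwork(layer_dims=(384, 160, 128, 1))
    features = torch.rand(5, 384)  # 5 atoms, 384 local features each
    scalars = net(features)        # one scalar per atom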
- class torchani.nn.ANINetworks(modules, alias=False)[source]#
Predict molecular or atomic scalars from a set of element-specific networks
Iterate over the atomic networks and calculate the corresponding atomic scalars. By default the outputs are summed over atoms to obtain molecular quantities; this can be disabled with atomic=True. If you want to allow different elements to map to the same network, pass alias=True; otherwise elements are required to map to different, element-specific networks.
- Parameters:
modules (Dict[str, AtomicNetwork]) – symbol-network mapping for each supported element. Different elements will share networks if the same ref is used for different keys
alias (bool) – Allow the class to map different elements to the same atomic network.
Warning
The input element indices must be 0, 1, 2, 3, …, not atomic numbers. You can convert from atomic numbers with torchani.nn.SpeciesConverter.
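A minimal usage sketch (the layer widths and the 384-wide AEVs are arbitrary assumptions; in a real model the feature width is fixed by the AEV computer):

    import torch
    import torchani

    networks = torchani.nn.ANINetworks(
        {
            "H": torchani.nn.AtomicNetwork(layer_dims=(384, 160, 1)),
            "C": torchani.nn.AtomicNetwork(layer_dims=(384, 144, 1)),
        }
    )
    elem_idxs = torch.tensor([[1, 0, 0]])  # (molecules, atoms), e.g. a CH2 fragment
    aevs = torch.rand(1, 3, 384)           # (molecules, atoms, num-aev-features)
    molecular = networks(elem_idxs, aevs)              # shape (molecules,)
    per_atom = networks(elem_idxs, aevs, atomic=True)  # shape (molecules, atoms)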
- forward(elem_idxs, aevs=None, atomic=False, ensemble_values=False)[source]#
Calculate atomic scalars from the input features
- Parameters:
elem_idxs (Tensor) – An int torch.Tensor that stores the element indices of a batch of molecules (for example after conversion with torchani.nn.SpeciesConverter). Shape is (molecules, atoms).
aevs (Tensor | None) – A float tensor with local atomic features (AEVs). Shape is (molecules, atoms, num-aev-features).
atomic (bool) – Whether to skip the sum reduction over the atoms dim and return per-atom scalars. If True, the returned tensor has shape (molecules, atoms); otherwise it has shape (molecules,).
- Returns:
Tensor with the predicted scalars.
- Return type:
Tensor
- class torchani.nn.ANISharedNetworks(shared, modules, alias=False)[source]#
Predict molecular or atomic scalars from (possibly partially shared) networks
This model is similar to torchani.nn.ANINetworks, with the caveat that it allows for partially sharing layers.
- Parameters:
shared (AtomicNetwork) – Shared layers for all elements
modules (Dict[str, AtomicNetwork]) – symbol-network mapping for each supported element. Different elements will share networks if the same ref is used for different keys
alias (bool) – Allow the class to map different elements to the same atomic network.
- forward(elem_idxs, aevs=None, atomic=False, ensemble_values=False)[source]#
Calculate atomic scalars from the input features
- Parameters:
elem_idxs (Tensor) – An int torch.Tensor that stores the element indices of a batch of molecules (for example after conversion with torchani.nn.SpeciesConverter). Shape is (molecules, atoms).
aevs (Tensor | None) – A float tensor with local atomic features (AEVs). Shape is (molecules, atoms, num-aev-features).
atomic (bool) – Whether to skip the sum reduction over the atoms dim and return per-atom scalars. If True, the returned tensor has shape (molecules, atoms); otherwise it has shape (molecules,).
- Returns:
Tensor with the predicted scalars.
- Return type:
Tensor
- class torchani.nn.SingleNN(symbols, network, embed_kind='continuous', embed_dims=None)[source]#
Predict molecular or atomic scalars from fully shared networks
- Parameters:
network (AtomicNetwork) – Atomic network to wrap, output dimension should be equal to the number of supported elements
- forward(elem_idxs, aevs=None, atomic=False, ensemble_values=False)[source]#
Calculate atomic scalars from the input features
- Parameters:
elem_idxs (Tensor) – An int torch.Tensor that stores the element indices of a batch of molecules (for example after conversion with torchani.nn.SpeciesConverter). Shape is (molecules, atoms).
aevs (Tensor | None) – A float tensor with local atomic features (AEVs). Shape is (molecules, atoms, num-aev-features).
atomic (bool) – Whether to skip the sum reduction over the atoms dim and return per-atom scalars. If True, the returned tensor has shape (molecules, atoms); otherwise it has shape (molecules,).
- Returns:
Tensor with the predicted scalars.
- Return type:
Tensor
- class torchani.nn.Ensemble(modules, repeats=False)[source]#
Calculate output scalars by averaging over many containers of networks
- Parameters:
modules (Iterable[AtomicContainer]) – Set of network containers to average over.
repeats (bool) – Whether to allow repeated networks.
- forward(elem_idxs, aevs=None, atomic=False, ensemble_values=False)[source]#
Calculate output scalars by averaging the predictions of all member containers.
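A sketch of member construction and averaging (reusing the illustrative layer widths from the ANINetworks example above; every member must support the same elements):

    import torch
    import torchani

    def make_member():
        # Hypothetical factory: identical architecture, independent weights
        return torchani.nn.ANINetworks(
            {
                "H": torchani.nn.AtomicNetwork(layer_dims=(384, 160, 1)),
                "C": torchani.nn.AtomicNetwork(layer_dims=(384, 144, 1)),
            }
        )

    ensemble = torchani.nn.Ensemble([make_member() for _ in range(8)])
    elem_idxs = torch.tensor([[1, 0, 0]])
    aevs = torch.rand(1, 3, 384)
    energies = ensemble(elem_idxs, aevs)  # member predictions, averaged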
- class torchani.nn.SpeciesConverter(symbols)[source]#
Convert atomic numbers into internal ANI element indices
Conversion is done according to the symbols sequence passed as init argument. If the class is initialized with ['H', 'C', 'N', 'O'], it will convert tensor([1, 6, 7, 1, 8]) into tensor([0, 1, 2, 0, 3]).
- Parameters:
symbols (Sequence[str]) – A tuple or list of strings that are valid chemical symbols (case sensitive).
- forward(atomic_nums, nop=False, _dont_use=False)[source]#
Perform the conversion to element indices
- Parameters:
atomic_nums (Tensor) – An int tensor that stores the atomic numbers of a batch of molecules. Shape is (molecules, atoms).
- Returns:
An int torch.Tensor with the corresponding internal element indices. Shape is (molecules, atoms).
- Return type:
Tensor
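For example, mirroring the docstring above (padding behavior not shown):

    import torch
    import torchani

    converter = torchani.nn.SpeciesConverter(("H", "C", "N", "O"))
    atomic_nums = torch.tensor([[8, 1, 1]])  # water: O, H, H
    elem_idxs = converter(atomic_nums)       # tensor([[3, 0, 0]])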
- class torchani.nn.MNPNetworks(module, use_mnp=False)[source]#
- forward(elem_idxs, aevs=None, atomic=False, ensemble_values=False)[source]#
Compute molecular or atomic scalars from element indices and local atomic features using the wrapped container.
- class torchani.nn.BmmLinear(linears)[source]#
Batched Linear layer that fuses multiple Linear layers that have the same architecture. If "e" is the number of fused layers (which usually corresponds to the number of members in an ensemble), then:

    input: (e x n x m)
    weight: (e x m x p)
    bias: (e x 1 x p)
    output: (e x n x p)
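The shape algebra is that of a batched matrix multiply plus a broadcast bias; roughly, in plain PyTorch (a sketch, not the actual implementation):

    import torch

    e, n, m, p = 8, 5, 384, 160  # members, atoms, in-features, out-features
    inp = torch.rand(e, n, m)
    weight = torch.rand(e, m, p)
    bias = torch.rand(e, 1, p)
    out = torch.baddbmm(bias, inp, weight)  # shape (e, n, p)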
- forward(input_)[source]#
Apply the fused, batched linear transformation to the input.
- class torchani.nn.BmmEnsemble(ensemble)[source]#
The inference-optimized analogue of a torchani.nn.Ensemble.
Combines all networks of an ensemble that correspond to the same element into a single BmmAtomicNetwork. As an example, if an ensemble has 8 models, and each model has 1 H-network and 1 C-network, all 8 H-networks and all 8 C-networks are fused into two networks: one single H-BmmAtomicNetwork and one single C-BmmAtomicNetwork.
The resulting networks perform the same calculations but faster, using fewer CUDA kernel calls, since the conversion avoids iterating over the ensemble members in Python.
The BmmAtomicNetwork modules consist of sequences of BmmLinear, which perform batched matrix multiplication (BMM).
- forward(elem_idxs, aevs=None, atomic=False, ensemble_values=False)[source]#
Calculate output scalars exactly as the wrapped ensemble would, using the fused per-element networks.
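Construction is a simple wrap of a trained ensemble; a sketch (reusing the hypothetical ensemble, elem_idxs and aevs from the Ensemble example above):

    # Fuse the ensemble members for inference
    bmm_ensemble = torchani.nn.BmmEnsemble(ensemble)
    energies = bmm_ensemble(elem_idxs, aevs)  # same result, fewer kernel launches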
- class torchani.nn.BmmAtomicNetwork(networks)[source]#
The inference-optimized analogue of an AtomicNetwork.
BmmAtomicNetwork instances are "combined" networks for a single element. Each combined network holds all networks associated with all the members of an ensemble. They consist of a sequence of BmmLinear layers with interleaved activation functions (simple multi-layer perceptrons, or MLPs).
- forward(features)[source]#
Compute atomic scalars from local atomic features using batched matrix multiplications.
- class torchani.nn.TightCELU(*args, **kwargs)[source]#
CELU activation function with alpha=0.1
- forward(x)[source]#
Apply the CELU activation with alpha=0.1 elementwise.
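Functionally this matches torch.nn.functional.celu with alpha=0.1:

    import torch
    import torch.nn.functional as F

    x = torch.linspace(-2.0, 2.0, 5)
    y = F.celu(x, alpha=0.1)  # what TightCELU computes elementwise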
- class torchani.nn.Sequential(*modules)[source]#
Create a pipeline of modules, like torch.nn.Sequential.
- Deprecated:
Use of torchani.nn.Sequential is strongly discouraged. Please use torchani.arch.Assembler, or write a torch.nn.Module. For more info consult the migration guide.