pytorch3d.implicitron.models.global_encoder.autodecoder


class pytorch3d.implicitron.models.global_encoder.autodecoder.Autodecoder(*args, **kwargs)[source]

Bases: Configurable, Module

Autodecoder which maps a list of integer or string keys to optimizable embeddings.

Settings:

encoding_dim: Embedding dimension for the decoder.
n_instances: The maximum number of instances stored by the autodecoder.
init_scale: Scale factor for the initial autodecoder weights.
ignore_input: If True, optimizes a single code for any input.

encoding_dim: int = 0
n_instances: int = 1
init_scale: float = 1.0
ignore_input: bool = False
calculate_squared_encoding_norm() → Tensor | None[source]
get_encoding_dim() → int[source]
forward(x: LongTensor | List[str]) → Tensor | None[source]
Parameters:
  • x – A batch of N identifiers: either a long tensor of shape (N,) with keys in [0, n_instances), or a list of N string keys that are hashed to codes (without collisions).

Returns:

codes – A tensor of shape (N, self.encoding_dim) containing the key-specific autodecoder codes.
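To make the key-to-code mapping concrete, here is a minimal, dependency-free sketch of the idea behind the Autodecoder: each instance gets its own code vector, integer keys index the table directly, and string keys are hashed into it. The class name and internals below are illustrative only, not pytorch3d's actual implementation (which stores the codes as an optimizable torch.nn.Embedding).

```python
import random


class AutodecoderSketch:
    """Illustrative sketch: map integer or string keys to per-instance
    code vectors. Hypothetical class, not the pytorch3d API."""

    def __init__(self, encoding_dim=4, n_instances=8,
                 init_scale=1.0, ignore_input=False):
        self.encoding_dim = encoding_dim
        self.n_instances = n_instances
        self.ignore_input = ignore_input
        rng = random.Random(0)
        # One code vector per instance; initial weights scaled by init_scale.
        self._codes = [
            [rng.gauss(0.0, 1.0) * init_scale for _ in range(encoding_dim)]
            for _ in range(n_instances)
        ]

    def _key_to_index(self, key):
        if self.ignore_input:
            # A single shared code is used for any input.
            return 0
        if isinstance(key, int):
            # Integer keys must lie in [0, n_instances).
            return key
        # String keys are hashed into the table.
        return hash(key) % self.n_instances

    def forward(self, x):
        # With encoding_dim == 0 the autodecoder is disabled.
        if self.encoding_dim == 0:
            return None
        return [list(self._codes[self._key_to_index(k)]) for k in x]
```

For example, `AutodecoderSketch(encoding_dim=4, n_instances=8).forward(["scene_a", "scene_b"])` returns two 4-dimensional code vectors, and repeated lookups of the same key return the same code.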