lighting

pytorch3d.renderer.lighting.diffuse(normals, color, direction) → torch.Tensor[source]

Calculate the diffuse component of light reflection using Lambert’s cosine law.

Parameters:
  • normals – (N, …, 3) xyz normal vectors. Normals and points are expected to have the same shape.
  • color – (1, 3) or (N, 3) RGB color of the diffuse component of the light.
  • direction – (x, y, z) direction vector of the light.
Returns:

colors – (N, …, 3), same shape as the input points.

The normals and light direction should be in the same coordinate frame, i.e. if the points have been transformed from world -> view space, then the normals and direction should also be in view space.
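Lambert's cosine law scales the light color by the clamped cosine of the angle between each normal and the light direction. A minimal sketch of that computation, using NumPy in place of torch tensors for illustration (the function name `diffuse_sketch` is hypothetical, not the library API):

```python
import numpy as np

def diffuse_sketch(normals, color, direction):
    # Normalize the light direction and per-point normals.
    direction = direction / np.linalg.norm(direction, axis=-1, keepdims=True)
    normals = normals / np.linalg.norm(normals, axis=-1, keepdims=True)
    # Lambert's cosine law: intensity is the clamped (non-negative)
    # cosine of the angle between the normal and the light direction.
    angle = np.clip(np.sum(normals * direction, axis=-1), 0.0, None)
    return color * angle[..., None]

# A normal facing straight up, lit by a full-white light from above:
normals = np.array([[0.0, 1.0, 0.0]])
color = np.array([[1.0, 1.0, 1.0]])
direction = np.array([[0.0, 1.0, 0.0]])
print(diffuse_sketch(normals, color, direction))  # [[1. 1. 1.]]
```

A light at a grazing angle (direction perpendicular to the normal) yields zero diffuse contribution, which is why back-facing points receive no light.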

NOTE: To use with packed vertices (i.e. no batch dimension), reindex the inputs as follows.

Args:
    normals: (P, 3)
    color: (N, 3)[batch_idx, :] -> (P, 3)
    direction: (N, 3)[batch_idx, :] -> (P, 3)

Returns:
    colors: (P, 3)

where batch_idx is of shape (P). For meshes, batch_idx can be:
meshes.verts_packed_to_mesh_idx() or meshes.faces_packed_to_mesh_idx()
depending on whether points refers to the vertex coordinates or
average/interpolated face coordinates.
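The `(N, 3)[batch_idx, :] -> (P, 3)` notation above is plain advanced indexing: each of the P packed points gathers the row belonging to its mesh. A small NumPy illustration (the batch sizes here are made up for the example):

```python
import numpy as np

# Two meshes in the batch, each with its own diffuse light color.
color = np.array([[1.0, 0.0, 0.0],   # mesh 0: red light
                  [0.0, 0.0, 1.0]])  # mesh 1: blue light

# batch_idx maps each packed vertex back to its mesh index; for a
# Meshes object this is meshes.verts_packed_to_mesh_idx(). Here
# mesh 0 has two vertices and mesh 1 has three, so P = 5.
batch_idx = np.array([0, 0, 1, 1, 1])

# (N, 3)[batch_idx, :] -> (P, 3): one color row per packed vertex.
packed_color = color[batch_idx]
print(packed_color.shape)  # (5, 3)
```

After this gather, every packed vertex carries the color of the mesh it came from, so the batched formula applies unchanged.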
pytorch3d.renderer.lighting.specular(points, normals, direction, color, camera_position, shininess) → torch.Tensor[source]

Calculate the specular component of light reflection.

Parameters:
  • points – (N, …, 3) xyz coordinates of the points.
  • normals – (N, …, 3) xyz normal vectors for each point.
  • color – (N, 3) RGB color of the specular component of the light.
  • direction – (N, 3) vector direction of the light.
  • camera_position – (N, 3) The xyz position of the camera.
  • shininess – The specular exponent of the material.
Returns:

colors – (N, …, 3), same shape as the input points.

The points, normals, camera_position, and direction should be in the same coordinate frame, i.e. if the points have been transformed from world -> view space, then the normals, camera_position, and light direction should also be in view space.
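The specular term follows the standard Phong reflection model: reflect the light direction about the normal, then raise the clamped cosine between the reflection and the view direction to the shininess exponent. A hedged NumPy sketch of that model (the function name `specular_sketch` is hypothetical; the library's exact normalization details may differ):

```python
import numpy as np

def specular_sketch(points, normals, direction, color,
                    camera_position, shininess):
    # Normalize the normals and light direction.
    normals = normals / np.linalg.norm(normals, axis=-1, keepdims=True)
    direction = direction / np.linalg.norm(direction, axis=-1, keepdims=True)
    # View direction: from each point toward the camera.
    view = camera_position - points
    view = view / np.linalg.norm(view, axis=-1, keepdims=True)
    # Reflect the light direction about the surface normal.
    cos_angle = np.sum(normals * direction, axis=-1, keepdims=True)
    reflect = 2.0 * cos_angle * normals - direction
    # Clamped cosine between view and reflection directions, raised
    # to the shininess (specular) exponent.
    alpha = np.clip(np.sum(view * reflect, axis=-1), 0.0, None)
    return color * (alpha ** shininess)[..., None]

# Camera directly above a point whose normal faces the light:
out = specular_sketch(
    points=np.array([[0.0, 0.0, 0.0]]),
    normals=np.array([[0.0, 1.0, 0.0]]),
    direction=np.array([[0.0, 1.0, 0.0]]),
    color=np.array([[1.0, 1.0, 1.0]]),
    camera_position=np.array([[0.0, 2.0, 0.0]]),
    shininess=10.0,
)
print(out)  # [[1. 1. 1.]]
```

A larger shininess narrows the highlight: the cosine term falls off faster away from the perfect mirror direction.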

To use with a batch of packed points, reindex the inputs as follows:

Args:
    points: (P, 3)
    normals: (P, 3)
    color: (N, 3)[batch_idx] -> (P, 3)
    direction: (N, 3)[batch_idx] -> (P, 3)
    camera_position: (N, 3)[batch_idx] -> (P, 3)
    shininess: (N)[batch_idx] -> (P)
Returns:
    colors: (P, 3)

where batch_idx is of shape (P). For meshes, batch_idx can be:
meshes.verts_packed_to_mesh_idx() or meshes.faces_packed_to_mesh_idx().
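The only new wrinkle over the diffuse case is the scalar shininess, gathered the same way: `(N)[batch_idx] -> (P)`. A NumPy illustration (batch sizes invented for the example):

```python
import numpy as np

# Per-mesh specular exponents for a batch of N = 2 meshes.
shininess = np.array([10.0, 64.0])

# batch_idx (e.g. meshes.verts_packed_to_mesh_idx()) assigns each of
# the P packed points to its source mesh.
batch_idx = np.array([0, 1, 1, 0])

# (N)[batch_idx] -> (P): one exponent per packed point.
packed_shininess = shininess[batch_idx]
print(packed_shininess)  # [10. 64. 64. 10.]
```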
class pytorch3d.renderer.lighting.DirectionalLights(ambient_color=((0.5, 0.5, 0.5), ), diffuse_color=((0.3, 0.3, 0.3), ), specular_color=((0.2, 0.2, 0.2), ), direction=((0, 1, 0), ), device: str = 'cpu')[source]
__init__(ambient_color=((0.5, 0.5, 0.5), ), diffuse_color=((0.3, 0.3, 0.3), ), specular_color=((0.2, 0.2, 0.2), ), direction=((0, 1, 0), ), device: str = 'cpu')[source]
Parameters:
  • ambient_color – RGB color of the ambient component.
  • diffuse_color – RGB color of the diffuse component.
  • specular_color – RGB color of the specular component.
  • direction – (x, y, z) direction vector of the light.
  • device – torch.device on which the tensors should be located.
The inputs can each be
  • 3 element tuple/list or list of lists
  • torch tensor of shape (1, 3)
  • torch tensor of shape (N, 3)

The inputs are broadcast against each other so they all have batch dimension N.
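The broadcasting described above follows the usual NumPy/PyTorch rules: a (1, 3) input is expanded along the batch dimension to match an (N, 3) input. A quick NumPy demonstration of the shape behavior (the specific colors here are invented):

```python
import numpy as np

# ambient given once (shape (1, 3)); diffuse given per light (N = 2):
ambient = np.array([[0.5, 0.5, 0.5]])          # (1, 3)
diffuse = np.array([[0.3, 0.3, 0.3],
                    [0.6, 0.6, 0.6]])          # (2, 3)

# Broadcasting expands every input to the common batch size N = 2.
ambient_b, diffuse_b = np.broadcast_arrays(ambient, diffuse)
print(ambient_b.shape, diffuse_b.shape)  # (2, 3) (2, 3)
```

Shapes that cannot be broadcast against each other (e.g. (2, 3) with (3, 3)) raise an error, so mismatched batch sizes are caught at construction time.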

clone()[source]
diffuse(normals, points=None) → torch.Tensor[source]
specular(normals, points, camera_position, shininess) → torch.Tensor[source]
class pytorch3d.renderer.lighting.PointLights(ambient_color=((0.5, 0.5, 0.5), ), diffuse_color=((0.3, 0.3, 0.3), ), specular_color=((0.2, 0.2, 0.2), ), location=((0, 1, 0), ), device: str = 'cpu')[source]
__init__(ambient_color=((0.5, 0.5, 0.5), ), diffuse_color=((0.3, 0.3, 0.3), ), specular_color=((0.2, 0.2, 0.2), ), location=((0, 1, 0), ), device: str = 'cpu')[source]
Parameters:
  • ambient_color – RGB color of the ambient component.
  • diffuse_color – RGB color of the diffuse component.
  • specular_color – RGB color of the specular component.
  • location – xyz position of the light.
  • device – torch.device on which the tensors should be located.
The inputs can each be
  • 3 element tuple/list or list of lists
  • torch tensor of shape (1, 3)
  • torch tensor of shape (N, 3)

The inputs are broadcast against each other so they all have batch dimension N.
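The key difference from DirectionalLights is that a point light has a location rather than a fixed direction, so the light direction is recomputed per point as the normalized vector from the surface point to the light. A NumPy sketch of that per-point direction (the helper name `point_light_direction` is hypothetical):

```python
import numpy as np

def point_light_direction(location, points):
    # For a point light, the direction varies per point: it points
    # from each surface point toward the light's location.
    direction = location - points
    return direction / np.linalg.norm(direction, axis=-1, keepdims=True)

# Light 2 units above the origin; one surface point at the origin:
location = np.array([[0.0, 2.0, 0.0]])
points = np.array([[0.0, 0.0, 0.0]])
print(point_light_direction(location, points))  # [[0. 1. 0.]]
```

This is why `diffuse(normals, points)` requires `points` for PointLights, while DirectionalLights can accept `points=None`: a directional light's direction is the same everywhere.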

clone()[source]
diffuse(normals, points) → torch.Tensor[source]
specular(normals, points, camera_position, shininess) → torch.Tensor[source]