pytorch3d.renderer.cameras

class pytorch3d.renderer.cameras.CamerasBase(dtype: dtype = torch.float32, device: str | device = 'cpu', **kwargs)[source]

Bases: TensorProperties

CamerasBase implements a base class for all cameras.

For cameras, there are four different coordinate systems (or spaces):

  • World coordinate system: This is the system in which the object lives - the world.

  • Camera view coordinate system: This is the system that has its origin at the camera and the Z-axis perpendicular to the image plane. In PyTorch3D, we assume that +X points left, +Y points up and +Z points out from the image plane. The transformation from world -> view is obtained by applying a rotation (R) and a translation (T).

  • NDC coordinate system: This is the normalized coordinate system that confines the rendered part of the object or scene in a volume, also known as the view volume. For square images, given the PyTorch3D convention, (+1, +1, znear) is the top left near corner and (-1, -1, zfar) is the bottom right far corner of the volume. The transformation from view -> NDC is applied by the camera projection matrix (P) if the camera is defined in NDC space. For non-square images, we scale the points such that the smallest side has range [-1, 1] and the largest side has range [-u, u], with u > 1.

  • Screen coordinate system: This is another representation of the view volume with the XY coordinates defined in image space instead of a normalized space.

An illustration of the coordinate systems can be found in pytorch3d/docs/notes/cameras.md.
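As a concrete illustration of the NDC convention above, the half-extent of each NDC axis can be derived from the image size. A minimal sketch (the helper ndc_half_extents is hypothetical, not part of the PyTorch3D API):

```python
# Sketch of the non-square NDC convention described above: the smallest
# image side maps to [-1, 1] and the largest side to [-u, u], with
# u = long_side / short_side > 1. ndc_half_extents is a hypothetical
# helper, not a PyTorch3D function.
def ndc_half_extents(height: int, width: int):
    s = min(height, width)
    # half-extent of the x (width) axis and the y (height) axis in NDC
    return width / s, height / s

# For a 512 x 1024 image, x spans [-2, 2] and y spans [-1, 1].
print(ndc_half_extents(512, 1024))  # (2.0, 1.0)
```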

CamerasBase defines methods that are common to all camera models:
  • get_camera_center which returns the optical center of the camera in world coordinates

  • get_world_to_view_transform which returns a 3D transform from world coordinates to the camera view coordinates (R, T)

  • get_full_projection_transform which composes the projection transform (P) with the world-to-view transform (R, T)

  • transform_points which takes a set of input points in world coordinates and projects them to the space the camera is defined in (NDC or screen)

  • get_ndc_camera_transform which defines the transform from screen/NDC to PyTorch3D’s NDC space

  • transform_points_ndc which takes a set of points in world coordinates and projects them to PyTorch3D’s NDC space

  • transform_points_screen which takes a set of points in world coordinates and projects them to screen space

For each new camera, one should implement the get_projection_transform routine that returns the mapping from camera view coordinates to camera coordinates (NDC or screen).

Another useful function that is specific to each camera model is unproject_points which sends points from camera coordinates (NDC or screen) back to camera view or world coordinates depending on the world_coordinates boolean argument of the function.

get_projection_transform(**kwargs)[source]

Calculate the projective transformation matrix.

Parameters:

**kwargs – parameters for the projection can be passed in as keyword arguments to override the default values set in __init__.

Returns:

a Transform3d object which represents a batch of projection matrices of shape (N, 4, 4)

unproject_points(xy_depth: Tensor, **kwargs)[source]

Transform input points from camera coordinates (NDC or screen) to the world / camera coordinates.

Each of the input points xy_depth of shape (…, 3) is a concatenation of the x, y location and its depth.

For instance, for an input 2D tensor of shape (num_points, 3) xy_depth takes the following form:

xy_depth[i] = [x[i], y[i], depth[i]],

for each point at index i.

The following example demonstrates the relationship between transform_points and unproject_points:

import torch
from pytorch3d.renderer import FoVPerspectiveCameras

cameras = FoVPerspectiveCameras()  # or any camera object derived from CamerasBase
xyz = torch.rand(2, 10, 3) + 2.0  # 3D points of shape (batch_size, num_points, 3)
# transform xyz to the camera view coordinates
xyz_cam = cameras.get_world_to_view_transform().transform_points(xyz)
# extract the depth of each point as the 3rd coord of xyz_cam
depth = xyz_cam[:, :, 2:]
# project the points xyz to the camera
xy = cameras.transform_points(xyz)[:, :, :2]
# append depth to xy
xy_depth = torch.cat((xy, depth), dim=2)
# unproject to the world coordinates
xyz_unproj_world = cameras.unproject_points(xy_depth, world_coordinates=True)
print(torch.allclose(xyz, xyz_unproj_world)) # True
# unproject to the camera coordinates
xyz_unproj = cameras.unproject_points(xy_depth, world_coordinates=False)
print(torch.allclose(xyz_cam, xyz_unproj)) # True
Parameters:
  • xy_depth – torch tensor of shape (…, 3).

  • world_coordinates – If True, unprojects the points back to world coordinates using the camera extrinsics R and T. False ignores R and T and unprojects to the camera view coordinates.

  • from_ndc – If False (default), assumes xy part of input is in NDC space if self.in_ndc(), otherwise in screen space. If True, assumes xy is in NDC space even if the camera is defined in screen space.

Returns:

new_points: unprojected points with the same shape as xy_depth.

get_camera_center(**kwargs) Tensor[source]

Return the 3D location of the camera optical center in the world coordinates.

Parameters:

**kwargs – parameters for the camera extrinsics can be passed in as keyword arguments to override the default values set in __init__.

Setting R or T here will update the values set in __init__ as these values may be needed later on in the rendering pipeline e.g. for lighting calculations.

Returns:

C – a batch of 3D locations of shape (N, 3) denoting the locations of the center of each camera in the batch.
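Given the PyTorch3D convention X_cam = X_world @ R + T (row vectors), the optical center is the world point that maps to the view-space origin, i.e. C = -T @ R^T. A minimal numpy sketch of this relationship (not the PyTorch3D implementation):

```python
import numpy as np

# Camera center from extrinsics, assuming the row-vector convention
# X_cam = X_world @ R + T. Solving C @ R + T = 0 for the world point C
# that lands at the view-space origin gives C = -T @ R.T, since R is
# orthonormal (R^-1 = R.T).
theta = np.pi / 6  # an arbitrary rotation about the y-axis
R = np.array([
    [np.cos(theta), 0.0, -np.sin(theta)],
    [0.0,           1.0,  0.0],
    [np.sin(theta), 0.0,  np.cos(theta)],
])
T = np.array([0.5, -1.0, 3.0])

C = -T @ R.T  # camera center in world coordinates

# Mapping the center to view space recovers the origin.
print(np.allclose(C @ R + T, np.zeros(3)))  # True
```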

get_world_to_view_transform(**kwargs) Transform3d[source]

Return the world-to-view transform.

Parameters:

**kwargs – parameters for the camera extrinsics can be passed in as keyword arguments to override the default values set in __init__.

Setting R and T here will update the values set in __init__ as these values may be needed later on in the rendering pipeline e.g. for lighting calculations.

Returns:

A Transform3d object which represents a batch of transforms of shape (N, 4, 4)

get_full_projection_transform(**kwargs) Transform3d[source]

Return the full world-to-camera transform composing the world-to-view and view-to-camera transforms. If camera is defined in NDC space, the projected points are in NDC space. If camera is defined in screen space, the projected points are in screen space.

Parameters:

**kwargs – parameters for the projection transforms can be passed in as keyword arguments to override the default values set in __init__.

Setting R and T here will update the values set in __init__ as these values may be needed later on in the rendering pipeline e.g. for lighting calculations.

Returns:

a Transform3d object which represents a batch of transforms of shape (N, 4, 4)

transform_points(points, eps: float | None = None, **kwargs) Tensor[source]

Transform input points from world to camera space. If camera is defined in NDC space, the projected points are in NDC space. If camera is defined in screen space, the projected points are in screen space.

For CamerasBase.transform_points, setting eps > 0 stabilizes gradients since it avoids division by excessively small numbers for points close to the camera plane.

Parameters:
  • points – torch tensor of shape (…, 3).

  • eps – If eps is not None, it is used to clamp the divisor in the homogeneous normalization of the points transformed to NDC space. Please see transforms.Transform3d.transform_points for details. For CamerasBase.transform_points, setting eps > 0 stabilizes gradients since it avoids division by excessively small numbers for points close to the camera plane.

Returns:

new_points: transformed points with the same shape as the input.
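The role of eps can be sketched as clamping the homogeneous divisor (the depth) away from zero while preserving its sign. A numpy illustration of the idea (not the Transform3d implementation):

```python
import numpy as np

# Eps-clamped homogeneous normalization: divide x, y by the depth z,
# but clamp |z| to at least eps so points very close to the camera
# plane do not produce huge values and unstable gradients. Points with
# z == 0 would additionally need a sign convention; this sketch skips it.
def normalize_homogeneous(points: np.ndarray, eps: float) -> np.ndarray:
    z = points[..., 2:3]
    denom = np.sign(z) * np.clip(np.abs(z), eps, None)
    return points / denom

pts = np.array([[2.0, 4.0, 2.0], [1.0, 1.0, 1e-12]])
out = normalize_homogeneous(pts, eps=1e-6)
# The first point divides by its true depth; the second has its tiny
# depth clamped to eps, which bounds the result.
print(out[0])  # [1. 2. 1.]
```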

get_ndc_camera_transform(**kwargs) Transform3d[source]

Returns the transform from camera projection space (screen or NDC) to NDC space. For cameras that can be specified in screen space, this transform allows points to be converted from screen to NDC space. The default transform scales the points from [0, W]x[0, H] to [-1, 1]x[-u, u] or [-u, u]x[-1, 1] where u > 1 is the aspect ratio of the image. This function should be modified per camera definitions if need be, e.g. for Perspective/Orthographic cameras we provide a custom implementation. This transform assumes PyTorch3D coordinate system conventions for both the NDC space and the input points.

This transform interfaces with the PyTorch3D renderer which assumes input points to the renderer to be in NDC space.

transform_points_ndc(points, eps: float | None = None, **kwargs) Tensor[source]

Transforms points from PyTorch3D world/camera space to NDC space. Input points follow the PyTorch3D coordinate system conventions: +X left, +Y up. Output points are in NDC space: +X left, +Y up, origin at image center.

Parameters:
  • points – torch tensor of shape (…, 3).

  • eps – If eps is not None, it is used to clamp the divisor in the homogeneous normalization of the points transformed to NDC space. Please see transforms.Transform3d.transform_points for details. For CamerasBase.transform_points, setting eps > 0 stabilizes gradients since it avoids division by excessively small numbers for points close to the camera plane.

Returns:

new_points: transformed points with the same shape as the input.

transform_points_screen(points, eps: float | None = None, with_xyflip: bool = True, **kwargs) Tensor[source]

Transforms points from PyTorch3D world/camera space to screen space. Input points follow the PyTorch3D coordinate system conventions: +X left, +Y up. Output points are in screen space: +X right, +Y down, origin at top left corner.

Parameters:
  • points – torch tensor of shape (…, 3).

  • eps – If eps is not None, it is used to clamp the divisor in the homogeneous normalization of the points transformed to NDC space. Please see transforms.Transform3d.transform_points for details. For CamerasBase.transform_points, setting eps > 0 stabilizes gradients since it avoids division by excessively small numbers for points close to the camera plane.

  • with_xyflip – If True, flip x and y directions. In world/camera/ndc coords, +x points to the left and +y up. If with_xyflip is true, in screen coords +x points right, and +y down, following the usual RGB image convention. Warning: do not set to False unless you know what you’re doing!

Returns:

new_points: transformed points with the same shape as the input.

clone()[source]

Returns a copy of self.

is_perspective()[source]
in_ndc()[source]

Specifies whether the camera is defined in NDC space or in screen (image) space

get_znear()[source]
get_image_size()[source]

Returns the image size, if provided, expected in the form of (height, width). The image size is used for conversion of projected points to screen coordinates.

__getitem__(index: int | List[int] | BoolTensor | LongTensor) CamerasBase[source]

Override for the __getitem__ method in TensorProperties which needs to be refactored.

Parameters:

index – an integer index, list/tensor of integer indices, or tensor of boolean indicators used to filter all the fields in the cameras given by self._FIELDS.

Returns:

an instance of the current cameras class with only the values at the selected index.

pytorch3d.renderer.cameras.OpenGLPerspectiveCameras(znear: float | Sequence[float] | Tensor = 1.0, zfar: float | Sequence[float] | Tensor = 100.0, aspect_ratio: float | Sequence[float] | Tensor = 1.0, fov: float | Sequence[float] | Tensor = 60.0, degrees: bool = True, R: Tensor = tensor([[[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]]]), T: Tensor = tensor([[0., 0., 0.]]), device: str | device = 'cpu') FoVPerspectiveCameras[source]

OpenGLPerspectiveCameras has been DEPRECATED. Use FoVPerspectiveCameras instead. Preserving OpenGLPerspectiveCameras for backward compatibility.

class pytorch3d.renderer.cameras.FoVPerspectiveCameras(znear: float | Sequence[float] | Tensor = 1.0, zfar: float | Sequence[float] | Tensor = 100.0, aspect_ratio: float | Sequence[float] | Tensor = 1.0, fov: float | Sequence[float] | Tensor = 60.0, degrees: bool = True, R: Tensor = tensor([[[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]]]), T: Tensor = tensor([[0., 0., 0.]]), K: Tensor | None = None, device: str | device = 'cpu')[source]

Bases: CamerasBase

A class which stores a batch of parameters to generate a batch of projection matrices by specifying the field of view. The definitions of the parameters follow the OpenGL perspective camera.

The extrinsics of the camera (R and T matrices) can also be set in the initializer or passed in to get_full_projection_transform to get the full transformation from world -> ndc.

The transform_points method calculates the full world -> ndc transform and then applies it to the input points.

The transforms can also be returned separately as Transform3d objects.

Setting the Aspect Ratio for Non Square Images

If the desired output image size is non-square (i.e. a tuple of (H, W) where H != W), the aspect ratio needs special consideration. There are two aspect ratios to be aware of:

  • the aspect ratio of each pixel

  • the aspect ratio of the output image

The aspect_ratio setting in the FoVPerspectiveCameras sets the pixel aspect ratio. When using this camera with the differentiable rasterizer, be aware that in the rasterizer we assume square pixels, but allow variable image aspect ratio (i.e. rectangular images).

In most cases you will want to set the camera aspect_ratio=1.0 (i.e. square pixels) and only vary the output image dimensions in pixels for rasterization.

__init__(znear: float | Sequence[float] | Tensor = 1.0, zfar: float | Sequence[float] | Tensor = 100.0, aspect_ratio: float | Sequence[float] | Tensor = 1.0, fov: float | Sequence[float] | Tensor = 60.0, degrees: bool = True, R: Tensor = tensor([[[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]]]), T: Tensor = tensor([[0., 0., 0.]]), K: Tensor | None = None, device: str | device = 'cpu') None[source]
Parameters:
  • znear – near clipping plane of the view frustum.

  • zfar – far clipping plane of the view frustum.

  • aspect_ratio – aspect ratio of the image pixels. 1.0 indicates square pixels.

  • fov – field of view angle of the camera.

  • degrees – bool, set to True if fov is specified in degrees.

  • R – Rotation matrix of shape (N, 3, 3)

  • T – Translation matrix of shape (N, 3)

  • K – (optional) A calibration matrix of shape (N, 4, 4). If provided, znear, zfar, fov, aspect_ratio and degrees are not needed.

  • device – Device (as str or torch.device)

compute_projection_matrix(znear, zfar, fov, aspect_ratio, degrees: bool) Tensor[source]

Compute the calibration matrix K of shape (N, 4, 4)

Parameters:
  • znear – near clipping plane of the view frustum.

  • zfar – far clipping plane of the view frustum.

  • fov – field of view angle of the camera.

  • aspect_ratio – aspect ratio of the image pixels. 1.0 indicates square pixels.

  • degrees – bool, set to True if fov is specified in degrees.

Returns:

torch.FloatTensor of the calibration matrix with shape (N, 4, 4)

get_projection_transform(**kwargs) Transform3d[source]

Calculate the perspective projection matrix with a symmetric viewing frustum. Use column major order. The viewing frustum will be projected into NDC, s.t. (max_x, max_y) -> (+1, +1) and (min_x, min_y) -> (-1, -1).

Parameters:

**kwargs – parameters for the projection can be passed in as keyword arguments to override the default values set in __init__.

Returns:

a Transform3d object which represents a batch of projection matrices of shape (N, 4, 4)

h1 = (max_y + min_y)/(max_y - min_y)
w1 = (max_x + min_x)/(max_x - min_x)
tanhalffov = tan((fov/2))
s1 = 1/tanhalffov
s2 = 1/(tanhalffov * (aspect_ratio))

# To map z to the range [0, 1] use:
f1 =  far / (far - near)
f2 = -(far * near) / (far - near)

# Projection matrix
K = [
        [s1,   0,   w1,   0],
        [0,   s2,   h1,   0],
        [0,    0,   f1,  f2],
        [0,    0,    1,   0],
]
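Using the f1 and f2 terms above, the projected depth (f1 * z + f2) / z maps znear to 0 and zfar to 1, which can be checked numerically from the docstring's formulas:

```python
# Check that the depth terms of the perspective matrix above map
# z = near -> 0 and z = far -> 1 after the homogeneous divide by z.
near, far = 1.0, 100.0
f1 = far / (far - near)
f2 = -(far * near) / (far - near)

def ndc_depth(z: float) -> float:
    # third row of K gives z' = f1 * z + f2; fourth row gives w' = z
    return (f1 * z + f2) / z

print(ndc_depth(near), ndc_depth(far))
```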
unproject_points(xy_depth: Tensor, world_coordinates: bool = True, scaled_depth_input: bool = False, **kwargs) Tensor[source]

FoV cameras further allow for passing depth in world units (scaled_depth_input=False) or in [0, 1]-normalized units (scaled_depth_input=True).

Parameters:

scaled_depth_input – If True, assumes the input depth is in [0, 1]-normalized units. If False, the input depth is in world units.

is_perspective()[source]
in_ndc()[source]
pytorch3d.renderer.cameras.OpenGLOrthographicCameras(znear: float | Sequence[float] | Tensor = 1.0, zfar: float | Sequence[float] | Tensor = 100.0, top: float | Sequence[float] | Tensor = 1.0, bottom: float | Sequence[float] | Tensor = -1.0, left: float | Sequence[float] | Tensor = -1.0, right: float | Sequence[float] | Tensor = 1.0, scale_xyz=((1.0, 1.0, 1.0),), R: Tensor = tensor([[[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]]]), T: Tensor = tensor([[0., 0., 0.]]), device: str | device = 'cpu') FoVOrthographicCameras[source]

OpenGLOrthographicCameras has been DEPRECATED. Use FoVOrthographicCameras instead. Preserving OpenGLOrthographicCameras for backward compatibility.

class pytorch3d.renderer.cameras.FoVOrthographicCameras(znear: float | Sequence[float] | Tensor = 1.0, zfar: float | Sequence[float] | Tensor = 100.0, max_y: float | Sequence[float] | Tensor = 1.0, min_y: float | Sequence[float] | Tensor = -1.0, max_x: float | Sequence[float] | Tensor = 1.0, min_x: float | Sequence[float] | Tensor = -1.0, scale_xyz=((1.0, 1.0, 1.0),), R: Tensor = tensor([[[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]]]), T: Tensor = tensor([[0., 0., 0.]]), K: Tensor | None = None, device: str | device = 'cpu')[source]

Bases: CamerasBase

A class which stores a batch of parameters to generate a batch of projection matrices by specifying the field of view. The definitions of the parameters follow the OpenGL orthographic camera.

__init__(znear: float | Sequence[float] | Tensor = 1.0, zfar: float | Sequence[float] | Tensor = 100.0, max_y: float | Sequence[float] | Tensor = 1.0, min_y: float | Sequence[float] | Tensor = -1.0, max_x: float | Sequence[float] | Tensor = 1.0, min_x: float | Sequence[float] | Tensor = -1.0, scale_xyz=((1.0, 1.0, 1.0),), R: Tensor = tensor([[[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]]]), T: Tensor = tensor([[0., 0., 0.]]), K: Tensor | None = None, device: str | device = 'cpu')[source]
Parameters:
  • znear – near clipping plane of the view frustum.

  • zfar – far clipping plane of the view frustum.

  • max_y – maximum y coordinate of the frustum.

  • min_y – minimum y coordinate of the frustum.

  • max_x – maximum x coordinate of the frustum.

  • min_x – minimum x coordinate of the frustum.

  • scale_xyz – scale factors for each axis of shape (N, 3).

  • R – Rotation matrix of shape (N, 3, 3).

  • T – Translation of shape (N, 3).

  • K – (optional) A calibration matrix of shape (N, 4, 4). If provided, znear, zfar, max_y, min_y, max_x, min_x and scale_xyz are not needed.

  • device – torch.device or string.

min_x, max_x, min_y and max_y only need to be set for viewing frustums which are non-symmetric about the origin.

compute_projection_matrix(znear, zfar, max_x, min_x, max_y, min_y, scale_xyz) Tensor[source]

Compute the calibration matrix K of shape (N, 4, 4)

Parameters:
  • znear – near clipping plane of the view frustum.

  • zfar – far clipping plane of the view frustum.

  • max_x – maximum x coordinate of the frustum.

  • min_x – minimum x coordinate of the frustum.

  • max_y – maximum y coordinate of the frustum.

  • min_y – minimum y coordinate of the frustum.

  • scale_xyz – scale factors for each axis of shape (N, 3).

get_projection_transform(**kwargs) Transform3d[source]

Calculate the orthographic projection matrix. Use column major order.

Parameters:

**kwargs – parameters for the projection can be passed in to override the default values set in __init__.

Returns:

a Transform3d object which represents a batch of projection matrices of shape (N, 4, 4)

scale_x = 2 / (max_x - min_x)
scale_y = 2 / (max_y - min_y)
scale_z = 2 / (far - near)
mid_x = (max_x + min_x) / (max_x - min_x)
mid_y = (max_y + min_y) / (max_y - min_y)
mid_z = (far + near) / (far - near)

K = [
        [scale_x,        0,         0,  -mid_x],
        [0,        scale_y,         0,  -mid_y],
        [0,              0,  -scale_z,  -mid_z],
        [0,              0,         0,       1],
]
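The x and y rows of this matrix map the frustum bounds to [-1, 1]: x = max_x goes to +1 and x = min_x to -1. A quick check based on the formulas above:

```python
# Verify that the orthographic terms above map the frustum bounds to
# NDC: x_ndc = scale_x * x - mid_x sends max_x -> +1 and min_x -> -1.
min_x, max_x = -2.0, 6.0
scale_x = 2.0 / (max_x - min_x)
mid_x = (max_x + min_x) / (max_x - min_x)

x_ndc_max = scale_x * max_x - mid_x
x_ndc_min = scale_x * min_x - mid_x
print(x_ndc_max, x_ndc_min)  # 1.0 -1.0
```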
unproject_points(xy_depth: Tensor, world_coordinates: bool = True, scaled_depth_input: bool = False, **kwargs) Tensor[source]

FoV cameras further allow for passing depth in world units (scaled_depth_input=False) or in [0, 1]-normalized units (scaled_depth_input=True).

Parameters:

scaled_depth_input – If True, assumes the input depth is in [0, 1]-normalized units. If False, the input depth is in world units.

is_perspective()[source]
in_ndc()[source]
pytorch3d.renderer.cameras.SfMPerspectiveCameras(focal_length: float | Sequence[Tuple[float]] | Sequence[Tuple[float, float]] | Tensor = 1.0, principal_point=((0.0, 0.0),), R: Tensor = tensor([[[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]]]), T: Tensor = tensor([[0., 0., 0.]]), device: str | device = 'cpu') PerspectiveCameras[source]

SfMPerspectiveCameras has been DEPRECATED. Use PerspectiveCameras instead. Preserving SfMPerspectiveCameras for backward compatibility.

class pytorch3d.renderer.cameras.PerspectiveCameras(focal_length: float | Sequence[Tuple[float]] | Sequence[Tuple[float, float]] | Tensor = 1.0, principal_point=((0.0, 0.0),), R: Tensor = tensor([[[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]]]), T: Tensor = tensor([[0., 0., 0.]]), K: Tensor | None = None, device: str | device = 'cpu', in_ndc: bool = True, image_size: List | Tuple | Tensor | None = None)[source]

Bases: CamerasBase

A class which stores a batch of parameters to generate a batch of transformation matrices using the multi-view geometry convention for a perspective camera.

Parameters for this camera are specified in NDC if in_ndc is set to True. If parameters are specified in screen space, in_ndc must be set to False.

__init__(focal_length: float | Sequence[Tuple[float]] | Sequence[Tuple[float, float]] | Tensor = 1.0, principal_point=((0.0, 0.0),), R: Tensor = tensor([[[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]]]), T: Tensor = tensor([[0., 0., 0.]]), K: Tensor | None = None, device: str | device = 'cpu', in_ndc: bool = True, image_size: List | Tuple | Tensor | None = None) None[source]
Parameters:
  • focal_length – Focal length of the camera in world units. A tensor of shape (N, 1) or (N, 2) for square and non-square pixels respectively.

  • principal_point – xy coordinates of the principal point of the camera in pixels. A tensor of shape (N, 2).

  • in_ndc – True if camera parameters are specified in NDC. If camera parameters are in screen space, it must be set to False.

  • R – Rotation matrix of shape (N, 3, 3)

  • T – Translation matrix of shape (N, 3)

  • K – (optional) A calibration matrix of shape (N, 4, 4). If provided, focal_length and principal_point are not needed.

  • image_size – (height, width) of image size. A tensor of shape (N, 2) or a list/tuple. Required for screen cameras.

  • device – torch.device or string

get_projection_transform(**kwargs) Transform3d[source]

Calculate the projection matrix using the multi-view geometry convention.

Parameters:

**kwargs – parameters for the projection can be passed in as keyword arguments to override the default values set in __init__.

Returns:

A Transform3d object with a batch of N projection transforms.

fx = focal_length[:, 0]
fy = focal_length[:, 1]
px = principal_point[:, 0]
py = principal_point[:, 1]

K = [
        [fx,   0,   px,   0],
        [0,   fy,   py,   0],
        [0,    0,    0,   1],
        [0,    0,    1,   0],
]
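Multiplying this K with a homogeneous point [X, Y, Z, 1] (in the column convention the matrix is written in) and dividing by the resulting w = Z gives x_ndc = fx * X / Z + px and y_ndc = fy * Y / Z + py, with the third coordinate carrying 1 / Z. A numpy sketch of that algebra (not the Transform3d code, which uses a row-vector layout):

```python
import numpy as np

# Apply the perspective K above to a homogeneous point. The third row
# [0, 0, 0, 1] stores 1 in z', and the fourth row [0, 0, 1, 0] puts the
# depth Z into w', so the homogeneous divide yields x/Z, y/Z and 1/Z.
fx, fy, px, py = 2.0, 2.0, 0.1, -0.1
K = np.array([
    [fx, 0.0, px, 0.0],
    [0.0, fy, py, 0.0],
    [0.0, 0.0, 0.0, 1.0],
    [0.0, 0.0, 1.0, 0.0],
])
X, Y, Z = 1.0, 2.0, 4.0
x, y, z, w = K @ np.array([X, Y, Z, 1.0])
projected = np.array([x, y, z]) / w  # w = Z
print(projected)  # [0.6  0.9  0.25]
```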
unproject_points(xy_depth: Tensor, world_coordinates: bool = True, from_ndc: bool = False, **kwargs) Tensor[source]
Parameters:

from_ndc – If False (default), assumes xy part of input is in NDC space if self.in_ndc(), otherwise in screen space. If True, assumes xy is in NDC space even if the camera is defined in screen space.

get_principal_point(**kwargs) Tensor[source]

Return the camera’s principal point

Parameters:

**kwargs – parameters for the camera extrinsics can be passed in as keyword arguments to override the default values set in __init__.

get_ndc_camera_transform(**kwargs) Transform3d[source]

Returns the transform from camera projection space (screen or NDC) to NDC space. If the camera is defined already in NDC space, the transform is identity. For cameras defined in screen space, we adjust the principal point, which is commonly defined in image space, and scale the points to NDC space.

This transform leaves the depth unchanged.

Important: This transform assumes PyTorch3D conventions for the input points, i.e. +X left, +Y up.

is_perspective()[source]
in_ndc()[source]
pytorch3d.renderer.cameras.SfMOrthographicCameras(focal_length: float | Sequence[Tuple[float]] | Sequence[Tuple[float, float]] | Tensor = 1.0, principal_point=((0.0, 0.0),), R: Tensor = tensor([[[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]]]), T: Tensor = tensor([[0., 0., 0.]]), device: str | device = 'cpu') OrthographicCameras[source]

SfMOrthographicCameras has been DEPRECATED. Use OrthographicCameras instead. Preserving SfMOrthographicCameras for backward compatibility.

class pytorch3d.renderer.cameras.OrthographicCameras(focal_length: float | Sequence[Tuple[float]] | Sequence[Tuple[float, float]] | Tensor = 1.0, principal_point=((0.0, 0.0),), R: Tensor = tensor([[[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]]]), T: Tensor = tensor([[0., 0., 0.]]), K: Tensor | None = None, device: str | device = 'cpu', in_ndc: bool = True, image_size: List | Tuple | Tensor | None = None)[source]

Bases: CamerasBase

A class which stores a batch of parameters to generate a batch of transformation matrices using the multi-view geometry convention for an orthographic camera.

Parameters for this camera are specified in NDC if in_ndc is set to True. If parameters are specified in screen space, in_ndc must be set to False.

__init__(focal_length: float | Sequence[Tuple[float]] | Sequence[Tuple[float, float]] | Tensor = 1.0, principal_point=((0.0, 0.0),), R: Tensor = tensor([[[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]]]), T: Tensor = tensor([[0., 0., 0.]]), K: Tensor | None = None, device: str | device = 'cpu', in_ndc: bool = True, image_size: List | Tuple | Tensor | None = None) None[source]
Parameters:
  • focal_length – Focal length of the camera in world units. A tensor of shape (N, 1) or (N, 2) for square and non-square pixels respectively.

  • principal_point – xy coordinates of the principal point of the camera in pixels. A tensor of shape (N, 2).

  • in_ndc – True if camera parameters are specified in NDC. If False, then camera parameters are in screen space.

  • R – Rotation matrix of shape (N, 3, 3)

  • T – Translation matrix of shape (N, 3)

  • K – (optional) A calibration matrix of shape (N, 4, 4). If provided, focal_length, principal_point and image_size are not needed.

  • image_size – (height, width) of image size. A tensor of shape (N, 2) or list/tuple. Required for screen cameras.

  • device – torch.device or string

get_projection_transform(**kwargs) Transform3d[source]

Calculate the projection matrix using the multi-view geometry convention.

Parameters:

**kwargs – parameters for the projection can be passed in as keyword arguments to override the default values set in __init__.

Returns:

A Transform3d object with a batch of N projection transforms.

fx = focal_length[:,0]
fy = focal_length[:,1]
px = principal_point[:,0]
py = principal_point[:,1]

K = [
        [fx,   0,    0,  px],
        [0,   fy,    0,  py],
        [0,    0,    1,   0],
        [0,    0,    0,   1],
]
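Because there is no divide by depth here (w stays 1), the mapping is simply x_ndc = fx * X + px and y_ndc = fy * Y + py, with the depth passed through unchanged. A quick numpy check of the matrix above:

```python
import numpy as np

# Apply the orthographic K above to a homogeneous point (column
# convention): no perspective divide, so x and y are scaled and shifted
# linearly and the depth Z is unchanged.
fx, fy, px, py = 0.5, 0.5, 0.2, 0.0
K = np.array([
    [fx, 0.0, 0.0, px],
    [0.0, fy, 0.0, py],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])
X, Y, Z = 2.0, -4.0, 7.0
x, y, z, w = K @ np.array([X, Y, Z, 1.0])
print(x, y, z, w)  # 1.2 -2.0 7.0 1.0
```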
unproject_points(xy_depth: Tensor, world_coordinates: bool = True, from_ndc: bool = False, **kwargs) Tensor[source]
Parameters:

from_ndc – If False (default), assumes xy part of input is in NDC space if self.in_ndc(), otherwise in screen space. If True, assumes xy is in NDC space even if the camera is defined in screen space.

get_principal_point(**kwargs) Tensor[source]

Return the camera’s principal point

Parameters:

**kwargs – parameters for the camera extrinsics can be passed in as keyword arguments to override the default values set in __init__.

get_ndc_camera_transform(**kwargs) Transform3d[source]

Returns the transform from camera projection space (screen or NDC) to NDC space. If the camera is defined already in NDC space, the transform is identity. For cameras defined in screen space, we adjust the principal point, which is commonly defined in image space, and scale the points to NDC space.

Important: This transform assumes PyTorch3D conventions for the input points, i.e. +X left, +Y up.

is_perspective()[source]
in_ndc()[source]
pytorch3d.renderer.cameras.get_world_to_view_transform(R: Tensor = tensor([[[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]]]), T: Tensor = tensor([[0., 0., 0.]])) Transform3d[source]

This function returns a Transform3d representing the transformation matrix to go from world space to view space by applying a rotation and a translation.

PyTorch3D uses the same convention as Hartley & Zisserman. I.e., for camera extrinsic parameters R (rotation) and T (translation), we map a 3D point X_world in world coordinates to a point X_cam in camera coordinates with: X_cam = X_world R + T

Parameters:
  • R – (N, 3, 3) matrix representing the rotation.

  • T – (N, 3) matrix representing the translation.

Returns:

a Transform3d object which represents the composed RT transformation.
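In homogeneous, row-vector form the composed transform is a single 4x4 matrix with R in the top-left 3x3 block and T in the bottom row. A numpy sketch of the convention (not the Transform3d internals):

```python
import numpy as np

# World-to-view as one 4x4 matrix in the row-vector convention
# X_cam_h = X_world_h @ M, with R in the top-left block and T in the
# last row, matching X_cam = X_world R + T. Sketch of the convention only.
R = np.eye(3)  # e.g. a camera axis-aligned with the world
T = np.array([0.0, 0.0, 5.0])

M = np.eye(4)
M[:3, :3] = R
M[3, :3] = T

X_world = np.array([1.0, 2.0, 3.0])
X_cam_h = np.append(X_world, 1.0) @ M
print(np.allclose(X_cam_h[:3], X_world @ R + T))  # True
```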

pytorch3d.renderer.cameras.camera_position_from_spherical_angles(distance: float, elevation: float, azimuth: float, degrees: bool = True, device: str | device = 'cpu') Tensor[source]

Calculate the location of the camera based on the distance away from the target point, the elevation and azimuth angles.

Parameters:
  • distance – distance of the camera from the object.

  • elevation – angle between the vector from the object to the camera and the horizontal plane y = 0 (xz-plane).

  • azimuth – angle between the projection of the vector from the object to the camera onto the horizontal plane y = 0 and a reference vector at (0, 0, 1) on that plane.

  • degrees – bool, whether the angles are specified in degrees or radians.

  • device – str or torch.device, device for new tensors to be placed on.

The inputs distance, elevation and azimuth can each be a Python scalar, a torch scalar, or a torch tensor of shape (N) or (1). The vectors are broadcast against each other so they all have shape (N, 1).

Returns:

camera_position – (N, 3) xyz location of the camera.
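With the conventions above (+Y up and the camera on the +Z axis at zero elevation and azimuth), the position follows from standard spherical trigonometry. A pure-Python sketch of the math under that assumed convention, not the library implementation:

```python
import math

# Camera position from spherical coordinates, assuming elevation is
# measured from the xz-plane and azimuth from the +z axis, with +y up.
def spherical_to_position(distance, elevation, azimuth, degrees=True):
    if degrees:
        elevation = math.radians(elevation)
        azimuth = math.radians(azimuth)
    x = distance * math.cos(elevation) * math.sin(azimuth)
    y = distance * math.sin(elevation)
    z = distance * math.cos(elevation) * math.cos(azimuth)
    return x, y, z

# Zero elevation and azimuth places the camera on the +z axis.
print(spherical_to_position(2.0, 0.0, 0.0))  # (0.0, 0.0, 2.0)
```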

pytorch3d.renderer.cameras.look_at_rotation(camera_position, at=((0, 0, 0),), up=((0, 1, 0),), device: str | device = 'cpu') Tensor[source]

This function takes a vector ‘camera_position’ which specifies the location of the camera in world coordinates and two vectors at and up which indicate the position of the object and the up direction of the world coordinate system respectively. The object is assumed to be centered at the origin.

The output is a rotation matrix representing the transformation from world coordinates -> view coordinates.

Parameters:
  • camera_position – position of the camera in world coordinates

  • at – position of the object in world coordinates

  • up – vector specifying the up direction in the world coordinate frame.

The inputs camera_position, at and up can each be a
  • 3 element tuple/list

  • torch tensor of shape (1, 3)

  • torch tensor of shape (N, 3)

The vectors are broadcast against each other so they all have shape (N, 3).

Returns:

R – (N, 3, 3) batched rotation matrices
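The rotation can be sketched with the usual look-at construction: take the viewing direction as the z-axis, build x from the up hint via a cross product, and complete the frame. A numpy sketch under the row-vector convention X_cam = X_world @ R + T (edge-case handling may differ from the library):

```python
import numpy as np

# Look-at rotation sketch: z points from the camera toward `at`,
# x = up x z, y = z x x, and the axes are stacked as columns so the
# result fits the row-vector convention X_cam = X_world @ R + T.
def look_at_rotation_sketch(camera_position, at=(0.0, 0.0, 0.0), up=(0.0, 1.0, 0.0)):
    camera_position, at, up = (np.asarray(v, dtype=float) for v in (camera_position, at, up))
    z_axis = at - camera_position
    z_axis = z_axis / np.linalg.norm(z_axis)
    x_axis = np.cross(up, z_axis)
    x_axis = x_axis / np.linalg.norm(x_axis)
    y_axis = np.cross(z_axis, x_axis)
    return np.stack([x_axis, y_axis, z_axis]).T  # axes as columns

# A camera on the -z axis looking at the origin is already aligned
# with the world axes, so R is the identity.
R = look_at_rotation_sketch([0.0, 0.0, -1.0])
print(np.allclose(R, np.eye(3)))  # True
```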

pytorch3d.renderer.cameras.look_at_view_transform(dist: float | Sequence[float] | Tensor = 1.0, elev: float | Sequence[float] | Tensor = 0.0, azim: float | Sequence[float] | Tensor = 0.0, degrees: bool = True, eye: Sequence | Tensor | None = None, at=((0, 0, 0),), up=((0, 1, 0),), device: str | device = 'cpu') Tuple[Tensor, Tensor][source]

This function returns a rotation and translation matrix to apply the ‘Look At’ transformation from world -> view coordinates [0].

Parameters:
  • dist – distance of the camera from the object; of shape (1) or (N).

  • elev – angle in degrees or radians. This is the angle between the vector from the object to the camera and the horizontal plane y = 0 (xz-plane).

  • azim – angle in degrees or radians. The vector from the object to the camera is projected onto a horizontal plane y = 0. azim is the angle between the projected vector and a reference vector at (0, 0, 1) on the reference plane (the horizontal plane).

  • degrees – boolean flag to indicate if the elevation and azimuth angles are specified in degrees or radians.

  • eye – the position of the camera(s) in world coordinates; of shape (1, 3) or (N, 3). If eye is not None, it will override the camera position derived from dist, elev, azim.

  • at – the position of the object(s) in world coordinates; of shape (1, 3) or (N, 3).

  • up – the direction of the up axis in the world coordinate system; of shape (1, 3) or (N, 3).

Returns:

2-element tuple containing

  • R: the rotation to apply to the points to align with the camera.

  • T: the translation to apply to the points to align with the camera.

References: [0] https://www.scratchapixel.com
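Once R is known, the translation follows from requiring the camera position C to map to the view-space origin under X_cam = X_world @ R + T, i.e. T = -C @ R. A short numpy sketch of that last step (names are illustrative):

```python
import numpy as np

# Given a camera position C and a look-at rotation R, the translation
# completing X_cam = X_world @ R + T must send C to the origin:
# C @ R + T = 0  =>  T = -C @ R. The library computes this internally;
# this is just the algebra.
C = np.array([1.0, 2.0, -3.0])
R = np.eye(3)  # e.g. a camera already axis-aligned with the world

T = -C @ R
print(np.allclose(C @ R + T, np.zeros(3)))  # True
```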

pytorch3d.renderer.cameras.get_ndc_to_screen_transform(cameras, with_xyflip: bool = False, image_size: List | Tuple | Tensor | None = None) Transform3d[source]

PyTorch3D NDC to screen conversion. Conversion from PyTorch3D’s NDC space (+X left, +Y up) to screen/image space (+X right, +Y down, origin top left).

Parameters:
  • cameras – cameras object; used to read the image size when image_size is not provided.

  • with_xyflip – flips x- and y-axis if set to True.

Optional kwargs:

image_size: ((height, width),) specifying the height and width of the image. If not provided, it is read from cameras.

We represent the NDC to screen conversion as a Transform3d with projection matrix

K = [
        [s,   0,    0,  cx],
        [0,   s,    0,  cy],
        [0,   0,    1,   0],
        [0,   0,    0,   1],
]

pytorch3d.renderer.cameras.get_screen_to_ndc_transform(cameras, with_xyflip: bool = False, image_size: List | Tuple | Tensor | None = None) Transform3d[source]

Screen to PyTorch3D NDC conversion. Conversion from screen/image space (+X right, +Y down, origin top left) to PyTorch3D’s NDC space (+X left, +Y up).

Parameters:
  • cameras – cameras object; used to read the image size when image_size is not provided.

  • with_xyflip – flips x- and y-axis if set to True.

Optional kwargs:

image_size: ((height, width),) specifying the height and width of the image. If not provided, it is read from cameras.

We represent the screen to NDC conversion as a Transform3d with projection matrix

K = [
        [1/s,    0,    0,  cx/s],
        [0,    1/s,    0,  cy/s],
        [0,      0,    1,     0],
        [0,      0,    0,     1],
]

pytorch3d.renderer.cameras.try_get_projection_transform(cameras: CamerasBase, cameras_kwargs: Dict[str, Any]) Transform3d | None[source]

Try to get the projection transform from cameras and cameras_kwargs.

Parameters:
  • cameras – cameras instance, can be linear cameras or nonlinear cameras

  • cameras_kwargs – camera parameters to be passed to cameras

Returns:

If the camera implements get_projection_transform, return the projection transform; otherwise, return None