pytorch3d.renderer.fisheyecameras

class pytorch3d.renderer.fisheyecameras.FishEyeCameras(focal_length=tensor([[1.]]), principal_point=tensor([[0., 0.]]), radial_params=tensor([[0., 0., 0., 0., 0., 0.]]), tangential_params=tensor([[0., 0.]]), thin_prism_params=tensor([[0., 0., 0., 0.]]), R: Tensor = tensor([[[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]]]), T: Tensor = tensor([[0., 0., 0.]]), world_coordinates: bool = False, use_radial: bool = True, use_tangential: bool = True, use_thin_prism: bool = True, device: str | device = 'cpu', image_size: List | Tuple | Tensor | None = None)[source]

Bases: CamerasBase

A class which extends the pinhole camera by considering radial, tangential and thin-prism distortion. For the fisheye camera model, k0, k1, …, k_(n_radial-1) are polynomial coefficients used to model radial distortion. Two common types of radial distortion are barrel and pincushion distortion.

a = x / z, b = y / z, r = (a^2 + b^2)^(1/2)
th = atan(r)

[x_r] = (th + k0 * th^3 + k1 * th^5 + …) * [a/r]
[y_r]                                      [b/r]    [1]

The tangential distortion parameters are p0 and p1. The primary cause is the lens assembly not being centered over, and parallel to, the image plane.

tangentialDistortion = [(2*x_r^2 + rd^2) * p0 + 2*x_r*y_r * p1]
                       [(2*y_r^2 + rd^2) * p1 + 2*x_r*y_r * p0]    [2]

where rd^2 = x_r^2 + y_r^2.

The thin-prism distortion is modeled with coefficients s0, s1, s2, s3:

thinPrismDistortion = [s0 * rd^2 + s1 * rd^4]
                      [s2 * rd^2 + s3 * rd^4]    [3]

The projection is then

uvDistorted = [x_r] + tangentialDistortion + thinPrismDistortion
              [y_r]

proj = diag(f, f) * uvDistorted + [cu; cv]    [4]

where f is the focal length and (cu, cv) is the principal point along the x and y axes.
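
As an illustration, equations [1]-[4] can be composed in plain torch. This is a minimal sketch for a single camera; the helper name fisheye_project and the scalar parameters f, cu, cv, k, p, s are hypothetical stand-ins for the symbols above, not the library's internal implementation:

    import torch

    def fisheye_project(points, f, cu, cv, k, p, s):
        # points: (P, 3) in camera coordinates; k, p, s follow eqs. [1]-[3].
        x, y, z = points.unbind(-1)
        a, b = x / z, y / z
        r = torch.sqrt(a * a + b * b).clamp(min=1e-8)  # avoid division by zero
        th = torch.atan(r)

        # Eq. [1]: odd polynomial in theta models the radial distortion.
        th_radial = th.clone()
        for i, k_i in enumerate(k):
            th_radial = th_radial + k_i * th ** (2 * i + 3)
        x_r = th_radial * a / r
        y_r = th_radial * b / r

        rd2 = x_r**2 + y_r**2
        # Eq. [2]: tangential distortion.
        tang_x = (2 * x_r**2 + rd2) * p[0] + 2 * x_r * y_r * p[1]
        tang_y = (2 * y_r**2 + rd2) * p[1] + 2 * x_r * y_r * p[0]
        # Eq. [3]: thin-prism distortion.
        prism_x = s[0] * rd2 + s[1] * rd2**2
        prism_y = s[2] * rd2 + s[3] * rd2**2

        # Eq. [4]: apply focal length, shift by the principal point.
        u = f * (x_r + tang_x + prism_x) + cu
        v = f * (y_r + tang_y + prism_y) + cv
        return torch.stack([u, v], dim=-1)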

__init__(focal_length=tensor([[1.]]), principal_point=tensor([[0., 0.]]), radial_params=tensor([[0., 0., 0., 0., 0., 0.]]), tangential_params=tensor([[0., 0.]]), thin_prism_params=tensor([[0., 0., 0., 0.]]), R: Tensor = tensor([[[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]]]), T: Tensor = tensor([[0., 0., 0.]]), world_coordinates: bool = False, use_radial: bool = True, use_tangential: bool = True, use_thin_prism: bool = True, device: str | device = 'cpu', image_size: List | Tuple | Tensor | None = None) None[source]
Parameters:
  • focal_length – Focal length of the camera in world units. A tensor of shape (N, 1) for square pixels, where N is the number of transforms.

  • principal_point – xy coordinates of the principal point of the camera in pixels. A tensor of shape (N, 2).

  • radial_params – parameters for radial distortions. A tensor of shape (N, num_radial).

  • tangential_params – parameters for tangential distortions. A tensor of shape (N, 2).

  • thin_prism_params – parameters for thin-prism distortions. A tensor of shape (N, 4).

  • R – Rotation matrix of shape (N, 3, 3)

  • T – Translation vector of shape (N, 3)

  • world_coordinates – if True, project from world coordinates; otherwise from camera coordinates

  • use_radial – whether to apply radial distortion; defaults to True

  • use_tangential – whether to apply tangential distortion; defaults to True

  • use_thin_prism – whether to apply thin-prism distortion; defaults to True

  • device – torch.device or string

  • image_size – (height, width) of the image. A tensor of shape (N, 2), or a list/tuple. Required for cameras in screen space.
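
As a usage sketch, the constructor can be called with per-camera tensors shaped as listed above. The numeric values here are hypothetical, chosen only to illustrate the expected shapes for N = 1:

    import torch
    from pytorch3d.renderer.fisheyecameras import FishEyeCameras

    cameras = FishEyeCameras(
        focal_length=torch.tensor([[240.0]]),                           # (N, 1)
        principal_point=torch.tensor([[320.0, 240.0]]),                 # (N, 2)
        radial_params=torch.tensor([[0.1, 0.01, 0.0, 0.0, 0.0, 0.0]]),  # (N, num_radial)
        tangential_params=torch.tensor([[1e-4, 1e-4]]),                 # (N, 2)
        thin_prism_params=torch.tensor([[0.0, 0.0, 0.0, 0.0]]),         # (N, 4)
        R=torch.eye(3)[None],  # (N, 3, 3), identity rotation
        T=torch.zeros(1, 3),   # (N, 3), zero translation
        device="cpu",
    )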

check_input(points: Tensor, batch_size: int)[source]

Check if the shapes are broadcastable between points and transforms. Accepts points of shape (P, 3), (1, P, 3) or (M, P, 3). The batch_size of the transforms must be 1 when points have shape (M, P, 3), and can be 1 or N when points have shape (P, 3).

Parameters:
  • points – tensor of shape (P, 3) or (1, P, 3) or (M, P, 3)

  • batch_size – number of transforms

Returns:

A boolean indicating whether the input shapes are compatible.
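
Following the shape rules above, a sketch of compatible calls, reusing the hypothetical cameras object from the constructor example (which holds N = 1 transforms):

    import torch

    p3 = torch.rand(100, 3)      # (P, 3): batch_size may be 1 or N
    mp3 = torch.rand(4, 100, 3)  # (M, P, 3): batch_size must be 1

    cameras.check_input(p3, batch_size=1)
    cameras.check_input(mp3, batch_size=1)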

transform_points(points, eps: float | None = None, **kwargs) Tensor[source]

Transform input points from camera space to image space.

Parameters:
  • points – tensor of shape (…, 3). E.g., (P, 3), (1, P, 3) or (M, P, 3)

  • eps – tiny number to avoid division by zero

Returns:

torch.Tensor. When points have shape (P, 3) or (1, P, 3), the output has shape (N, P, 3); when points have shape (M, P, 3), the output has shape (M, P, 3), where N is the number of transforms and P the number of points.
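
Continuing the hypothetical example above (N = 1, world_coordinates=False, so inputs are camera-space points):

    import torch

    points = torch.rand(100, 3)
    points[:, 2] += 1.0  # keep z > 0, in front of the camera
    projected = cameras.transform_points(points)
    print(projected.shape)  # torch.Size([1, 100, 3]): (N, P, 3) for (P, 3) input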

unproject_points(xy_depth: Tensor, world_coordinates: bool = True, scaled_depth_input: bool = False, **kwargs) Tensor[source]

Takes points xy_depth of shape (…, 3) in the image plane of the camera and unprojects them into the reference frame of the camera. This function is the inverse of transform_points. In particular it holds that

X = unproject(project(X))

and

x = project(unproject(s*x))

Parameters:
  • xy_depth – points in the image plane of shape (…, 3). E.g., (P, 3) or (1, P, 3) or (M, P, 3)

  • world_coordinates – if True, the output is in world coordinates; if False, it is converted to camera coordinates

  • scaled_depth_input – defaults to False

Returns:

unprojected_points in the camera frame with z = 1. When points have shape (P, 3) or (1, P, 3), the output has shape (N, P, 3); when points have shape (M, P, 3), the output has shape (M, P, 3), where N is the number of transforms and P the number of points.
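
A round-trip sketch of X = unproject(project(X)), again with the hypothetical identity-pose cameras from above (so camera and world frames coincide) and assuming the third channel of the projected output carries depth:

    import torch

    X = torch.tensor([[0.1, 0.2, 1.0],
                      [-0.3, 0.05, 1.0]])  # (P, 3), already at z = 1
    proj = cameras.transform_points(X)     # (N, P, 3) with N = 1
    X_back = cameras.unproject_points(proj[0], world_coordinates=False)
    print(torch.allclose(X, X_back, atol=1e-5))  # expected True since z = 1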

in_ndc()[source]

is_perspective()[source]