# pytorch3d.renderer.camera_utils


pytorch3d.renderer.camera_utils.camera_to_eye_at_up(world_to_view_transform: Transform3d) → Tuple[Tensor, Tensor, Tensor]

Given a world to view transform, return the eye, at and up vectors which represent its position.

For example, if cam is a camera object, then after running

```python
from pytorch3d.renderer.cameras import look_at_view_transform

eye, at, up = camera_to_eye_at_up(cam.get_world_to_view_transform())
R, T = look_at_view_transform(eye=eye, at=at, up=up)
```

any other camera created from R and T will have the same world to view transform as cam.

Also, given a camera position R and T, then after running:

```python
from pytorch3d.renderer.cameras import get_world_to_view_transform, look_at_view_transform

eye, at, up = camera_to_eye_at_up(get_world_to_view_transform(R=R, T=T))
R2, T2 = look_at_view_transform(eye=eye, at=at, up=up)
```

R2 will equal R and T2 will equal T.
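This round trip can be sketched without PyTorch3D in plain NumPy. The sketch below assumes the library's row-vector extrinsics convention (X_view = X_world @ R + T); the helper names `look_at_np` and `eye_at_up_np` are invented for this illustration and are not the library's implementation.

```python
import numpy as np

def look_at_np(eye, at, up):
    # Build world-to-view extrinsics in the row-vector convention
    # X_view = X_world @ R + T.
    z = at - eye
    z = z / np.linalg.norm(z)            # viewing direction
    x = np.cross(up, z)
    x = x / np.linalg.norm(x)            # camera right
    y = np.cross(z, x)                   # true up (orthonormal)
    R = np.stack([x, y, z], axis=1)      # columns are view axes in world coords
    T = -eye @ R                         # camera centre maps to the view origin
    return R, T

def eye_at_up_np(R, T):
    # Invert: the camera centre is the world point mapped to the view origin.
    eye = -T @ R.T
    at = eye + R[:, 2]                   # one unit along the viewing direction
    up = R[:, 1]                         # view-space +y expressed in world coords
    return eye, at, up

eye0 = np.array([2.0, 1.0, 3.0])
at0 = np.array([0.0, 0.0, 0.0])
up0 = np.array([0.0, 1.0, 0.0])
R, T = look_at_np(eye0, at0, up0)
eye1, at1, up1 = eye_at_up_np(R, T)
R2, T2 = look_at_np(eye1, at1, up1)

# Rebuilding from the recovered eye/at/up reproduces the same extrinsics.
assert np.allclose(R2, R) and np.allclose(T2, T)
```

Note that `at` is recovered only up to a point somewhere along the viewing ray, which is all `look_at_view_transform` needs.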

Parameters:

world_to_view_transform – Transform3d representing the extrinsic transformation of N cameras.

Returns:

eye: FloatTensor of shape [N, 3] representing the camera centers in world space.

at: FloatTensor of shape [N, 3] representing points in world space directly in front of the cameras, e.g. the positions of objects to be viewed by the cameras.

up: FloatTensor of shape [N, 3] representing vectors in world space which, when projected onto the camera plane, point upwards.

pytorch3d.renderer.camera_utils.rotate_on_spot(R: Tensor, T: Tensor, rotation: Tensor) → Tuple[Tensor, Tensor]

Given a camera position as R and T (batched or not), and a rotation matrix (batched or not), return a new R and T representing camera position(s) in the same location but rotated on the spot by the given rotation. In particular, the new world to view rotation will be the previous one followed by the inverse of the given rotation.

For example, adding the following lines before constructing a camera will make the camera point a little to the right of where it otherwise would have been.

```python
import torch
from math import radians
from pytorch3d.transforms import axis_angle_to_matrix

angles = [0, radians(10), 0]
rotation = axis_angle_to_matrix(torch.FloatTensor(angles))
R, T = rotate_on_spot(R, T, rotation)
```

Note that if you premultiply a column vector by this rotation (see the rotation_conversions doc), it will be rotated anticlockwise when facing the -y axis. In our context, where we postmultiply row vectors to transform them, rotation will rotate the camera clockwise around the -y axis (i.e. when looking down), which is a turn to the right.

If angles was [radians(10), 0, 0], the camera would get pointed up a bit instead.

If angles was [0, 0, radians(10)], the camera would be rotated anticlockwise a bit, so the image would appear rotated clockwise from how it otherwise would have been.

If you want to translate the camera from the origin in camera coordinates, this is simple and does not need a separate function. In particular, a translation by X = [a, b, c] would cause the camera to move a units left, b units up, and c units forward. This is achieved by using T-X in place of T.
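The spot rotation and the T - X translation trick above can both be sketched in NumPy. This is a hedged illustration derived from the stated convention (X_view = X_world @ R + T, new rotation = old rotation followed by the inverse of `rotation`), not PyTorch3D's actual code; `rotate_on_spot_np` is an invented name, and the particular y-axis matrix below is one sign convention (the camera-centre invariant holds for any valid rotation).

```python
import numpy as np

def rotate_on_spot_np(R, T, rotation):
    # New world-to-view rotation: the previous one followed by the inverse
    # of `rotation` (row vectors, so inverse = transpose, postmultiplied).
    new_R = R @ rotation.T
    # Transform T the same way so that the camera centre eye = -T @ R^T
    # is left unchanged.
    new_T = T @ rotation.T
    return new_R, new_T

# A camera at [0, 0, -5] looking along +z, turned 10 degrees about the y axis.
R = np.eye(3)
T = np.array([0.0, 0.0, 5.0])
theta = np.radians(10.0)
rotation = np.array([
    [np.cos(theta), 0.0, -np.sin(theta)],
    [0.0, 1.0, 0.0],
    [np.sin(theta), 0.0, np.cos(theta)],
])
new_R, new_T = rotate_on_spot_np(R, T, rotation)

# Rotating on the spot leaves the camera centre fixed.
eye_before = -T @ R.T
eye_after = -new_T @ new_R.T
assert np.allclose(eye_before, eye_after)

# Translating in camera coordinates needs no helper: replacing T with T - X
# moves the camera by X (here one unit forward along the viewing direction).
X = np.array([0.0, 0.0, 1.0])
eye_moved = -(T - X) @ R.T
assert np.allclose(eye_moved, np.array([0.0, 0.0, -4.0]))
```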

Parameters:
• R – FloatTensor of shape [3, 3] or [N, 3, 3]

• T – FloatTensor of shape [3] or [N, 3]

• rotation – FloatTensor of shape [3, 3] or [n, 3, 3], where if neither n nor N is 1, then n and N must be equal.

Returns:

R: FloatTensor of shape [max(N, n), 3, 3]

T: FloatTensor of shape [max(N, n), 3]

pytorch3d.renderer.camera_utils.join_cameras_as_batch(cameras_list: Sequence[CamerasBase]) → CamerasBase

Create a batched cameras object by concatenating a list of input cameras objects. All the tensor attributes will be joined along the batch dimension.

Parameters:

cameras_list – List of camera classes all of the same type and on the same device. Each represents one or more cameras.

Returns:

cameras: single batched cameras object of the same type as all the objects in the input list.
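The joining logic can be sketched in NumPy with plain dicts standing in for camera objects; the attribute names below are illustrative, not PyTorch3D's internals.

```python
import numpy as np

# Hypothetical stand-ins for two single-camera objects: each camera's tensor
# attributes keyed by name, with a leading batch dimension of 1.
cam_a = {"R": np.eye(3)[None], "T": np.zeros((1, 3))}
cam_b = {"R": np.eye(3)[None], "T": np.ones((1, 3))}

# Join attribute by attribute along the batch dimension.
batched = {k: np.concatenate([cam_a[k], cam_b[k]], axis=0) for k in cam_a}

assert batched["R"].shape == (2, 3, 3)
assert batched["T"].shape == (2, 3)
```

As in the real function, this only makes sense when every input has the same type (here, the same attribute names and per-camera shapes).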