pytorch3d.renderer.compositing
- pytorch3d.renderer.compositing.alpha_composite(pointsidx, alphas, pt_clds) → Tensor
Composite features within a z-buffer using alpha compositing. Given a z-buffer with corresponding features and weights, these values are accumulated according to their weights such that features nearer in depth contribute more to the final feature than ones further away.
- Concretely this means:

  weighted_fs[b,c,i,j] = sum_k cum_alpha_k * features[c, pointsidx[b,k,i,j]]

  cum_alpha_k = alphas[b,k,i,j] * prod_{l=0..k-1} (1 - alphas[b,l,i,j])
- Parameters:
pt_clds – Tensor of shape (N, C, P) giving the features of each point (can use RGB for example).
alphas – float32 Tensor of shape (N, points_per_pixel, image_size, image_size) giving the weight of each point in the z-buffer. Values should be in the interval [0, 1].
pointsidx – int32 Tensor of shape (N, points_per_pixel, image_size, image_size) giving the indices of the nearest points at each pixel, sorted in z-order. Concretely pointsidx[n, k, y, x] = p means that features[n, :, p] is the feature of the kth closest point (along the z-direction) to pixel (y, x) in batch element n. This is weighted by alphas[n, k, y, x].
- Returns:
  Combined features – Tensor of shape (N, C, image_size, image_size) giving the accumulated features at each pixel.
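The accumulation above can be sketched in pure NumPy. This is only a reference implementation of the documented formula, written here for illustration (the function name `alpha_composite_ref` is made up; the real pytorch3d op is a fused C++/CUDA kernel operating on torch Tensors):

```python
import numpy as np

def alpha_composite_ref(pointsidx, alphas, pt_clds):
    """Reference version of the documented alpha-compositing math.

    pointsidx: (N, K, H, W) int indices into the point dimension P,
               sorted near-to-far in z.
    alphas:    (N, K, H, W) weights in [0, 1].
    pt_clds:   (N, C, P) per-point features.
    Returns:   (N, C, H, W) composited features.
    """
    N, K, H, W = alphas.shape
    C = pt_clds.shape[1]
    out = np.zeros((N, C, H, W))
    for n in range(N):
        for i in range(H):
            for j in range(W):
                # transmittance accumulates prod_{l<k} (1 - alpha_l),
                # so nearer points occlude the ones behind them.
                transmittance = 1.0
                for k in range(K):
                    a = alphas[n, k, i, j]
                    p = pointsidx[n, k, i, j]
                    out[n, :, i, j] += transmittance * a * pt_clds[n, :, p]
                    transmittance *= 1.0 - a
    return out
```

For example, two points with alpha 0.5 each at one pixel contribute with cumulative weights 0.5 and 0.5 * 0.5 = 0.25, so the nearer point dominates.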
- pytorch3d.renderer.compositing.norm_weighted_sum(pointsidx, alphas, pt_clds) → Tensor
Composite features within a z-buffer using a normalized weighted sum. Given a z-buffer with corresponding features and weights, the features are accumulated according to their weights; depth ordering is ignored, and the weights are normalized by their sum at each pixel.
- Concretely this means:

  weighted_fs[b,c,i,j] = sum_k alphas[b,k,i,j] * features[c, pointsidx[b,k,i,j]] / sum_k alphas[b,k,i,j]
- Parameters:
pt_clds – Packed feature tensor of shape (C, P) giving the features of each point (can use RGB for example).
alphas – float32 Tensor of shape (N, points_per_pixel, image_size, image_size) giving the weight of each point in the z-buffer. Values should be in the interval [0, 1].
pointsidx – int32 Tensor of shape (N, points_per_pixel, image_size, image_size) giving the indices of the nearest points at each pixel, sorted in z-order. Concretely pointsidx[n, k, y, x] = p means that features[:, p] is the feature of the kth closest point (along the z-direction) to pixel (y, x) in batch element n. This is weighted by alphas[n, k, y, x].
- Returns:
  Combined features – Tensor of shape (N, C, image_size, image_size) giving the accumulated features at each pixel.
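The normalized weighted sum can likewise be sketched in NumPy. This is a vectorized reference of the documented formula, not the library's actual kernel; `norm_weighted_sum_ref` and the epsilon guard are assumptions made here for illustration:

```python
import numpy as np

def norm_weighted_sum_ref(pointsidx, alphas, pt_clds):
    """Reference version of the documented normalized weighted sum.

    pointsidx: (N, K, H, W) int indices into the point dimension P.
    alphas:    (N, K, H, W) weights in [0, 1].
    pt_clds:   packed (C, P) per-point features.
    Returns:   (N, C, H, W) composited features.
    """
    eps = 1e-10  # guard against division by zero when all weights are 0
    # Gather each pixel's K point features: (C, N, K, H, W).
    gathered = pt_clds[:, pointsidx]
    # Weighted sum over the K points in the z-buffer: (C, N, H, W).
    num = (alphas[None] * gathered).sum(axis=2)
    denom = alphas.sum(axis=1) + eps  # (N, H, W)
    return (num / denom[None]).transpose(1, 0, 2, 3)
```

For example, weights 0.5 and 0.25 at one pixel normalize to 2/3 and 1/3, so the result is a convex combination of the two point features regardless of depth order.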
- pytorch3d.renderer.compositing.weighted_sum(pointsidx, alphas, pt_clds) → Tensor
Composite features within a z-buffer using a (non-normalized) weighted sum. Given a z-buffer with corresponding features and weights, the features are accumulated according to their weights; depth ordering is ignored and no normalization is applied.
- Concretely this means:

  weighted_fs[b,c,i,j] = sum_k alphas[b,k,i,j] * features[c, pointsidx[b,k,i,j]]
- Parameters:
pt_clds – Packed Tensor of shape (C, P) giving the features of each point (can use RGB for example).
alphas – float32 Tensor of shape (N, points_per_pixel, image_size, image_size) giving the weight of each point in the z-buffer. Values should be in the interval [0, 1].
pointsidx – int32 Tensor of shape (N, points_per_pixel, image_size, image_size) giving the indices of the nearest points at each pixel, sorted in z-order. Concretely pointsidx[n, k, y, x] = p means that features[:, p] is the feature of the kth closest point (along the z-direction) to pixel (y, x) in batch element n. This is weighted by alphas[n, k, y, x].
- Returns:
  Combined features – Tensor of shape (N, C, image_size, image_size) giving the accumulated features at each pixel.
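The plain weighted sum is the same gather-and-sum as the normalized variant, minus the division by the total weight. Again a NumPy reference sketch with a made-up name (`weighted_sum_ref`), not the library's fused kernel:

```python
import numpy as np

def weighted_sum_ref(pointsidx, alphas, pt_clds):
    """Reference version of the documented (non-normalized) weighted sum.

    pointsidx: (N, K, H, W) int indices into the point dimension P.
    alphas:    (N, K, H, W) weights in [0, 1].
    pt_clds:   packed (C, P) per-point features.
    Returns:   (N, C, H, W) composited features.
    """
    gathered = pt_clds[:, pointsidx]             # (C, N, K, H, W)
    out = (alphas[None] * gathered).sum(axis=2)  # sum over K: (C, N, H, W)
    return out.transpose(1, 0, 2, 3)             # (N, C, H, W)
```

Note that because there is no normalization, pixels whose weights sum to less than 1 produce darker (smaller-magnitude) features than the same inputs under norm_weighted_sum.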