pytorch3d.renderer.mesh.shader


class pytorch3d.renderer.mesh.shader.ShaderBase(device: str | device = 'cpu', cameras: TensorProperties | None = None, lights: TensorProperties | None = None, materials: Materials | None = None, blend_params: BlendParams | None = None)[source]

Bases: Module

to(device: str | device)[source]
class pytorch3d.renderer.mesh.shader.HardPhongShader(device: str | device = 'cpu', cameras: TensorProperties | None = None, lights: TensorProperties | None = None, materials: Materials | None = None, blend_params: BlendParams | None = None)[source]

Bases: ShaderBase

Per pixel lighting - the lighting model is applied using the interpolated coordinates and normals for each pixel. The blending function hard assigns the color of the closest face for each pixel.

To use the default values, simply initialize the shader with the desired device, e.g.

shader = HardPhongShader(device=torch.device("cuda:0"))

forward(fragments: Fragments, meshes: Meshes, **kwargs) → Tensor[source]
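
As a hedged sketch of typical usage (not part of the API above), a shader is normally passed to a MeshRenderer together with a MeshRasterizer; the meshes variable below is assumed to be an existing textured Meshes object.

import torch
from pytorch3d.renderer import (
    FoVPerspectiveCameras, RasterizationSettings, MeshRasterizer,
    MeshRenderer, PointLights, HardPhongShader,
)

device = torch.device("cuda:0")
cameras = FoVPerspectiveCameras(device=device)
lights = PointLights(device=device, location=[[0.0, 0.0, 3.0]])
raster_settings = RasterizationSettings(image_size=256)

# The shader consumes the rasterizer's Fragments plus the Meshes and returns images.
renderer = MeshRenderer(
    rasterizer=MeshRasterizer(cameras=cameras, raster_settings=raster_settings),
    shader=HardPhongShader(device=device, cameras=cameras, lights=lights),
)
# images = renderer(meshes)  # (N, H, W, 4) RGBA output
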
class pytorch3d.renderer.mesh.shader.SoftPhongShader(device: str | device = 'cpu', cameras: TensorProperties | None = None, lights: TensorProperties | None = None, materials: Materials | None = None, blend_params: BlendParams | None = None)[source]

Bases: ShaderBase

Per pixel lighting - the lighting model is applied using the interpolated coordinates and normals for each pixel. The blending function returns the soft aggregated color using all the faces per pixel.

To use the default values, simply initialize the shader with the desired device, e.g.

shader = SoftPhongShader(device=torch.device("cuda:0"))

forward(fragments: Fragments, meshes: Meshes, **kwargs) → Tensor[source]
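
For the soft blending to have an effect, the rasterizer is typically configured with a matching blur_radius and several faces per pixel. A hedged sketch, with illustrative sigma/gamma values, reusing device, cameras and lights from the HardPhongShader example above:

import numpy as np
from pytorch3d.renderer import BlendParams, RasterizationSettings, SoftPhongShader

blend_params = BlendParams(sigma=1e-4, gamma=1e-4)
raster_settings = RasterizationSettings(
    image_size=256,
    blur_radius=np.log(1.0 / 1e-4 - 1.0) * blend_params.sigma,  # soften face boundaries
    faces_per_pixel=50,  # aggregate over the top 50 faces per pixel
)
shader = SoftPhongShader(device=device, cameras=cameras, lights=lights, blend_params=blend_params)
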
class pytorch3d.renderer.mesh.shader.HardGouraudShader(device: str | device = 'cpu', cameras: TensorProperties | None = None, lights: TensorProperties | None = None, materials: Materials | None = None, blend_params: BlendParams | None = None)[source]

Bases: ShaderBase

Per vertex lighting - the lighting model is applied to the vertex colors and the colors are then interpolated using the barycentric coordinates to obtain the colors for each pixel. The blending function hard assigns the color of the closest face for each pixel.

To use the default values, simply initialize the shader with the desired device, e.g.

shader = HardGouraudShader(device=torch.device("cuda:0"))

forward(fragments: Fragments, meshes: Meshes, **kwargs) → Tensor[source]
class pytorch3d.renderer.mesh.shader.SoftGouraudShader(device: str | device = 'cpu', cameras: TensorProperties | None = None, lights: TensorProperties | None = None, materials: Materials | None = None, blend_params: BlendParams | None = None)[source]

Bases: ShaderBase

Per vertex lighting - the lighting model is applied to the vertex colors and the colors are then interpolated using the barycentric coordinates to obtain the colors for each pixel. The blending function returns the soft aggregated color using all the faces per pixel.

To use the default values, simply initialize the shader with the desired device, e.g.

shader = SoftGouraudShader(device=torch.device("cuda:0"))

forward(fragments: Fragments, meshes: Meshes, **kwargs) → Tensor[source]
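
Since Gouraud shading lights the vertices and interpolates the resulting colors, it is normally used with per-vertex textures. A hedged sketch, where verts (a (V, 3) float tensor) and faces (an (F, 3) long tensor) are assumed to exist already:

import torch
from pytorch3d.structures import Meshes
from pytorch3d.renderer import TexturesVertex, SoftGouraudShader

device = torch.device("cuda:0")
verts_rgb = torch.ones_like(verts)[None]  # (1, V, 3) white per-vertex colors
meshes = Meshes(verts=[verts], faces=[faces], textures=TexturesVertex(verts_features=verts_rgb))
shader = SoftGouraudShader(device=device)
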
pytorch3d.renderer.mesh.shader.TexturedSoftPhongShader(device: str | device = 'cpu', cameras: TensorProperties | None = None, lights: TensorProperties | None = None, materials: Materials | None = None, blend_params: BlendParams | None = None) → SoftPhongShader[source]

The TexturedSoftPhongShader class has been DEPRECATED. Use SoftPhongShader instead. TexturedSoftPhongShader is preserved as a function for backwards compatibility.

class pytorch3d.renderer.mesh.shader.HardFlatShader(device: str | device = 'cpu', cameras: TensorProperties | None = None, lights: TensorProperties | None = None, materials: Materials | None = None, blend_params: BlendParams | None = None)[source]

Bases: ShaderBase

Per face lighting - the lighting model is applied using the average face position and the face normal. The blending function hard assigns the color of the closest face for each pixel.

To use the default values, simply initialize the shader with the desired device, e.g.

shader = HardFlatShader(device=torch.device("cuda:0"))

forward(fragments: Fragments, meshes: Meshes, **kwargs) → Tensor[source]
class pytorch3d.renderer.mesh.shader.SoftSilhouetteShader(blend_params: BlendParams | None = None)[source]

Bases: Module

Calculate the silhouette by blending the top K faces for each pixel based on the 2D Euclidean distance from the center of the pixel to the mesh face.

Use this shader for generating silhouettes similar to SoftRasterizer [0].

Note

To be consistent with SoftRasterizer, initialize the RasterizationSettings for the rasterizer with blur_radius = np.log(1. / 1e-4 - 1.) * blend_params.sigma

[0] Liu et al., ‘Soft Rasterizer: A Differentiable Renderer for Image-based 3D Reasoning’, ICCV 2019

forward(fragments: Fragments, meshes: Meshes, **kwargs) → Tensor[source]

Only the silhouette is rendered, so the RGB values can be set to ones; there is no need for lighting or texturing.
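
A hedged sketch of a silhouette renderer following the note above, with illustrative sigma and faces_per_pixel values; meshes is assumed to be an existing Meshes object:

import numpy as np
import torch
from pytorch3d.renderer import (
    FoVPerspectiveCameras, RasterizationSettings, MeshRasterizer,
    MeshRenderer, BlendParams, SoftSilhouetteShader,
)

device = torch.device("cuda:0")
blend_params = BlendParams(sigma=1e-4, gamma=1e-4)
raster_settings = RasterizationSettings(
    image_size=256,
    blur_radius=np.log(1.0 / 1e-4 - 1.0) * blend_params.sigma,  # as recommended in the note
    faces_per_pixel=100,  # blend the top K faces per pixel
)
silhouette_renderer = MeshRenderer(
    rasterizer=MeshRasterizer(cameras=FoVPerspectiveCameras(device=device), raster_settings=raster_settings),
    shader=SoftSilhouetteShader(blend_params=blend_params),
)
# silhouette = silhouette_renderer(meshes)[..., 3]  # the alpha channel holds the silhouette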

class pytorch3d.renderer.mesh.shader.SplatterPhongShader(**kwargs)[source]

Bases: ShaderBase

Per pixel lighting - the lighting model is applied using the interpolated coordinates and normals for each pixel. The blending function returns the color aggregated using splats from surrounding pixels (see [0]).

To use the default values, simply initialize the shader with the desired device, e.g.

shader = SplatterPhongShader(device=torch.device("cuda:0"))

[0] Cole, F. et al., “Differentiable Surface Rendering via Non-differentiable Sampling”.

to(device: str | device)[source]
forward(fragments: Fragments, meshes: Meshes, **kwargs) → Tensor[source]
check_blend_params(blend_params)[source]
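
As with the Phong shaders above, this shader is used inside a MeshRenderer; a minimal hedged sketch, reusing device, cameras, lights and raster_settings from the earlier examples:

renderer = MeshRenderer(
    rasterizer=MeshRasterizer(cameras=cameras, raster_settings=raster_settings),
    shader=SplatterPhongShader(device=device, cameras=cameras, lights=lights),
)
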
class pytorch3d.renderer.mesh.shader.HardDepthShader(device: str | device = 'cpu', cameras: TensorProperties | None = None, lights: TensorProperties | None = None, materials: Materials | None = None, blend_params: BlendParams | None = None)[source]

Bases: ShaderBase

Renders the Z distances of the closest face for each pixel. If no face is found, it returns the zfar value of the camera.

Output from this shader is [N, H, W, 1] since it’s only depth.

To use the default values, simply initialize the shader with the desired device, e.g.

shader = HardDepthShader(device=torch.device("cuda:0"))

forward(fragments: Fragments, meshes: Meshes, **kwargs) → Tensor[source]
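
A hedged sketch of rendering a depth map with this shader; meshes is assumed to be an existing Meshes object on the same device:

import torch
from pytorch3d.renderer import (
    FoVPerspectiveCameras, RasterizationSettings, MeshRasterizer,
    MeshRenderer, HardDepthShader,
)

device = torch.device("cuda:0")
cameras = FoVPerspectiveCameras(device=device)
depth_renderer = MeshRenderer(
    rasterizer=MeshRasterizer(cameras=cameras, raster_settings=RasterizationSettings(image_size=256)),
    shader=HardDepthShader(device=device, cameras=cameras),
)
# depth = depth_renderer(meshes)  # (N, H, W, 1); pixels with no face take the camera's zfar value
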
class pytorch3d.renderer.mesh.shader.SoftDepthShader(device: str | device = 'cpu', cameras: TensorProperties | None = None, lights: TensorProperties | None = None, materials: Materials | None = None, blend_params: BlendParams | None = None)[source]

Bases: ShaderBase

Renders the Z distances as a soft aggregate of the distances of the faces at each pixel, weighted based on the point distance. If no face is found, it returns the zfar value of the camera.

Output from this shader is [N, H, W, 1] since it’s only depth.

To use the default values, simply initialize the shader with the desired device, e.g.

shader = SoftDepthShader(device=torch.device("cuda:0"))

forward(fragments: Fragments, meshes: Meshes, **kwargs) → Tensor[source]