blenderproc.python.camera.CameraProjection module
Collection of camera projection helper functions.
- blenderproc.python.camera.CameraProjection.depth_via_raytracing(bvh_tree, frame=None, return_dist=False)[source]
Computes a depth image using raytracing.
All pixels whose rays do not hit any object are set to inf.
- Parameters:
  - bvh_tree (BVHTree) – The BVH tree to use for raytracing.
  - frame (Optional[int]) – The frame number whose assigned camera pose should be used. If None is given, the current frame is used.
  - return_dist (bool) – If True, a distance image instead of a depth image is returned.
- Return type: ndarray
- Returns: The depth image with shape [H, W].
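A minimal usage sketch, assuming the function is re-exported as bproc.camera.depth_via_raytracing (as BlenderProc's public API usually does) and that bproc.object.create_bvh_tree_multi_objects is used to build the BVH tree; the scene file path is hypothetical:

```python
import blenderproc as bproc
import numpy as np

bproc.init()
objs = bproc.loader.load_obj("scene.obj")  # hypothetical scene file

# Register a camera pose for frame 0 (identity pose, purely for illustration)
bproc.camera.add_camera_pose(np.eye(4))

# Build a BVH tree over the loaded meshes and raytrace a depth image for frame 0
bvh_tree = bproc.object.create_bvh_tree_multi_objects(objs)
depth = bproc.camera.depth_via_raytracing(bvh_tree, frame=0)

# Rays that hit no object produce inf entries
print(depth.shape, np.isinf(depth).sum())
```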
- blenderproc.python.camera.CameraProjection.pointcloud_from_depth(depth, frame=None, depth_cut_off=1000000.0)[source]
Compute a point cloud from a given depth image.
- Parameters:
  - depth (ndarray) – The depth image with shape [H, W].
  - frame (Optional[int]) – The frame number whose assigned camera pose should be used. If None is given, the current frame is used.
  - depth_cut_off (float) – All points that correspond to depth values bigger than this threshold will be set to NaN.
- Return type: ndarray
- Returns: The point cloud with shape [H, W, 3].
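A short sketch of turning such a depth image into a filtered point cloud; it assumes the same bproc.camera alias and reuses the depth image from the example above:

```python
import blenderproc as bproc
import numpy as np

# `depth` is assumed to be a [H, W] depth image raytraced or rendered for frame 0
pointcloud = bproc.camera.pointcloud_from_depth(depth, frame=0)  # shape [H, W, 3]

# Entries beyond depth_cut_off come back as NaN;
# keep only the valid points as a flat [M, 3] array
valid = ~np.isnan(pointcloud).any(axis=-1)
points = pointcloud.reshape(-1, 3)[valid.reshape(-1)]
```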
- blenderproc.python.camera.CameraProjection.project_points(points, frame=None)[source]
Project 3D points into the 2D camera image.
- Parameters:
  - points (ndarray) – An array of 3D points with shape [N, 3].
  - frame (Optional[int]) – The frame number whose assigned camera pose should be used. If None is given, the current frame is used.
- Return type: ndarray
- Returns: The projected 2D points with shape [N, 2].
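A sketch projecting a few hypothetical 3D points into the image of frame 0, again assuming the bproc.camera alias and a previously configured camera pose:

```python
import blenderproc as bproc
import numpy as np

# Three hypothetical points in world space
points_3d = np.array([[0.0, 0.0, 0.0],
                      [1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0]])

# Pixel coordinates in the image of frame 0, shape [3, 2]
points_2d = bproc.camera.project_points(points_3d, frame=0)
```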
- blenderproc.python.camera.CameraProjection.unproject_points(points_2d, depth, frame=None, depth_cut_off=1000000.0)[source]
Unproject 2D points into 3D.
- Parameters:
  - points_2d (ndarray) – An array of N 2D points with shape [N, 2].
  - depth (ndarray) – An array of depth values corresponding to each 2D point, shape [N].
  - frame (Optional[int]) – The frame number whose assigned camera pose should be used. If None is given, the current frame is used.
  - depth_cut_off (float) – All points that correspond to depth values bigger than this threshold will be set to NaN.
- Return type: ndarray
- Returns: The unprojected 3D points with shape [N, 3].
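As a round-trip sketch, the projected pixels from the previous example can be unprojected again using depth values looked up from a depth image. Names carry over from the sketches above; the nearest-pixel lookup and the (x, y) pixel ordering are simplifying assumptions:

```python
import blenderproc as bproc
import numpy as np

# Look up a depth value for each projected pixel,
# assuming points_2d holds (x, y) coordinates and clipping to the image bounds
cols = np.clip(np.round(points_2d[:, 0]).astype(int), 0, depth.shape[1] - 1)
rows = np.clip(np.round(points_2d[:, 1]).astype(int), 0, depth.shape[0] - 1)
point_depths = depth[rows, cols]

# Unproject back to 3D; this approximately recovers the original points,
# up to the nearest-pixel rounding above
points_3d_rec = bproc.camera.unproject_points(points_2d, point_depths, frame=0)
```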