blenderproc.camera package

Module contents

blenderproc.camera.add_camera_pose(cam2world_matrix, frame=None)

Sets a new camera pose for a new or an existing frame

Parameters:
  • cam2world_matrix (Union[ndarray, Matrix]) – The transformation matrix from camera to world coordinate system

  • frame (Optional[int]) – Optional, the frame to set the camera pose to.

Return type:

int

Returns:

The frame to which the pose has been set.
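A minimal sketch (NumPy only) of assembling the cam2world_matrix this function expects. The pose values are made up for illustration, and the actual bproc calls are shown as comments since they require a running Blender session:

```python
import numpy as np

# Hypothetical pose: camera 5 units up the world Z axis. A Blender camera
# looks along its local -Z axis, so with the identity rotation it looks
# straight down towards the world origin.
rotation = np.eye(3)
location = np.array([0.0, 0.0, 5.0])

# Assemble the 4x4 cam2world matrix expected by add_camera_pose.
cam2world = np.eye(4)
cam2world[:3, :3] = rotation
cam2world[:3, 3] = location

# Inside a BlenderProc script this matrix would be registered as a key frame:
#   frame = bproc.camera.add_camera_pose(cam2world)       # new frame
#   bproc.camera.add_camera_pose(cam2world, frame=frame)  # overwrite that frame
```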

blenderproc.camera.add_depth_of_field(focal_point_obj, fstop_value, aperture_blades=0, aperture_rotation=0.0, aperture_ratio=1.0, focal_distance=-1.0)

Adds depth of field to the given camera. The focal point is set via focal_point_obj; ideally an empty instance is used for this, see bproc.object.create_empty() on how to create one. A higher fstop value makes the resulting image look sharper, while a lower value decreases the sharpness.

Check the documentation on https://docs.blender.org/manual/en/latest/render/cameras.html#depth-of-field

Parameters:
  • focal_point_obj (Entity) – The used focal point, if the object moves the focal point will move with it

  • fstop_value (float) – A higher fstop value will increase the sharpness of the scene

  • aperture_blades (int) – Number of blades used in the camera

  • aperture_rotation (float) – Rotation of the blades in the camera in radians

  • aperture_ratio (float) – Ratio of the anamorphic bokeh effect; values below 1.0 give a horizontal bokeh, values above 1.0 a vertical one.

  • focal_distance (float) – Sets the distance to the focal point when no focal_point_obj is given.

blenderproc.camera.check_novel_pose(cam2world_matrix, existing_poses, check_pose_novelty_rot, check_pose_novelty_translation, min_var_diff_rot=-1, min_var_diff_translation=-1)

Checks if a newly sampled pose is novel based on variance checks.

Parameters:
  • cam2world_matrix (Union[Matrix, ndarray]) – The world matrix which describes the camera pose to check.

  • existing_poses (List[Union[Matrix, ndarray]]) – The list of already sampled valid poses.

  • check_pose_novelty_rot (bool) – Checks that a sampled new pose is novel with respect to the rotation component.

  • check_pose_novelty_translation (bool) – Checks that a sampled new pose is novel with respect to the translation component.

  • min_var_diff_rot (float) – Considers a pose novel only if it increases the variance of the rotation component across all sampled poses by at least this percentage. If set to -1, any increase in variance is sufficient.

  • min_var_diff_translation (float) – Same as min_var_diff_rot, but for the translation component. If set to -1, any increase in variance is sufficient.

Returns:

True, if the given pose is novel.
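The translation half of the novelty criterion can be sketched in plain NumPy; translation_variance_increases is a hypothetical helper that mirrors the documented behaviour, not the library implementation:

```python
import numpy as np

def translation_variance_increases(new_pose, existing_poses, min_var_diff=-1.0):
    """Illustrative stand-in for the translation check of check_novel_pose.

    A pose counts as novel if adding its translation increases the variance
    of all sampled translations (by at least min_var_diff percent, if given).
    """
    translations = [np.asarray(p)[:3, 3] for p in existing_poses]
    if not translations:
        return True
    var_before = np.var(np.stack(translations), axis=0).sum()
    extended = translations + [np.asarray(new_pose)[:3, 3]]
    var_after = np.var(np.stack(extended), axis=0).sum()
    if min_var_diff < 0:
        return var_after > var_before  # any increase counts
    return var_after >= var_before * (1.0 + min_var_diff / 100.0)

base = np.eye(4)                      # pose at the origin
far = np.eye(4); far[:3, 3] = [5.0, 0.0, 0.0]  # pose 5 units away
```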

blenderproc.camera.decrease_interest_score(interest_score, min_interest_score, interest_score_step)

Decreases the interest score within the given interval

Parameters:
  • interest_score (float) – The current interest score.

  • min_interest_score (float) – The minimum desired interest scores.

  • interest_score_step (float) – The step size in which the interest score should be reduced.

Returns:

The new interest score, and a bool which is True if the minimum has not been reached yet.

blenderproc.camera.depth_via_raytracing(bvh_tree, frame=None, return_dist=False)

Computes a depth image using raytracing.

All pixels that correspond to rays which do not hit any object are set to inf.

Parameters:
  • bvh_tree (BVHTree) – The BVH tree to use for raytracing.

  • frame (Optional[int]) – The frame number whose assigned camera pose should be used. If None is given, the current frame is used.

  • return_dist (bool) – If True, a distance image instead of a depth image is returned.

Return type:

ndarray

Returns:

The depth image with shape [H, W].
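The difference between a depth image and a distance image (return_dist=True) follows from the pinhole model: depth is measured along the optical axis, distance along the ray. A NumPy sketch with made-up intrinsics (K and the depth values are illustrative, not read from Blender):

```python
import numpy as np

# Hypothetical intrinsics and a flat 64x64 depth image.
K = np.array([[500.0, 0.0, 32.0],
              [0.0, 500.0, 32.0],
              [0.0, 0.0, 1.0]])
depth = np.full((64, 64), 2.0)  # every pixel 2 m along the optical axis

# Distance is the Euclidean ray length instead of the axial depth:
#   dist(u, v) = depth(u, v) * || K^-1 [u, v, 1]^T ||
us, vs = np.meshgrid(np.arange(64), np.arange(64))
rays = np.linalg.inv(K) @ np.stack([us.ravel(), vs.ravel(), np.ones(64 * 64)])
dist = (depth.ravel() * np.linalg.norm(rays, axis=0)).reshape(64, 64)
# At the principal point the ray coincides with the optical axis,
# so depth and distance agree; everywhere else distance is larger.
```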

blenderproc.camera.get_camera_frustum(clip_start=None, clip_end=None, frame=None)

Get the current camera frustum as eight 3D coordinates.

Parameters:
  • clip_start (Optional[float]) – The distance between the camera pose and the near clipping plane.

  • clip_end (Optional[float]) – The distance between the camera pose and the far clipping plane.

  • frame (Optional[int]) – The frame number whose assigned camera pose should be used. If None is given, the current frame is used.

Return type:

ndarray

Returns:

The eight 3D coordinates of the camera frustum

blenderproc.camera.get_camera_frustum_as_object(clip_start=None, clip_end=None, frame=None)

Get the current camera frustum as a deformed cube

Parameters:
  • clip_start (Optional[float]) – The distance between the camera pose and the near clipping plane.

  • clip_end (Optional[float]) – The distance between the camera pose and the far clipping plane.

  • frame (Optional[int]) – The frame number whose assigned camera pose should be used. If None is given, the current frame is used.

Return type:

MeshObject

Returns:

The newly created MeshObject

blenderproc.camera.get_camera_pose(frame=None)

Returns the camera pose in the form of a 4x4 cam2world transformation matrix.

Parameters:

frame (Optional[int]) – The frame number whose assigned camera pose should be returned. If None is given, the current frame is used.

Return type:

ndarray

Returns:

The 4x4 cam2world transformation matrix.

blenderproc.camera.get_fov()

Returns the horizontal and vertical FOV of the current camera.

Blender also offers the current FOV as direct attributes of the camera object; however, at least the vertical FOV there heavily differs from how it would usually be defined.

Return type:

Tuple[float, float]

Returns:

The horizontal and vertical FOV in radians.
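For a pinhole camera the FOV follows directly from the focal length in pixels; a sketch with made-up intrinsics (get_fov() itself reads everything from the active Blender camera):

```python
import numpy as np

# Hypothetical intrinsics; illustrative values only.
fx, fy = 800.0, 800.0     # focal lengths in pixels
width, height = 1280, 720  # image resolution

# Standard pinhole relation between focal length in pixels and FOV:
#   fov = 2 * atan(extent / (2 * focal_length_px))
fov_x = 2.0 * np.arctan(width / (2.0 * fx))
fov_y = 2.0 * np.arctan(height / (2.0 * fy))
```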

blenderproc.camera.get_intrinsics_as_K_matrix()

Returns the current set intrinsics in the form of a K matrix.

This is basically the inverse of the set_intrinsics_from_K_matrix() function.

Return type:

ndarray

Returns:

The 3x3 K matrix

blenderproc.camera.get_sensor_size(cam)

Returns the sensor size in millimeters based on the configured sensor_fit.

Parameters:

cam (Camera) – The camera object.

Return type:

float

Returns:

The sensor size in millimeters.

blenderproc.camera.get_view_fac_in_px(cam, pixel_aspect_x, pixel_aspect_y, resolution_x_in_px, resolution_y_in_px)

Returns the view factor in pixels, i.e. the image resolution along the sensor-fit axis scaled by the pixel aspect ratio.

Parameters:
  • cam (Camera) – The camera object.

  • pixel_aspect_x (float) – The pixel aspect ratio along x.

  • pixel_aspect_y (float) – The pixel aspect ratio along y.

  • resolution_x_in_px (int) – The image width in pixels.

  • resolution_y_in_px (int) – The image height in pixels.

Return type:

int

Returns:

The view factor in pixels.

blenderproc.camera.is_point_inside_camera_frustum(point, clip_start=None, clip_end=None, frame=None)

Checks if a given 3D point lies inside the camera frustum.

Parameters:
  • point (Union[List[float], Vector, ndarray]) – The point, which should be checked

  • clip_start (Optional[float]) – The distance between the camera pose and the near clipping plane.

  • clip_end (Optional[float]) – The distance between the camera pose and the far clipping plane.

  • frame (Optional[int]) – The frame number whose assigned camera pose should be used. If None is give, the current frame is used.

Return type:

bool

Returns:

True, if the point lies inside the camera frustum, else False
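A frustum test of this kind can be sketched in plain NumPy: transform the point into camera space, check the clip range, then project with K and check the image bounds. point_in_frustum is an illustrative stand-in with made-up intrinsics, not the library implementation:

```python
import numpy as np

def point_in_frustum(point, cam2world, K, width, height, clip_start, clip_end):
    """Illustrative frustum containment check (NOT the library code)."""
    world2cam = np.linalg.inv(cam2world)
    p_cam = world2cam[:3, :3] @ np.asarray(point) + world2cam[:3, 3]
    # Blender cameras look along -Z, so points in front have negative Z.
    depth = -p_cam[2]
    if not clip_start <= depth <= clip_end:
        return False
    # Flip into a +Z-forward frame before applying the pinhole projection.
    uvw = K @ np.array([p_cam[0], -p_cam[1], depth])
    u, v = uvw[:2] / uvw[2]
    return 0 <= u < width and 0 <= v < height

# Made-up intrinsics for a 640x480 image.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
```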

blenderproc.camera.perform_obstacle_in_view_check(cam2world_matrix, proximity_checks, bvh_tree, sqrt_number_of_rays=10)

Checks if there are obstacles in front of the camera which are too far away or too close, based on the given proximity_checks.

Parameters:
  • cam2world_matrix (Union[Matrix, ndarray]) – Transformation matrix that transforms from the camera space to the world space.

  • proximity_checks (dict) – A dictionary containing operators (e.g. avg, min) as keys and, as values, dictionaries containing thresholds in the form of {“min”: 1.0, “max”: 4.0}, or just the numerical threshold in the case of max or min. The operators are combined in conjunction (i.e. boolean AND). This can also be used to avoid the background in images via the no_background: True option.

  • bvh_tree (BVHTree) – A bvh tree containing all objects that should be considered here.

  • sqrt_number_of_rays (int) – The square root of the number of rays which will be used to determine the visible objects.

Return type:

bool

Returns:

True, if the given camera pose does not violate any of the specified proximity_checks.

blenderproc.camera.pointcloud_from_depth(depth, frame=None, depth_cut_off=1000000.0)

Compute a point cloud from a given depth image.

Parameters:
  • depth (ndarray) – The depth image with shape [H, W].

  • frame (Optional[int]) – The frame number whose assigned camera pose should be used. If None is given, the current frame is used.

  • depth_cut_off (float) – All points that correspond to depth values bigger than this threshold will be set to NaN.

Return type:

ndarray

Returns:

The point cloud with shape [H, W, 3]
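The underlying back-projection can be sketched in plain NumPy. This version stays in camera space with made-up intrinsics; the library additionally maps the result into world space using the camera pose of the requested frame:

```python
import numpy as np

# Hypothetical intrinsics and a flat 32x32 depth image; illustrative only.
K = np.array([[100.0, 0.0, 16.0],
              [0.0, 100.0, 16.0],
              [0.0, 0.0, 1.0]])
depth = np.full((32, 32), 3.0)

# Back-project every pixel: X_cam = depth(u, v) * K^-1 [u, v, 1]^T
us, vs = np.meshgrid(np.arange(32), np.arange(32))
pixels = np.stack([us.ravel(), vs.ravel(), np.ones(32 * 32)])
points = (np.linalg.inv(K) @ pixels) * depth.ravel()
cloud = points.T.reshape(32, 32, 3)  # shape [H, W, 3], camera space
```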

blenderproc.camera.project_points(points, frame=None)

Project 3D points into the 2D camera image.

Parameters:
  • points (ndarray) – A list of 3D points with shape [N, 3].

  • frame (Optional[int]) – The frame number whose assigned camera pose should be used. If None is given, the current frame is used.

Return type:

ndarray

Returns:

The projected 2D points with shape [N, 2].
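The underlying pinhole projection can be sketched in plain NumPy with a made-up K matrix and camera-space points (the library resolves K and the world-to-camera transform from the scene itself):

```python
import numpy as np

# Hypothetical intrinsics for a 640x480 image; illustrative only.
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
points_cam = np.array([[0.0, 0.0, 2.0],   # straight ahead, 2 m away
                       [1.0, 0.0, 2.0]])  # 1 m to the right, same depth

# Pinhole projection: [u, v, w]^T = K [x, y, z]^T, then divide by w.
uvw = (K @ points_cam.T).T
points_2d = uvw[:, :2] / uvw[:, 2:3]  # shape [N, 2]
```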

blenderproc.camera.rotation_from_forward_vec(forward_vec, up_axis='Y', inplane_rot=None)

Returns a camera rotation matrix for the given forward vector and up axis

Parameters:
  • forward_vec (Union[ndarray, Vector]) – The forward vector which specifies the direction the camera should look.

  • up_axis (str) – The up axis, usually Y.

  • inplane_rot (Optional[float]) – The inplane rotation in radians. If None is given, the inplane rotation is determined only based on the up vector.

Return type:

ndarray

Returns:

The corresponding rotation matrix.
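A look-at construction of this kind can be sketched in plain NumPy. look_at_rotation is an illustrative stand-in, not the library implementation, and it assumes the forward vector is not parallel to the world up vector:

```python
import numpy as np

def look_at_rotation(forward, world_up=np.array([0.0, 0.0, 1.0])):
    """Build a rotation whose local -Z axis points along `forward` and whose
    local Y axis stays as close as possible to `world_up` (Blender cameras
    look along -Z with Y up). Illustrative sketch, NOT the library code."""
    z = -np.asarray(forward, dtype=float)
    z /= np.linalg.norm(z)
    x = np.cross(world_up, z)   # degenerate if forward is parallel to world_up
    x /= np.linalg.norm(x)
    y = np.cross(z, x)
    return np.stack([x, y, z], axis=1)  # columns are the camera axes

# Camera looking along the world +X axis.
R = look_at_rotation([1.0, 0.0, 0.0])
```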

blenderproc.camera.scene_coverage_score(cam2world_matrix, special_objects=None, special_objects_weight=2, sqrt_number_of_rays=10)

Evaluate the interestingness/coverage of the scene.

This score rewards viewing as many objects as possible, which might lead to a focus on the same objects from similar angles.

Only for SUNCG and 3D Front:

The least interesting objects: walls, ceilings, floors.

Parameters:
  • cam2world_matrix (Union[Matrix, ndarray]) – The world matrix which describes the camera pose to check.

  • special_objects (Optional[list]) – Objects that are weighted differently when calculating how interesting the scene is; matched via the coarse_grained_class for SUNCG and 3D Front, otherwise via the category_id.

  • special_objects_weight (float) – Weighting factor for more special objects, used to estimate how interesting the scene is. Default: 2.0.

  • sqrt_number_of_rays (int) – The square root of the number of rays which will be used to determine the visible objects.

Return type:

float

Returns:

The score of the scene.

blenderproc.camera.set_camera_parameters_from_config_file(camera_intrinsics_file_path, read_the_extrinsics=False, camera_index=0)

This function sets the camera intrinsic parameters based on a config file. Currently, only the DLR-RMC camera calibration file format used in the “DLR CalDe and DLR CalLab” camera calibration toolbox is supported. A calibration file may contain multiple cameras, but only one of them can be used inside of BlenderProc per run.

Parameters:
  • camera_intrinsics_file_path (str) – Path to the calibration file

  • read_the_extrinsics (bool) – Whether to also read the camera extrinsics (poses) from the calibration file

  • camera_index (int) – Used camera index

Return type:

Tuple[int, int, ndarray]

Returns:

The mapping coordinates from distorted to undistorted image pixels, as returned by set_lens_distortion()

blenderproc.camera.set_intrinsics_from_K_matrix(K, image_width, image_height, clip_start=None, clip_end=None)

Set the camera intrinsics via a K matrix.

The K matrix should have the format:

[[fx, 0, cx],
 [0, fy, cy],
 [0, 0, 1]]

This method is based on https://blender.stackexchange.com/a/120063.

Parameters:
  • K (Union[ndarray, Matrix]) – The 3x3 K matrix.

  • image_width (int) – The image width in pixels.

  • image_height (int) – The image height in pixels.

  • clip_start (Optional[float]) – Clipping start.

  • clip_end (Optional[float]) – Clipping end.
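A sketch of assembling such a K matrix with made-up values; inside a BlenderProc script it would then be handed to this function (shown as a comment, since the call needs a running Blender session):

```python
import numpy as np

# Hypothetical intrinsics for a 640x480 image; illustrative values only.
fx, fy = 500.0, 500.0  # focal lengths in pixels
cx, cy = 320.0, 240.0  # principal point
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

# In a BlenderProc script:
#   bproc.camera.set_intrinsics_from_K_matrix(K, 640, 480)
```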

blenderproc.camera.set_intrinsics_from_blender_params(lens=None, image_width=None, image_height=None, clip_start=None, clip_end=None, pixel_aspect_x=None, pixel_aspect_y=None, shift_x=None, shift_y=None, lens_unit=None)

Sets the camera intrinsics using Blender’s representation.

Parameters:
  • lens (Optional[float]) – Either the focal length in millimeters or the FOV in radians, depending on the given lens_unit.

  • image_width (Optional[int]) – The image width in pixels.

  • image_height (Optional[int]) – The image height in pixels.

  • clip_start (Optional[float]) – Clipping start.

  • clip_end (Optional[float]) – Clipping end.

  • pixel_aspect_x (Optional[float]) – The pixel aspect ratio along x.

  • pixel_aspect_y (Optional[float]) – The pixel aspect ratio along y.

  • shift_x (Optional[float]) – The shift in x direction.

  • shift_y (Optional[float]) – The shift in y direction.

  • lens_unit (Optional[str]) – Either FOV or MILLIMETERS depending on whether the lens is defined as focal length in millimeters or as FOV in radians.

blenderproc.camera.set_lens_distortion(k1, k2, k3=0.0, p1=0.0, p2=0.0, use_global_storage=False)

This function applies the lens distortion parameters to obtain a distorted-to-undistorted mapping from all natural pixel coordinates of the target distorted image into the real pixel coordinates of the undistorted Blender image. Since such a mapping usually yields void image areas, this function suggests a different (usually higher) image resolution for the generated Blender image. Eventually, the function apply_lens_distortion will make use of this image to fill in the target distorted image with valid color values by interpolation. Note that when adapting the internal image resolution demanded from Blender, the camera main point (cx, cy) of the K intrinsic matrix is (internally and temporarily) shifted.

This function has to be used together with bproc.postprocessing.apply_lens_distortion(), else only the resolution is increased but the image(s) will not be distorted.

Parameters:
  • k1 (float) – First radial distortion parameter (of 3rd degree in radial distance) as defined by the undistorted-to-distorted Brown-Conrady lens distortion model, which is conform to the current DLR CalLab/OpenCV/Bouguet/Kalibr implementations. Note that undistorted-to-distorted means that the distortion parameters are multiplied by undistorted, normalized camera projections to yield distorted projections, that are in turn digitized by the intrinsic camera matrix.

  • k2 (float) – Second radial distortion parameter (of 5th degree in radial distance) as defined by the undistorted-to-distorted Brown-Conrady lens distortion model, which is conform to the current DLR CalLab/OpenCV/Bouguet/Kalibr implementations.

  • k3 (float) – Third radial distortion parameter (of 7th degree in radial distance) as defined by the undistorted-to-distorted Brown-Conrady lens distortion model, which is conform to the current DLR CalLab/OpenCV/Bouguet/Kalibr implementations. The use of this parameter is discouraged unless the angular field of view is too high, rendering it necessary, and the parameter allows for a distorted projection in the whole sensor size (which isn’t always given by features-driven camera calibration).

  • p1 (float) – First decentering distortion parameter as defined by the undistorted-to-distorted Brown-Conrady lens distortion model in (Brown, 1965; Brown, 1971; Weng et al., 1992), conform to the current DLR CalLab implementation. Note that OpenCV/Bouguet/Kalibr permute them. This parameter shares one degree of freedom (j1) with p2; as a consequence, either both parameters are given or none. The use of these parameters is discouraged since either current cameras do not need them or their potential accuracy gain is negligible w.r.t. image processing.

  • p2 (float) – Second decentering distortion parameter as defined by the undistorted-to-distorted Brown-Conrady lens distortion model in (Brown, 1965; Brown, 1971; Weng et al., 1992), conform to the current DLR CalLab implementation. Note that OpenCV/Bouguet/Kalibr permute them. This parameter shares one degree of freedom (j1) with p1; as a consequence, either both parameters are given or none. The use of these parameters is discouraged since either current cameras do not need them or their potential accuracy gain is negligible w.r.t. image processing.

  • use_global_storage (bool) – Whether to save the mapping coordinates and the original image resolution in a global storage (backward compatibility for configs)

Return type:

ndarray

Returns:

The mapping coordinates from distorted to undistorted image pixels
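The undistorted-to-distorted Brown-Conrady model referenced above can be sketched in plain NumPy. This illustrative version uses the OpenCV-style ordering of p1/p2 (which, as noted in the parameter descriptions, is permuted relative to DLR CalLab) and operates on normalized camera projections:

```python
import numpy as np

def distort(points, k1, k2, k3=0.0, p1=0.0, p2=0.0):
    """Apply Brown-Conrady distortion to normalized camera projections
    (undistorted-to-distorted direction). Illustrative sketch using the
    OpenCV p1/p2 ordering, NOT the library implementation."""
    x, y = points[:, 0], points[:, 1]
    r2 = x * x + y * y  # squared radial distance
    radial = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return np.stack([x_d, y_d], axis=1)
```

The distorted projections would then be digitized by the intrinsic camera matrix, as described in the k1 parameter above.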

blenderproc.camera.set_resolution(image_width=None, image_height=None)

Sets the camera resolution.

Parameters:
  • image_width (Optional[int]) – The image width in pixels.

  • image_height (Optional[int]) – The image height in pixels.

blenderproc.camera.set_stereo_parameters(convergence_mode, convergence_distance, interocular_distance)

Sets the stereo parameters of the camera.

Parameters:
  • convergence_mode (str) – How the two cameras converge (e.g. “OFFAXIS”, where both cameras are shifted inwards to converge in the convergence plane, or “PARALLEL”, where they do not converge and stay parallel). Available: [“OFFAXIS”, “PARALLEL”, “TOE”]

  • convergence_distance (float) – The convergence point for the stereo cameras (i.e. distance from the projector to the projection screen)

  • interocular_distance (float) – Distance between the camera pair

blenderproc.camera.unproject_points(points_2d, depth, frame=None, depth_cut_off=1000000.0)

Unproject 2D points into 3D.

Parameters:
  • points_2d (ndarray) – An array of N 2D points with shape [N, 2].

  • depth (ndarray) – An array of depth values corresponding to each 2D point, with shape [N].

  • frame (Optional[int]) – The frame number whose assigned camera pose should be used. If None is given, the current frame is used.

  • depth_cut_off (float) – All points that correspond to depth values bigger than this threshold will be set to NaN.

Return type:

ndarray

Returns:

The unprojected 3D points with shape [N, 3].

blenderproc.camera.visible_objects(cam2world_matrix, sqrt_number_of_rays=10)

Returns a set of objects visible from the given camera pose.

Sends a grid of rays through the camera frame and returns all objects hit by at least one ray.

Parameters:
  • cam2world_matrix (Union[Matrix, ndarray]) – The world matrix which describes the camera orientation to check.

  • sqrt_number_of_rays (int) – The square root of the number of rays which will be used to determine the visible objects.

Return type:

Set[MeshObject]

Returns:

A set of objects hit by at least one of the sent rays.