blenderproc.python.writer.BopWriterUtility module
Allows writing the content of the scene in the BOP file format.
- class blenderproc.python.writer.BopWriterUtility._BopWriterUtility[source]
Bases:
object
Saves the synthesized dataset in the BOP format. The dataset is split into chunks which are saved as individual “scenes”. For more details about the BOP format, visit the BOP toolkit docs: https://github.com/thodan/bop_toolkit/blob/master/docs/bop_datasets_format.md
- static _calc_gt_info_iteration(annotation_scale, ren_cy_offset, ren_cx_offset, im_height, im_width, K, delta, depth, gt)[source]
One iteration of calc_gt_info(), executed inside a worker process.
- Parameters:
  - annotation_scale (float) – The scale factor applied to the calculated annotations (in [m]) to get them into the specified format (see annotation_format in write_bop for further details).
  - ren_cy_offset (int) – The y offset for cropping the rendered image.
  - ren_cx_offset (int) – The x offset for cropping the rendered image.
  - im_height (int) – The image height for cropping the rendered image.
  - im_width (int) – The image width for cropping the rendered image.
  - K (ndarray) – The camera intrinsics to use.
  - delta (float) – Tolerance used for estimation of the visibility masks.
  - depth (ndarray) – The depth image of the frame.
  - gt (Dict[str, int]) – Dict containing the id of the object whose mask the worker should render.
- static _calc_gt_masks_iteration(annotation_scale, K, delta, dist_im, chunk_dir, im_id, gt_data)[source]
One iteration of calc_gt_masks(), executed inside a worker process.
- Parameters:
  - annotation_scale (float) – The scale factor applied to the calculated annotations (in [m]) to get them into the specified format (see annotation_format in write_bop for further details).
  - K (ndarray) – The camera intrinsics to use.
  - delta (float) – Tolerance used for estimation of the visibility masks.
  - dist_im (ndarray) – The distance image of the frame.
  - chunk_dir (str) – The chunk directory where the resulting images are stored.
  - im_id (int) – The id of the current image/frame.
  - gt_data (Tuple[int, Dict[str, int]]) – Tuple containing the id of the object whose mask the worker should render.
- static _pyrender_init(ren_width, ren_height, trimesh_objects)[source]
Initializes a worker process for calc_gt_masks and calc_gt_info
- Parameters:
  - ren_width (int) – The width of the images to render.
  - ren_height (int) – The height of the images to render.
  - trimesh_objects (Dict[int, Trimesh]) – A dict containing a trimesh mesh for each object in the scene.
- static calc_gt_coco(chunk_dirs, dataset_objects, starting_frame_id=0)[source]
Calculates the COCO annotations. From the BOP toolkit (https://github.com/thodan/bop_toolkit).
- Parameters:
  - chunk_dirs (List[str]) – List of directories to calculate the gt coco annotations for.
  - dataset_objects (List[MeshObject]) – List containing all objects to save the annotations for.
  - starting_frame_id (int) – The first frame id the writer has written during this run.
- static calc_gt_info(pool, chunk_dirs, starting_frame_id=0, annotation_scale=1000.0, delta=0.015)[source]
Calculates the ground truth info. From the BOP toolkit (https://github.com/thodan/bop_toolkit), with the difference of using pyrender for depth rendering.
- Parameters:
  - pool (Pool) – The pool of worker processes to use for the calculations.
  - chunk_dirs (List[str]) – List of directories to calculate the gt info for.
  - starting_frame_id (int) – The first frame id the writer has written during this run.
  - annotation_scale (float) – The scale factor applied to the calculated annotations (in [m]) to get them into the specified format (see annotation_format in write_bop for further details).
  - delta (float) – Tolerance used for estimation of the visibility masks.
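The role of the delta tolerance can be pictured with a minimal sketch. The helper below is hypothetical and works on flat lists of depth values; the actual BOP toolkit operates on full depth images rendered with pyrender:

```python
def estimate_visible_pixels(obj_depth, scene_depth, delta):
    """Flag object pixels whose rendered depth agrees with the scene depth.

    A pixel counts as visible when the object's own rendered depth is
    within `delta` of the full-scene depth at that pixel, i.e. the object
    is not occluded there. Depth 0 marks pixels the object does not cover.
    """
    visible = []
    for d_obj, d_scene in zip(obj_depth, scene_depth):
        visible.append(d_obj > 0 and abs(d_obj - d_scene) < delta)
    return visible

# Object at 1.0 m; at the second pixel another object sits in front (0.8 m).
mask = estimate_visible_pixels([1.0, 1.0, 0.0], [1.0, 0.8, 0.5], delta=0.015)
# -> [True, False, False]
```

A larger delta tolerates more depth noise at the cost of counting slightly occluded pixels as visible.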
- static calc_gt_masks(pool, chunk_dirs, starting_frame_id=0, annotation_scale=1000.0, delta=0.015)[source]
Calculates the ground truth masks. From the BOP toolkit (https://github.com/thodan/bop_toolkit), with the difference of using pyrender for depth rendering.
- Parameters:
  - pool (Pool) – The pool of worker processes to use for the calculations.
  - chunk_dirs (List[str]) – List of directories to calculate the gt masks for.
  - starting_frame_id (int) – The first frame id the writer has written during this run.
  - annotation_scale (float) – The scale factor applied to the calculated annotations (in [m]) to get them into the specified format (see annotation_format in write_bop for further details).
  - delta (float) – Tolerance used for estimation of the visibility masks.
- static get_frame_camera(save_world2cam, depth_scale=1.0, unit_scaling=1000.0, destination_frame=None)[source]
Returns camera parameters for the active camera.
- Parameters:
  - save_world2cam (bool) – If true, world-to-camera transformations “cam_R_w2c”, “cam_t_w2c” are saved in scene_camera.json.
  - depth_scale (float) – Multiply the uint16 output depth image with this factor to get depth in mm.
  - unit_scaling (float) – Scale factor for outputting poses in mm.
  - destination_frame (Optional[List[str]]) – Transform poses from Blender internal coordinates to OpenCV coordinates.
- Returns:
dict containing info for scene_camera.json
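The returned dict follows the BOP scene_camera.json layout, where the 3x3 intrinsic matrix K is stored row-major as a flat list of 9 floats. A minimal sketch with a hypothetical helper and made-up intrinsics:

```python
def make_camera_entry(K, depth_scale):
    """Build a scene_camera.json-style entry from a 3x3 intrinsic matrix.

    BOP stores "cam_K" as a flat, row-major list of 9 floats alongside the
    "depth_scale" used to decode the uint16 depth images.
    """
    return {
        "cam_K": [v for row in K for v in row],  # flatten row by row
        "depth_scale": depth_scale,
    }

# Hypothetical pinhole intrinsics: fx = fy = 600, principal point (320, 240).
K = [[600.0, 0.0, 320.0],
     [0.0, 600.0, 240.0],
     [0.0, 0.0, 1.0]]
entry = make_camera_entry(K, depth_scale=1.0)
```

The actual method additionally writes the world-to-camera rotation and translation when save_world2cam is true.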
- static get_frame_gt(dataset_objects, unit_scaling, ignore_dist_thres, destination_frame=None)[source]
Returns GT pose annotations between active camera and objects.
- Parameters:
  - dataset_objects (List[Mesh]) – Save annotations for these objects.
  - unit_scaling (float) – Scale factor for outputting poses in mm.
  - ignore_dist_thres (float) – Distance between camera and object after which the object is ignored, mostly due to failed physics.
  - destination_frame (Optional[List[str]]) – Transform poses from Blender internal coordinates to OpenCV coordinates.
- Returns:
A list of GT camera-object pose annotations for scene_gt.json
- static load_json(path, keys_to_int=False)[source]
Loads content of a JSON file. From the BOP toolkit (https://github.com/thodan/bop_toolkit).
- Parameters:
  - path – Path to the JSON file.
  - keys_to_int – Convert digit dict keys to integers. Default: False.
- Returns:
Content of the loaded JSON file.
- static save_depth(path, im)[source]
Saves a depth image (16-bit) to a PNG file. From the BOP toolkit (https://github.com/thodan/bop_toolkit).
- Parameters:
  - path (str) – Path to the output depth image file.
  - im (ndarray) – ndarray with the depth image to save.
- static save_json(path, content)[source]
Saves the content to a JSON file in a human-friendly format. From the BOP toolkit (https://github.com/thodan/bop_toolkit).
- Parameters:
  - path – Path to the output JSON file.
  - content – Dictionary/list to save.
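The keys_to_int behaviour of load_json can be mimicked with the standard json module; the sketch below is an illustrative stand-in (operating on a string rather than a file path), not the toolkit implementation:

```python
import json

def load_json_str(text, keys_to_int=False):
    """Parse JSON, optionally converting purely numeric dict keys to int.

    BOP scene files index frames by integer ids, but JSON only allows
    string keys, so "12" must become 12 on load.
    """
    def convert(d):
        return {int(k) if k.lstrip("-").isdigit() else k: v
                for k, v in d.items()}
    return json.loads(text, object_hook=convert if keys_to_int else None)

scene_gt = load_json_str('{"0": [], "12": []}', keys_to_int=True)
# keys are now the integers 0 and 12 instead of the strings "0" and "12"
```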
- static write_camera(camera_path, depth_scale=1.0)[source]
Writes camera.json into dataset_dir.
- Parameters:
  - camera_path (str) – Path to camera.json.
  - depth_scale (float) – Multiply the uint16 output depth image with this factor to get depth in mm.
- static write_frames(chunks_dir, dataset_objects, depths, colors, color_file_format='PNG', depth_scale=1.0, frames_per_chunk=1000, annotation_scale=1000.0, ignore_dist_thres=100.0, save_world2cam=True, jpg_quality=95)[source]
Writes each frame’s ground truth into the chunk directory in BOP format.
- Parameters:
  - chunks_dir (str) – Path to the output directory of the current chunk.
  - dataset_objects (list) – Save annotations for these objects.
  - depths (List[ndarray]) – List of depth images in m to save.
  - colors (List[ndarray]) – List of color images to save.
  - color_file_format (str) – File type to save color images. Available: “PNG”, “JPEG”.
  - depth_scale (float) – Multiply the uint16 output depth image with this factor to get depth in mm. Used to trade off between depth accuracy and maximum depth value. The default corresponds to 65.54m maximum depth and 1mm accuracy.
  - frames_per_chunk (int) – Number of frames saved in each chunk (called scene in BOP).
  - annotation_scale (float) – The scale factor applied to the calculated annotations (in [m]) to get them into the specified format (see annotation_format in write_bop for further details).
  - ignore_dist_thres (float) – Distance between camera and object after which the object is ignored, mostly due to failed physics.
  - save_world2cam (bool) – If true, world-to-camera transformations “cam_R_w2c”, “cam_t_w2c” are saved in scene_camera.json.
  - jpg_quality (int) – If color_file_format is “JPEG”, save with the given quality.
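The depth_scale trade-off can be illustrated with a pair of hypothetical helpers: depth is stored as uint16 counts such that counts × depth_scale gives depth in mm, so depth_scale=1.0 yields 1mm resolution and a 65535mm (about 65.54m) maximum range:

```python
def encode_depth_mm(depth_m, depth_scale):
    """Meters -> uint16 counts: counts = (depth in mm) / depth_scale."""
    counts = round(depth_m * 1000.0 / depth_scale)
    return max(0, min(counts, 65535))  # clamp to the uint16 range

def decode_depth_mm(counts, depth_scale):
    """uint16 counts -> depth in mm, as described in the parameter docs."""
    return counts * depth_scale

c = encode_depth_mm(1.5, depth_scale=1.0)   # 1.5 m -> 1500 counts
d = decode_depth_mm(c, depth_scale=1.0)     # back to 1500 mm
```

Choosing a larger depth_scale extends the representable range at the cost of coarser depth steps.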
- blenderproc.python.writer.BopWriterUtility.bop_pose_to_pyrender_coordinate_system(cam_R_m2c, cam_t_m2c)[source]
- Converts an object pose in BOP format to the pyrender camera coordinate system (https://pyrender.readthedocs.io/en/latest/examples/cameras.html).
- Parameters:
  - cam_R_m2c (ndarray) – 3x3 rotation matrix.
  - cam_t_m2c (ndarray) – Translation vector.
- Return type:
ndarray
- Returns:
Pose in pyrender coordinate system.
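As a sketch of the conversion: OpenCV/BOP cameras look down +Z with y pointing down, while pyrender follows the OpenGL convention of looking down -Z with y up, so the Y and Z camera axes are flipped. The helper below illustrates this with plain Python lists (the real function takes and returns ndarrays), under the assumption that the flip is applied to the assembled 4x4 model-to-camera pose:

```python
def bop_pose_to_pyrender(cam_R_m2c, cam_t_m2c):
    """Convert a BOP model-to-camera pose to pyrender camera coordinates.

    Builds the homogeneous 4x4 pose [R | t; 0 0 0 1] and negates its
    Y and Z rows, i.e. pre-multiplies by diag(1, -1, -1, 1).
    """
    flip = [1.0, -1.0, -1.0]
    pose = [[0.0] * 4 for _ in range(4)]
    for i in range(3):
        for j in range(3):
            pose[i][j] = flip[i] * cam_R_m2c[i][j]
        pose[i][3] = flip[i] * cam_t_m2c[i]
    pose[3][3] = 1.0
    return pose

# Identity rotation, object 2 m in front of the camera (+Z in OpenCV).
p = bop_pose_to_pyrender([[1, 0, 0], [0, 1, 0], [0, 0, 1]], [0.0, 0.0, 2.0])
# In pyrender coordinates the object sits at z = -2, i.e. in front of a
# camera that looks down the negative z axis.
```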
- blenderproc.python.writer.BopWriterUtility.write_bop(output_dir, target_objects=None, depths=None, colors=None, color_file_format='PNG', dataset='', append_to_existing_output=True, depth_scale=1.0, jpg_quality=95, save_world2cam=True, ignore_dist_thres=100.0, m2mm=None, annotation_unit='mm', frames_per_chunk=1000, calc_mask_info_coco=True, delta=0.015, num_worker=None)[source]
Writes the BOP data.
- Parameters:
  - output_dir (str) – Path to the output directory.
  - target_objects (Optional[List[MeshObject]]) – Objects for which to save ground truth poses in BOP format. Default: save all objects or those from the specified dataset.
  - depths (Optional[List[ndarray]]) – List of depth images in m to save.
  - colors (Optional[List[ndarray]]) – List of color images to save.
  - color_file_format (str) – File type to save color images. Available: “PNG”, “JPEG”.
  - jpg_quality (int) – If color_file_format is “JPEG”, save with the given quality.
  - dataset (str) – Only save annotations for objects of the specified BOP dataset. Saves all object poses if undefined.
  - append_to_existing_output (bool) – If true, the new frames will be appended to the existing ones.
  - depth_scale (float) – Multiply the uint16 output depth image with this factor to get depth in mm. Used to trade off between depth accuracy and maximum depth value. The default corresponds to 65.54m maximum depth and 1mm accuracy.
  - save_world2cam (bool) – If true, world-to-camera transformations “cam_R_w2c”, “cam_t_w2c” are saved in scene_camera.json.
  - ignore_dist_thres (float) – Distance between camera and object after which the object is ignored, mostly due to failed physics.
  - m2mm (Optional[bool]) – Original BOP annotations and models are in mm. If true, the gt annotations are converted to mm here. This is needed if the BopLoader option mm2m is used (deprecated).
  - annotation_unit (str) – The unit in which the annotations are saved. Available: ‘m’, ‘dm’, ‘cm’, ‘mm’.
  - frames_per_chunk (int) – Number of frames saved in each chunk (called scene in BOP).
  - calc_mask_info_coco (bool) – Whether to calculate gt masks, gt info and gt coco annotations.
  - delta (float) – Tolerance used for estimation of the visibility masks (in [m]).
  - num_worker (Optional[int]) – The number of processes to use to calculate gt_masks and gt_info. If None is given, the number of cores is used.
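Since poses are computed in meters internally, annotation_unit presumably selects the scale factor applied to them; a minimal sketch of that mapping with a hypothetical helper (‘mm’ reproduces the classic BOP convention):

```python
def annotation_scale_from_unit(annotation_unit):
    """Map an annotation unit name to the factor applied to metric poses."""
    scales = {"m": 1.0, "dm": 10.0, "cm": 100.0, "mm": 1000.0}
    if annotation_unit not in scales:
        raise ValueError(f"Unsupported annotation unit: {annotation_unit}")
    return scales[annotation_unit]

scale = annotation_scale_from_unit("mm")  # 1000.0, the BOP default
```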