blenderproc.python.postprocessing.PostProcessingUtility module

A set of functions for post-processing the produced images.

class blenderproc.python.postprocessing.PostProcessingUtility._PostProcessingUtility[source]

Bases: object

static determine_noisy_pixels(image)[source]
Parameters:

image (ndarray) – The image data.

Return type:

ndarray

Returns:

A list of 2D indices corresponding to the noisy pixels. One criterion for finding these pixels is a histogram: values that occur with a frequency below a threshold, e.g. 100, are considered noise.
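The histogram criterion can be sketched in plain NumPy (an illustrative re-implementation, not the library's actual code; the threshold of 100 follows the docstring):

```python
import numpy as np

def determine_noisy_pixels(image, freq_threshold=100):
    # Count how often each value occurs in the (integer-valued) segmap.
    values, counts = np.unique(image, return_counts=True)
    noisy_values = values[counts < freq_threshold]
    # Return the 2D indices of all pixels holding one of the rare values.
    return np.argwhere(np.isin(image, noisy_values))

segmap = np.zeros((32, 32), dtype=int)
segmap[3, 4] = 7  # a single stray value -> flagged as noisy
print(determine_noisy_pixels(segmap).tolist())  # [[3, 4]]
```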

static get_pixel_neighbors(data, i, j)[source]

Returns the valid neighbor pixel indices of the given pixel.

Parameters:
  • data (ndarray) – The whole image data.

  • i (int) – The row index of the pixel.

  • j (int) – The col index of the pixel.

Return type:

ndarray

Returns:

A list of neighbor point indices.
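Gathering the in-bounds 8-neighborhood can be sketched as follows (a minimal re-implementation for illustration, not the library's code):

```python
import numpy as np

def get_pixel_neighbors(data, i, j):
    # Collect the up-to-8 neighbor indices that lie inside the image bounds.
    neighbors = []
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue  # skip the pixel itself
            ni, nj = i + di, j + dj
            if 0 <= ni < data.shape[0] and 0 <= nj < data.shape[1]:
                neighbors.append((ni, nj))
    return np.array(neighbors)

img = np.zeros((4, 4))
print(len(get_pixel_neighbors(img, 0, 0)))  # 3 -- a corner pixel has 3 neighbors
print(len(get_pixel_neighbors(img, 2, 2)))  # 8 -- an interior pixel has all 8
```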

static get_pixel_neighbors_stacked(img, filter_size=3, return_list=False)[source]

Stacks the neighbors of each pixel, defined by a square filter around that pixel, along the depth dimension. The neighbors are represented by shifting the input image in all directions required to simulate the filter.

Parameters:
  • img (ndarray) – Input image.

  • filter_size (int) – Filter size, defaults to 3.

  • return_list (bool) – Instead of stacking the neighbor images in the output array, return them as a list along with the input image.

Return type:

Union[list, ndarray]

Returns:

Either a tensor with the “neighbor” images stacked in a separate additional dimension, or a list of images of the same shape as the input image, containing the shifted images (simulating the neighbors) and the input image.
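The shifting idea can be sketched with `np.roll` (an assumption for illustration; the library version may handle borders differently, e.g. by padding):

```python
import numpy as np

def get_pixel_neighbors_stacked(img, filter_size=3, return_list=False):
    # For each offset inside the filter window, shift the whole image so that
    # every pixel's neighbor at that offset lands on the pixel's own position.
    r = filter_size // 2
    shifted = [np.roll(img, (di, dj), axis=(0, 1))
               for di in range(-r, r + 1)
               for dj in range(-r, r + 1)]
    if return_list:
        return shifted
    return np.stack(shifted, axis=-1)

img = np.arange(16).reshape(4, 4)
stacked = get_pixel_neighbors_stacked(img)
print(stacked.shape)  # (4, 4, 9) -- 9 shifted copies for a 3x3 filter
```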

static is_in(element, test_elements, assume_unique=False, invert=False)[source]

Since np.isin is only available from NumPy v1.13 on, while Blender ships with NumPy 1.10.1, it has to be implemented manually here.

blenderproc.python.postprocessing.PostProcessingUtility.add_gaussian_shifts(image, std=0.5)[source]

Randomly shifts the pixels of the input depth image in the x and y directions.

Parameters:
  • image (Union[list, ndarray]) – Input depth image(s)

  • std (float) – Standard deviation of pixel shifts, defaults to 0.5

Return type:

Union[list, ndarray]

Returns:

Augmented images
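The effect can be sketched with rounded integer shifts (an assumption for illustration; the library version may use sub-pixel interpolation instead):

```python
import numpy as np

def add_gaussian_shifts(image, std=0.5):
    # Draw a per-pixel (x, y) shift from N(0, std), round it to the nearest
    # integer and resample the depth image at the shifted coordinates.
    h, w = image.shape[:2]
    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    dy = np.rint(np.random.normal(0.0, std, (h, w))).astype(int)
    dx = np.rint(np.random.normal(0.0, std, (h, w))).astype(int)
    # Clamp the sampling coordinates to the image bounds.
    src_r = np.clip(rows + dy, 0, h - 1)
    src_c = np.clip(cols + dx, 0, w - 1)
    return image[src_r, src_c]

depth = np.random.rand(16, 16)
noisy = add_gaussian_shifts(depth)
print(noisy.shape)  # (16, 16)
```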

blenderproc.python.postprocessing.PostProcessingUtility.add_kinect_azure_noise(depth, color=None, missing_depth_darkness_thres=15)[source]

Adds noise and holes to depth maps and smooths them according to the noise characteristics of the Kinect Azure sensor. https://www.mdpi.com/1424-8220/21/2/413

For further realism, consider using the projection from depth to color image in the Azure Kinect SDK: https://docs.microsoft.com/de-de/azure/kinect-dk/use-image-transformation

Parameters:
  • depth (Union[list, ndarray]) – Input depth image(s) in meters

  • color (Union[list, ndarray, None]) – Optional color image(s) to add missing depth at close to black surfaces

  • missing_depth_darkness_thres (int) – uint8 gray value threshold below which depth is set to invalid, i.e. 0

Return type:

Union[list, ndarray]

Returns:

Noisy depth image(s)
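The missing-depth-at-dark-surfaces part can be sketched as follows (`invalidate_dark_depth` is a hypothetical helper for illustration; the full Kinect Azure noise model from the cited paper is considerably more involved):

```python
import numpy as np

def invalidate_dark_depth(depth, gray, darkness_thres=15):
    # Depth sensors often fail on very dark, IR-absorbing surfaces: set the
    # depth to 0 (invalid) wherever the uint8 gray value falls below threshold.
    out = depth.copy()
    out[gray < darkness_thres] = 0.0
    return out

depth = np.full((4, 4), 1.5)
gray = np.full((4, 4), 200, dtype=np.uint8)
gray[0, 0] = 5  # a nearly black pixel
print(invalidate_dark_depth(depth, gray)[0, 0])  # 0.0
```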

blenderproc.python.postprocessing.PostProcessingUtility.depth2dist(depth)[source]

Maps a depth image to a distance image; also works with a list of images.

Parameters:

depth (Union[List[ndarray], ndarray]) – The depth data.

Return type:

Union[List[ndarray], ndarray]

Returns:

The distance data

blenderproc.python.postprocessing.PostProcessingUtility.dist2depth(dist)[source]

Maps a distance image to a depth image; also works with a list of images.

Parameters:

dist (Union[List[ndarray], ndarray]) – The distance data.

Return type:

Union[List[ndarray], ndarray]

Returns:

The depth data
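The underlying geometry of both conversions: a z-depth value is scaled by the length of the viewing ray through each pixel. This sketch takes pinhole intrinsics (`f`, `cx`, `cy`) explicitly, whereas the library functions read them from the Blender camera:

```python
import numpy as np

def depth_to_dist(depth, f, cx, cy):
    # For pixel (u, v): dist = depth * sqrt(((u-cx)/f)^2 + ((v-cy)/f)^2 + 1)
    h, w = depth.shape
    v, u = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    factor = np.sqrt(((u - cx) / f) ** 2 + ((v - cy) / f) ** 2 + 1.0)
    return depth * factor

def dist_to_depth(dist, f, cx, cy):
    # Inverse mapping: divide by the same per-pixel ray-length factor.
    h, w = dist.shape
    v, u = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    factor = np.sqrt(((u - cx) / f) ** 2 + ((v - cy) / f) ** 2 + 1.0)
    return dist / factor

depth = np.full((4, 4), 2.0)
dist = depth_to_dist(depth, f=2.0, cx=1.5, cy=1.5)
roundtrip = dist_to_depth(dist, f=2.0, cx=1.5, cy=1.5)
print(np.allclose(roundtrip, depth))  # True
```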

blenderproc.python.postprocessing.PostProcessingUtility.oil_paint_filter(image, filter_size=5, edges_only=True, rgb=False)[source]

Applies an oil-paint filter to a single-channel image (or to an image with multiple channels, where each channel is a replica of the others). This can be used to corrupt rendered depth maps so they appear more realistic. Redundant channels are trimmed if they exist.

Parameters:
  • image (Union[list, ndarray]) – Input image or list of images

  • filter_size (int) – Filter size, should be an odd number.

  • edges_only (bool) – If true, applies the filter on the edges only.

  • rgb (bool) – Apply the filter on an RGB image (if the image has 3 channels, they’re assumed to not be replicated).

Return type:

Union[list, ndarray]

Returns:

filtered image
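At its core an oil-paint filter is a mode filter: every pixel takes the most frequent value in its neighborhood. A minimal single-channel sketch (the `edges_only` and `rgb` handling of the real function is omitted):

```python
import numpy as np

def oil_paint_filter(image, filter_size=5):
    # Replace every pixel with the most frequent value in its neighborhood
    # (a "mode" filter), which produces the blocky oil-paint look.
    r = filter_size // 2
    padded = np.pad(image, r, mode="edge")
    out = np.empty_like(image)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            window = padded[i:i + filter_size, j:j + filter_size]
            values, counts = np.unique(window, return_counts=True)
            out[i, j] = values[np.argmax(counts)]
    return out

img = np.zeros((6, 6), dtype=int)
img[2, 2] = 9  # an isolated outlier
print(oil_paint_filter(img, 3)[2, 2])  # 0 -- the outlier is voted away
```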

blenderproc.python.postprocessing.PostProcessingUtility.remove_segmap_noise(image)[source]

Takes a segmentation map and removes pixel values that are not real labels but deviations from them, generated by Blender performing interpolation, smoothing, or other numerical operations.

Assumes that noise pixel values won’t occur more than 100 times.

Parameters:

image (Union[list, ndarray]) – ndarray of the .exr segmap

Return type:

Union[list, ndarray]

Returns:

The denoised segmap image
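Combining the histogram criterion with neighbor voting gives a denoising sketch along these lines (an illustrative re-implementation under the "fewer than 100 occurrences" assumption stated above, not the library's actual code):

```python
import numpy as np

def remove_segmap_noise(segmap, freq_threshold=100):
    # Find values that occur fewer than `freq_threshold` times and replace
    # each such pixel with the most frequent valid value among its neighbors.
    values, counts = np.unique(segmap, return_counts=True)
    valid = list(values[counts >= freq_threshold])
    out = segmap.copy()
    for i, j in np.argwhere(~np.isin(segmap, valid)):
        neighbors = segmap[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2].ravel()
        neighbors = neighbors[np.isin(neighbors, valid)]
        if neighbors.size:
            vals, cnts = np.unique(neighbors, return_counts=True)
            out[i, j] = vals[np.argmax(cnts)]
    return out

segmap = np.full((32, 32), 5, dtype=int)
segmap[10, 10] = 6  # an interpolation artifact
print(remove_segmap_noise(segmap)[10, 10])  # 5
```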

blenderproc.python.postprocessing.PostProcessingUtility.segmentation_mapping(image, map_by, default_values)[source]

Maps an image or a list of images to the desired segmentation images, plus a segmentation dictionary for keys that cannot be stored in an image (e.g. name).

Parameters:
  • image (Union[List[ndarray], ndarray]) – A list or single image of a scene, must contain the pass indices defined in enable_segmentation_output.

  • map_by (Union[str, List[str]]) – The keys which will be extracted from the objects, either a single key or a list of keys.

  • default_values (Optional[Dict[str, int]]) – If an object does not provide a key, a default value must be provided here.

Return type:

Dict[str, Union[ndarray, List[ndarray], List[Dict[str, Any]]]]

Returns:

A dict mapping each key in map_by to an output list of images or a dictionary containing the information
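The split between image-storable and non-storable keys can be illustrated as follows. The `objects` dict, the `_segmaps`/`_attrs` key names, and the numeric-vs-other decision are all simplifying assumptions for this sketch; the real function reads attributes from the Blender scene:

```python
import numpy as np

# Hypothetical per-object attributes, keyed by the pass index rendered
# into the segmentation image.
objects = {1: {"category_id": 10, "name": "chair"},
           2: {"category_id": 20, "name": "table"}}

def segmentation_mapping(image, map_by, default_values):
    result = {}
    for key in ([map_by] if isinstance(map_by, str) else map_by):
        attrs = {idx: obj.get(key, default_values.get(key))
                 for idx, obj in objects.items()}
        if all(isinstance(v, (int, float)) for v in attrs.values()):
            # Numeric keys become an image of the same shape as the input.
            out = np.zeros_like(image)
            for idx, value in attrs.items():
                out[image == idx] = value
            result[key + "_segmaps"] = out
        else:
            # Non-numeric keys (e.g. "name") go into a lookup dict instead.
            result[key + "_attrs"] = attrs
    return result

seg = np.array([[1, 1], [2, 2]])
maps = segmentation_mapping(seg, ["category_id", "name"], {})
print(maps["category_id_segmaps"].tolist())  # [[10, 10], [20, 20]]
print(maps["name_attrs"])                    # {1: 'chair', 2: 'table'}
```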

blenderproc.python.postprocessing.PostProcessingUtility.trim_redundant_channels(image)[source]

Removes redundant channels; this is useful for dropping two of the three identical channels created for a depth or distance image. Also works on a list of images. Be aware that no check is performed to ensure that all channels are actually equal.

Parameters:

image (Union[list, ndarray]) – Input image or list of images

Return type:

Union[list, ndarray]

Returns:

The trimmed image data with preserved input type
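For a single ndarray the operation reduces to a channel slice (a sketch of the single-image case; list handling is omitted):

```python
import numpy as np

def trim_redundant_channels(image):
    # If the image has 3 channels (as rendered depth/distance maps do, with
    # all channels identical), keep only the first. No equality check is done.
    if image.ndim == 3 and image.shape[-1] == 3:
        return image[..., 0]
    return image

depth_rgb = np.dstack([np.ones((4, 4))] * 3)
print(trim_redundant_channels(depth_rgb).shape)  # (4, 4)
```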