blenderproc.python.loader.AMASSLoader module

Loading a 3D model in a certain pose from the AMASS dataset.

class blenderproc.python.loader.AMASSLoader._AMASSLoader[source]

Bases: object

AMASS is a large database of human motion unifying 15 different optical marker-based motion capture datasets by representing them within a common framework and parameterization. All the mocap data is converted into realistic 3D human meshes represented by a rigged body model called SMPL, which provides a standard skeletal representation as well as a fully rigged surface mesh. Warning: Only one part of the AMASS database is currently supported by the loader! Please refer to the AMASSLoader example for more information about the currently supported datasets.

Any human pose recorded in these motions can be reconstructed using the following parameters: “sub_dataset_identifier”, “subject id”, “sequence id”, “frame id” and “model gender”. Together, these parameters specify the exact pose to be generated, based on the selected mocap dataset and the motion category recorded in it.

Note: if this module is used with another loader that loads objects with semantic mapping, make sure the other module is loaded first in the config file.

static correct_materials(objects)[source]

If the used material contains an alpha texture, the alpha texture has to be flipped to be correct.

Parameters:

objects (List[MeshObject]) – Mesh objects where the material might be wrong.
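
In practice, “flipping” means inserting an Invert node in front of the shader’s Alpha input. The following is a minimal sketch in plain bpy, assuming a Principled BSDF based material; the loader’s actual implementation may differ:

    import bpy

    def flip_alpha_texture(material: bpy.types.Material):
        """Insert an Invert node between the texture driving Alpha and the BSDF."""
        if not material.use_nodes:
            return
        tree = material.node_tree
        bsdf = next((n for n in tree.nodes if n.type == 'BSDF_PRINCIPLED'), None)
        if bsdf is None:
            return
        alpha_input = bsdf.inputs['Alpha']
        for link in list(tree.links):
            if link.to_socket == alpha_input:
                source = link.from_socket
                tree.links.remove(link)
                # route the old alpha source through an Invert node
                invert = tree.nodes.new('ShaderNodeInvert')
                tree.links.new(source, invert.inputs['Color'])
                tree.links.new(invert.outputs['Color'], alpha_input)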

static get_pose_parameters(supported_mocap_datasets, num_betas, used_sub_dataset_id, used_subject_id, used_sequence_id, used_frame_id)[source]

Extract pose and shape parameters corresponding to the requested pose from the database, to be processed by the parametric model.

Parameters:
  • supported_mocap_datasets (dict) – A dict which maps sub dataset names to their paths.

  • num_betas (int) – Number of body parameters

  • used_sub_dataset_id (str) – Identifier for the sub dataset, i.e. the dataset from which the human pose object should be extracted.

  • used_subject_id (str) – Type of motion from which the pose should be extracted; this is a dataset-dependent parameter.

  • used_sequence_id (int) – Sequence id in the dataset; sequences are motions recorded to represent a certain action.

  • used_frame_id (int) – Frame id in the selected motion sequence. If none is given, a random one is picked.

Return type:

Tuple[torch.Tensor, torch.Tensor]

Returns:

Tuple of tensors containing the requested pose and shape parameters.
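
For orientation, AMASS motion sequences are stored as .npz files whose “poses” array holds one full body pose per frame and whose “betas” array holds the subject’s shape parameters. A minimal sketch of what the extraction amounts to (the file path is an assumption for illustration):

    import numpy as np
    import torch

    # illustrative path; real paths come from the supported_mocap_datasets dict
    sequence = np.load("resources/AMASS/CMU/10/10_01_poses.npz")

    frame_id = 55
    num_betas = 10
    pose = torch.from_numpy(sequence["poses"][frame_id]).float()     # body pose of one frame
    betas = torch.from_numpy(sequence["betas"][:num_betas]).float()  # body shape parameters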

static get_supported_mocap_datasets(taxonomy_file_path, data_path)[source]

Get the latest list of mocap datasets supported by the loader from the taxonomy json file and build a dict which maps each supported sub dataset name to its path.

Parameters:
  • taxonomy_file_path (str) – Path to the taxonomy.json file which contains the supported datasets and their respective paths.

  • data_path (str) – Path to the AMASS dataset root folder.

Return type:

dict
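
The returned dict maps each supported sub dataset name to its folder. A hypothetical call (the resource paths are assumptions):

    from blenderproc.python.loader.AMASSLoader import _AMASSLoader

    supported = _AMASSLoader.get_supported_mocap_datasets(
        taxonomy_file_path="resources/AMASS/taxonomy.json",
        data_path="resources/AMASS",
    )
    # e.g. {"CMU": "resources/AMASS/CMU", "KIT": "resources/AMASS/KIT", ...}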

human_skin_colors = ['2D221E', '3C2E28', '4B3932', '5A453C', '695046', '785C50', '87675A', '967264', 'A57E6E', 'B48A78', 'C39582', 'D2A18C', 'E1AC96', 'F0B8A0', 'FFC3AA', 'FFCEB4', 'FFDABE', 'FFE5C8']
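
These are hex RGB strings spanning a range of skin tones. A hypothetical helper for turning one of them into a normalized RGBA tuple, e.g. to tint a skin material:

    import random
    from blenderproc.python.loader.AMASSLoader import _AMASSLoader

    def hex_to_rgba(hex_str: str):
        """Convert a hex color such as 'E1AC96' into a normalized RGBA tuple."""
        r, g, b = (int(hex_str[i:i + 2], 16) / 255.0 for i in (0, 2, 4))
        return (r, g, b, 1.0)

    skin_color = hex_to_rgba(random.choice(_AMASSLoader.human_skin_colors))
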
static load_parametric_body_model(data_path, used_body_model_gender, num_betas, num_dmpls)[source]

Loads the parametric model that is used to generate the mesh object.

Parameters:
  • data_path (str) – Path to the AMASS dataset root folder.

  • used_body_model_gender (str) – The gender of the used body model. Available: [male, female, neutral].

  • num_betas (int) – Number of body parameters

  • num_dmpls (int) – Number of DMPL parameters

Return type:

Tuple[BodyModel, array]

Returns:

The loaded parametric body model together with its faces array.
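
The BodyModel type comes from the human_body_prior package on which AMASS builds. A rough sketch of how such a model is typically constructed and evaluated; the model path and the keyword names (which vary between human_body_prior versions) are assumptions:

    import torch
    from human_body_prior.body_model.body_model import BodyModel

    # illustrative model path; the real one depends on data_path and gender
    body_model = BodyModel(bm_fname="resources/AMASS/body_models/smplh/male/model.npz",
                           num_betas=10)
    faces = body_model.f.detach().cpu().numpy()

    # evaluating the model with pose/shape parameters yields posed vertices
    output = body_model(pose_body=torch.zeros(1, 63), betas=torch.zeros(1, 10))
    vertices = output.v  # (1, num_vertices, 3)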

static write_body_mesh_to_obj_file(body_representation, faces, temp_dir)[source]

Write the generated pose as an obj file to disk.

Parameters:
  • body_representation (torch.Tensor) – Parameters generated from the BodyModel which represent the pose and shape of the body mesh.

  • faces (array) – Faces of the parametric model, used to generate the faces of the mesh.

  • temp_dir (str) – Path to the folder in which the generated pose obj file will be stored.

Return type:

str

Returns:

Path to the generated obj file.
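
For illustration, writing such a mesh as a Wavefront obj boils down to the following hypothetical minimal writer (not the loader’s actual code):

    import os
    import numpy as np
    import torch

    def write_obj(vertices: torch.Tensor, faces: np.ndarray, temp_dir: str) -> str:
        """Dump a (V, 3) vertex tensor and an (F, 3) face array as body.obj."""
        path = os.path.join(temp_dir, "body.obj")
        verts = vertices.detach().cpu().numpy().reshape(-1, 3)
        with open(path, "w", encoding="utf-8") as obj_file:
            for v in verts:
                obj_file.write(f"v {v[0]} {v[1]} {v[2]}\n")
            for face in np.asarray(faces).reshape(-1, 3):
                # obj face indices are 1-based
                obj_file.write(f"f {face[0] + 1} {face[1] + 1} {face[2] + 1}\n")
        return path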

blenderproc.python.loader.AMASSLoader.load_AMASS(data_path, sub_dataset_id, temp_dir=None, body_model_gender=None, subject_id='', sequence_id=-1, frame_id=-1, num_betas=10, num_dmpls=8)[source]

Uses the pose parameters to generate the mesh and loads it into the scene.

Parameters:
  • data_path (str) – The path to the AMASS dataset folder inside the resources folder.

  • sub_dataset_id (str) – Identifier for the sub dataset, i.e. the dataset from which the human pose object should be extracted. Available: [‘CMU’, ‘Transitions_mocap’, ‘MPI_Limits’, ‘SSM_synced’, ‘TotalCapture’, ‘Eyes_Japan_Dataset’, ‘MPI_mosh’, ‘MPI_HDM05’, ‘HumanEva’, ‘ACCAD’, ‘EKUT’, ‘SFU’, ‘KIT’, ‘H36M’, ‘TCD_handMocap’, ‘BML’]

  • temp_dir (Optional[str]) – A temp directory which is used for writing the temporary .obj file.

  • body_model_gender (Optional[str]) – The gender of the body model by which the pose is represented. Available: [male, female, neutral]. If None, a random one is chosen.

  • subject_id (str) – Type of motion from which the pose should be extracted; this is a dataset-dependent parameter. If left empty, a random subject id is picked.

  • sequence_id (int) – Sequence id in the dataset; sequences are motions recorded to represent a certain action. If set to -1, a random sequence id is selected.

  • frame_id (int) – Frame id in the selected motion sequence. If set to -1, a random one is picked.

  • num_betas (int) – Number of body parameters

  • num_dmpls (int) – Number of DMPL parameters

Return type:

List[MeshObject]

Returns:

The list of loaded mesh objects.
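
A typical call through the public loader API, mirroring the AMASSLoader example (the resource path is an assumption):

    import blenderproc as bproc

    bproc.init()

    # load a single posed human from the CMU sub dataset
    objs = bproc.loader.load_AMASS(
        data_path="resources/AMASS",
        sub_dataset_id="CMU",
        body_model_gender="male",
        subject_id="10",
        sequence_id=1,
        frame_id=55,
    )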