This project generates spherical representations of 3D meshes from the ShapeNet and Acronym datasets. The method uses hemispherical radial ray-casting to transform each object into a structured spherical format.
The meshes from ShapeNet and the grasp annotations from Acronym are combined to create a baseline representation, as shown below:
To achieve this representation, a sphere is placed around each object, and rays are cast inward from its surface to capture spatial and grasp-related information. Only one hemisphere of the sphere is used when creating the data.
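The ray-casting step above can be sketched as follows. This is a minimal, self-contained illustration: the grid resolution and hemisphere radius are assumptions, and a simple analytic sphere stands in for a ShapeNet mesh (a real pipeline would intersect rays with the actual triangle mesh).

```python
import numpy as np

def hemisphere_rays(n_theta, n_phi, radius):
    """Build inward-pointing rays on a hemisphere of the given radius.

    Returns (origins, directions), each of shape (n_theta * n_phi, 3).
    The n_theta x n_phi resolution is an assumption, not the dataset's
    actual resolution.
    """
    # Inclination limited to [0, pi/2]: only the upper hemisphere.
    theta = np.linspace(0.0, np.pi / 2, n_theta)
    phi = np.linspace(0.0, 2 * np.pi, n_phi, endpoint=False)
    t, p = np.meshgrid(theta, phi, indexing="ij")
    origins = radius * np.stack(
        [np.sin(t) * np.cos(p), np.sin(t) * np.sin(p), np.cos(t)], axis=-1
    ).reshape(-1, 3)
    directions = -origins / radius  # unit rays aimed at the center
    return origins, directions

def depth_channel(origins, directions, obj_radius):
    """Hit distance against a placeholder analytic sphere centered at the
    origin (stand-in for a mesh); rays that miss get NaN."""
    # Solve |o + s*d|^2 = r^2 for the nearer positive root s.
    b = np.einsum("ij,ij->i", origins, directions)
    c = np.einsum("ij,ij->i", origins, origins) - obj_radius**2
    disc = b * b - c
    s = -b - np.sqrt(np.maximum(disc, 0.0))
    return np.where(disc >= 0.0, s, np.nan)

origins, directions = hemisphere_rays(16, 32, radius=1.0)
depth = depth_channel(origins, directions, obj_radius=0.3)
```

Because every ray here passes through the center, each one hits the placeholder sphere at distance `radius - obj_radius`; with a real mesh the per-ray hit distances form the depth channel described below.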
The resulting dataset consists of a spherical projection of the 3D meshes, organized into different channels:
- **Depth:** captures depth values by measuring the hit distance between the enclosing hemisphere and the object's surface.
- **Grasp position:** maps absolute grasp positions to the nearest ray in the spherical coordinate system.
- **Grasp inclination:** represents the inclination angle of the grasp orientation.
- **Grasp azimuth:** encodes the azimuthal angle of the grasp orientation.
- **Grasp rotation:** captures the rotation along the grasp axis.
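The position channel above snaps each absolute grasp position to the closest ray of the spherical grid. A hedged sketch of that lookup, with a hypothetical hemisphere grid (the grid sizes are assumptions, not the dataset's actual resolution); the rotation about the grasp axis would be stored per-ray in the same way:

```python
import numpy as np

def to_spherical(v):
    """Cartesian -> (r, inclination theta, azimuth phi)."""
    r = np.linalg.norm(v)
    theta = np.arccos(v[2] / r)                  # inclination from +z
    phi = np.arctan2(v[1], v[0]) % (2 * np.pi)   # azimuth in [0, 2*pi)
    return r, theta, phi

def nearest_ray_index(grasp_pos, theta_grid, phi_grid):
    """Snap an absolute grasp position to the closest (theta, phi) ray."""
    _, theta, phi = to_spherical(grasp_pos)
    i = np.argmin(np.abs(theta_grid - theta))
    # Azimuth is periodic, so compare wrapped angular differences.
    dphi = np.abs((phi_grid - phi + np.pi) % (2 * np.pi) - np.pi)
    j = np.argmin(dphi)
    return i, j

# Hypothetical hemisphere grid: inclination in [0, pi/2].
theta_grid = np.linspace(0.0, np.pi / 2, 16)
phi_grid = np.linspace(0.0, 2 * np.pi, 32, endpoint=False)

i, j = nearest_ray_index(np.array([0.2, 0.2, 0.4]), theta_grid, phi_grid)
```

The (inclination, azimuth, rotation) triple of the grasp orientation itself is then written into the corresponding angle channels at that ray index.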
A key goal of this dataset is to model grasp position and orientation with probability distributions: Gaussian Mixture Models (GMMs) are fitted to the recorded grasps to obtain probabilistic representations.
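A minimal sketch of the GMM step, using `scikit-learn`. The two-cluster synthetic data and the component count are stand-ins (real inputs would be the grasp annotations from Acronym, and the number of components would be chosen per object):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic stand-in for per-object grasp annotations: two clusters of
# (inclination, azimuth) pairs. Real data would come from Acronym.
grasps = np.vstack([
    rng.normal([0.4, 1.0], 0.05, size=(200, 2)),
    rng.normal([1.2, 4.0], 0.05, size=(200, 2)),
])

# Fit a 2-component GMM; the component count is a free choice here.
gmm = GaussianMixture(n_components=2, random_state=0).fit(grasps)

# Log-likelihood of a candidate grasp under the learned distribution.
score = gmm.score_samples(np.array([[0.4, 1.0]]))
```

Once fitted, `score_samples` gives the log-likelihood of any candidate grasp, which is what makes the representation useful for ranking grasps during planning.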
This dataset provides a spherical representation of 3D objects and their grasps, which can be leveraged for grasp planning, robotic manipulation, and probabilistic modeling of grasp distributions. The GMM analysis further supports grasp prediction by capturing the underlying distribution of feasible grasps.
If you use this dataset in your research, please consider citing this work.