This mesh segmentation benchmark provides data for quantitative analysis of how people decompose objects into parts and for comparison of automatic mesh segmentation algorithms. To build the benchmark, we recruited eighty people to manually segment surface meshes into functional parts, yielding an average of 11 human-generated segmentations for each of 380 meshes across 19 object categories (shown in the figure above). This data set provides a sampled distribution over "how humans decompose each mesh into functional parts," which we treat as a probabilistic "ground truth" (darker lines in the image above show places where more people placed a segmentation boundary). Given this data set, it is possible to analyze properties of the human-generated segmentations to learn what they have in common with each other (and with computer-generated segmentations), and to compute evaluation metrics that measure how well computer-generated segmentations match the human-generated ones for the same mesh.
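For illustration, here is a minimal sketch of how such a per-edge boundary distribution could be aggregated from multiple human segmentations. The representation it assumes (per-face segment labels plus a map from each interior edge to its two adjacent faces) is a simplification for exposition, not the benchmark's actual file format.

```python
# Hedged sketch: aggregate several human segmentations into a per-edge
# boundary probability. Assumes each segmentation assigns a segment id to
# every face, and `edges` maps each interior edge to its two adjacent
# face indices (both representational assumptions for this example).

def boundary_probability(segmentations, edges):
    """Return the fraction of segmentations that cut each edge."""
    counts = {edge: 0 for edge in edges}
    for labels in segmentations:
        for edge, (f0, f1) in edges.items():
            if labels[f0] != labels[f1]:  # edge lies on a segment boundary
                counts[edge] += 1
    n = float(len(segmentations))
    return {edge: c / n for edge, c in counts.items()}
```

Edges with probability near 1.0 correspond to the darker lines in the image above: places where nearly everyone agreed a part boundary belongs.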
The benchmark is based on a set of polygonal models generously provided by Daniela Giorgi (IMATI-CNR) within the scope of the AIM@SHAPE and FOCUS K3D projects, and by other curators of the Watertight Models Track of SHREC 2007 (note the license). For each of those models, we include a set of manual segmentations created by people from around the world and a set of automatic segmentations created by several different algorithms.
In addition to the data set, we provide software for evaluating, analyzing, and viewing mesh segmentations. The code is written in C++, is free to use, and is known to compile with a recent (4.x) version of g++ on 32- and 64-bit Linux, as well as with Visual Studio 2005 on Windows, provided that OpenGL is installed. Here is a list of executables and their usage.
We also provide Python scripts (compatible with Python 2.x, not 3.x) that automatically run evaluation and analysis experiments, plot the results (using MATLAB), create reports, and generate colored images of mesh segmentations. Our hope is that you can use this software to study and compare your own segmentation algorithms. Please refer to the instructions included in the software download for details.
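As a rough illustration of how such a batch experiment might be scripted, here is a hedged sketch that runs a command-line evaluator over every mesh in a directory. The executable name (`segEval`), the directory layout, and the file extensions are all hypothetical stand-ins; the actual names and invocations are documented in the software download.

```python
# Hypothetical sketch of driving a command-line evaluator over the data
# set. `segEval`, the directory names, and the .off/.seg extensions are
# assumptions; substitute the real ones from the software download.
import os
import subprocess

MESH_DIR = "data/meshes"          # assumed location of the meshes
SEG_DIR = "data/segmentations"    # assumed location of the segmentations

for name in sorted(os.listdir(MESH_DIR)):
    base, ext = os.path.splitext(name)
    if ext != ".off":
        continue
    mesh = os.path.join(MESH_DIR, name)
    seg = os.path.join(SEG_DIR, base + ".seg")
    subprocess.call(["segEval", mesh, seg])  # hypothetical invocation
```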
The benchmark has been used to compare segmentations computed by the following algorithms:
For each of these algorithms, we have computed four metrics that evaluate how similar its automatically generated segmentations are to the ones people created for the same models. The following links provide plots of those evaluation metrics for different slices of the data:
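To make this kind of similarity metric concrete, here is a simplified, unweighted Rand Index between two segmentations given as per-face label lists. This is an expository sketch only; it is not necessarily the exact formulation used by the benchmark's evaluation code, which may, for instance, weight faces by area.

```python
# Illustrative sketch of one clustering-similarity metric, the Rand Index:
# the fraction of face pairs on which two segmentations agree (both place
# the pair in the same segment, or both place it in different segments).
from collections import Counter

def rand_index(labels_a, labels_b):
    """Unweighted Rand Index between two per-face labelings."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    pairs = n * (n - 1) // 2
    if pairs == 0:
        return 1.0

    def same_pairs(counts):
        # Number of face pairs that share a label, from label counts.
        return sum(c * (c - 1) // 2 for c in counts.values())

    # Contingency counts: how many faces have each (label_a, label_b) combo.
    joint = Counter(zip(labels_a, labels_b))
    a_same = same_pairs(Counter(labels_a))   # pairs together in A
    b_same = same_pairs(Counter(labels_b))   # pairs together in B
    both_same = same_pairs(joint)            # pairs together in both

    # Disagreements: together in one segmentation but apart in the other.
    disagreements = (a_same - both_same) + (b_same - both_same)
    return 1.0 - float(disagreements) / pairs
```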
We have also analyzed and compared the geometric properties of both manual and automatic segmentations for each of these algorithms. The following links provide plots of the computed distributions for 11 different properties:
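To make the idea of a geometric property concrete, here is a hedged sketch of one simple property that could be tabulated: the fraction of total surface area covered by each segment. The benchmark's own 11 properties are described in its documentation and may differ from this example; the per-face labels and areas are assumed inputs.

```python
# Hypothetical example of one geometric property: the fraction of total
# surface area covered by each segment. `labels` and `face_areas` are
# assumed per-face inputs, not the benchmark's actual data structures.
from collections import defaultdict

def segment_area_fractions(labels, face_areas):
    """Map each segment id to its share of the total surface area."""
    totals = defaultdict(float)
    for label, area in zip(labels, face_areas):
        totals[label] += area
    whole = sum(totals.values())
    return {label: area / whole for label, area in totals.items()}
```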
All data and software can be downloaded for free via the following links:
If you use any part of this benchmark, please cite:
Please send us email if you have any questions.