Evaluation Results averaged over the whole Benchmark under different numSegments settings
We show the evaluation results for all four metrics under the following six alternative ways of setting numSegments (a sketch of how several of these settings can be computed appears after the list):
By Segmentation: every algorithm is run once for every unique number of segments found in the benchmark, allowing evaluations to be performed only between segmentations with exactly the same number of segments
By Model: numSegments is set separately for every model to the mode of the number of segments observed for that model in the benchmark
By Category: numSegments is set separately for every object category to the mode of the number of segments observed for that category in the benchmark
By Shape Diameter: numSegments is set separately for every model according to the number of segments predicted by the Shape Diameter Function (SDF) algorithm
By Core Extraction: numSegments is set separately for every model according to the number of segments predicted by the Shape Core Extraction algorithm
By Dataset: numSegments is set to the same value for all runs of every algorithm, namely the average number of segments found amongst all examples in the benchmark (which is 7)
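To make the per-model, per-category, and dataset-wide settings concrete, here is a minimal Python sketch of how they could be computed from the benchmark's human segmentations. The `segmentations` records and the `mode_of` helper are hypothetical illustrations introduced here for exposition; they are not part of the benchmark's actual tools or file format.

```python
from collections import Counter

# Hypothetical records: one entry per human segmentation in the benchmark,
# giving the model id, its object category, and that segmentation's
# number of segments. The real benchmark stores this data differently.
segmentations = [
    {"model": "m001", "category": "Human", "num_segments": 6},
    {"model": "m001", "category": "Human", "num_segments": 7},
    {"model": "m001", "category": "Human", "num_segments": 7},
    {"model": "m101", "category": "Cup",   "num_segments": 2},
    {"model": "m101", "category": "Cup",   "num_segments": 3},
]

def mode_of(values):
    """Most frequent value; ties broken by first occurrence."""
    return Counter(values).most_common(1)[0][0]

# "By Model": the mode of segment counts observed for each model.
by_model = {
    m: mode_of([r["num_segments"] for r in segmentations if r["model"] == m])
    for m in {r["model"] for r in segmentations}
}

# "By Category": the mode of segment counts observed for each category.
by_category = {
    c: mode_of([r["num_segments"] for r in segmentations if r["category"] == c])
    for c in {r["category"] for r in segmentations}
}

# "By Dataset": a single value for all runs, namely the average number of
# segments over all examples, rounded to the nearest integer
# (7 for the actual benchmark).
by_dataset = round(sum(r["num_segments"] for r in segmentations) / len(segmentations))

print(by_model)     # {'m001': 7, 'm101': 2}
print(by_category)  # {'Human': 7, 'Cup': 2}
print(by_dataset)   # 5 for this toy data
```

The "By Shape Diameter" and "By Core Extraction" settings follow the same pattern, except that the per-model value comes from the number of segments predicted by the respective algorithm rather than from the human segmentations.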