This repository provides the object recognition benchmark code and label files used to evaluate the SUOP dataset (Zenodo DOI: 10.5281/zenodo.18475883).
Purpose: Provide the exact code + labels used for benchmarking object detection/recognition on the SUOP dataset.
Included:
- PointNet++: 3D point cloud-based object recognition code.
- YOLOv8 (Ultralytics): 2D image-based detection code and the corresponding bounding-box label (.txt) files used for training.
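The bounding-box .txt files are assumed here to use the standard Ultralytics YOLO label format: one object per line, `class_id x_center y_center width height`, with coordinates normalized to [0, 1] relative to the image size. A minimal sketch of reading one such line under that assumption (verify against the actual label files):

```python
# Sketch: parse one YOLO-format label line and convert it to pixel coordinates.
# Assumption: the bbox_labels .txt files use the standard Ultralytics layout
# "class_id x_center y_center width height" with normalized coordinates.

def parse_yolo_label_line(line):
    """Return (class_id, x_center, y_center, width, height) from one label line."""
    fields = line.split()
    class_id = int(fields[0])
    x_c, y_c, w, h = (float(v) for v in fields[1:5])
    return class_id, x_c, y_c, w, h

def to_pixel_bbox(x_c, y_c, w, h, img_w, img_h):
    """Convert a normalized center-format box to pixel (x_min, y_min, x_max, y_max)."""
    x_min = (x_c - w / 2) * img_w
    y_min = (y_c - h / 2) * img_h
    x_max = (x_c + w / 2) * img_w
    y_max = (y_c + h / 2) * img_h
    return x_min, y_min, x_max, y_max

# Example: an object covering the center quarter of a 640x480 image.
cls, x_c, y_c, w, h = parse_yolo_label_line("0 0.5 0.5 0.25 0.25")
print(to_pixel_bbox(x_c, y_c, w, h, 640, 480))  # (240.0, 180.0, 400.0, 300.0)
```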
Not included:
- Raw dataset files and generated images are not distributed here. Images can be reproduced using the same generation pipeline used during benchmarking (e.g.,
png_make.py) if the SUOP dataset is available locally.
This repository is intended to help others reproduce the benchmark results (training/inference pipeline) on the SUOP dataset using the provided implementation and label files.

Repository structure:
object_detection/
├── PointNet++/
│   └── object_detection_code/
│       ├── model.py
│       ├── dataset.py
│       ├── train.py
│       └── object_detection.py
└── YOLOv8/
    ├── object_detection_code/
    │   ├── data.yaml
    │   ├── ply_change.py
    │   ├── png_make.py
    │   ├── train.py
    │   └── object_detection.py
    └── bbox_labels/
        ├── chair/
        │   ├── chair_range_3m/  (case_XXX.txt ...)
        │   ├── chair_range_6m/  (case_XXX.txt ...)
        │   └── chair_range_10m/ (case_XXX.txt ...)
        ├── drum/  ...
        ├── dummy/ ...
        ├── net/   ...
        └── tire/  ...
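The five directories under bbox_labels/ (chair, drum, dummy, net, tire) correspond to the object classes the detector is trained on. The authoritative class mapping and dataset paths live in YOLOv8/object_detection_code/data.yaml; purely as a hedged sketch (paths and class ordering below are hypothetical, not taken from the repository), an Ultralytics data.yaml for these classes typically looks like:

```yaml
# Hypothetical sketch -- consult YOLOv8/object_detection_code/data.yaml for the real values.
path: datasets/suop    # dataset root directory (assumed location)
train: images/train    # training images, relative to path (assumed)
val: images/val        # validation images, relative to path (assumed)
names:                 # class index -> class name (order assumed)
  0: chair
  1: drum
  2: dummy
  3: net
  4: tire
```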