MPRF

This repository (DLR-RM/MPRF) contains the official code for the paper:

@article{gonzalez2026multi,
  title={Multi-modal Loop Closure Detection with Foundation Models in Severely Unstructured Environments},
  author={Gonzalez, Laura Alejandra Encinar and Folkesson, John and Triebel, Rudolph and Giubilato, Riccardo},
  journal={2026 IEEE International Conference on Robotics and Automation (ICRA)},
  year={2026}
}

✅ Tested Configuration

  • Python 3.10.18
  • PyTorch 2.5.0+cu124
  • CUDA 12.4

⚙️ Environment Setup

Option 1:

conda env create -f environment.yml -n mprf_env

Option 2:

conda create -n mprf_env python=3.10.18
conda activate mprf_env
pip install -r requirements.txt
pip install torch-scatter -f https://data.pyg.org/whl/torch-2.5.0+cu124.html
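After setup, a quick sanity check (not part of the repository) can confirm that the interpreter and PyTorch match the tested configuration above:

```python
import sys

def report_versions():
    """Collect version info to compare against the tested configuration
    (Python 3.10.18, PyTorch 2.5.0+cu124, CUDA 12.4)."""
    info = {"python": "%d.%d.%d" % sys.version_info[:3]}
    try:
        import torch  # only importable once the environment is set up
        info["torch"] = torch.__version__
        info["cuda_available"] = torch.cuda.is_available()
    except ImportError:
        info["torch"] = None
    return info

print(report_versions())
```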

📂 1. Prepare Dataset

  • Download and prepare the dataset with the s3li toolkit, following the instructions in that repository.

  • You can also use the preprocessed .pkl files provided here: Datasets

    This folder includes:

    • .pkl files for each processed sequence
    • .pkl files containing valid match pairs used as ground truth for evaluation

    Note that the query images must be downloaded and stored in the same directories referenced inside the .pkl files.
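The exact schema of the processed .pkl files is defined by the s3li toolkit; the sketch below only illustrates round-tripping such a file with pickle, on a made-up sequence dict (the keys `timestamps` and `image_paths` are assumptions, not the toolkit's actual layout):

```python
import io
import pickle

# Hypothetical layout of a processed-sequence .pkl; the real keys produced
# by the s3li toolkit may differ -- inspect one file to confirm.
sequence = {
    "timestamps": [1000.0, 1000.5],
    "image_paths": ["imgs/000001.png", "imgs/000002.png"],
}

buf = io.BytesIO()
pickle.dump(sequence, buf)   # in-memory stand-in for a file on disk
buf.seek(0)
loaded = pickle.load(buf)
print(sorted(loaded))        # top-level keys of the sequence dict
```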


🧠 2. Download Weights

Weights for DINOv2 and SALAD can be found here: Weights


💾 3. Generate Feature Files

Run the scripts from the root folder MPRF:

DINOv2 features:

source .env && python Store_descriptors/store_dino_feat.py \
  --pickle_folder ./Processed_datasets/vulcano \
  --feature_save_path ./Dinov2_features/vulcano_dino_features.pkl \
  --model_type finetuned \
  --weights_path ./Weights/finetuned_dinov2.pth

SALAD features:

source .env && python Store_descriptors/store_salad_descriptor.py \
  --pickle_folder ./Processed_datasets/vulcano \
  --feature_save_path ./SALAD_features/vulcano_salad_features.pkl \
  --model_type pretrained

Pre-generated feature files for Etna and Vulcano datasets: Features
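To verify a generated feature file before running the pipeline, it can be loaded and inspected with pickle. The layout below (timestamp mapped to a descriptor array) and the descriptor size are illustrative assumptions; check a real file for the true structure:

```python
import pickle

import numpy as np

# Assumed layout: one global descriptor per frame, keyed by timestamp.
# The descriptor size (768) is a placeholder, not the pipeline's actual shape.
features = {
    1000.0: np.zeros(768, dtype=np.float32),
    1000.5: np.zeros(768, dtype=np.float32),
}
with open("example_features.pkl", "wb") as f:
    pickle.dump(features, f)

with open("example_features.pkl", "rb") as f:
    loaded = pickle.load(f)
for ts, desc in loaded.items():
    print(ts, desc.shape, desc.dtype)
```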


⚙️ 4. Update Configuration

Edit the config.yaml file to set your dataset paths, model weights, and preferences.
Below is an example configuration with inline explanations:

# config.yaml

# === Paths ===
eval_pairs_path: "./Datasets/s3li-dataset/gt_dataset/s3li_etna_pairs.pkl"   # Ground truth file with valid matches

dino_features_file: "./Datasets/Dinov2_descriptors/etna_finetuned_dinov2_features.pkl"   # Features generated from store_dino_feat.py

salad_features_file: "./Datasets/SALAD_descriptors/etna_pretrained_salad_features.pkl"   # Features generated from store_salad_descriptor.py

results_folder: "./Results/New_results/"    # Output directory where results will be stored
dataset_path: "./Datasets/s3li-etna/"       # Path to the dataset


# === Models ===
dino_model: "dinov2_vitb14"
dino_weights: "./Weights/finetuned_dinov2.pth"
salad_model_version: "pt"                   # Options: "pt" (pretrained) or "rt" (retrained)
salad_weights: "./Weights/pretrained_salad.ckpt"    # Required if using retrained model

# === Flags ===
run_pipeline: false           # run the full pipeline for all queries; if false, only evaluate the stored results
pose_estimation: false        # whether to perform pose estimation
visualize_matches: false      # visualize correspondences used for pose estimation
compute_pr_curve: true        # sweep the retrieval threshold over the range below to compute a precision-recall curve

# === Thresholds ===
similarity_threshold_retrieval: 0.00                        # cosine distance between descriptors for retrieval (overridden when computing the PR curve)
similarity_threshold_retrieval_pr_range: [0.0, 0.995, 10]   # [min, max, number of samples] used to span the PR curve
similarity_threshold_3d: 0.95                               # threshold for 3D correspondences
time_threshold: 100                                         # minimum time difference between a candidate and the query

# === Top-k parameters ===
k: 20
k_refine: 20
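Since config.yaml is plain YAML, it can be loaded with PyYAML; how eval_precision.py actually parses it is not shown here, so this is just a sketch on a trimmed-down config:

```python
import yaml  # PyYAML

cfg_text = """
run_pipeline: false
compute_pr_curve: true
similarity_threshold_retrieval_pr_range: [0.0, 0.995, 10]
k: 20
k_refine: 20
"""
cfg = yaml.safe_load(cfg_text)  # parses booleans, lists, and numbers natively
print(cfg["run_pipeline"], cfg["similarity_threshold_retrieval_pr_range"])
```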

▶️ 5. Run the Main Script

python eval_precision.py --config config.yaml

This command runs the full pipeline:

🖼️ Image Retrieval

Processes all query images from the evaluation .pkl file and retrieves the most similar candidates.

Generates two files per query (identified by the query’s timestamp):

  • *_top{k}.pkl → Top-k most similar candidates (first retrieval stage)
  • *_top{k_refine}.pkl → Top-k_refine candidates (final retrieval stage)
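Conceptually, the retrieval stage ranks database descriptors by similarity to the query while rejecting candidates that are too close in time (the `time_threshold` in the config). A minimal single-stage sketch, where the function name, similarity measure, and filtering details are assumptions rather than the repository's implementation:

```python
import numpy as np

def topk_candidates(query, db_descs, db_times, q_time,
                    k=5, sim_threshold=0.0, time_threshold=100.0):
    """Rank database descriptors by cosine similarity to the query,
    discarding candidates closer than `time_threshold` in time."""
    q = query / np.linalg.norm(query)
    d = db_descs / np.linalg.norm(db_descs, axis=1, keepdims=True)
    sims = d @ q                                   # cosine similarities
    valid = np.abs(db_times - q_time) >= time_threshold
    sims = np.where(valid & (sims >= sim_threshold), sims, -np.inf)
    order = np.argsort(-sims)[:k]                  # best-first
    return [(int(i), float(sims[i])) for i in order if np.isfinite(sims[i])]

db = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
times = np.array([0.0, 200.0, 400.0])
# index 0 is the query itself: excluded by the time gate despite sim = 1.0
print(topk_candidates(np.array([1.0, 0.0]), db, times, q_time=0.0, k=2))
```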

🤖 Pose Estimation

For each query, generates a .csv file containing one row per candidate.
Each row includes the estimated transformation and yaw angle between the query and its candidate image.
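These per-query .csv files can be read with the standard library; the column names below are hypothetical placeholders, not the pipeline's actual header:

```python
import csv
import io

# Hypothetical column names -- check a generated .csv for the real header.
csv_text = (
    "candidate_ts,tx,ty,tz,yaw_deg\n"
    "200.0,1.20,-0.30,0.05,12.5\n"
)
rows = list(csv.DictReader(io.StringIO(csv_text)))
for row in rows:
    print(row["candidate_ts"], row["yaw_deg"])
```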

📊 Evaluation

Computes and reports precision and recall at Top-1, Top-5, Top-10, and Top-20.
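One plausible reading of these metrics (a query counts as correct if any of its top-k candidates is a ground-truth match) can be sketched as follows; the paper's exact definition may differ:

```python
def precision_at_k(retrieved, ground_truth, k):
    """Fraction of queries whose top-k candidate list contains at least
    one ground-truth match (illustrative, not the repository's code)."""
    if not retrieved:
        return 0.0
    hits = sum(
        1 for q, cands in retrieved.items()
        if any(c in ground_truth.get(q, set()) for c in cands[:k])
    )
    return hits / len(retrieved)

retrieved = {"q1": ["a", "b"], "q2": ["c", "d"]}
ground_truth = {"q1": {"b"}, "q2": {"x"}}
print(precision_at_k(retrieved, ground_truth, 1))  # only misses at k=1
print(precision_at_k(retrieved, ground_truth, 2))  # q1 hits at k=2
```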


💡 To Skip Processing and Only Compute Metrics

If results are already stored and you only want to evaluate precision without running the full pipeline again, set the following in your config.yaml:

run_pipeline: false
