Any to Full: Prompting Depth Anything for Depth Completion in One Stage

Zhiyuan Zhou1 · Ruofeng Liu2 · Taichi Liu1 · Weijian Zuo3 · Shanshan Wang1 · Zhiqing Hong4 · Desheng Zhang1
1Rutgers Univ.   2Michigan State Univ.   3JD Logistics   4HKUST (GZ)

Paper PDF · Code · Hugging Face Demo · Model Weights


Overview

Accurate dense depth is essential for robotics, but the depth maps from commodity RGBD sensors are often sparse or incomplete. Any2Full is a one-stage, domain-general, and pattern-agnostic depth completion framework. It reformulates completion as scale-prompting adaptation of a pretrained monocular depth estimation (MDE) model, so the model retains strong geometric priors while adapting to diverse sparse depth patterns.

Highlights

  • One-stage scale prompting: achieves domain-general depth completion by fusing pretrained MDE priors.
  • Scale-Aware Prompt Encoder: strong robustness under different sparsity levels and sampling patterns.
  • Lightweight design: efficient inference with a single forward pass.

Requirements (Minimal for Inference)

  • python==3.9.x
  • torch==2.0.1
  • torchvision==0.15.2
  • numpy
  • pillow
  • matplotlib
  • scipy
  • opencv-python
pip install torch==2.0.1 torchvision==0.15.2 \
  numpy pillow matplotlib scipy opencv-python

Model Usage

1) Quick Inference (Single or Batch RGBD)

Use run_any2full.py for single RGBD pairs or batch folders (matched by filename stem).

Example inputs are provided under assets/: assets/rgb and assets/depth can be used as inputs, and assets/output shows the corresponding outputs.

# Single pair
python run_any2full.py \
  --rgb /path/to/rgb.png \
  --depth /path/to/depth.png \
  --checkpoint /path/to/ours_checkpoint.pth \
  --out_dir ./outputs

# Batch (match by basename)
python run_any2full.py \
  --rgb_dir /path/to/rgb_dir \
  --depth_dir /path/to/depth_dir \
  --checkpoint /path/to/ours_checkpoint.pth \
  --out_dir ./outputs
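Batch mode pairs files whose filename stems match, so e.g. rgb_dir/0001.png is paired with depth_dir/0001.npy regardless of extension. A minimal sketch of such stem matching (a hypothetical helper, not the script's actual internals):

```python
from pathlib import Path

def match_rgbd_pairs(rgb_dir, depth_dir):
    """Pair RGB and depth files whose filename stems (name minus extension) match."""
    depth_by_stem = {p.stem: p for p in Path(depth_dir).iterdir() if p.is_file()}
    pairs = []
    for rgb_path in sorted(Path(rgb_dir).iterdir()):
        depth_path = depth_by_stem.get(rgb_path.stem)
        if depth_path is not None:
            pairs.append((rgb_path, depth_path))
    return pairs
```

RGB images without a matching depth file are simply skipped, which mirrors the "match by basename" behavior described above.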

Optional denoise (from utils/denoise.py): Any2Full relies on accurate sparse depth as an anchor, so cleaner raw depth generally yields better results. We provide a simple denoising pre-processing step for convenience.

python run_any2full.py \
  --rgb /path/to/rgb.png \
  --depth /path/to/depth.png \
  --checkpoint /path/to/ours_checkpoint.pth \
  --out_dir ./outputs \
  --denoise \
  --denoise_threshold 2 \
  --denoise_kernel_size 9
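The flags above suggest a neighborhood-statistics outlier filter: a valid pixel is dropped when it deviates from its local window by more than a standard-deviation multiple, provided enough valid neighbors exist. A minimal sketch under that reading (the actual utils/denoise.py implementation may differ):

```python
import numpy as np

def denoise_sparse_depth(depth, threshold=2.0, kernel_size=9, min_valid=3):
    """Remove sparse-depth outliers via local neighborhood statistics.

    A valid pixel (depth > 0) is invalidated when its value lies more than
    `threshold` standard deviations from the mean of the valid pixels in its
    (kernel_size x kernel_size) window, given at least `min_valid` neighbors.
    """
    half = kernel_size // 2
    h, w = depth.shape
    out = depth.copy()
    for y, x in zip(*np.nonzero(depth > 0)):
        y0, y1 = max(0, y - half), min(h, y + half + 1)
        x0, x1 = max(0, x - half), min(w, x + half + 1)
        window = depth[y0:y1, x0:x1]
        neighbors = window[window > 0]  # includes the center pixel itself
        if neighbors.size - 1 < min_valid:
            out[y, x] = 0.0  # too few neighbors to judge; drop the point
            continue
        mean, std = neighbors.mean(), neighbors.std()
        if std > 0 and abs(depth[y, x] - mean) > threshold * std:
            out[y, x] = 0.0  # statistical outlier; mark as invalid
    return out
```

Invalidated pixels are set to 0, the conventional "no measurement" value, so downstream completion treats them as missing rather than wrong.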

Inference Parameters (Detailed)

  • --rgb: RGB image path (single mode).
  • --depth: Sparse depth path (.png or .npy) (single mode).
  • --rgb_dir: RGB directory (batch mode, filename stem matched).
  • --depth_dir: Depth directory (batch mode, filename stem matched).
  • --checkpoint: Any2Full checkpoint path (required).
  • --da_ckpt_path: Optional backbone MDE checkpoint (for encoder init).
  • --encoder: Backbone variant (vits, vitb, vitl).
  • --depth_scale: Scale factor for depth PNGs (depth = img / scale).
  • --denoise: Enable sparse depth outlier removal before inference.
  • --denoise_threshold: Outlier threshold (std multiplier).
  • --denoise_kernel_size: Neighborhood size (odd int) for denoising; if omitted, it is estimated automatically.
  • --denoise_min_valid: Minimum valid neighbors for denoise.
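For reference, --depth_scale converts integer depth PNGs to metric depth via depth = img / scale (e.g. scale=1000 for millimeter PNGs). A minimal loading sketch, assuming .npy inputs already store metric depth since --depth_scale is documented for PNGs only; the script's actual loader may differ:

```python
import numpy as np
from pathlib import Path

def load_sparse_depth(path, depth_scale=1000.0):
    """Load sparse depth from .npy (assumed metric) or .png (depth = img / scale)."""
    path = Path(path)
    if path.suffix == ".npy":
        return np.load(path).astype(np.float32)
    from PIL import Image  # imported lazily; only needed for PNG inputs
    raw = np.array(Image.open(path)).astype(np.float32)
    return raw / depth_scale  # e.g. millimeters -> meters when depth_scale=1000
```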

Model Weights


Citation

If you find our work useful, please consider citing:

@article{zhou2026any2full,
  title={Any to Full: Prompting Depth Anything for Depth Completion in One Stage},
  author={Zhou, Zhiyuan and Liu, Ruofeng and Liu, Taichi and Zuo, Weijian and Wang, Shanshan and Hong, Zhiqing and Zhang, Desheng},
  journal={arXiv:2603.05711},
  year={2026}
}