- GitHub - LiheYoung/Depth-Anything: [CVPR 2024] Depth Anything . . .
This work presents Depth Anything, a highly practical solution for robust monocular depth estimation by training on a combination of 1.5M labeled images and 62M+ unlabeled images. Try our latest Depth Anything V2 models! 2024-06-14: Depth Anything V2 is released. 2024-02-27: Depth Anything is accepted by CVPR 2024.
- Depth Anything
Depth Anything is trained on 1.5M labeled images and 62M+ unlabeled images jointly, providing the most capable Monocular Depth Estimation (MDE) foundation models with the following features: zero-shot relative depth estimation, better than MiDaS v3.1 (BEiT_L-512).
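A minimal sketch of zero-shot relative depth with one of these checkpoints, assuming the Hugging Face transformers depth-estimation pipeline; the model id and input path are assumptions, so substitute whichever checkpoint and image you use:

```python
# Zero-shot relative depth with the transformers depth-estimation pipeline.
from PIL import Image
from transformers import pipeline

pipe = pipeline("depth-estimation", model="LiheYoung/depth-anything-base-hf")  # assumed model id
result = pipe(Image.open("example.jpg"))  # hypothetical input path

# "depth" is a PIL visualization; "predicted_depth" is the raw relative-depth tensor.
result["depth"].save("depth.png")
print(result["predicted_depth"].shape)
```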
- Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data
Abstract: This work presents Depth Anything, a highly practical solution for robust monocular depth estimation. Without pursuing novel technical modules, we aim to build a simple yet powerful foundation model dealing with any images under any circumstances.
- Depth Anything V2 - a Hugging Face Space by depth-anything
Upload an image to generate a depth map showing the spatial depth of the scene. You get both a colored depth map and a grayscale depth map as results.
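A hedged sketch of reproducing the Space's two outputs locally, assuming the transformers pipeline and the depth-anything/Depth-Anything-V2-Base-hf model id (an assumption), with a matplotlib colormap standing in for whatever coloring the Space actually applies:

```python
# Produce a grayscale and a colored depth map from one prediction.
import matplotlib.cm as cm
import numpy as np
from PIL import Image
from transformers import pipeline

pipe = pipeline("depth-estimation", model="depth-anything/Depth-Anything-V2-Base-hf")  # assumed id
depth = np.array(pipe(Image.open("example.jpg"))["depth"], dtype=np.float32)

# Normalize to [0, 1], then render once as grayscale and once through a colormap.
norm = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
Image.fromarray((norm * 255).astype(np.uint8)).save("depth_gray.png")
Image.fromarray((cm.inferno(norm)[..., :3] * 255).astype(np.uint8)).save("depth_color.png")
```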
- Depth Anything V2
Depth Anything V2 is trained from 595K synthetic labeled images and 62M+ real unlabeled images, providing the most capable monocular depth estimation (MDE) model. We also release six metric depth models of three scales for indoor and outdoor scenes, respectively.
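A sketch of loading one of the metric depth checkpoints, assuming the class, constructor arguments, and checkpoint naming from the Depth-Anything-V2 repository's metric_depth instructions; the file name, depth ranges, and input path here are assumptions:

```python
# Metric depth inference with an indoor (Hypersim-tuned) checkpoint.
import cv2
import torch
from depth_anything_v2.dpt import DepthAnythingV2

model = DepthAnythingV2(encoder="vitl", features=256,
                        out_channels=[256, 512, 1024, 1024],
                        max_depth=20)  # assumed indoor range; outdoor checkpoints use a larger max_depth
model.load_state_dict(torch.load("checkpoints/depth_anything_v2_metric_hypersim_vitl.pth",
                                 map_location="cpu"))
model.eval()

depth_m = model.infer_image(cv2.imread("example.jpg"))  # HxW depth map in meters
```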
- GitHub - DepthAnything/Depth-Anything-V2: [NeurIPS 2024] Depth Anything . . .
It significantly outperforms V1 in fine-grained details and robustness. Compared with SD-based models, it enjoys faster inference speed, fewer parameters, and higher depth accuracy. 2025-01-22: Video Depth Anything has been released. It generates consistent depth maps for super-long videos (e.g., over 5 minutes).
- Depth-Anything-V2-Base - Hugging Face
Depth Anything V2 is trained from 595K synthetic labeled images and 62M+ real unlabeled images, providing the most capable monocular depth estimation (MDE) model with the following features: more fine-grained details than Depth Anything V1.
- Depth estimation with DepthAnything and OpenVINO
Depth Anything is a highly practical solution for robust monocular depth estimation. Without pursuing novel technical modules, this project aims to build a simple yet powerful foundation model.
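A hedged sketch of the OpenVINO route: export the PyTorch model once, then run it with the OpenVINO runtime. The model id, input size, and conversion flow are assumptions about how such a pipeline could be wired up, not the tutorial's exact code:

```python
# Export a Depth Anything checkpoint to OpenVINO IR and run it on CPU.
import numpy as np
import openvino as ov
import torch
from transformers import AutoModelForDepthEstimation

torch_model = AutoModelForDepthEstimation.from_pretrained("LiheYoung/depth-anything-base-hf")
torch_model.eval()

# 518 = 14 * 37: the ViT backbone expects spatial dims divisible by its patch size.
ov_model = ov.convert_model(torch_model, example_input=torch.rand(1, 3, 518, 518))
ov.save_model(ov_model, "depth_anything.xml")

compiled = ov.Core().compile_model(ov_model, "CPU")
depth = compiled(np.random.rand(1, 3, 518, 518).astype(np.float32))[0]  # first model output
print(depth.shape)
```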