Update README.md
README.md CHANGED
@@ -6,6 +6,23 @@ license: apache-2.0
See [the Files tab](https://huggingface.co/coreml-projects/depth-anything/tree/main) for converted models.

The Depth Anything model was introduced in the paper [Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data](https://arxiv.org/abs/2401.10891) by Lihe Yang et al. and was first released in [this repository](https://github.com/LiheYoung/Depth-Anything).

An [online demo](https://huggingface.co/spaces/LiheYoung/Depth-Anything) is also provided.

Disclaimer: The team releasing Depth Anything did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

Depth Anything leverages the [DPT](https://huggingface.co/docs/transformers/model_doc/dpt) architecture with a [DINOv2](https://huggingface.co/docs/transformers/model_doc/dinov2) backbone.

The model is trained on ~62 million images, obtaining state-of-the-art results for both relative and absolute depth estimation.

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/depth_anything_overview.jpg" alt="drawing" width="600"/>

<small>Depth Anything overview. Taken from the <a href="https://arxiv.org/abs/2401.10891">original paper</a>.</small>
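
For reference, the original PyTorch checkpoints can be run with the `transformers` depth-estimation pipeline. This is a minimal sketch, assuming the `LiheYoung/depth-anything-small-hf` checkpoint; the Core ML conversions hosted in this repo are meant for Apple frameworks instead.

```python
# Minimal sketch: depth estimation with an original PyTorch checkpoint via
# the transformers pipeline. The checkpoint name is an assumption; this repo
# hosts the Core ML conversions, not the PyTorch weights.
import requests
from PIL import Image
from transformers import pipeline

pipe = pipeline("depth-estimation", model="LiheYoung/depth-anything-small-hf")

# Load a sample image and run inference.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

result = pipe(image)
result["depth"].save("depth.png")  # PIL image holding the predicted depth map
```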

## Download

Install `huggingface-hub`
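
A minimal sketch of fetching the converted models with the `huggingface_hub` Python API (the exact file layout of this repo is not shown here, so check the Files tab):

```python
# Install first: pip install huggingface-hub
from huggingface_hub import snapshot_download

# Download the full repo of converted models; the local folder layout
# mirrors the Files tab on the Hub.
local_dir = snapshot_download(repo_id="coreml-projects/depth-anything")
print(local_dir)  # local path containing the converted Core ML models
```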