Spaces: Running on Zero
brandonsmart committed • b2244fb
Parent(s): 5ed9923
Updated Readme

README.md CHANGED
@@ -1,54 +1,12 @@
```bash
cd splatt3r
```

2. Setup Anaconda Environment

```bash
conda env create -f environment.yml
pip install git+https://github.com/dcharatan/diff-gaussian-rasterization-modified
```

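A quick sanity check of the environment created above can help before moving on. This is only a sketch: the environment name `splatt3r` is an assumption (use whatever name `environment.yml` actually defines), and the import name of the modified rasterizer is assumed to match the upstream `diff_gaussian_rasterization` package.

```bash
# Assumed environment name; substitute the name defined in environment.yml if it differs
conda activate splatt3r

# Confirm PyTorch sees a GPU and the Gaussian rasterizer extension imports
python -c "import torch; print('CUDA available:', torch.cuda.is_available())"
python -c "import diff_gaussian_rasterization; print('rasterizer import OK')"
```
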
3. (Optional) Compile the CUDA kernels for RoPE (as in MASt3R and CroCo v2)

```bash
cd src/dust3r_src/croco/models/curope/
python setup.py build_ext --inplace
cd ../../../../../
```

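If the optional build succeeds, `setup.py build_ext --inplace` should leave a compiled extension next to the CuRoPE sources. A hedged check (the exact filename depends on your Python and CUDA versions):

```bash
# The compiled extension is typically named curope.cpython-*.so; adjust if your build names it differently
ls src/dust3r_src/croco/models/curope/*.so
```
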
## Checkpoints

We train our model using the pretrained `MASt3R_ViTLarge_BaseDecoder_512_catmlpdpt_metric` checkpoint from the MASt3R authors, available from [the MASt3R GitHub repo](https://github.com/naver/mast3r). This checkpoint is placed at the file path `checkpoints/MASt3R_ViTLarge_BaseDecoder_512_catmlpdpt_metric.pth`.

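
A minimal sketch for fetching that checkpoint into the expected location. The download URL below is the one listed in the MASt3R repository at the time of writing; verify it against https://github.com/naver/mast3r if it has moved.

```bash
mkdir -p checkpoints
# URL taken from the MASt3R README; check the repo if the link changes
wget -P checkpoints \
  https://download.europe.naverlabs.com/ComputerVision/MASt3R/MASt3R_ViTLarge_BaseDecoder_512_catmlpdpt_metric.pth
```
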
A pretrained Splatt3R model can be downloaded [here]() (redacted link).

## Data

We use ScanNet++ to train our model. We download the data from the [official ScanNet++ homepage](https://kaldir.vc.in.tum.de/scannetpp/) and process the data using SplaTAM's modified version of [the ScanNet++ toolkit](https://github.com/Nik-V9/scannetpp). We save the processed data to the `processed` subfolder of the ScanNet++ root directory.

Our generated test coverage files, and our training and testing splits, can be downloaded [here]() (redacted link), and placed in `data/scannetpp`.

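
As a rough sketch of where those downloaded files are expected to live, based only on the paths mentioned above (the exact contents of the download are not listed here):

```bash
# Create the expected folder and copy the downloaded coverage files and splits into it
mkdir -p data/scannetpp
# cp <downloaded_splits_and_coverage_files>/* data/scannetpp/   # placeholder path
```
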
## Demo

The Gradio demo can be run using `python demo.py <checkpoint_path>`, replacing `<checkpoint_path>` with the trained network path. A checkpoint will be available for the public release of this code.

This demo generates a `.ply` file that represents the scene, which can be downloaded and rendered using online 3D Gaussian Splatting viewers such as [this Three.js viewer](https://projects.markkellogg.org/threejs/demo_gaussian_splats_3d.php?art=1&cu=0,-1,0&cp=0,1,0&cla=1,0,0&aa=false&2d=false&sh=0) or [the SuperSplat editor](https://playcanvas.com/supersplat/editor).

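
For example, assuming the checkpoint has been saved as `checkpoints/splatt3r.ckpt` (a hypothetical filename), the demo can be launched locally and the exported scene picked up from the Gradio interface:

```bash
# The checkpoint path is a placeholder; point it at wherever you saved the Splatt3R weights
python demo.py checkpoints/splatt3r.ckpt
# Gradio prints a local URL (http://127.0.0.1:7860 by default); upload an image pair there
# and download the generated .ply for use in the viewers linked above.
```
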
## Training

Our training run can be recreated by running `python main.py configs/main.yaml`. Other configurations can be found in the `configs` folder.
## BibTeX

Forthcoming arXiv citation

---
title: 'Splatt3R: Zero-shot Gaussian Splatting from Uncalibrated Image Pairs'
emoji: ⛰️
colorFrom: indigo
colorTo: red
sdk: gradio
sdk_version: 4.37.2
app_file: demo.py
pinned: false
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference

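
The front matter above tells Hugging Face to launch `demo.py` with the Gradio SDK pinned at 4.37.2. A rough sketch of reproducing that locally, using the demo invocation from the instructions above (the checkpoint path is a placeholder):

```bash
# Match the SDK version pinned in the Space's front matter
pip install gradio==4.37.2
# demo.py is the app_file declared above; supply your own checkpoint path
python demo.py <checkpoint_path>
```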