thomagram committed
Commit d164891 • 1 Parent(s): 77c753d

Update README.md


Update README following instructions

Files changed (1)
  1. README.md +30 -75
README.md CHANGED
@@ -1,75 +1,30 @@
- # StyleNeRF: A Style-based 3D-Aware Generator for High-resolution Image Synthesis
-
- ![Random Sample](./docs/random_sample.jpg)
-
- **StyleNeRF: A Style-based 3D-Aware Generator for High-resolution Image Synthesis**<br>
- Jiatao Gu, Lingjie Liu, Peng Wang, Christian Theobalt<br>
- ### [Project Page](http://jiataogu.me/style_nerf) | [Video](http://jiataogu.me/style_nerf) | [Paper](https://arxiv.org/abs/2110.08985) | [Data](#dataset)<br>
-
- Abstract: *We propose StyleNeRF, a 3D-aware generative model for photo-realistic high-resolution image synthesis with high multi-view consistency, which can be trained on unstructured 2D images. Existing approaches either cannot synthesize high-resolution images with fine details or yield noticeable 3D-inconsistent artifacts. In addition, many of them lack control over style attributes and explicit 3D camera poses. StyleNeRF integrates the neural radiance field (NeRF) into a style-based generator to tackle the aforementioned challenges, i.e., improving rendering efficiency and 3D consistency for high-resolution image generation. We perform volume rendering only to produce a low-resolution feature map and progressively apply upsampling in 2D to address the first issue. To mitigate the inconsistencies caused by 2D upsampling, we propose multiple designs, including a better upsampler and a new regularization loss. With these designs, StyleNeRF can synthesize high-resolution images at interactive rates while preserving 3D consistency at high quality. StyleNeRF also enables control of camera poses and different levels of styles, which can generalize to unseen views. It also supports challenging tasks, including zoom-in and zoom-out, style mixing, inversion, and semantic editing.*
-
- ## Requirements
- The codebase is tested on:
- * Python 3.7
- * PyTorch 1.7.1
- * 8 Nvidia GPUs (Tesla V100, 32GB) with CUDA version 11.0
-
- Install the additional Python libraries with:
-
- ```
- pip install -r requirements.txt
- ```
-
- Please refer to https://github.com/NVlabs/stylegan2-ada-pytorch for additional software/hardware requirements.
-
- ## Dataset
- We follow the same dataset format as StyleGAN2-ADA: the dataset can be either a folder of images or a zipped archive of them.
-
-
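For example, an image folder can be packed into such a zip with the `dataset_tool.py` script from stylegan2-ada-pytorch (a minimal sketch; the flags follow that repository and may differ here, and all paths are placeholders):

```bash
# Sketch: pack a folder of images into a dataset zip at the training resolution.
# --source/--dest/--width/--height follow stylegan2-ada-pytorch's dataset_tool.py.
python dataset_tool.py --source=/path/to/images --dest=datasets/ffhq_512.zip \
    --width=512 --height=512
```

The resulting zip (or the raw image folder) is what `data=${DATASET}` points to in the training command below.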
- ## Train a new StyleNeRF model
- ```bash
- python run_train.py outdir=${OUTDIR} data=${DATASET} spec=paper512 model=stylenerf_ffhq
- ```
- It will automatically detect all usable GPUs.
-
- Please check the configuration files in `conf/model` and `conf/spec`. You can always add your own model config. For more details on how to use the Hydra configuration system, please see https://hydra.cc/docs/intro/.
-
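As an illustration of the Hydra command-line syntax, the composed configuration can be inspected and individual fields overridden per run (a sketch: `--cfg job` is a standard Hydra flag, while `spec.gamma` is an assumed key name that must match a field actually defined in `conf/spec`):

```bash
# Print the fully composed configuration without starting a run (standard Hydra flag).
python run_train.py outdir=runs/ffhq data=datasets/ffhq_512.zip spec=paper512 model=stylenerf_ffhq --cfg job

# Override a single field from a config group on the command line.
# "spec.gamma" is illustrative and not verified against this repository's config schema.
python run_train.py outdir=runs/ffhq data=datasets/ffhq_512.zip \
    spec=paper512 model=stylenerf_ffhq spec.gamma=1.0
```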
- ## Render the pretrained model
- ```bash
- python generate.py --outdir=${OUTDIR} --trunc=0.7 --seeds=${SEEDS} --network=${CHECKPOINT_PATH} --render-program="rotation_camera"
- ```
- It supports different rotation trajectories for rendering new videos.
-
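With the placeholders filled in (the checkpoint path below is only an example location, and the seed-list syntax is assumed to follow stylegan2-ada-pytorch):

```bash
# Sketch: render rotating-camera videos for four seeds from a local checkpoint.
python generate.py --outdir=out/renders --trunc=0.7 --seeds=0,1,2,3 \
    --network=checkpoints/ffhq_512.pkl --render-program="rotation_camera"
```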
- ## Run a demo page
- ```bash
- python web_demo.py 21111
- ```
- By default, it runs a Gradio-powered demo at https://localhost:21111
- ![Web demo](./docs/web_demo.gif)
- ## Run a GUI visualizer
- ```bash
- python visualizer.py
- ```
- An interactive application will show up for users to play with.
- ![GUI demo](./docs/gui_demo.gif)
- ## Citation
-
- ```bibtex
- @inproceedings{
- gu2022stylenerf,
- title={StyleNeRF: A Style-based 3D Aware Generator for High-resolution Image Synthesis},
- author={Jiatao Gu and Lingjie Liu and Peng Wang and Christian Theobalt},
- booktitle={International Conference on Learning Representations},
- year={2022},
- url={https://openreview.net/forum?id=iUuzzTMUw9K}
- }
- ```
-
-
- ## License
-
- Copyright &copy; Facebook, Inc. All Rights Reserved.
-
- The majority of StyleNeRF is licensed under [CC-BY-NC](https://creativecommons.org/licenses/by-nc/4.0/); however, portions of this project are available under separate license terms: all code used or modified from [stylegan2-ada-pytorch](https://github.com/NVlabs/stylegan2-ada-pytorch) is under the [Nvidia Source Code License](https://nvlabs.github.io/stylegan2-ada-pytorch/license.html).
-
-
 
+ ---
+ title: StyleNeRF
+ emoji: 🌍
+ colorFrom: purple
+ colorTo: blue
+ sdk: gradio
+ app_file: app.py
+ pinned: false
+ ---
+ # Configuration
+ `title`: _string_
+ Display title for the Space
+ `emoji`: _string_
+ Space emoji (emoji-only character allowed)
+ `colorFrom`: _string_
+ Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
+ `colorTo`: _string_
+ Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
+ `sdk`: _string_
+ Can be either `gradio` or `streamlit`
+ `sdk_version`: _string_
+ Only applicable for `streamlit` SDK.
+ See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
+
+ `app_file`: _string_
+ Path to your main application file (which contains either `gradio` or `streamlit` Python code).
+ Path is relative to the root of the repository.
+
+ `pinned`: _boolean_
+ Whether the Space stays on top of your list.
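For reference, the header block at the top of this README combines these keys for a Gradio Space; a Streamlit Space would additionally set `sdk_version`, as in the sketch below (all values are illustrative, and the supported versions are listed in the linked doc):

```yaml
---
title: My Demo
emoji: 🚀
colorFrom: red
colorTo: yellow
sdk: streamlit
sdk_version: 1.10.0
app_file: app.py
pinned: true
---
```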