Update README.md
README.md CHANGED
@@ -41,17 +41,17 @@ Use the model with [UniDiffuser codebase](https://github.com/thu-ml/unidiffuser)
 ## Model Details
 - **Model type:** Diffusion-based multi-modal generation model
 - **Language(s):** English
-- **License:**
+- **License:** agpl-3.0
 - **Model Description:** This is a model that can perform image, text, text-to-image, image-to-text, and image-text pair generation. Its main component is a [U-ViT](https://github.com/baofff/U-ViT), which parameterizes the joint noise prediction network. The other components serve as encoders and decoders for the different modalities: a pretrained image autoencoder from [Stable Diffusion](https://github.com/CompVis/stable-diffusion), a pretrained [image ViT-B/32 CLIP encoder](https://github.com/openai/CLIP), a pretrained [text ViT-L CLIP encoder](https://huggingface.co/openai/clip-vit-large-patch14), and a [GPT-2](https://github.com/openai/gpt-2) text decoder finetuned by us.
 - **Resources for more information:** [GitHub Repository](https://github.com/thu-ml/unidiffuser), [Paper]().


 ## Direct Use

-_Note:
+_Note: Most of this section is taken from the [Stable Diffusion model card](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original), but applies in the same way to UniDiffuser_.


-The model is
+The model should be used following the agpl-3.0 license. Possible usage includes:

 - Safe deployment of models which have the potential to generate harmful content.
 - Probing and understanding the limitations and biases of generative models.
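Since the Direct Use text added here stops at the license note and the bullet list, a minimal loading-and-sampling sketch may help illustrate the generation modes the Model Description lists. This is a sketch under assumptions, not taken from this README (which points to the UniDiffuser codebase): it assumes a `diffusers` version that ships `UniDiffuserPipeline` and a checkpoint published under a hub id like `thu-ml/unidiffuser-v1`.

```python
# Hedged sketch of three of the generation modes described in the card:
# joint (image, text) sampling, text-to-image, and image-to-text.
# The hub id "thu-ml/unidiffuser-v1" is an assumption, not taken from this README.
import torch
from diffusers import UniDiffuserPipeline

pipe = UniDiffuserPipeline.from_pretrained(
    "thu-ml/unidiffuser-v1", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# Joint generation: with no prompt or image, the pipeline samples a pair.
sample = pipe(num_inference_steps=20, guidance_scale=8.0)
image, caption = sample.images[0], sample.text[0]

# Text-to-image: supplying a prompt selects text-conditioned image sampling.
image = pipe(
    prompt="an elephant under the sea",
    num_inference_steps=20,
    guidance_scale=8.0,
).images[0]

# Image-to-text: supplying an image selects image-conditioned text sampling.
caption = pipe(image=image, num_inference_steps=20, guidance_scale=8.0).text[0]
```

The pipeline infers the mode from its inputs; if memory serves, it also exposes explicit setters such as `set_joint_mode()`, `set_text_to_image_mode()`, and `set_image_to_text_mode()`, plus `set_text_mode()` and `set_image_mode()` for the unconditional marginals.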