---
title: DALL·E mini
emoji: 🥑
colorFrom: red
colorTo: purple
sdk: streamlit
app_file: app/app.py
pinned: false
---
# DALL·E Mini
_Generate images from a text prompt_
<img src="img/logo.png" width="200">
Our logo was generated with DALL·E mini using the prompt "logo of an armchair in the shape of an avocado".
You can create your own pictures with [the demo](https://huggingface.co/spaces/flax-community/dalle-mini) (temporarily in beta on Hugging Face Spaces but soon to be open to all).
## How does it work?
Refer to [our report](https://wandb.ai/dalle-mini/dalle-mini/reports/DALL-E-mini--Vmlldzo4NjIxODA).
## Where does the logo come from?
The "armchair in the shape of an avocado" was used by OpenAI when releasing DALL·E to illustrate the model's capabilities. Having successful predictions on this prompt represents a big milestone to us.
## Development
This section is for adventurous people who want to look into the code.
### Dependencies Installation
The root folder and associated `requirements.txt` are only for the app.
You will find the necessary requirements in each sub-section.
Create a new Python virtual environment and install the project dependencies inside it. Use the `-f` (`--find-links`) option so that `pip` can find the appropriate `libtpu` package required for TPU hardware.
Adapt the installation to your own hardware and follow library installation instructions.
```
$ pip install -r requirements.txt -f https://storage.googleapis.com/jax-releases/libtpu_releases.html
```
If you use `conda`, you can create the virtual environment and install everything with `conda env update -f environments.yaml`.
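After installation (with either method), a quick sanity check is to verify that JAX sees your accelerator:

```
import jax

# On a TPU VM this should list the TPU cores (e.g. 8 on a v3-8);
# on GPU or CPU setups it will list those devices instead.
print(jax.devices())
print("Device count:", jax.device_count())
```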
### Training of VQGAN
The VQGAN was trained using [taming-transformers](https://github.com/CompVis/taming-transformers).
We recommend using the latest version available.
### Conversion of VQGAN to JAX
Use [patil-suraj/vqgan-jax](https://github.com/patil-suraj/vqgan-jax).
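As a rough sketch of how the converted checkpoint can then be loaded (the `flax-community/vqgan_f16_16384` checkpoint name is an assumption; point it at your own conversion output):

```
from vqgan_jax.modeling_flax_vqgan import VQModel

# Load the JAX/Flax VQGAN converted from the taming-transformers checkpoint.
# The model name below is a placeholder for your converted checkpoint.
vqgan = VQModel.from_pretrained("flax-community/vqgan_f16_16384")
```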
### Training of Seq2Seq
Refer to the `dev/seq2seq` folder.
You can also adjust the [sweep configuration file](https://docs.wandb.ai/guides/sweeps) if you need to perform a hyperparameter search.
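For reference, a sweep can also be defined and launched programmatically; the metric and parameter names below are illustrative placeholders and should be replaced by the options actually exposed by the training script:

```
import wandb

# Illustrative sweep definition (random search over two placeholder hyperparameters).
sweep_config = {
    "method": "random",
    "metric": {"name": "eval/loss", "goal": "minimize"},
    "parameters": {
        "learning_rate": {"min": 1e-5, "max": 1e-3},
        "gradient_accumulation_steps": {"values": [1, 2, 4]},
    },
}

sweep_id = wandb.sweep(sweep_config, project="dalle-mini")
print("Created sweep:", sweep_id)
```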
### Inference
Refer to `dev/notebooks/demo`.
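At a high level, inference generates a sequence of image tokens with the seq2seq model and decodes them with the VQGAN. Below is a minimal outline only: the checkpoint names are placeholders, the demo notebooks use the project's own model class rather than the stock BART class, and `decode_code` is assumed from the vqgan-jax repository.

```
import jax
import numpy as np
from PIL import Image
from transformers import BartTokenizer, FlaxBartForConditionalGeneration
from vqgan_jax.modeling_flax_vqgan import VQModel

# Placeholder checkpoints: substitute the project's seq2seq model and converted VQGAN.
tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = FlaxBartForConditionalGeneration.from_pretrained("facebook/bart-large")
vqgan = VQModel.from_pretrained("flax-community/vqgan_f16_16384")

prompt = "logo of an armchair in the shape of an avocado"
inputs = tokenizer(prompt, return_tensors="jax")

# Sample a sequence of image token ids (256 tokens for a 16x16 latent grid),
# then drop the leading BOS token before decoding.
outputs = model.generate(
    **inputs, do_sample=True, max_length=257, prng_key=jax.random.PRNGKey(0)
)
image_tokens = outputs.sequences[:, 1:]

# Decode image tokens to pixels with the VQGAN decoder and save the result.
decoded = np.asarray(vqgan.decode_code(image_tokens)).clip(0.0, 1.0)
Image.fromarray((decoded[0] * 255).astype(np.uint8)).save("sample.png")
```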
## Authors
- [Boris Dayma](https://github.com/borisdayma)
- [Suraj Patil](https://github.com/patil-suraj)
- [Pedro Cuenca](https://github.com/pcuenca)
- [Khalid Saifullah](https://github.com/khalidsaifullaah)
- [Tanishq Abraham](https://github.com/tmabraham)
- [Phúc Lê Khắc](https://github.com/lkhphuc)
- [Luke Melas](https://github.com/lukemelas)
- [Ritobrata Ghosh](https://github.com/ghosh-r)
## Acknowledgements
- 🤗 Hugging Face for organizing [the FLAX/JAX community week](https://github.com/huggingface/transformers/tree/master/examples/research_projects/jax-projects)
- Google Cloud team for providing access to TPUs