📢 [[Project Page](https://ali-vilab.github.io/In-Context-LoRA-Page/)] [[Github Repo](https://github.com/ali-vilab/In-Context-LoRA)] [[Paper](https://arxiv.org/abs/2410.23775)]
# Model Summary
In-Context LoRA fine-tunes text-to-image models (*e.g.*, [FLUX](https://huggingface.co/black-forest-labs/FLUX.1-dev)) to generate image sets with customizable intrinsic relationships, optionally conditioned on another image set using SDEdit. It can be adapted to a wide range of tasks.

This model hub includes In-Context LoRA models across 10 tasks. [MODEL ZOO](#model-zoo) details these models and their recommended settings. For more details on how these models are trained, please refer to our [paper](https://arxiv.org/abs/2410.23775).
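As a quick orientation, below is a minimal sketch of loading one of these LoRAs on top of FLUX with 🤗 diffusers. It assumes the standard `FluxPipeline` + `load_lora_weights` flow; the `weight_name` and the panel-style prompt are illustrative placeholders, so substitute the task file and recommended settings listed in the [MODEL ZOO](#model-zoo).

```python
# Minimal usage sketch. Assumptions: the weight_name and prompt below are
# placeholders; consult the MODEL ZOO for each task's actual file and settings.
import torch
from diffusers import FluxPipeline

# Load the base text-to-image model the LoRA was trained on.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Attach a task-specific In-Context LoRA from this hub.
pipe.load_lora_weights(
    "ali-vilab/In-Context-LoRA",
    weight_name="<task>.safetensors",  # placeholder: pick a file from the MODEL ZOO
)

# In-Context LoRA generates a set of related images as panels of one wide
# image, so a single prompt describes every panel at once.
prompt = (
    "This two-panel image shows a consistent visual identity; "
    "[LEFT] the logo on a storefront sign; [RIGHT] the same logo on packaging."
)
image = pipe(prompt, height=1024, width=2048, num_inference_steps=28).images[0]
image.save("in_context_lora_set.png")
```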