
SEED-Story


TL;DR: We introduce SEED-Story, an MLLM capable of generating multimodal long stories consisting of rich and coherent narrative text along with images that are consistent in characters and style. We also release the StoryStream dataset used to build this model.

Model Weights

We release the pretrained Tokenizer, the pretrained De-Tokenizer, the pretrained foundation model SEED-X-pretrained, the StoryStream instruction-tuned MLLM SEED-Story-George, and the StoryStream-tuned De-Tokenizer Detokenizer-George.

Please download the checkpoints and save them under the folder ./pretrained.

You also need to download stable-diffusion-xl-base-1.0 and Qwen-VL-Chat, and save them under the folder ./pretrained. Then use the following script to extract the weights of the visual encoder from Qwen-VL-Chat.

python3 src/tools/reload_qwen_vit.py
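To fetch the checkpoints above programmatically, the `huggingface_hub` library's `snapshot_download` can place each repository under ./pretrained. The sketch below is a minimal example, not part of the official repo; the list of repo ids (aside from stable-diffusion-xl-base-1.0 and Qwen-VL-Chat) is an assumption you should adjust to the checkpoints you actually need.

```python
from pathlib import Path

# Repo ids to fetch; the SDXL and Qwen ids are the ones named in this card,
# the rest should be replaced with the checkpoint repos you need (assumption).
CHECKPOINTS = [
    "stabilityai/stable-diffusion-xl-base-1.0",
    "Qwen/Qwen-VL-Chat",
]

def local_dir(repo_id: str, root: str = "./pretrained") -> Path:
    """Map a Hub repo id to its expected folder under ./pretrained."""
    return Path(root) / repo_id.split("/")[-1]

def download_all(root: str = "./pretrained") -> None:
    # snapshot_download fetches a full snapshot of a Hub repository
    # into the given local directory.
    from huggingface_hub import snapshot_download
    for repo in CHECKPOINTS:
        snapshot_download(repo_id=repo, local_dir=str(local_dir(repo, root)))
```

After downloading, the folder layout matches what the extraction script expects, e.g. ./pretrained/Qwen-VL-Chat.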

Citation

If you find the work helpful, please consider citing:

@article{yang2024seedstory,
      title={SEED-Story: Multimodal Long Story Generation with Large Language Model}, 
      author={Shuai Yang and Yuying Ge and Yang Li and Yukang Chen and Yixiao Ge and Ying Shan and Yingcong Chen},
      year={2024},
      journal={arXiv preprint arXiv:2407.08683},
      url={https://arxiv.org/abs/2407.08683}, 
}

License

SEED-Story is licensed under the Apache License Version 2.0 except for the third-party components listed in License.
