---
license: apache-2.0
language:
- en
pipeline_tag: image-feature-extraction
tags:
- image-to-image
---
## Open-MAGVIT2: Democratizing Autoregressive Visual Generation
Code: https://github.com/TencentARC/Open-MAGVIT2
Paper: https://huggingface.co/papers/2409.04410
## Introduction
To date, VQGAN, the original visual tokenizer, still plays an indispensable role in mainstream tasks, especially autoregressive visual generation. However, limited by its codebook size and low code utilization, the capability of AR generation built on VQGAN has been underestimated.
Therefore, [MAGVIT2](https://arxiv.org/abs/2310.05737) proposes a powerful tokenizer for visual generation, which introduces a novel Lookup-Free Quantization (LFQ) technique and extends the codebook size to $2^{18}$, exhibiting promising performance in both image and video generation tasks. It also plays an important role in the recent state-of-the-art AR video generation model [VideoPoet](https://arxiv.org/abs/2312.14125). However, this strong tokenizer has not been released publicly so far. ☹️
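The core idea behind LFQ can be sketched in a few lines: instead of a nearest-neighbor lookup into a learned codebook, each latent dimension is quantized independently to ±1, and the token index is read directly off the sign bits. The sketch below is illustrative only (pure Python, names of our own choosing); it is not the Open-MAGVIT2 implementation, which lives in the linked repository.

```python
# Minimal, hypothetical sketch of lookup-free quantization (LFQ).
# With an 18-dimensional latent, the implicit codebook has 2**18 entries,
# yet no codebook tensor is ever materialized or searched.

def lfq_quantize(z):
    """Quantize each latent dimension to +1/-1 and derive the token index.

    The index is the bit-packing of the sign pattern (LSB = dimension 0),
    which is what makes a 2**18 vocabulary practical.
    """
    code = [1.0 if x > 0 else -1.0 for x in z]               # per-dim sign
    index = sum(1 << i for i, x in enumerate(z) if x > 0)    # bit-pack signs
    return code, index

# Example with a 4-dim latent for readability (the paper uses 18 dims):
code, index = lfq_quantize([0.3, -1.2, 0.7, -0.1])
# signs -> [+1, -1, +1, -1]; bits 0b0101 -> index 5
```

Because the index is a deterministic function of the signs, every codebook entry is reachable by construction, sidestepping the codebook-collapse problem that caps effective vocabulary size in VQGAN-style tokenizers.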
In this codebase, we follow the key insights of the tokenizer design in MAGVIT-2 and re-implement it in PyTorch, achieving the closest results to the original so far. We hope our effort can foster innovation and creativity within the field of autoregressive visual generation. 😄
ImageNet 128 × 128:
- Model [ImageNet_128_Base.ckpt](https://huggingface.co/TencentARC/Open-MAGVIT2/blob/main/imagenet_128_B.ckpt)
ImageNet 256 × 256:
- Model [ImageNet_256_Base.ckpt](https://huggingface.co/TencentARC/Open-MAGVIT2/blob/main/imagenet_256_B.ckpt)
## Usage
Refer to the GitHub repository, which includes [scripts](https://github.com/TencentARC/Open-MAGVIT2/tree/main/scripts) for training, evaluation, and inference.