ml-4m
roman-bachmann committed on
Commit ba4e7d4
1 Parent(s): 55dc68a

Update README.md

Files changed (1)
  1. README.md +19 -5
README.md CHANGED
@@ -7,14 +7,21 @@ library_name: ml-4m
 
 # 4M: Massively Multimodal Masked Modeling
 
-*David Mizrahi\*, Roman Bachmann\*, Oğuzhan Fatih Kar, Teresa Yeo, Mingfei Gao, Afshin Dehghan, Amir Zamir*
 
-Official implementation and pre-trained models for "4M: Massively Multimodal Masked Modeling" (NeurIPS 2023).
 
-[`Website`](https://4m.epfl.ch) | [`Paper`](https://arxiv.org/abs/2312.06647) | [`GitHub`](https://github.com/apple/ml-4m)
 
 4M is a framework for training "any-to-any" foundation models, using tokenization and masking to scale to many diverse modalities.
-Models trained using 4M can perform a wide range of vision tasks, transfer well to unseen tasks and modalities, and are flexible and steerable multimodal generative models.
 
 
 ## Installation
@@ -35,12 +42,19 @@ Please see https://github.com/apple/ml-4m/blob/main/README_GENERATION.md for mor
 
 If you find this repository helpful, please consider citing our work:
 ```
-@inproceedings{mizrahi20234m,
 title={{4M}: Massively Multimodal Masked Modeling},
 author={David Mizrahi and Roman Bachmann and O{\u{g}}uzhan Fatih Kar and Teresa Yeo and Mingfei Gao and Afshin Dehghan and Amir Zamir},
 booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
 year={2023},
 }
 ```
 
 ## License
 
 
 # 4M: Massively Multimodal Masked Modeling
 
+*A framework for training any-to-any multimodal foundation models. <br>Scalable. Open-sourced. Across tens of modalities and tasks.*
 
+[`Website`](https://4m.epfl.ch) | [`GitHub`](https://github.com/apple/ml-4m) | [`BibTeX`](#citation)
 
+Official implementation and pre-trained models for:
+
+[**4M: Massively Multimodal Masked Modeling**](https://arxiv.org/abs/2312.06647), NeurIPS 2023 (Spotlight) <br>
+*[David Mizrahi](https://dmizrahi.com/)\*, [Roman Bachmann](https://roman-bachmann.github.io/)\*, [Oğuzhan Fatih Kar](https://ofkar.github.io/), [Teresa Yeo](https://aserety.github.io/), [Mingfei Gao](https://fly6464.github.io/), [Afshin Dehghan](https://www.afshindehghan.com/), [Amir Zamir](https://vilab.epfl.ch/zamir/)*
+
+[**4M-21: An Any-to-Any Vision Model for Tens of Tasks and Modalities**](https://arxiv.org/abs/2406.09406), arXiv 2024 <br>
+*[Roman Bachmann](https://roman-bachmann.github.io/)\*, [Oğuzhan Fatih Kar](https://ofkar.github.io/)\*, [David Mizrahi](https://dmizrahi.com/)\*, [Ali Garjani](https://garjania.github.io/), [Mingfei Gao](https://fly6464.github.io/), [David Griffiths](https://www.dgriffiths.uk/), [Jiaming Hu](https://scholar.google.com/citations?user=vm3imKsAAAAJ&hl=en), [Afshin Dehghan](https://www.afshindehghan.com/), [Amir Zamir](https://vilab.epfl.ch/zamir/)*
 
 4M is a framework for training "any-to-any" foundation models, using tokenization and masking to scale to many diverse modalities.
+Models trained using 4M can perform a wide range of vision tasks, transfer well to unseen tasks and modalities, and are flexible and steerable multimodal generative models.
+We are releasing code and models for "4M: Massively Multimodal Masked Modeling" (here denoted 4M-7), as well as "4M-21: An Any-to-Any Vision Model for Tens of Tasks and Modalities" (here denoted 4M-21).
 
 
 ## Installation
 
 
 If you find this repository helpful, please consider citing our work:
 ```
+@inproceedings{4m,
 title={{4M}: Massively Multimodal Masked Modeling},
 author={David Mizrahi and Roman Bachmann and O{\u{g}}uzhan Fatih Kar and Teresa Yeo and Mingfei Gao and Afshin Dehghan and Amir Zamir},
 booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
 year={2023},
 }
+
+@article{4m21,
+title={{4M-21}: An Any-to-Any Vision Model for Tens of Tasks and Modalities},
+author={Roman Bachmann and O{\u{g}}uzhan Fatih Kar and David Mizrahi and Ali Garjani and Mingfei Gao and David Griffiths and Jiaming Hu and Afshin Dehghan and Amir Zamir},
+journal={arXiv 2024},
+year={2024},
+}
 ```
 
 ## License
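
The updated README summarizes 4M's core recipe: tokenize every modality into discrete tokens, then train on randomly masked subsets so the model learns to predict the hidden tokens from the visible ones. As a rough, hypothetical illustration of that input/target masking step (plain Python with toy token ids — this is not the ml-4m API, and all names here are made up for the sketch):

```python
import random

def mask_tokens(tokens, keep_ratio=0.25, seed=0):
    """Randomly partition a token sequence into visible inputs and masked
    targets, mimicking the per-modality input/target masking idea in 4M.
    Returns two lists of (position, token) pairs."""
    rng = random.Random(seed)
    n_keep = max(1, int(len(tokens) * keep_ratio))
    keep_idx = set(rng.sample(range(len(tokens)), n_keep))
    visible = [(i, t) for i, t in enumerate(tokens) if i in keep_idx]
    targets = [(i, t) for i, t in enumerate(tokens) if i not in keep_idx]
    return visible, targets

# Toy "tokenized modalities": each modality is a short sequence of
# discrete token ids (hypothetical values, just for illustration).
modalities = {
    "rgb":     [3, 14, 15, 92, 65, 35],
    "depth":   [8, 97, 93, 23],
    "caption": [84, 62, 64, 33, 83],
}

# Build one training example: a random subset of each modality's tokens is
# kept visible; a model would be trained to predict the masked remainder.
for name, toks in modalities.items():
    visible, targets = mask_tokens(toks, keep_ratio=0.5, seed=42)
    print(name, "visible:", visible, "targets:", targets)
```

Because both the inputs and the prediction targets are ordinary token sequences, the same masked-prediction objective applies uniformly to every modality, which is what lets the framework scale to many of them.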