musicaudiopretrain committed
Commit 77cfc09
1 Parent(s): bda511c

Update README.md

Files changed (1): README.md (+8 -5)
README.md CHANGED

@@ -8,6 +8,7 @@ tags:
  # Introduction to our series work

  The development log of our Music Audio Pre-training (m-a-p) model family:
+ - 02/06/2023: [arXiv pre-print](https://arxiv.org/abs/2306.00107) and training [code](https://github.com/yizhilll/MERT) released.
  - 17/03/2023: we released two advanced music understanding models, [MERT-v1-95M](https://huggingface.co/m-a-p/MERT-v1-95M) and [MERT-v1-330M](https://huggingface.co/m-a-p/MERT-v1-330M), trained with a new paradigm and dataset. They outperform the previous models and generalize better to more tasks.
  - 14/03/2023: we retrained the MERT-v0 model with the open-source-only music dataset [MERT-v0-public](https://huggingface.co/m-a-p/MERT-v0-public).
  - 29/12/2022: we released [MERT-v0](https://huggingface.co/m-a-p/MERT-v0), a music understanding model trained with the **MLM** paradigm, which performs better on downstream tasks.

@@ -111,10 +112,12 @@ print(weighted_avg_hidden_states.shape) # [1024]
  # Citation

  ```shell
- @article{li2022large,
-   title={Large-Scale Pretrained Model for Self-Supervised Music Audio Representation Learning},
-   author={Li, Yizhi and Yuan, Ruibin and Zhang, Ge and Ma, Yinghao and Lin, Chenghua and Chen, Xingran and Ragni, Anton and Yin, Hanzhi and Hu, Zhijie and He, Haoyu and others},
-   year={2022}
+ @misc{li2023mert,
+   title={MERT: Acoustic Music Understanding Model with Large-Scale Self-supervised Training},
+   author={Yizhi Li and Ruibin Yuan and Ge Zhang and Yinghao Ma and Xingran Chen and Hanzhi Yin and Chenghua Lin and Anton Ragni and Emmanouil Benetos and Norbert Gyenge and Roger Dannenberg and Ruibo Liu and Wenhu Chen and Gus Xia and Yemin Shi and Wenhao Huang and Yike Guo and Jie Fu},
+   year={2023},
+   eprint={2306.00107},
+   archivePrefix={arXiv},
+   primaryClass={cs.SD}
  }
-
  ```
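For context, the second hunk header above references the README's feature-extraction example (`print(weighted_avg_hidden_states.shape) # [1024]`). Below is a minimal sketch of how layer-weighted MERT features of that shape are typically computed with the `transformers` library; it is not the README's exact example. The checkpoint name comes from the development log above, and the uniform layer weights are a hypothetical stand-in for the learned per-task aggregator a downstream probe would normally provide.

```python
import numpy as np
import torch
from transformers import AutoModel, Wav2Vec2FeatureExtractor

# Load a checkpoint named in the development log. trust_remote_code is needed
# because the MERT architecture is defined in the model repo, not in transformers.
model_id = "m-a-p/MERT-v1-330M"
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)
processor = Wav2Vec2FeatureExtractor.from_pretrained(model_id, trust_remote_code=True)

# Five seconds of silence as placeholder input; real use would load an audio
# file and resample it to processor.sampling_rate (24 kHz for MERT-v1).
audio = np.zeros(int(processor.sampling_rate * 5), dtype=np.float32)
inputs = processor(audio, sampling_rate=processor.sampling_rate, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# Stack all layer outputs and drop the batch dim: [num_layers, time, hidden_dim].
hidden_states = torch.stack(outputs.hidden_states).squeeze(1)
time_reduced = hidden_states.mean(dim=1)  # [num_layers, hidden_dim]

# Hypothetical uniform layer weights; downstream probes usually learn these.
weights = torch.full((time_reduced.shape[0],), 1.0 / time_reduced.shape[0])
weighted_avg_hidden_states = (weights.unsqueeze(1) * time_reduced).sum(dim=0)
print(weighted_avg_hidden_states.shape)  # torch.Size([1024]) for the 330M model
```

Different layers of the model capture different musical attributes, which is why the README aggregates across layers rather than taking only the last hidden state; see the model cards linked in the development log for the recommended usage.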