Commit f67df8e (1 parent: 75943e9) by Simon-Kotchou

Update README.md

Files changed (1):
  1. README.md +18 -4
README.md CHANGED
@@ -3,8 +3,22 @@ datasets:
 - agkphysics/AudioSet
 - openslr/librispeech_asr
 pipeline_tag: audio-classification
-metrics:
-- name: AudioSet-20K
-  type: mAP
-  value: 31.0
+license: bsd-3-clause
+tags:
+- audio-classification
+---
+
+# Self Supervised Audio Spectrogram Transformer (pretrained on AudioSet/Librispeech)
+
+Self Supervised Audio Spectrogram Transformer (SSAST) model with an uninitialized classifier head. It was introduced in the paper [SSAST: Self-Supervised Audio Spectrogram Transformer](https://arxiv.org/pdf/2110.09784) by Gong et al. and first released in [this repository](https://github.com/YuanGongND/ssast).
+
+Disclaimer: The team releasing the Audio Spectrogram Transformer did not write a model card for this model.
+
+## Model description
+
+The Audio Spectrogram Transformer is equivalent to [ViT](https://huggingface.co/docs/transformers/model_doc/vit), but applied to audio. Audio is first turned into an image (a spectrogram), after which a Vision Transformer is applied. The model achieves state-of-the-art results on several audio classification benchmarks.
+
+## Usage
+
+The model is pretrained on a large amount of audio. Please fine-tune the classifier head before use, as it is uninitialized.
 ---
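
Below is a minimal sketch of the fine-tuning step the Usage section calls for, assuming the checkpoint is compatible with the `ASTFeatureExtractor` and `ASTForAudioClassification` classes from Hugging Face `transformers` (an assumption; the original SSAST repository ships its own training code). The checkpoint id, label count, and waveform are placeholders, not values from the model card. The feature extractor performs the audio-to-spectrogram step described under "Model description".

```python
# Hypothetical fine-tuning sketch; the checkpoint id, num_labels, and data are
# placeholders, and compatibility with transformers' AST classes is assumed.
import torch
from transformers import ASTFeatureExtractor, ASTForAudioClassification

checkpoint = "<this-model-id>"  # placeholder: replace with the actual repo id
feature_extractor = ASTFeatureExtractor.from_pretrained(checkpoint)

# num_labels attaches a freshly initialized classifier head, which this
# model needs since its head ships uninitialized.
model = ASTForAudioClassification.from_pretrained(checkpoint, num_labels=10)

# Dummy 1-second, 16 kHz waveform standing in for real labeled audio; the
# feature extractor turns it into the spectrogram "image" the ViT consumes.
waveform = torch.randn(16000).numpy()
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")

# One illustrative gradient step; a real run loops over a labeled dataset.
labels = torch.tensor([3])
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
loss = model(**inputs, labels=labels).loss
loss.backward()
optimizer.step()
```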