---
base_model: laion/larger_clap_general
library_name: transformers.js
tags:
- zero-shot-audio-classification
---
https://huggingface.co/laion/larger_clap_general with ONNX weights to be compatible with Transformers.js.
## Usage (Transformers.js)
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using:
```bash
npm i @xenova/transformers
```
**Example:** Perform zero-shot audio classification with `Xenova/larger_clap_general`.
```js
import { pipeline } from '@xenova/transformers';
const classifier = await pipeline('zero-shot-audio-classification', 'Xenova/larger_clap_general');
const audio = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/piano.wav';
const candidate_labels = ['calm piano music', 'heavy metal music'];
const scores = await classifier(audio, candidate_labels);
// [
// { score: 0.9829504489898682, label: 'calm piano music' },
// { score: 0.017049523070454597, label: 'heavy metal music' }
// ]
```
**Example:** Compute text embeddings with `ClapTextModelWithProjection`.
```js
import { AutoTokenizer, ClapTextModelWithProjection } from '@xenova/transformers';
// Load tokenizer and text model
const tokenizer = await AutoTokenizer.from_pretrained('Xenova/larger_clap_general');
const text_model = await ClapTextModelWithProjection.from_pretrained('Xenova/larger_clap_general');
// Run tokenization
const texts = ['calm piano music', 'heavy metal music'];
const text_inputs = tokenizer(texts, { padding: true, truncation: true });
// Compute embeddings
const { text_embeds } = await text_model(text_inputs);
// Tensor {
// dims: [ 2, 512 ],
// type: 'float32',
// data: Float32Array(1024) [ ... ],
// size: 1024
// }
```
**Example:** Compute audio embeddings with `ClapAudioModelWithProjection`.
```js
import { AutoProcessor, ClapAudioModelWithProjection, read_audio } from '@xenova/transformers';
// Load processor and audio model
const processor = await AutoProcessor.from_pretrained('Xenova/larger_clap_general');
const audio_model = await ClapAudioModelWithProjection.from_pretrained('Xenova/larger_clap_general');
// Read audio and run processor
const audio = await read_audio('https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/piano.wav');
const audio_inputs = await processor(audio);
// Compute embeddings
const { audio_embeds } = await audio_model(audio_inputs);
// Tensor {
// dims: [ 1, 512 ],
// type: 'float32',
// data: Float32Array(512) [ ... ],
// size: 512
// }
```
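Since CLAP projects text and audio into a shared embedding space, the two embedding tensors above can be compared directly, e.g. with cosine similarity. A minimal sketch (the `cosineSimilarity` helper is not part of Transformers.js, and the short vectors below are toy stand-ins for the 512-dimensional rows of `text_embeds.data` and `audio_embeds.data`):

```javascript
// Cosine similarity between two embedding vectors of equal length,
// such as one row of text_embeds and one row of audio_embeds.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; ++i) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Toy 3-dimensional vectors in place of real CLAP embeddings:
console.log(cosineSimilarity([3, 4, 0], [3, 4, 0])); // 1 (same direction)
console.log(cosineSimilarity([1, 0, 0], [0, 1, 0])); // 0 (orthogonal)
```

The label with the highest audio–text similarity corresponds to the top prediction the zero-shot pipeline returns.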
---
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).