Xenova (HF staff) committed on
Commit 1aba57f
1 Parent(s): df1749e

Add transformers.js tag and example code

Files changed (1)
  1. README.md +18 -0
README.md CHANGED
@@ -5,6 +5,8 @@ tags:
  - feature-extraction
  - sentence-similarity
  - mteb
+ - transformers
+ - transformers.js
  model-index:
  - name: epoch_0_model
    results:
@@ -2708,6 +2710,22 @@ The model natively supports scaling of the sequence length past 2048 tokens. To
  + model = AutoModel.from_pretrained('nomic-ai/nomic-embed-text-v1', trust_remote_code=True, rotary_scaling_factor=2)
  ```

+ ### Transformers.js
+
+ ```js
+ import { pipeline } from '@xenova/transformers';
+
+ // Create a feature extraction pipeline
+ const extractor = await pipeline('feature-extraction', 'nomic-ai/nomic-embed-text-v1', {
+     quantized: false, // Comment out this line to use the quantized version
+ });
+
+ // Compute sentence embeddings
+ const texts = ['What is TSNE?', 'Who is Laurens van der Maaten?'];
+ const embeddings = await extractor(texts, { pooling: 'mean', normalize: true });
+ console.log(embeddings);
+ ```
+
  # Join the Nomic Community

  - Nomic: [https://nomic.ai](https://nomic.ai)
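Because the added example requests `pooling: 'mean'` with `normalize: true`, the returned embeddings are unit vectors, so sentence similarity (one of the model's tags) reduces to a dot product. A minimal follow-up sketch, assuming the extractor's output exposes a `tolist()` method as in other transformers.js feature-extraction examples:

```js
import { pipeline } from '@xenova/transformers';

// Same pipeline as in the diff above (quantized weights by default)
const extractor = await pipeline('feature-extraction', 'nomic-ai/nomic-embed-text-v1');

// Embed two sentences with mean pooling and L2 normalization
const texts = ['What is TSNE?', 'Who is Laurens van der Maaten?'];
const embeddings = await extractor(texts, { pooling: 'mean', normalize: true });

// With normalized vectors, cosine similarity is just the dot product
const [a, b] = embeddings.tolist(); // assumed Tensor -> nested array conversion
const similarity = a.reduce((sum, v, i) => sum + v * b[i], 0);
console.log(similarity);
```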