firqaaa committed
Commit f6dd1bf
1 Parent(s): 4a3dd96

Upload README.md

Files changed (1):
  1. README.md (+9 −16)
README.md CHANGED
@@ -8,7 +8,7 @@ tags:
 
 ---
 
-# Indo-Sentence-BERT
+# {MODEL_NAME}
 
 This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
 
@@ -26,12 +26,9 @@ Then you can use the model like this:
 
 ```python
 from sentence_transformers import SentenceTransformer
-sentences = ["Ibukota Perancis adalah Paris",
-             "Menara Eiffel terletak di Paris, Perancis",
-             "Pizza adalah makanan khas Italia",
-             "Saya kuliah di Carnegie Mellon University"]
+sentences = ["This is an example sentence", "Each sentence is converted"]
 
-model = SentenceTransformer('firqaaa/indo-sbert-finetuned-anli-id')
+model = SentenceTransformer('{MODEL_NAME}')
 embeddings = model.encode(sentences)
 print(embeddings)
 ```
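Once `embeddings` is computed, pairs can be scored directly; a minimal follow-up sketch using `sentence_transformers.util.cos_sim` (not part of this commit; the repo id and the Indonesian sentences are taken from the removed lines above):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('firqaaa/indo-sbert-finetuned-anli-id')
sentences = ["Ibukota Perancis adalah Paris",
             "Menara Eiffel terletak di Paris, Perancis"]

# Tensor output lets us score pairs without round-tripping through numpy.
embeddings = model.encode(sentences, convert_to_tensor=True)

# Cosine-similarity matrix; entry [0][1] scores the first pair.
scores = util.cos_sim(embeddings, embeddings)
print(scores[0][1])
```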
@@ -54,15 +51,11 @@ def mean_pooling(model_output, attention_mask):
 
 
 # Sentences we want sentence embeddings for
-sentences = ["Ibukota Perancis adalah Paris",
-             "Menara Eiffel terletak di Paris, Perancis",
-             "Pizza adalah makanan khas Italia",
-             "Saya kuliah di Carnegie Mellon University"]
-
+sentences = ['This is an example sentence', 'Each sentence is converted']
 
 # Load model from HuggingFace Hub
-tokenizer = AutoTokenizer.from_pretrained('firqaaa/indo-sbert-finetuned-anli-id')
-model = AutoModel.from_pretrained('firqaaa/indo-sbert-finetuned-anli-id')
+tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
+model = AutoModel.from_pretrained('{MODEL_NAME}')
 
 # Tokenize sentences
 encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
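The hunk ends at tokenization, but the surrounding README follows the stock sentence-transformers template named in the hunk header (`def mean_pooling(...)`). For context, the continuation the diff does not show typically reads:

```python
import torch

# Run the transformer, then mean-pool the token embeddings,
# weighting by the attention mask so padding tokens are ignored.
with torch.no_grad():
    model_output = model(**encoded_input)

sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```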
@@ -92,7 +85,7 @@ The model was trained with the parameters:
 
 **DataLoader**:
 
-`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 19644 with parameters:
+`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 19861 with parameters:
 ```
 {'batch_size': 16}
 ```
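`NoDuplicatesDataLoader` batches `InputExample`s so that no batch contains duplicate texts, which is what in-batch-negative losses need; a length of 19861 means 19861 batches per epoch at `batch_size=16`. A minimal construction sketch (the training data is not part of this commit, so `train_examples` below is hypothetical):

```python
from sentence_transformers import InputExample
from sentence_transformers.datasets import NoDuplicatesDataLoader

# Hypothetical pairs; the real training set produces 19861 batches of 16.
train_examples = [
    InputExample(texts=["Ibukota Perancis adalah Paris",
                        "Menara Eiffel terletak di Paris, Perancis"]),
    InputExample(texts=["Pizza adalah makanan khas Italia",
                        "Pizza berasal dari Italia"]),
]

train_dataloader = NoDuplicatesDataLoader(train_examples, batch_size=16)
```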
@@ -107,7 +100,7 @@ The model was trained with the parameters:
 Parameters of the fit()-Method:
 ```
 {
-    "epochs": 3,
+    "epochs": 5,
     "evaluation_steps": 0,
     "evaluator": "NoneType",
     "max_grad_norm": 1,
@@ -117,7 +110,7 @@ Parameters of the fit()-Method:
     },
     "scheduler": "WarmupLinear",
     "steps_per_epoch": null,
-    "warmup_steps": 5893,
+    "warmup_steps": 9930,
     "weight_decay": 0.01
 }
 ```
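Pieced together, the new parameters correspond to a `fit()` call along these lines; the loss is an assumption (`NoDuplicatesDataLoader` is the usual companion of `MultipleNegativesRankingLoss`), and unshown settings such as the learning rate are left at their defaults:

```python
from sentence_transformers import losses

# Assumed loss; the diff does not name it.
train_loss = losses.MultipleNegativesRankingLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=5,               # was 3
    warmup_steps=9930,      # was 5893; both are ~10% of epochs x batches per epoch
    scheduler='WarmupLinear',
    weight_decay=0.01,
    max_grad_norm=1,
    evaluation_steps=0,     # no evaluator configured
)
```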