rufimelo committed
Commit: e535110
Parent: e825c76

Update README.md

Files changed (1): README.md (+18, -18)
README.md CHANGED
@@ -1,7 +1,7 @@
 ---
 language:
 - pt
-thumbnail: "Portugues SBERT for the Legal Domain"
+thumbnail: "Portuguese BERT for the Legal Domain"
 pipeline_tag: sentence-similarity
 tags:
 - sentence-transformers
@@ -20,7 +20,7 @@ widget:
 - "O juíz atirou uma pedra."
   example_title: "Example 1"
 model-index:
-- name: SBERTimbau
+- name: BERTimbau
   results:
   - task:
       name: STS
@@ -37,10 +37,10 @@ model-index:
       value: 0.8364
 ---
 
-# rufimelo/Legal-SBERTimbau-sts-large-ma
+# rufimelo/Legal-BERTimbau-sts-large-ma-v3
 
 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
-rufimelo/Legal-SBERTimbau-sts-large-ma-v3 is based on Legal-BERTimbau-large which derives from [BERTimbau](https://huggingface.co/neuralmind/bert-large-portuguese-cased) alrge.
+rufimelo/Legal-BERTimbau-sts-large-ma-v3 is based on Legal-BERTimbau-large which derives from [BERTimbau](https://huggingface.co/neuralmind/bert-large-portuguese-cased) large.
 It is adapted to the Portuguese legal domain and trained for STS on portuguese datasets.
 
 ## Usage (Sentence-Transformers)
@@ -57,7 +57,7 @@ Then you can use the model like this:
 from sentence_transformers import SentenceTransformer
 sentences = ["Isto é um exemplo", "Isto é um outro exemplo"]
 
-model = SentenceTransformer('rufimelo/Legal-SBERTimbau-sts-large-ma-v3')
+model = SentenceTransformer('rufimelo/Legal-BERTimbau-sts-large-ma-v3')
 embeddings = model.encode(sentences)
 print(embeddings)
 ```
@@ -83,8 +83,8 @@ def mean_pooling(model_output, attention_mask):
 sentences = ['This is an example sentence', 'Each sentence is converted']
 
 # Load model from HuggingFace Hub
-tokenizer = AutoTokenizer.from_pretrained('rufimelo/Legal-SBERTimbau-sts-large-ma-v3')
-model = AutoModel.from_pretrained('rufimelo/Legal-SBERTimbau-sts-large-ma-v3')
+tokenizer = AutoTokenizer.from_pretrained('rufimelo/Legal-BERTimbau-sts-large-ma-v3')
+model = AutoModel.from_pretrained('rufimelo/Legal-BERTimbau-sts-large-ma-v3')
 
 # Tokenize sentences
 encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
@@ -105,14 +105,14 @@ print(sentence_embeddings)
 
 | Model| Assin | Assin2|stsb_multi_mt pt|
 | ---------------------------------------- | ---------- | ---------- |---------- |
-| Legal-SBERTimbau-sts-base| 0.71457| 0.73545 | |
-| Legal-SBERTimbau-sts-base-ma| 0.74874 | 0.79532|0.82254 |
-| Legal-SBERTimbau-sts-base-ma-v2| 0.75481 | 0.80262|0.82178|
-| Legal-SBERTimbau-sts-large| 0.76629| 0.82357 | |
-| Legal-SBERTimbau-sts-large-v2| 0.76299 | 0.81121|0.81726 |
-| Legal-SBERTimbau-sts-large-ma| 0.76195| 0.81622 | 0.82608|
-| Legal-SBERTimbau-sts-large-ma-v2| 0.7836| 0.8462| 0.8261|
-| Legal-SBERTimbau-sts-large-ma-v3| 0.7749| 0.8470| 0.8364|
+| Legal-BERTimbau-sts-base| 0.71457| 0.73545 | |
+| Legal-BERTimbau-sts-base-ma| 0.74874 | 0.79532|0.82254 |
+| Legal-BERTimbau-sts-base-ma-v2| 0.75481 | 0.80262|0.82178|
+| Legal-BERTimbau-sts-large| 0.76629| 0.82357 | |
+| Legal-BERTimbau-sts-large-v2| 0.76299 | 0.81121|0.81726 |
+| Legal-BERTimbau-sts-large-ma| 0.76195| 0.81622 | 0.82608|
+| Legal-BERTimbau-sts-large-ma-v2| 0.7836| 0.8462| 0.8261|
+| Legal-BERTimbau-sts-large-ma-v3| 0.7749| 0.8470| 0.8364|
 | ---------------------------------------- | ---------- |---------- |---------- |
 | BERTimbau base Fine-tuned for STS|0.78455 | 0.80626|0.82841|
 | BERTimbau large Fine-tuned for STS|0.78193 | 0.81758|0.83784|
@@ -121,7 +121,7 @@ print(sentence_embeddings)
 | paraphrase-multilingual-mpnet-base-v2 Fine-tuned with assin(s)| 0.77641|0.79831 |0.84575 |
 ## Training
 
-rufimelo/Legal-SBERTimbau-sts-large-ma-v3 is based on Legal-BERTimbau-large which derives from [BERTimbau](https://huggingface.co/neuralmind/bert-large-portuguese-cased) large.
+rufimelo/Legal-BERTimbau-sts-large-ma-v3 is based on Legal-BERTimbau-large which derives from [BERTimbau](https://huggingface.co/neuralmind/bert-large-portuguese-cased) large.
 
 Firstly, due to the lack of portuguese datasets, it was trained using multilingual knowledge distillation. For the Multilingual Knowledge Distillation process, the teacher model was 'sentence-transformers/stsb-roberta-large', the supposed supported language as English and the language to learn was portuguese.
 
@@ -131,8 +131,8 @@ It was trained for Semantic Textual Similarity, being submitted to a fine tuning
 ## Full Model Architecture
 ```
 SentenceTransformer(
-  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
-  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
+  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
+  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
 )
 ```
 
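The Training section in the diff above describes two stages: multilingual knowledge distillation from 'sentence-transformers/stsb-roberta-large' (English teacher, Portuguese student), followed by fine-tuning for Semantic Textual Similarity. The sketches below show how each stage is typically done with the sentence-transformers library; they are illustrations under stated assumptions, not the author's exact training scripts. In the first sketch, the parallel-corpus file name, the `rufimelo/Legal-BERTimbau-large` repository id, and all hyperparameters are assumptions.

```python
# Stage 1 (sketch): multilingual knowledge distillation, teacher -> student.
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, models, losses
from sentence_transformers.datasets import ParallelSentencesDataset

# Teacher: the English STS model named in the card.
teacher_model = SentenceTransformer('sentence-transformers/stsb-roberta-large')

# Student: Legal-BERTimbau-large with mean pooling (repository id assumed; the
# card only names "Legal-BERTimbau-large"). This matches the architecture
# printout: max_seq_length 512, 1024-dimensional mean pooling.
word_embedding_model = models.Transformer('rufimelo/Legal-BERTimbau-large', max_seq_length=512)
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension(),
                               pooling_mode_mean_tokens=True)
student_model = SentenceTransformer(modules=[word_embedding_model, pooling_model])

# Hypothetical corpus: one "english_sentence<TAB>portuguese_sentence" pair per line.
train_data = ParallelSentencesDataset(student_model=student_model, teacher_model=teacher_model)
train_data.load_data('parallel-sentences-en-pt.tsv')
train_dataloader = DataLoader(train_data, shuffle=True, batch_size=32)

# The student is trained to reproduce the teacher's embeddings (MSE in vector space).
train_loss = losses.MSELoss(model=student_model)
student_model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=100)
```

For the second stage, the evaluation table suggests the assin, assin2 and stsb_multi_mt pt datasets were used; the pairs below are invented, and gold scores are assumed normalized to [0, 1].

```python
# Stage 2 (sketch): STS fine-tuning by regressing cosine similarity onto gold scores.
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Starting checkpoint shown for illustration; in the original pipeline this
# would be the distilled student from stage 1.
model = SentenceTransformer('rufimelo/Legal-BERTimbau-sts-large-ma-v3')

train_examples = [
    InputExample(texts=['O juiz leu o processo.', 'O processo foi lido pelo juiz.'], label=0.9),
    InputExample(texts=['O juiz leu o processo.', 'Hoje choveu em Lisboa.'], label=0.1),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)

# Cosine similarity between the two sentence embeddings is pushed toward the label.
train_loss = losses.CosineSimilarityLoss(model=model)
model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=100)
```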