Carlos committed on
Commit 350d6b0
1 Parent(s): 20dc3b8

update model card README.md

Files changed (1)
  1. README.md +2 -15
README.md CHANGED
@@ -9,15 +9,14 @@ metrics:
 - f1
 - accuracy
 model-index:
-- name: DTNLS/test-NERv3
+- name: test-NERv3
   results: []
-library_name: peft
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# DTNLS/test-NERv3
+# test-NERv3
 
 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
 It achieves the following results on the evaluation set:
@@ -41,17 +40,6 @@ More information needed
 
 ## Training procedure
 
-
-The following `bitsandbytes` quantization config was used during training:
-- load_in_8bit: False
-- load_in_4bit: True
-- llm_int8_threshold: 6.0
-- llm_int8_skip_modules: None
-- llm_int8_enable_fp32_cpu_offload: False
-- llm_int8_has_fp16_weight: False
-- bnb_4bit_quant_type: nf4
-- bnb_4bit_use_double_quant: True
-- bnb_4bit_compute_dtype: float16
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
@@ -73,7 +61,6 @@ The following hyperparameters were used during training:
 
 ### Framework versions
 
-- PEFT 0.4.0
 - Transformers 4.32.0.dev0
 - Pytorch 2.0.0
 - Datasets 2.12.0
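Note: the second hunk deletes the `bitsandbytes` quantization settings that the card previously listed. For reference, a minimal sketch of how those same values could be expressed as a `transformers.BitsAndBytesConfig` (assuming `transformers` and `bitsandbytes` are installed; this is an illustration of the removed settings, not the repository's actual training code):

```python
from transformers import BitsAndBytesConfig
import torch

# Sketch only: mirrors the quantization block removed from the card
# (4-bit NF4 quantization with double quantization and float16 compute).
bnb_config = BitsAndBytesConfig(
    load_in_8bit=False,
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)
```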
 
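With `library_name: peft` and the PEFT framework entry removed, the card appears to describe a plain Transformers token-classification checkpoint. A minimal usage sketch under that assumption; the repo id below is inferred from the old card name and may not match the published path:

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

# Assumption: hypothetical repo id taken from the old card heading.
repo_id = "DTNLS/test-NERv3"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForTokenClassification.from_pretrained(repo_id)

# Group sub-token predictions into whole entities for readability.
ner = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```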