Titovs jdev8 committed on
Commit 8edc10e
1 Parent(s): 2ba7b66

Update README.md (#6)


- Update README.md (c5bf7d614faffd8447db66b7cdbdb315bb4e2cca)


Co-authored-by: Anton Shapkin <[email protected]>

Files changed (1)
  1. README.md +23 -4
README.md CHANGED

@@ -1,5 +1,19 @@
 ---
 license: apache-2.0
+datasets:
+- JetBrains/KExercises
+results:
+- task:
+    type: text-generation
+  dataset:
+    name: MultiPL-HumanEval (Kotlin)
+    type: openai_humaneval
+  metrics:
+  - name: pass@1
+    type: pass@1
+    value: 42.24
+tags:
+- code
 ---
 
 # Kexer models
@@ -37,6 +51,11 @@ generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
 print(generated_text)
 ```
 
+As with the base model, we can use FIM. To do this, the following format must be used:
+```
+'<PRE> ' + prefix + ' <SUF> ' + suffix + ' <MID>'
+```
+
 # Training setup
 
 The model was trained on one A100 GPU with following hyperparameters:
@@ -62,10 +81,10 @@ To evaluate we used Kotlin Humaneval (more infromation here)
 
 Fine-tuned model:
 
-| **Model name** | **Kotlin HumanEval Pass Rate** | **Kotlin Completion** |
-|:---------------------------:|:----------------------------------------:|:----------------------------------------:|
-| `base model` | 26.89 | 0.388 |
-| `fine-tuned model` | 42.24 | 0.344 |
+| **Model name** | **Kotlin HumanEval Pass Rate** |
+|:---------------------------:|:----------------------------------------:|
+| `base model` | 26.89 |
+| `fine-tuned model` | 42.24 |
 
 # Ethical Considerations and Limitations
 
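The FIM format added in this commit can be exercised with the regular `transformers` generation API. The sketch below is a minimal illustration, not part of the commit: the checkpoint id, the example Kotlin snippet, and the generation settings are assumptions.

```python
# Minimal sketch: fill-in-the-middle with the prompt format from the diff above.
# The checkpoint id, the example Kotlin snippet, and the generation settings are
# assumptions for illustration; substitute the actual Kexer checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "JetBrains/CodeLlama-7B-Kexer"  # assumed model id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16).to("cuda")

# Code surrounding the hole we want the model to fill.
prefix = "fun add(a: Int, b: Int): Int {\n    return "
suffix = "\n}"

# Exactly the format from the README: '<PRE> ' + prefix + ' <SUF> ' + suffix + ' <MID>'
prompt = "<PRE> " + prefix + " <SUF> " + suffix + " <MID>"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)

# Keep only the newly generated middle part and stitch it back into the code.
middle = tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(prefix + middle + suffix)
```

Here the decoded output is sliced so that only the infilled middle is printed and spliced between prefix and suffix; the README's own example decodes the full sequence with `tokenizer.decode(output[0], skip_special_tokens=True)` instead.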