Update README.md
datasets:
- apcl/jm52m
---

# Jam_sojm

Jam_sojm is a GPT2-like model for research on fine-grained analysis of Java source code, at the level of methods, statements, and variables. It is intended as a foundation for downstream tasks such as code completion, comment generation, and automated bug repair.
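
As a quick orientation, the sketch below shows one way to run the model for Java code completion. It assumes the checkpoint is published in a transformers-compatible GPT-2 format and that the Hub repo id is `apcl/jam_sojm`; both are assumptions, so adjust them to the actual release.

```python
# Minimal inference sketch. Assumptions (not confirmed by this card):
# the checkpoint loads through the standard transformers GPT-2 classes,
# and the Hub repo id is apcl/jam_sojm.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "apcl/jam_sojm"  # assumed repo id; adjust to the actual one
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Prompt with the start of a Java method and let the model complete it.
prompt = "public static int max(int a, int b) {"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=48,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 tokenizers have no pad token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```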

---

## Jam_sojm Training Details

- We trained the jam_sojm model using the training procedures from Daniel Grittner's [NanoGPT-LoRA](https://github.com/danielgrittner/nanoGPT-LoRA).
- The datasets used to train our model are our own [so13m](https://huggingface.co/datasets/apcl/so13m) and [jm52m](https://huggingface.co/datasets/apcl/jm52m) datasets; a loading sketch follows this list.
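
The dataset cards linked above describe the exact schemas. As a hedged sketch, both corpora can be pulled from the Hub with the `datasets` library; the `train` split name is an assumption and may differ from the published configurations.

```python
# Sketch: stream the two training corpora from the Hugging Face Hub.
# The dataset ids come from the links above; the "train" split name is
# an assumption; check each dataset card for the real configuration.
from datasets import load_dataset

so13m = load_dataset("apcl/so13m", split="train", streaming=True)
jm52m = load_dataset("apcl/jm52m", split="train", streaming=True)

# Peek at one record from each corpus to learn the actual field names.
print(next(iter(so13m)))
print(next(iter(jm52m)))
```

Streaming avoids downloading the full corpora up front, which matters for collections of this size.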