
aakashba committed
Commit 6b89e46
1 Parent(s): 113af39

Update README.md

Files changed (1)
  1. README.md +11 -1
README.md CHANGED
@@ -1,3 +1,13 @@
  # Jam-sojm
+ ---
+ license: bigscience-openrail-m
+ datasets:
+ - apcl/so13m
+ ---
  
- Jam-sojm is a GPT2-like model for research in fine-grained Java analysis. It is intended for analysis of Java source code at the level of methods, statements, and variables, as a foundation for downstream tasks like code completion, comment generation, and automated bug repair. The Jam-sojm model is trained on so13m and then jm52m for one epoch each, with the learning rate and decay reset in between.
+ Jam-sojm is a GPT2-like model for research in fine-grained Java analysis. It is intended for analysis of Java source code at the level of methods, statements, and variables, as a foundation for downstream tasks like code completion, comment generation, and automated bug repair.
+
+
+ ## Datasets: [jm52m dataset](https://huggingface.co/datasets/apcl/jm52m) and [so13m dataset](https://huggingface.co/datasets/apcl/so13m)
+ ## Epochs: Two (one with each dataset, with the learning rate and decay reset in between)
+ ## Iterations: ~600,000
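
Below are two illustrative sketches of what the updated README describes. First, a minimal sketch of loading the model for inference with the `transformers` library; the repo id `apcl/jam-sojm` is an assumption inferred from the org name and the README title, not something this commit states.

```python
# Minimal inference sketch. The repo id below is an assumption
# (org + README title); the commit does not spell out the full path.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "apcl/jam-sojm"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# The model is GPT2-like, so ordinary causal generation applies,
# e.g. completing a Java method body from its signature.
prompt = "public int max(int a, int b) {"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Second, a rough, self-contained sketch of the two-phase schedule the README describes: one epoch per dataset, with the learning rate and its decay schedule re-initialized between phases. The tiny model, synthetic batches, and hyperparameters are placeholders; the actual training code is not part of this commit.

```python
import torch
from torch import nn

model = nn.Linear(16, 16)  # stand-in for the GPT2-like network

def make_loader(n_batches):
    # Stand-in for a tokenized so13m or jm52m dataloader.
    return [(torch.randn(8, 16), torch.randn(8, 16)) for _ in range(n_batches)]

def fresh_optimizer(steps):
    # Re-creating the optimizer and scheduler is what "resetting the
    # learning rate and decay" amounts to in this sketch.
    opt = torch.optim.AdamW(model.parameters(), lr=3e-4, weight_decay=0.1)
    sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=steps)
    return opt, sched

def run_epoch(loader, opt, sched):
    loss_fn = nn.MSELoss()
    for x, y in loader:
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
        sched.step()

# Phase 1: one epoch on so13m; Phase 2: reset, then one epoch on jm52m.
for phase_loader in (make_loader(100), make_loader(100)):
    opt, sched = fresh_optimizer(steps=len(phase_loader))
    run_epoch(phase_loader, opt, sched)
```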