---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: codet5-base-Generate_Docstrings_for_Python-Condensed
  results: []
datasets:
- calum/the-stack-smol-python-docstrings
language:
- en
pipeline_tag: text2text-generation
---
# codet5-base-Generate_Docstrings_for_Python-Condensed
This model is a fine-tuned version of [Salesforce/codet5-base](https://huggingface.co/Salesforce/codet5-base) on the [calum/the-stack-smol-python-docstrings](https://huggingface.co/datasets/calum/the-stack-smol-python-docstrings) dataset. It achieves the following results on the evaluation set (a sketch of how such ROUGE scores are computed follows the list):
- Loss: 0.6199
- Rouge1: 0.5017
- Rouge2: 0.374
- Rougel: 0.4866
- Rougelsum: 0.4864
- Gen Len: 13.8909
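
The ROUGE scores above are the 0-1 aggregate values reported by the `evaluate` library. A minimal sketch of computing comparable scores, using illustrative strings rather than actual evaluation-set examples (requires the `rouge_score` package):

```python
import evaluate

rouge = evaluate.load("rouge")

# Illustrative prediction/reference pair, not drawn from the real eval set.
predictions = ["Return the sum of a and b."]
references = ["Returns the sum of two numbers."]

scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # {'rouge1': ..., 'rouge2': ..., 'rougeL': ..., 'rougeLsum': ...}
```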
## Model description
Given the source code of a Python function as input, this model generates a docstring for it.

For more information on how it was created, check out the project notebook: https://github.com/DunnBC22/NLP_Projects/blob/main/Generate%20Docstrings/Smol%20Dataset/Code_T5_Project-Base%20Checkpoint.ipynb
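
A minimal inference sketch with 🤗 Transformers is shown below. The repository id is an assumption based on the model name and the author's GitHub account, and the generation settings are illustrative rather than the values used during evaluation:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Assumed repository id; adjust to wherever the checkpoint is hosted.
model_id = "DunnBC22/codet5-base-Generate_Docstrings_for_Python-Condensed"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# A Python function whose docstring we want to predict.
function_source = (
    "def add(a, b):\n"
    "    return a + b\n"
)

inputs = tokenizer(function_source, return_tensors="pt", truncation=True)
output_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```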
## Intended uses & limitations
This model is intended to demonstrate my ability to solve a complex problem (automatically generating docstrings from Python source code); it has not been evaluated for production use.
## Training and evaluation data
Dataset source: [calum/the-stack-smol-python-docstrings](https://huggingface.co/datasets/calum/the-stack-smol-python-docstrings) (from Hugging Face Datasets)
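
A hedged sketch of loading this dataset with the `datasets` library; the single `train` split and the 80/20 hold-out below are assumptions, so inspect the available splits and column names before relying on them:

```python
from datasets import load_dataset

# Load the dataset used for fine-tuning (assumed to ship a single train split).
dataset = load_dataset("calum/the-stack-smol-python-docstrings", split="train")

# Inspect the schema rather than assuming column names.
print(dataset.column_names)

# Hold out an evaluation set; the 80/20 ratio is an assumption, the seed
# matches the one listed under "Training hyperparameters".
splits = dataset.train_test_split(test_size=0.2, seed=42)
train_ds, eval_ds = splits["train"], splits["test"]
```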
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a hedged configuration sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
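
For reference, here is a minimal sketch of how the listed hyperparameters map onto `Seq2SeqTrainingArguments`; any field not listed above (the output directory, `predict_with_generate`) is an assumption:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="codet5-base-docstrings",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the Trainer's
    # optimizer defaults, so no explicit optimizer arguments are needed.
    predict_with_generate=True,  # assumed; required for ROUGE/Gen Len metrics
)
```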
### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.8261        | 1.0   | 921  | 0.6435          | 0.4947 | 0.3661 | 0.4794 | 0.4791    | 13.7526 |
| 0.6234        | 2.0   | 1842 | 0.6199          | 0.5017 | 0.374  | 0.4866 | 0.4864    | 13.8909 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0
- Datasets 2.11.0
- Tokenizers 0.13.3