# TechDebtLabeler
This model is a fine-tuned version of Salesforce/codet5-small on a dataset created from The Technical Debt Dataset.
## Dataset citation
Valentina Lenarduzzi, Nyyti Saarimäki, Davide Taibi. The Technical Debt Dataset. Proceedings of the 15th Conference on Predictive Models and Data Analytics in Software Engineering. Brazil. 2019.
## Model description
Generates descriptions for git commits whose changes contain code smells that may indicate technical debt.
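A minimal inference sketch using the `transformers` library. The `describe_commit` helper name, the 512-token truncation, and the generation length are illustrative assumptions, not part of the released training setup; the imports are deferred into the function so the sketch reads without `transformers` installed.

```python
def describe_commit(diff_text: str, model_id: str = "davidgaofc/TechDebtLabeler") -> str:
    """Generate a description for a code change using the fine-tuned checkpoint.

    Deferred imports: loading the checkpoint requires `transformers` (and
    network access on first use), but defining this helper does not.
    """
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

    # Truncation length and generation budget are illustrative choices.
    inputs = tokenizer(diff_text, return_tensors="pt", truncation=True, max_length=512)
    output_ids = model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

Given the limitations noted below, treat the generated text as a starting point for review rather than a definitive label.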
## Intended uses & limitations
Use with caution: the model is limited by its small training set and the narrow variety of labels in that set. Improvements are in progress.
## Training procedure
The model was trained for one epoch on the dataset cited above.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 100
- total_train_batch_size: 100
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
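The hyperparameters above can be gathered into one place, shown here as a plain Python dict rather than the exact training script (which is not part of this card). It also makes explicit how the total train batch size of 100 arises from the per-device batch size and gradient accumulation.

```python
# Hyperparameters as listed on the card, collected into a single config dict.
# The dict form is an illustration, not the original training code.
hyperparams = {
    "learning_rate": 2e-05,
    "train_batch_size": 1,
    "eval_batch_size": 1,
    "seed": 42,
    "gradient_accumulation_steps": 100,
    "optimizer": "adam",
    "adam_betas": (0.9, 0.999),
    "adam_epsilon": 1e-08,
    "lr_scheduler_type": "linear",
    "num_epochs": 1,
    "fp16": True,  # Native AMP mixed-precision training
}

# The effective (total) train batch size is the per-device batch size
# multiplied by the number of gradient-accumulation steps: 1 * 100 = 100.
total_train_batch_size = (
    hyperparams["train_batch_size"] * hyperparams["gradient_accumulation_steps"]
)
```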
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1