license: other
This is a LLaMA LoRA fine-tuned on top of WizardLM-7B with this dataset: https://huggingface.co/datasets/paolorechia/medium-size-generated-tasks

It is meant mostly as a proof of concept, to see how fine-tuning may improve the performance of coding agents that rely on the LangChain framework.
To use this LoRA, you can use my repo as a starting point: https://github.com/paolorechia/learn-langchain