---
license: mit
tags:
- text-generation
- llama 3.1
- instruct
- unsloth
- OpenDevin
datasets:
- skratos115/opendevin_DataDevinator
- argilla/magpie-ultra-v0.1
---
# LLAMA_3.1_8B_OpenDevin_DEVINator
Brought to you by skratos115 (HF) / Kingatlas115 (GH), in collaboration with the official OpenDevin team (~xingyaoww).
## LLAMA_3.1_8B-Instruct with OpenDevin Tool Calling for the CodeAct Agent
## Overview
This project fine-tunes the `LLAMA_3.1_8B-Instruct` model on my own dataset, skratos115/opendevin_DataDevinator, which incorporates around 2k rows from argilla/magpie-ultra-v0.1, with training done via Unsloth. The primary goal is to develop a more capable LLM that can effectively use the CodeAct framework for tool calling. This is still in early development and should not be used in production. We are working on building a bigger dataset of tool paths/trajectories and could use all the help we can get: please use the feedback integration to help us build better trajectories, which we will release to the public under the MIT license for OSS model training.
Read more here: https://x.com/gneubig/status/1802740786242420896 and http://www.linkedin.com/feed/update/urn:li:activity:7208507606728929280/
## Model Details
- **Model Name**: LLAMA_3.1_8B_OpenDevin_DEVINator
- **Dataset**: skratos115/opendevin_DataDevinator
- **Training Platform**: Unsloth
- **Available Formats**: full merged weights, or quantized GGUF files (f16, Q4_K_M, Q5_K_M)
I used `LLAMA_3.1_8B_OpenDevin_DEVINator_Q5_K_M.gguf` for my testing and got it to write a simple script, install dependencies, and search the web. More testing to come.
## Running the Model
You can run this model using `vLLM` or `ollama`. The following instructions are for using `ollama`.
### Prerequisites
- Docker
- An Ollama server bound to all network interfaces (not just localhost)
- Your GGUF file uploaded to Ollama
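The "upload your GGUF to Ollama" step can be sketched with a minimal Modelfile. The file path and model name below are placeholders; point `FROM` at wherever you downloaded the GGUF:

```
FROM ./LLAMA_3.1_8B_OpenDevin_DEVINator_Q5_K_M.gguf
```

Then register it with `ollama create LLAMA_3.1_8B_OpenDevin_DEVINator_Q5_K_M.gguf -f Modelfile`, and start the server bound to all interfaces with `OLLAMA_HOST=0.0.0.0 ollama serve` so the OpenDevin container can reach it.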
### Running with Ollama
1. **Install Docker**: Ensure you have Docker installed on your machine.
2. **Set Up Your Workspace**:
   ```sh
   WORKSPACE_BASE=$(pwd)/workspace
   ```
3. **Run the Docker Command**:
   ```sh
   docker run -it \
       --pull=always \
       -e SANDBOX_USER_ID=$(id -u) \
       -e PERSIST_SANDBOX="true" \
       -e LLM_API_KEY="ollama" \
       -e LLM_BASE_URL="http://192.168.1.23:11434" \
       -e LLM_OLLAMA_BASE_URL="192.168.1.23:11434" \
       -e SSH_PASSWORD="make something up here" \
       -e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
       -v $WORKSPACE_BASE:/opt/workspace_base \
       -v /var/run/docker.sock:/var/run/docker.sock \
       -p 3000:3000 \
       --add-host host.docker.internal:host-gateway \
       --name opendevin-app-$(date +%Y%m%d%H%M%S) \
       ghcr.io/opendevin/opendevin:main
   ```
   Replace `192.168.1.23` with your actual local IP address, and make sure your Ollama server is listening on all bound IP addresses. I had issues when I tried to use `0.0.0.0` for localhost.
4. **Configure OpenDevin**: Set the model to whatever GGUF you have running, e.g. `ollama/LLAMA_3.1_8B_OpenDevin_DEVINator_Q5_K_M.gguf`.
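Before configuring OpenDevin, it can help to confirm the Ollama server actually answers with your model. A minimal sketch using only the Python standard library; the host and model name are assumptions from the example above, so substitute your own:

```python
import json
import urllib.request

# Assumed values from the docker example above -- replace with yours.
OLLAMA_HOST = "http://192.168.1.23:11434"
MODEL = "LLAMA_3.1_8B_OpenDevin_DEVINator_Q5_K_M.gguf"


def generate_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}


def quick_check(host: str, model: str) -> str:
    """POST one prompt to the Ollama server and return the generated text."""
    body = json.dumps(generate_payload(model, "Reply with OK")).encode()
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Usage (requires the Ollama server to be running):
#   print(quick_check(OLLAMA_HOST, MODEL))
```

If this returns text, the same base URL and model name should work in the OpenDevin settings.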
## Early Development
This project is in its early stages, and we are continuously working to improve the model and its capabilities. Contributions and feedback are welcome.
## License
This project is licensed under the [MIT License](LICENSE).
## Support my work
Right now all of my work has been funded personally. If you like my work and can help support growth in the AI community, consider joining or donating to my Patreon.
[Patreon Link](https://www.patreon.com/atlasaisecurity)
## DataDEVINator
This is a Python-based tool I created for this purpose; it took a lot of time and money to build this tool and model. I will be releasing it to the public shortly, probably within 1-2 weeks.
Tool capabilities:
- Download datasets from Hugging Face
- Process local CSV or Parquet files
- Preprocess datasets to standardize column names
- Generate solutions using NVIDIA, OpenAI, Hugging Face, or Ollama models
- Reformat solutions to the OpenDevin format
- Grade and categorize solutions on a 1-5 scale
- Perform dry runs on a subset of the data
- Automatically remove low-ranking solutions
- Upload processed datasets to Hugging Face
- Prompt improver to add more complexity