---
base_model: meta-llama/Llama-3.1-70B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
datasets:
- kobprof/skolegpt-instruct
---

# Uploaded model

- **Compute sponsored by:** Nvidia and Arrow ECS Denmark through Danish Data Science Community
- **Developed by:** ThatsGroes
- **License:** apache-2.0
- **Finetuned from model:** meta-llama/Llama-3.1-70B-Instruct

LoRA adapter for Llama-3.1-70B-Instruct with the base model loaded in 4-bit. Trained for 1 epoch with rank = lora_alpha = 8.

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

Training used a peak of 62.52 GB GPU memory (79.00% of capacity), of which 23.83 GB (30.12%) was used for the LoRA adapter.

Energy consumption during training, as logged by codecarbon:

```
[codecarbon INFO @ 11:07:59] Energy consumed for RAM : 2.574882 kWh. RAM Power : 188.78840446472168 W
[codecarbon INFO @ 11:07:59] Energy consumed for all GPUs : 4.045188 kWh. Total GPU Power : 270.22211938762564 W
[codecarbon INFO @ 11:07:59] Energy consumed for all CPUs : 0.579916 kWh. Total CPU Power : 42.5 W
[codecarbon INFO @ 11:07:59] 7.199986 kWh of electricity used since the beginning.
```
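The card states the setup (Unsloth + TRL SFT, 4-bit base model, rank = lora_alpha = 8, 1 epoch on kobprof/skolegpt-instruct) but not the full script. Below is a minimal sketch of what such a configuration looks like. The target modules, sequence length, batch sizes, learning rate, and the assumption that the dataset is rendered into a single `text` column are all unconfirmed by this card, and argument names vary across TRL versions.

```python
# Hedged sketch of the training setup described above.
# "stated" = taken from the card; "assumption" = illustrative only.
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="meta-llama/Llama-3.1-70B-Instruct",
    max_seq_length=2048,   # assumption: not stated in the card
    load_in_4bit=True,     # stated: base model loaded in 4-bit
)

model = FastLanguageModel.get_peft_model(
    model,
    r=8,                   # stated: rank = 8
    lora_alpha=8,          # stated: lora_alpha = 8
    lora_dropout=0,
    bias="none",
    # assumption: Unsloth's usual attention/MLP projection targets
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing="unsloth",
)

dataset = load_dataset("kobprof/skolegpt-instruct", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # assumption: data pre-rendered to a text column
    max_seq_length=2048,
    args=TrainingArguments(
        num_train_epochs=1,               # stated: trained for 1 epoch
        per_device_train_batch_size=2,    # assumption
        gradient_accumulation_steps=4,    # assumption
        learning_rate=2e-4,               # assumption
        output_dir="outputs",
    ),
)
trainer.train()
```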
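Since this repository contains only a LoRA adapter, inference requires loading the base model and attaching the adapter. A minimal sketch with `transformers` and `peft`, assuming a placeholder adapter repo id (replace it with this repository's actual id) and 4-bit loading to match the training setup:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "meta-llama/Llama-3.1-70B-Instruct"
adapter_id = "ThatsGroes/<this-repo>"  # hypothetical: replace with this repo's id

# 4-bit quantization, matching how the base model was loaded during training
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
# Attach the LoRA adapter on top of the quantized base model
model = PeftModel.from_pretrained(model, adapter_id)

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```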