---
base_model: [codellama/CodeLlama-70b-Instruct-hf]
tags:
- mergekit
- merge
- code
license: mit
pipeline_tag: conversational
---

# BigCodeLlama 92b GGUF files 🚀

## Experimental 92B CodeLlama stack that should be better than stock

Full model here: https://huggingface.co/nisten/BigCodeLlama-92b

### Models Merged

The following models were merged, with `codellama/CodeLlama-70b-Instruct-hf` as the base:

* ../CodeLlama-70b-Python-hf
* ../CodeLlama-70b-Instruct-hf

### Configuration

The following YAML configuration was used to produce this model, a mergekit passthrough merge that stacks overlapping layer slices from the two source models (a sketch of how to re-run it is at the end of this card):

```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 69]
    model:
      model:
        path: ../CodeLlama-70b-Instruct-hf
- sources:
  - layer_range: [42, 80]
    model:
      model:
        path: ../CodeLlama-70b-Python-hf
```

To reassemble the 6-bit quant, for example, download both parts and then concatenate them:

```bash
cat BigCodeLlama-92b-q6.gguf.part0 BigCodeLlama-92b-q6.gguf.part1 > BigCodeLlama-92b-q6.gguf
```

A quick size sanity check and an example llama.cpp invocation are also sketched at the end of this card.

Comparison against the stock model, using this prompt:

```
Plan and write code for building a city on mars via calculating aldrin cycler orbits in js for cargo shipments starting in year 2030, and after coding it in python and c++ output a table of calendar of deliver dates. Don't ask for clarification just do the work smartly.
```

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6379683a81c1783a4a2ddba8/59mxGRA4GrqPGX3vqmzMa.png)

And the same prompt on our 6-bit quant:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6379683a81c1783a4a2ddba8/okdhnUzTCTMYhCKlEEcOn.png)
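### Reproducing the merge

If you want to re-run the merge yourself rather than download it, a minimal sketch using mergekit's command-line entry point is below. The config filename `bigcode-92b.yml` is a placeholder for illustration; save the YAML above under any name, and make sure local copies of both source models exist at the `../` paths in the config.

```bash
# Install mergekit, then run the passthrough merge from the YAML above.
# bigcode-92b.yml is a placeholder name -- use whatever you saved the config as.
pip install mergekit
mergekit-yaml bigcode-92b.yml ./BigCodeLlama-92b
```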
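### Checking the reassembled file

After concatenating the parts, a quick size check catches truncated downloads. This is a generic sanity check, not something shipped with this repo: the byte size of the joined `.gguf` should be exactly the sum of the two part sizes.

```bash
# Print byte sizes of both parts and the joined file (GNU coreutils stat).
# part0 + part1 should equal the size of the final .gguf.
stat -c %s BigCodeLlama-92b-q6.gguf.part0 \
           BigCodeLlama-92b-q6.gguf.part1 \
           BigCodeLlama-92b-q6.gguf
```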
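### Running the quant

The reassembled GGUF runs like any other model in llama.cpp. A minimal invocation is sketched below; the context size, token count, and prompt are illustrative defaults rather than settings from this card, so adjust them (and add `-ngl` for GPU offload) to suit your hardware.

```bash
# Run the 6-bit quant with llama.cpp's example binary.
# -c sets context size, -n caps generated tokens, -p supplies the prompt.
./main -m BigCodeLlama-92b-q6.gguf -c 4096 -n 512 \
  -p "Write a function that computes an Aldrin cycler orbital period in JS."
```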