---
license: mit
tags:
- llama-cpp
- gguf-my-repo
model-index:
- name: phi-bode-2-ultraalpaca
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: ENEM Challenge (No Images)
      type: eduagarcia/enem_challenge
      split: train
      args:
        num_few_shot: 3
    metrics:
    - type: acc
      value: 38.35
      name: accuracy
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/phi-bode-2-ultraalpaca
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BLUEX (No Images)
      type: eduagarcia-temp/BLUEX_without_images
      split: train
      args:
        num_few_shot: 3
    metrics:
    - type: acc
      value: 25.17
      name: accuracy
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/phi-bode-2-ultraalpaca
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: OAB Exams
      type: eduagarcia/oab_exams
      split: train
      args:
        num_few_shot: 3
    metrics:
    - type: acc
      value: 29.61
      name: accuracy
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/phi-bode-2-ultraalpaca
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Assin2 RTE
      type: assin2
      split: test
      args:
        num_few_shot: 15
    metrics:
    - type: f1_macro
      value: 45.39
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/phi-bode-2-ultraalpaca
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Assin2 STS
      type: eduagarcia/portuguese_benchmark
      split: test
      args:
        num_few_shot: 15
    metrics:
    - type: pearson
      value: 24.43
      name: pearson
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/phi-bode-2-ultraalpaca
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: FaQuAD NLI
      type: ruanchaves/faquad-nli
      split: test
      args:
        num_few_shot: 15
    metrics:
    - type: f1_macro
      value: 43.97
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/phi-bode-2-ultraalpaca
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HateBR Binary
      type: ruanchaves/hatebr
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: f1_macro
      value: 54.15
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/phi-bode-2-ultraalpaca
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: PT Hate Speech Binary
      type: hate_speech_portuguese
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: f1_macro
      value: 54.59
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/phi-bode-2-ultraalpaca
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: tweetSentBR
      type: eduagarcia-temp/tweetsentbr
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: f1_macro
      value: 43.34
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/phi-bode-2-ultraalpaca
      name: Open Portuguese LLM Leaderboard
---

# noxinc/phi-bode-2-ultraalpaca-Q5_K_M-GGUF
This model was converted to GGUF format from [`recogna-nlp/phi-bode-2-ultraalpaca`](https://huggingface.co/recogna-nlp/phi-bode-2-ultraalpaca) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/recogna-nlp/phi-bode-2-ultraalpaca) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew:

```bash
brew install ggerganov/ggerganov/llama.cpp
```

Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo noxinc/phi-bode-2-ultraalpaca-Q5_K_M-GGUF --model phi-bode-2-ultraalpaca.Q5_K_M.gguf -p "The meaning to life and the universe is"
```

Server:

```bash
llama-server --hf-repo noxinc/phi-bode-2-ultraalpaca-Q5_K_M-GGUF --model phi-bode-2-ultraalpaca.Q5_K_M.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo:

```bash
git clone https://github.com/ggerganov/llama.cpp && \
cd llama.cpp && \
make && \
./main -m phi-bode-2-ultraalpaca.Q5_K_M.gguf -n 128
```
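
Once `llama-server` is running, you can query it over HTTP. The sketch below is a minimal example, assuming the server is listening on the default `127.0.0.1:8080` and that your llama.cpp build exposes the OpenAI-compatible `/v1/chat/completions` endpoint; adjust the host, port, prompt, and sampling parameters to your setup.

```bash
# Minimal sketch: send a chat request to a locally running llama-server.
# Assumes the default bind address 127.0.0.1:8080 and the OpenAI-compatible
# /v1/chat/completions endpoint available in recent llama.cpp builds.
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Explique brevemente o que é o ENEM."}
        ],
        "temperature": 0.7,
        "max_tokens": 256
      }'
```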