waltervix committed
Commit dc19233
1 Parent(s): 785d919

Update README.md

Files changed (1):
  1. README.md +21 -18

README.md CHANGED
@@ -166,29 +166,32 @@ model-index:
  # waltervix/Mistral-portuguese-luana-7b-Q4_K_M-GGUF
  This model was converted to GGUF format from [`rhaymison/Mistral-portuguese-luana-7b`](https://huggingface.co/rhaymison/Mistral-portuguese-luana-7b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
  Refer to the [original model card](https://huggingface.co/rhaymison/Mistral-portuguese-luana-7b) for more details on the model.
- ## Use with llama.cpp
-
- Install llama.cpp through brew.
-
- ```bash
- brew install ggerganov/ggerganov/llama.cpp
- ```
- Invoke the llama.cpp server or the CLI.
-
- CLI:
-
- ```bash
- llama-cli --hf-repo waltervix/Mistral-portuguese-luana-7b-Q4_K_M-GGUF --model mistral-portuguese-luana-7b.Q4_K_M.gguf -p "The meaning to life and the universe is"
- ```
-
- Server:
-
- ```bash
- llama-server --hf-repo waltervix/Mistral-portuguese-luana-7b-Q4_K_M-GGUF --model mistral-portuguese-luana-7b.Q4_K_M.gguf -c 2048
- ```
-
- Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
-
- ```
- git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m mistral-portuguese-luana-7b.Q4_K_M.gguf -n 128
- ```
+ ## Use with Samantha Interface Assistant
+
+ https://github.com/controlecidadao/samantha_ia/blob/main/README.md
+
+ Video: https://www.youtube.com/watch?v=KgicCGMSygU
+
+ <br><br>
+ ## 👟 Testing a Model in 5 Steps with Samantha
+
+ Samantha needs just a `.gguf` model file to generate text. Follow these steps to perform a simple model test:
+
+ 1) Open the Windows Task Manager by pressing `CTRL + SHIFT + ESC` and check the available memory. Close some programs if necessary to free memory.
+
+ 2) Visit the [Hugging Face](https://huggingface.co/models?library=gguf&sort=trending&search=gguf) repository, click on a model card to open the corresponding page, locate the _Files and versions_ tab, and choose a `.gguf` model that fits in your available memory.
+
+ 3) Right-click the model's download link icon and copy its URL.
+
+ 4) Paste the model URL into Samantha's _Download models for testing_ field.
+
+ 5) Insert a prompt into the _User prompt_ field and press `Enter`, keeping the `$$$` sign at the end of your prompt. The model will be downloaded and the response will be generated using the default deterministic settings. You can track this process via the Windows Task Manager.
+
+ <br>
+
+ Every new model downloaded via this copy-and-paste procedure replaces the previous one to save hard drive space. The downloaded model is saved as `MODEL_FOR_TESTING.gguf` in your _Downloads_ folder.
+
+ You can also download the model and save it permanently to your computer. For more details, see the section below.
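The replace-on-download behavior described above can be approximated in a few lines. This is a hedged sketch of what the testing download amounts to, not Samantha's actual implementation; the function name is made up here, and only the fixed `MODEL_FOR_TESTING.gguf` target name comes from the text above:

```python
# Sketch (not Samantha's actual code): fetch a copied .gguf URL and save it
# over a fixed file name, so each new test model overwrites the previous one.
import urllib.request
from pathlib import Path

def download_for_testing(model_url: str, downloads_dir: str) -> Path:
    # Fixed target name taken from the text above: new downloads replace the old file.
    target = Path(downloads_dir) / "MODEL_FOR_TESTING.gguf"
    urllib.request.urlretrieve(model_url, target)
    return target
```

For a permanent copy you would instead save the file under its own name, as the section below describes.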