|
--- |
|
library_name: transformers |
|
license: apache-2.0 |
|
quantized_by: stillerman |
|
tags: |
|
- llamafile |
|
- gguf |
|
|
|
language: |
|
- en |
|
datasets: |
|
- HuggingFaceTB/smollm-corpus |
|
--- |
|
|
|
# SmolLM-135M-Instruct - llamafile |
|
|
|
This repo contains `.gguf` and `.llamafile` files for [SmolLM-135M-Instruct](https://huggingface.co/collections/HuggingFaceTB/smollm-6695016cad7167254ce15966). A [llamafile](https://llamafile.ai/) is a single-file executable that runs locally on most computers, with no installation required.
|
|
|
# Use it in 3 lines! |
|
```sh
|
wget https://huggingface.co/stillerman/SmolLM-135M-Instruct-Llamafile/resolve/main/SmolLM-135M-Instruct-F16.llamafile |
|
chmod a+x SmolLM-135M-Instruct-F16.llamafile |
|
./SmolLM-135M-Instruct-F16.llamafile |
|
``` |
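By default, running the llamafile starts a local web server, which also exposes a llama.cpp-style, OpenAI-compatible completions API. As a sketch (assuming the default port 8080 and the `/v1/chat/completions` route), you could query it from Python with only the standard library; the `build_chat_request` helper is hypothetical, not part of llamafile:

```python
import json
import urllib.request

def build_chat_request(prompt, base_url="http://localhost:8080"):
    """Build an OpenAI-style chat completion request for the local llamafile server."""
    payload = {
        # The model name is informational here: a single-model server ignores it.
        "model": "SmolLM-135M-Instruct",
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# With the llamafile running locally, send the request and print the reply:
# resp = urllib.request.urlopen(build_chat_request("Hello!"))
# print(json.loads(resp.read())["choices"][0]["message"]["content"])
```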
|
|
|
# Thank you to |
|
- Hugging Face for the [SmolLM model family](https://huggingface.co/collections/HuggingFaceTB/smollm-6695016cad7167254ce15966)
|
- Mozilla for [Llamafile](https://llamafile.ai/) |
|
- [llama.cpp](https://github.com/ggerganov/llama.cpp/) |
|
- [Justine Tunney](https://huggingface.co/jartine) and [Compilade](https://github.com/compilade) for their help