make llamafile link clickable
README.md
CHANGED
@@ -16,7 +16,7 @@ tags:
 
 ## Description
 
-This repo contains llamafile format model files for [Maykeye/TinyLLama-v0](https://huggingface.co/Maykeye/TinyLLama-v0), a recreation of [roneneldan/TinyStories-1M](https://huggingface.co/roneneldan/TinyStories-1M), which was introduced in the very interesting research paper [TinyStories: How Small Can Language Models Be and Still Speak Coherent English?](https://arxiv.org/abs/2305.07759) by Ronen Eldan and Yuanzhi Li.
+This repo contains [llamafile](https://github.com/Mozilla-Ocho/llamafile) format model files for [Maykeye/TinyLLama-v0](https://huggingface.co/Maykeye/TinyLLama-v0), a recreation of [roneneldan/TinyStories-1M](https://huggingface.co/roneneldan/TinyStories-1M), which was introduced in the very interesting research paper [TinyStories: How Small Can Language Models Be and Still Speak Coherent English?](https://arxiv.org/abs/2305.07759) by Ronen Eldan and Yuanzhi Li.
 
 In the paper this is their abstract
 
@@ -30,7 +30,7 @@ In the paper this is their abstract
 
 While Maykeye's replication effort didn't reduce the model down to 1M parameters, Maykeye did get down to 5M parameters, which is still quite an achievement as far as known replication efforts have shown.
 
-Anyway, this conversion to llamafile should give you an easy way to try this model, and the llamafile ecosystem in general, as it's quite small compared to other, larger chat-capable models. As this is primarily a text generation model, it will open a web server as part of the llamafile process, but it will not engage in chat as one might expect. Instead, you give it a story prompt and it generates a story for you. Don't expect any great stories at this size, however; it's an interesting demo of how small you can squeeze AI models and still have them generate recognisable English.
+Anyway, this conversion to [llamafile](https://github.com/Mozilla-Ocho/llamafile) should give you an easy way to try this model, and the [llamafile](https://github.com/Mozilla-Ocho/llamafile) ecosystem in general, as it's quite small compared to other, larger chat-capable models. As this is primarily a text generation model, it will open a web server as part of the [llamafile](https://github.com/Mozilla-Ocho/llamafile) process, but it will not engage in chat as one might expect. Instead, you give it a story prompt and it generates a story for you. Don't expect any great stories at this size, however; it's an interesting demo of how small you can squeeze AI models and still have them generate recognisable English.
 
 ## Usage In Linux
 
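The workflow the README describes (run the llamafile, get a local web server, type a story prompt) can be sketched as a shell session. The file name and port below are assumptions, not taken from this repo: check the repo's file listing for the actual llamafile name, and note that 8080 is only the usual default for llamafile's built-in server.

```shell
# Hypothetical file name: check this repo's file listing for the real one.
FILE=./TinyLLama-v0-5M.llamafile

if [ -f "$FILE" ]; then
  # llamafiles are self-contained executables: mark executable, then run.
  chmod +x "$FILE"
  # Running it starts a local web server (commonly http://127.0.0.1:8080),
  # where you enter a story prompt rather than a chat message.
  "$FILE"
else
  echo "Download the llamafile from this repo first."
fi
```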