---
license: ms-pl
---

## Overview

Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction-tuned generative text models in 8B and 70B sizes. The Llama 3 instruction-tuned models are optimized for dialogue use cases and outperform many of the available open-source chat models on common industry benchmarks. In developing these models, Meta took great care to optimize helpfulness and safety.

## Variants

| No | Variant | Cortex CLI command |
| --- | --- | --- |
| 1 | [7B-gguf](https://huggingface.co/cortexhub/mistral/tree/7B-gguf) | `cortex run mistral:7B-gguf` |

## Use it with Jan (UI)

1. Install **Jan** using the [Quickstart](https://jan.ai/docs/quickstart)
2. Use `cortexhub/mistral` in the Jan Model Hub
19
+
20
+ ## Use it with Cortex (CLI)
21
+
22
+ 1. Install **Cortex** using [Quickstart](https://cortex.jan.ai/docs/quickstart)
23
+ 2. Run the model with the command: `cortex run mistral`
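The steps above can be sketched as a short shell session. This is an illustrative sequence, assuming Cortex is already installed per the Quickstart; the `:7B-gguf` tag follows the Variants table above.

```shell
# Run the default variant of the model (pulls it on first use)
cortex run mistral

# Or pin the specific GGUF variant listed in the Variants table
cortex run mistral:7B-gguf
```

Pinning the variant tag makes runs reproducible across machines, since the default tag may change as new variants are published.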

## Credits

- **Author:** meta-llama
- **Converter:** [Homebrew](https://www.homebrew.ltd/)
- **Original License:** [License](https://llama.meta.com/mistral/license/)
- **Papers:** N/A