---
language:
  - el
  - en
license: apache-2.0
pipeline_tag: text-generation
tags:
  - finetuned
inference: true
base_model:
  - ilsp/Meltemi-7B-Instruct-v1.5
---

# Meltemi 7B Instruct v1.5 GGUF

This is Meltemi 7B Instruct v1.5, the first Greek Large Language Model (LLM), published in the GGUF, llama.cpp-compatible format.

## Model Information

- Vocabulary extension of the Mistral 7B tokenizer with Greek tokens, for lower cost and faster inference (1.52 vs. 6.80 tokens/word for Greek)
- 8192 context length
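The tokens-per-word figures above are simply token count divided by word count; a minimal sketch of the arithmetic (the word and token counts below are hypothetical, chosen to match the reported ratios, not real measurements):

```python
def tokens_per_word(n_tokens: int, n_words: int) -> float:
    """Average number of tokens a tokenizer emits per word of text."""
    return n_tokens / n_words

# Hypothetical counts for a 100-word Greek text:
n_words = 100
meltemi_tokens = 152   # extended tokenizer: ~1.52 tokens/word
mistral_tokens = 680   # base Mistral 7B tokenizer: ~6.80 tokens/word

print(tokens_per_word(meltemi_tokens, n_words))  # 1.52
print(tokens_per_word(mistral_tokens, n_words))  # 6.8
```

Fewer tokens per word means shorter sequences for the same Greek text, which is where the lower cost and faster inference come from.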

For more details, please refer to the original model card: [Meltemi 7B Instruct v1.5](https://huggingface.co/ilsp/Meltemi-7B-Instruct-v1.5)

## Instruction format

You can use this like any standard llama.cpp-compatible model.

## Basic Usage

```shell
llama-cli -m ./Meltemi-7B-Instruct-v1.5-F16.gguf -p "Ποιό είναι το νόημα της ζωής;" -n 128
```

## Conversation Mode

```shell
llama-cli -m ./Meltemi-7B-Instruct-v1.5-F16.gguf --conv
```

## Web Server

```shell
llama-server -m ./Meltemi-7B-Instruct-v1.5-F16.gguf --port 8080
```
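Once the server is running, it exposes llama.cpp's OpenAI-compatible chat endpoint at `/v1/chat/completions`. A minimal sketch of a request payload (the `temperature` and `max_tokens` values are illustrative choices, not recommendations from the model card):

```python
import json

# Request body for llama-server's OpenAI-compatible chat endpoint,
# assuming the server command above (http://localhost:8080).
payload = {
    "messages": [
        {"role": "user", "content": "Ποιό είναι το νόημα της ζωής;"}
    ],
    "temperature": 0.7,   # illustrative sampling temperature
    "max_tokens": 128,    # cap on generated tokens, matching -n 128 above
}

# ensure_ascii=False keeps the Greek text readable in the JSON body
print(json.dumps(payload, ensure_ascii=False, indent=2))

# Send it with, e.g.:
#   curl http://localhost:8080/v1/chat/completions \
#        -H "Content-Type: application/json" \
#        -d @payload.json
```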