---
base_model:
- mistralai/Mistral-7B-Instruct-v0.1
library_name: transformers
tags:
- mergekit
- merge
license: mit
language:
- en
metrics:
- accuracy
- bleu
- code_eval
- bleurt
- brier_score
pipeline_tag: text-generation
---
# Mixtral_Chat_7b
This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
## Merge Details

### Merge Method

This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method, which takes a weighted average of the source models' parameters.
### Models Merged
The following models were included in the merge:
* [Locutusque/Hercules-3.1-Mistral-7B](https://huggingface.co/Locutusque/Hercules-3.1-Mistral-7B)
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B)
* [LeroyDyer/Mixtral_Instruct](https://huggingface.co/LeroyDyer/Mixtral_Instruct)
* [LeroyDyer/Mixtral_Base](https://huggingface.co/LeroyDyer/Mixtral_Base)
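
The original merge configuration is not included in this card. For a linear merge of these models, a mergekit config would look something like the following sketch; the weights and dtype here are illustrative assumptions, not the values actually used:

```yaml
models:
  - model: Locutusque/Hercules-3.1-Mistral-7B
    parameters:
      weight: 0.2
  - model: mistralai/Mistral-7B-Instruct-v0.2
    parameters:
      weight: 0.2
  - model: NousResearch/Hermes-2-Pro-Mistral-7B
    parameters:
      weight: 0.2
  - model: LeroyDyer/Mixtral_Instruct
    parameters:
      weight: 0.2
  - model: LeroyDyer/Mixtral_Base
    parameters:
      weight: 0.2
merge_method: linear
dtype: float16
```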
## llama-index

The GGUF build of the base model can be run locally through [LlamaIndex](https://docs.llamaindex.ai/) and llama.cpp. Install the required packages:
```python
%pip install llama-index-embeddings-huggingface
%pip install llama-index-llms-llama-cpp
%pip install llama-index
```
```python
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.llms.llama_cpp import LlamaCPP
from llama_index.llms.llama_cpp.llama_utils import (
    messages_to_prompt,
    completion_to_prompt,
)

model_url = "https://huggingface.co/LeroyDyer/Mixtral_BaseModel-gguf/resolve/main/mixtral_basemodel.q8_0.gguf"

llm = LlamaCPP(
    # you can pass in the URL to a GGUF model to download it automatically
    model_url=model_url,
    # optionally, you can set the path to a pre-downloaded model instead of model_url
    model_path=None,
    temperature=0.1,
    max_new_tokens=256,
    # keep the context window below the model's limit to leave room for generation
    context_window=3900,
    # kwargs to pass to __call__()
    generate_kwargs={},
    # kwargs to pass to __init__(); set n_gpu_layers to at least 1 to use the GPU
    model_kwargs={"n_gpu_layers": 1},
    # transform inputs into the Llama 2 / Mistral instruct prompt format
    messages_to_prompt=messages_to_prompt,
    completion_to_prompt=completion_to_prompt,
    verbose=True,
)

prompt = input("Enter your prompt: ")
response = llm.complete(prompt)
print(response.text)
```
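
The snippet above imports `SimpleDirectoryReader` and `VectorStoreIndex` but never uses them, and the installed embedding package goes unused as well. As a minimal sketch of wiring the model into a retrieval pipeline, you could do something like the following; the `./data` directory and the `BAAI/bge-small-en-v1.5` embedding model are illustrative assumptions, not part of the original card:

```python
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

# route all LlamaIndex calls through the local llama.cpp model defined above,
# plus a small HuggingFace embedding model (an assumed choice) for retrieval
Settings.llm = llm
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")

# index the documents in ./data (hypothetical path) and query them
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
print(query_engine.query("Summarize these documents."))
```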