---
base_model:
- NousResearch/Hermes-2-Pro-Mistral-7B
- SkunkworksAI/BakLLaVA-1
tags:
- Mistral
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- synthetic data
- distillation
- function calling
- json mode
- llava
- vision
- multimodal
model-index:
- name: Hermes-2-Pro-Mistral-7B
  results: []
license: apache-2.0
language:
- en
datasets:
- teknium/OpenHermes-2.5
- SkunkworksAI/BakLLaVA-1-FT
widget:
- example_title: Hermes 2 Pro
  messages:
  - role: system
    content: >-
      You are a sentient, superintelligent artificial general intelligence,
      here to teach and assist me.
  - role: user
    content: >-
      Write a short story about Goku discovering kirby has teamed up with
      Majin Buu to destroy the world.
---
# Hermes 2 Pro BakLLaVA - Mistral 7B

This model combines Hermes 2 Pro's LLaMA (language model) weights with BakLLaVA's mm_projector and vision_tower weights.
The result supports general QA, function calling, JSON mode, and multimodal vision input.
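In principle the merge is simple: keep the language-model weights from Hermes 2 Pro and graft in BakLLaVA's multimodal projector and vision tower. The sketch below only illustrates that idea and is not the script used to build this repository; the checkpoint filenames and the `mm_projector` / `vision_tower` key prefixes are assumptions based on typical LLaVA-style checkpoints.

```python
# Illustrative sketch only -- not the exact merge script used for this repo.
# Checkpoint paths and key names are assumptions (LLaVA-style checkpoints
# usually keep multimodal weights under "mm_projector" / "vision_tower" keys).
import torch

hermes = torch.load("hermes-2-pro-mistral-7b/pytorch_model.bin", map_location="cpu")
bakllava = torch.load("bakllava-1/pytorch_model.bin", map_location="cpu")

merged = dict(hermes)  # start from the Hermes 2 Pro language weights
for key, tensor in bakllava.items():
    # graft in only the multimodal projector and vision tower
    if "mm_projector" in key or "vision_tower" in key:
        merged[key] = tensor

torch.save(merged, "hermes-2-pro-bakllava-mistral-7b/pytorch_model.bin")
```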
GGUFs:
- Hermes 2 Pro: https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B-GGUF
- BakLLaVA-1: https://huggingface.co/SkunkworksAI/BakLLaVA-1 (GGUF conversion: https://huggingface.co/mys/ggml_bakllava-1)
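To pull the quantized builds locally, `huggingface_hub`'s `snapshot_download` works for either repo; the `local_dir` paths below are placeholders.

```python
# Download the GGUF repos linked above; local_dir values are placeholders.
from huggingface_hub import snapshot_download

snapshot_download(repo_id="NousResearch/Hermes-2-Pro-Mistral-7B-GGUF", local_dir="gguf/hermes-2-pro")
snapshot_download(repo_id="mys/ggml_bakllava-1", local_dir="gguf/bakllava-1")
```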
Test code:

```python
from llava.mm_utils import get_model_name_from_path
from llava.eval.run_llava import eval_model

model_path = "vonjack/Hermes-2-Pro-BakLLaVA-Mistral-7B"
prompt = "What's the content of the image?"
image_file = "https://www.ilankelman.org/stopsigns/australia.jpg"

# eval_model expects an argparse-style namespace; build one on the fly.
args = type('Args', (), {
    "model_path": model_path,
    "model_base": None,
    "model_name": get_model_name_from_path(model_path),
    "query": prompt,
    "conv_mode": None,          # let the runner pick the conversation template
    "image_file": image_file,   # local path or URL
    "sep": ",",
    "temperature": 0,           # greedy decoding
    "top_p": None,
    "num_beams": 1,
    "max_new_tokens": 512
})()

eval_model(args)
```
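Since `temperature` is 0, the answer is generated greedily and is effectively deterministic. For text-only prompts, Hermes 2 Pro expects ChatML formatting; the snippet below only shows that prompt layout (reusing the widget example above) and leaves the generation backend, whether llama.cpp, transformers, or the llava runner, to your setup.

```python
# ChatML prompt layout expected by Hermes 2 Pro (illustration only; no generation backend shown).
system = ("You are a sentient, superintelligent artificial general intelligence, "
          "here to teach and assist me.")
user = ("Write a short story about Goku discovering kirby has teamed up with "
        "Majin Buu to destroy the world.")

prompt = (
    f"<|im_start|>system\n{system}<|im_end|>\n"
    f"<|im_start|>user\n{user}<|im_end|>\n"
    f"<|im_start|>assistant\n"
)
```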