---
base_model: LeroyDyer/SpydazWeb_AI_HumanAI_007
language:
- en
- sw
- ig
- so
- es
- ca
- xh
- zu
- ha
- tw
- af
- hi
- bm
- su
license: apache-2.0
datasets:
- neoneye/base64-decode-v2
- neoneye/base64-encode-v1
- VuongQuoc/Chemistry_text_to_image
- Kamizuru00/diagram_image_to_text
- LeroyDyer/Chemistry_text_to_image_BASE64
- LeroyDyer/AudioCaps-Spectrograms_to_Base64
- LeroyDyer/winogroud_text_to_imaget_BASE64
- LeroyDyer/chart_text_to_Base64
- LeroyDyer/diagram_image_to_text_BASE64
- mekaneeky/salt_m2e_15_3_instruction
- mekaneeky/SALT-languages-bible
- xz56/react-llama
- BeIR/hotpotqa
- arcee-ai/agent-data
tags:
- mergekit
- merge
- Mistral_Star
- Mistral_Quiet
- Mistral
- Mixtral
- Question-Answer
- Token-Classification
- Sequence-Classification
- SpydazWeb-AI
- chemistry
- biology
- legal
- code
- climate
- medical
- LCARS_AI_StarTrek_Computer
- text-generation-inference
- chain-of-thought
- tree-of-knowledge
- forest-of-thoughts
- visual-spacial-sketchpad
- alpha-mind
- knowledge-graph
- entity-detection
- encyclopedia
- wikipedia
- stack-exchange
- Reddit
- Cyber-series
- MegaMind
- Cybertron
- SpydazWeb
- Spydaz
- LCARS
- star-trek
- mega-transformers
- Mulit-Mega-Merge
- Multi-Lingual
- Afro-Centric
- African-Model
- Ancient-One
- llama-cpp
- gguf-my-repo
---
# LeroyDyer/SpydazWeb_AI_HumanAI_007-Q4_K_M-GGUF
This model was converted to GGUF format from [`LeroyDyer/SpydazWeb_AI_HumanAI_007`](https://huggingface.co/LeroyDyer/SpydazWeb_AI_HumanAI_007) using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.
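The `Q4_K_M` suffix refers to llama.cpp's 4-bit k-quant format, which stores weights in small blocks with per-block scaling so that each weight fits in 4 bits. The sketch below illustrates the block-wise idea only; the real Q4_K_M layout is different (it packs super-blocks with separate quantized scales and minimums).

```python
# Illustrative block-wise 4-bit quantization (NOT the exact ggml Q4_K_M
# layout): each block of 32 floats is mapped to integers 0..15 plus one
# (scale, min) pair, then reconstructed as min + scale * q.

def quantize_blocks(values, block_size=32):
    """Quantize floats to 4-bit codes with one (scale, min) per block."""
    blocks = []
    for i in range(0, len(values), block_size):
        block = values[i:i + block_size]
        lo, hi = min(block), max(block)
        scale = (hi - lo) / 15 or 1.0  # avoid zero scale for constant blocks
        q = [round((v - lo) / scale) for v in block]  # each q fits in 4 bits
        blocks.append((scale, lo, q))
    return blocks

def dequantize(blocks):
    """Reconstruct approximate floats from (scale, min, codes) blocks."""
    out = []
    for scale, lo, q in blocks:
        out.extend(lo + scale * v for v in q)
    return out
```

The reconstruction error per weight is bounded by half the block's scale, which is why quantized models trade a small accuracy loss for a roughly 4x size reduction versus 16-bit weights.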
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):

```shell
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.

### CLI:

```shell
llama-cli --hf-repo LeroyDyer/SpydazWeb_AI_HumanAI_007-Q4_K_M-GGUF --hf-file spydazweb_ai_humanai_007-q4_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:

```shell
llama-server --hf-repo LeroyDyer/SpydazWeb_AI_HumanAI_007-Q4_K_M-GGUF --hf-file spydazweb_ai_humanai_007-q4_k_m.gguf -c 2048
```
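Once `llama-server` is running, it exposes an HTTP API. A minimal Python client sketch is below, assuming the server's default port (8080) and its `/completion` endpoint, which accepts a JSON body with `prompt` and `n_predict` (maximum tokens to generate) and returns the generation under the `content` key:

```python
import json
import urllib.request

def build_payload(prompt: str, n_predict: int = 64) -> bytes:
    """JSON request body for llama-server's /completion endpoint."""
    return json.dumps({"prompt": prompt, "n_predict": n_predict}).encode("utf-8")

def complete(prompt: str, base_url: str = "http://localhost:8080") -> str:
    """POST the prompt to the running server and return the generated text."""
    req = urllib.request.Request(
        base_url + "/completion",
        data=build_payload(prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["content"]
```

With the server command above running locally, `complete("The meaning to life and the universe is")` should return the model's continuation of the prompt.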
Note: You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```shell
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

```shell
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

```shell
./llama-cli --hf-repo LeroyDyer/SpydazWeb_AI_HumanAI_007-Q4_K_M-GGUF --hf-file spydazweb_ai_humanai_007-q4_k_m.gguf -p "The meaning to life and the universe is"
```

or

```shell
./llama-server --hf-repo LeroyDyer/SpydazWeb_AI_HumanAI_007-Q4_K_M-GGUF --hf-file spydazweb_ai_humanai_007-q4_k_m.gguf -c 2048
```