---
base_model: LeroyDyer/_Spydaz_Web_AI_ChatQA_Reasoning101_Project
datasets:
- gretelai/synthetic_text_to_sql
- HuggingFaceTB/cosmopedia
- teknium/OpenHermes-2.5
- Open-Orca/SlimOrca
- Open-Orca/OpenOrca
- cognitivecomputations/dolphin-coder
- databricks/databricks-dolly-15k
- yahma/alpaca-cleaned
- uonlp/CulturaX
- mwitiderrick/SwahiliPlatypus
- NexusAI-tddi/OpenOrca-tr-1-million-sharegpt
- Vezora/Open-Critic-GPT
- verifiers-for-code/deepseek_plans_test
- meta-math/MetaMathQA
- KbsdJames/Omni-MATH
- swahili
- Rogendo/English-Swahili-Sentence-Pairs
- ise-uiuc/Magicoder-Evol-Instruct-110K
- abacusai/ARC_DPO_FewShot
- abacusai/MetaMath_DPO_FewShot
- abacusai/HellaSwag_DPO_FewShot
- HaltiaAI/Her-The-Movie-Samantha-and-Theodore-Dataset
- HuggingFaceFW/fineweb
- occiglot/occiglot-fineweb-v0.5
- omi-health/medical-dialogue-to-soap-summary
- keivalya/MedQuad-MedicalQnADataset
- ruslanmv/ai-medical-dataset
- Shekswess/medical_llama3_instruct_dataset_short
- ShenRuililin/MedicalQnA
- virattt/financial-qa-10K
- PatronusAI/financebench
- takala/financial_phrasebank
- Replete-AI/code_bagel
- athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW
- IlyaGusev/gpt_roleplay_realm
- rickRossie/bluemoon_roleplay_chat_data_300k_messages
- jtatman/hypnosis_dataset
- Hypersniper/philosophy_dialogue
- Locutusque/function-calling-chatml
- bible-nlp/biblenlp-corpus
- DatadudeDev/Bible
- Helsinki-NLP/bible_para
- HausaNLP/AfriSenti-Twitter
- aixsatoshi/Chat-with-cosmopedia
- xz56/react-llama
- BeIR/hotpotqa
- YBXL/medical_book_train_filtered
- SkunkworksAI/reasoning-0.01
- THUDM/LongWriter-6k
- WhiteRabbitNeo/WRN-Chapter-1
- WhiteRabbitNeo/Code-Functions-Level-Cyber
- WhiteRabbitNeo/Code-Functions-Level-General
language:
- en
- sw
- ig
- so
- es
- ca
- xh
- zu
- ha
- tw
- af
- hi
- bm
- su
library_name: transformers
tags:
- mergekit
- merge
- Mistral_Star
- Mistral_Quiet
- Mistral
- Mixtral
- Question-Answer
- Token-Classification
- Sequence-Classification
- SpydazWeb-AI
- chemistry
- biology
- legal
- code
- climate
- medical
- LCARS_AI_StarTrek_Computer
- text-generation-inference
- chain-of-thought
- tree-of-knowledge
- forest-of-thoughts
- visual-spacial-sketchpad
- alpha-mind
- knowledge-graph
- entity-detection
- encyclopedia
- wikipedia
- stack-exchange
- Reddit
- Cyber-series
- MegaMind
- Cybertron
- SpydazWeb
- Spydaz
- LCARS
- star-trek
- mega-transformers
- Mulit-Mega-Merge
- Multi-Lingual
- Afro-Centric
- African-Model
- Ancient-One
- llama-cpp
- gguf-my-repo
---

# c10x/_Spydaz_Web_AI_ChatQA_Reasoning101_Project-Q4_K_M-GGUF

This model was converted to GGUF format from [LeroyDyer/_Spydaz_Web_AI_ChatQA_Reasoning101_Project](https://huggingface.co/LeroyDyer/_Spydaz_Web_AI_ChatQA_Reasoning101_Project) using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.
## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```shell
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.

### CLI:

```shell
llama-cli --hf-repo c10x/_Spydaz_Web_AI_ChatQA_Reasoning101_Project-Q4_K_M-GGUF --hf-file _spydaz_web_ai_chatqa_reasoning101_project-q4_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:

```shell
llama-server --hf-repo c10x/_Spydaz_Web_AI_ChatQA_Reasoning101_Project-Q4_K_M-GGUF --hf-file _spydaz_web_ai_chatqa_reasoning101_project-q4_k_m.gguf -c 2048
```
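Once the server is up, llama.cpp's `llama-server` exposes an OpenAI-compatible HTTP API. The sketch below builds a single-turn chat request against the `/v1/chat/completions` endpoint; the listen address `http://localhost:8080` is llama-server's default and is an assumption here (adjust it if you started the server with a different `--port`).

```python
# Minimal sketch of querying a running llama-server instance via its
# OpenAI-compatible /v1/chat/completions endpoint.
import json
import urllib.request


def build_chat_request(prompt, url="http://localhost:8080/v1/chat/completions"):
    """Build a POST request carrying a single-turn chat completion payload."""
    payload = {
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
        "max_tokens": 128,
    }
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


req = build_chat_request("The meaning to life and the universe is")
# With the server running, urllib.request.urlopen(req) returns the JSON
# completion; parse it with json.load(response).
```

The request is only constructed, not sent, so the snippet is safe to run without the server; swap in the `requests` library or any OpenAI client if you prefer.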
Note: You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```shell
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

```shell
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

```shell
./llama-cli --hf-repo c10x/_Spydaz_Web_AI_ChatQA_Reasoning101_Project-Q4_K_M-GGUF --hf-file _spydaz_web_ai_chatqa_reasoning101_project-q4_k_m.gguf -p "The meaning to life and the universe is"
```

or

```shell
./llama-server --hf-repo c10x/_Spydaz_Web_AI_ChatQA_Reasoning101_Project-Q4_K_M-GGUF --hf-file _spydaz_web_ai_chatqa_reasoning101_project-q4_k_m.gguf -c 2048
```