
Uploaded model

  • Developed by: Leroy "Spydaz" Dyer
  • License: apache-2.0
  • Finetuned from model: LeroyDyer/SpydazWebAI_004 (https://github.com/spydaz)
  • The Mistral-7B-Instruct-v0.2 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-7B-v0.2.

  • Mistral-7B-v0.2 has the following changes compared to Mistral-7B-v0.1 (see the config sketch after this list):

    • 32k context window (vs 8k context in v0.1)
    • Rope-theta = 1e6
    • No Sliding-Window Attention
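
As a rough, hedged illustration (not part of the original card), the snippet below loads the checkpoint's configuration with the Hugging Face transformers library and prints the fields where these v0.2 changes show up; the exact values depend on the uploaded checkpoint.

```python
# Sketch: inspect the config fields that changed between Mistral v0.1 and v0.2.
# The repo id is taken from this card; field values depend on the checkpoint.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("LeroyDyer/Spydaz_Web_AI_")

print(config.max_position_embeddings)  # 32k context window (v0.1 shipped with 8k)
print(config.rope_theta)               # expected 1e6
print(config.sliding_window)           # expected None (no sliding-window attention)
```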

Introduction:

SpydazWeb AI model:

Methods:

Trained for multi-task operations as well as RAG (retrieval-augmented generation) and function calling.
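
As a hedged sketch of what function calling can look like in practice (the tool name, prompt wording, and JSON convention below are illustrative assumptions, not a documented interface of this model): describe the available tool in the prompt and parse a JSON call from the reply.

```python
# Hypothetical function-calling pattern: tool described in the prompt,
# JSON call parsed from the model's reply. Tool name and prompt wording
# are assumptions for illustration only.
import json
from transformers import pipeline

generator = pipeline("text-generation", model="LeroyDyer/Spydaz_Web_AI_")

prompt = (
    "You can call the tool get_weather(city: str). "
    'Reply only with JSON like {"tool": "get_weather", "args": {"city": "..."}}.\n'
    "User: What is the weather in Paris?\n"
    "Assistant:"
)

reply = generator(prompt, max_new_tokens=64)[0]["generated_text"][len(prompt):]
try:
    call = json.loads(reply.strip())
    print(call["tool"], call["args"])        # e.g. get_weather {'city': 'Paris'}
except json.JSONDecodeError:
    print("No parseable tool call:", reply)
```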

This model is fully functional and fully uncensored.

The model has been trained on multiple datasets from the Hugging Face Hub and Kaggle.

The focus has been mainly on methodology (a minimal prompting sketch follows this list):

  • Chain of thought
  • Step by step
  • Tree of thoughts
  • Forest of thoughts
  • Graph of thoughts
  • Agent generation: voting, ranking, ...
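
As a minimal sketch of the voting idea over sampled reasoning chains (self-consistency), the snippet below samples several step-by-step answers and keeps the majority result; the prompt wording and answer-extraction rule are illustrative assumptions, not the card's documented recipe.

```python
# Rough sketch of "voting" over sampled chains of thought (self-consistency).
# Prompt wording and the answer-extraction rule are assumptions for illustration.
from collections import Counter
from transformers import pipeline

generator = pipeline("text-generation", model="LeroyDyer/Spydaz_Web_AI_")

prompt = (
    "Question: A train travels 60 km in 45 minutes. "
    "What is its average speed in km/h?\n"
    "Let's think step by step, then give the final answer after 'Answer:'."
)

# Sample several independent reasoning chains.
outputs = generator(prompt, max_new_tokens=128, do_sample=True,
                    temperature=0.7, num_return_sequences=5)

# Vote on the final answers extracted from each chain.
answers = [o["generated_text"].rsplit("Answer:", 1)[-1].strip() for o in outputs]
print(Counter(answers).most_common(1)[0][0])
```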

With these methods the model has gained insights into tasks, enabling knowledge transfer between tasks.

The model has been intensively trained in recalling data previously entered into the matrix.

This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.
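
A minimal sketch of what that Unsloth + TRL recipe typically looks like; the dataset, LoRA settings, and exact arguments are assumptions (APIs vary across unsloth/trl versions), not the author's actual training script.

```python
# Hedged sketch of an Unsloth + TRL (SFTTrainer) fine-tuning run.
# Dataset, LoRA settings, and hyperparameters are placeholders, and exact
# argument names vary across unsloth/trl versions.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="LeroyDyer/SpydazWebAI_004",  # base checkpoint named above
    max_seq_length=4096,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small set of weights is trained.
model = FastLanguageModel.get_peft_model(
    model, r=16, lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Placeholder: any dataset with a "text" field.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=4096,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=100,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```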

GGUF: 7.24B params, llama architecture, available in 4-bit and 16-bit quantizations.
