
Model card of JOSIExMistral-7B-Instruct-v0.2

This is mistralai/Mistral-7B-Instruct-v0.2 with custom special tokens added to the tokenizer.

Original Model

This model is based on mistralai/Mistral-7B-Instruct-v0.2 with added custom special tokens. It will most likely serve as the base for my next model, trained on my own dataset.

--> GGUF Quants <--


Newly Added Special Tokens

'<|functions|>',
'<|system|>',
'<|gökdeniz|>',
'<|user|>',
'<|josie|>',
'<|assistant|>',
'<|function_call|>',
'<|function_response|>',
'<|image|>',
'<|long_term_memory|>',
'<|short_term_memory|>',
'<|home_state|>',
'<|current_states|>',
'<|context|>',
'<|im_start|>',
'<|im_end|>'
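The card does not document a chat template for these role tokens, so the following is only a hypothetical sketch of how the `<|system|>`, `<|user|>`, and `<|josie|>` tokens might be composed into a prompt; the actual ordering and formatting are assumptions.

```python
def build_prompt(system_message: str, user_message: str) -> str:
    """Hypothetical assembly of a prompt from the card's role tokens.
    The real template is not documented in the card, so the token
    order used here is an assumption for illustration only."""
    return (
        f"<|system|>{system_message}"
        f"<|user|>{user_message}"
        "<|josie|>"  # generation would continue after the role token
    )

prompt = build_prompt("You are J.O.S.I.E.", "Hello!")
print(prompt)
```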

New BOS and EOS Tokens

BOS = '<|startoftext|>'
EOS = '<|endoftext|>'
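The token additions also explain the vocabulary size in the architecture dump: Mistral-7B-Instruct-v0.2 ships with a 32,000-token vocabulary, and the 16 special tokens plus the new BOS and EOS bring it to 32,018, matching the embedding and `lm_head` dimensions below. A quick check of the arithmetic:

```python
# The 16 role/control tokens listed above
special_tokens = [
    "<|functions|>", "<|system|>", "<|gökdeniz|>", "<|user|>",
    "<|josie|>", "<|assistant|>", "<|function_call|>", "<|function_response|>",
    "<|image|>", "<|long_term_memory|>", "<|short_term_memory|>", "<|home_state|>",
    "<|current_states|>", "<|context|>", "<|im_start|>", "<|im_end|>",
]
bos_eos = ["<|startoftext|>", "<|endoftext|>"]  # the new BOS/EOS tokens

base_vocab = 32000  # Mistral-7B-Instruct-v0.2 base vocabulary size
new_vocab = base_vocab + len(special_tokens) + len(bos_eos)
print(new_vocab)  # 32018
```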

Model Architecture:

MistralForCausalLM(
  (model): MistralModel(
    (embed_tokens): Embedding(32018, 4096)
    (layers): ModuleList(
      (0-31): 32 x MistralDecoderLayer(
        (self_attn): MistralSdpaAttention(
          (q_proj): Linear(in_features=4096, out_features=4096, bias=False)
          (k_proj): Linear(in_features=4096, out_features=1024, bias=False)
          (v_proj): Linear(in_features=4096, out_features=1024, bias=False)
          (o_proj): Linear(in_features=4096, out_features=4096, bias=False)
          (rotary_emb): MistralRotaryEmbedding()
        )
        (mlp): MistralMLP(
          (gate_proj): Linear(in_features=4096, out_features=14336, bias=False)
          (up_proj): Linear(in_features=4096, out_features=14336, bias=False)
          (down_proj): Linear(in_features=14336, out_features=4096, bias=False)
          (act_fn): SiLU()
        )
        (input_layernorm): MistralRMSNorm()
        (post_attention_layernorm): MistralRMSNorm()
      )
    )
    (norm): MistralRMSNorm()
  )
  (lm_head): Linear(in_features=4096, out_features=32018, bias=False)
)

Model size: 7.25B parameters
Tensor type: FP16 (Safetensors)
