LeroyDyer/Mixtral_AI_base_128k_7b

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the linear merge method, which computes a weighted average of the parent models' parameters.
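For intuition, a linear merge takes every parameter tensor and averages it across the parent checkpoints according to the configured weights. Below is a minimal sketch in plain PyTorch, not mergekit's actual implementation; the weight normalization step is an assumption based on mergekit's documented default for the linear method.

import torch

def linear_merge(state_dicts, weights, normalize=True):
    # Normalize weights to sum to 1 (assumed default behaviour of
    # mergekit's linear method). With the weights in the config below,
    # the biomedical model contributes roughly 83% and the instruct
    # model roughly 17% (1.6128 and 0.3312 out of a 1.944 total).
    if normalize:
        total = sum(weights)
        weights = [w / total for w in weights]
    merged = {}
    for name in state_dicts[0]:
        # Weighted sum of the same tensor across all checkpoints.
        merged[name] = sum(
            w * sd[name].float() for w, sd in zip(weights, state_dicts)
        ).to(torch.float16)  # cast to float16, matching dtype in the config
    return merged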

Models Merged

The following models were included in the merge:

LeroyDyer/Mixtral_AI_128k_bioMedical
filipealmeida/Mistral-7B-Instruct-v0.1-sharded

Configuration

The following YAML configuration was used to produce this model:


models:
  - model: LeroyDyer/Mixtral_AI_128k_bioMedical
    parameters:
      weight: 1.6128
  - model: filipealmeida/Mistral-7B-Instruct-v0.1-sharded
    parameters:
      weight: 0.3312
merge_method: linear
dtype: float16
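
To reproduce the merge, the configuration above can be saved as config.yml and run through mergekit's command-line entry point (invocation as documented in the mergekit README; the output path is illustrative):

mergekit-yaml config.yml ./output-model-directory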
Model size: 7.24B params
Architecture: llama
Formats: GGUF (8-bit quantization available)
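
A minimal inference sketch, assuming the standard transformers API (device_map="auto" additionally requires the accelerate package; the prompt is illustrative):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LeroyDyer/Mixtral_AI_base_128k_7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # the merge was produced in float16
    device_map="auto",          # requires the accelerate package
)

prompt = "Briefly explain the benefit of a 128k-token context window."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))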

