
merge

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the linear merge method.
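For reference, the linear method simply takes a weighted average of each tensor across the source models, normalized by the total weight. Below is a minimal sketch of that arithmetic (illustrative only, not mergekit's actual implementation); it also shows why the zero-weight "dummy" entries in the configuration further down are no-ops:

import torch

def linear_merge(tensors, weights):
    # Weighted average of matching tensors from the source models,
    # normalized by the total weight.
    total = sum(weights)
    return sum(w * t for w, t in zip(weights, tensors)) / total

# With weights [1.0, 0.0] the result equals the first tensor, which is
# why a zero-weight second model changes nothing in the merged slice.
a, b = torch.randn(4, 4), torch.randn(4, 4)
assert torch.allclose(linear_merge([a, b], [1.0, 0.0]), a)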

Models Merged

The following models were included in the merge:

- cognitivecomputations/dolphin-2.9.1-llama-3-70b
- migtissera/Llama-3-70B-Synthia-v3.5
- codellama/CodeLlama-70b-Instruct-hf
- abacusai/Smaug-Llama-3-70B-Instruct-32K

Configuration

The following YAML configuration was used to produce this model:

merge_method: linear # use linear so we can include multiple models, albeit at a zero weight
parameters:
  weight: 1.0 # weight everything as 1 unless specified otherwise - linear with one model weighted at 1 is a no-op like passthrough
slices:
  - sources:
      - model: cognitivecomputations/dolphin-2.9.1-llama-3-70b # embed_tokens comes along for the ride with whatever model supplies the first layer
        layer_range: [0, 1]
      - model: migtissera/Llama-3-70B-Synthia-v3.5 # add dummy second model with 0 weight so tokenizer-based merge routine is invoked for embed_tokens
        layer_range: [0, 1]
        parameters:
          weight: 0
  - sources:
      - model: cognitivecomputations/dolphin-2.9.1-llama-3-70b
        layer_range: [1, 20]
  - sources:
      - model: migtissera/Llama-3-70B-Synthia-v3.5
        layer_range: [10, 30]
  - sources:
      - model: codellama/CodeLlama-70b-Instruct-hf
        layer_range: [20, 40]
  - sources:
      - model: abacusai/Smaug-Llama-3-70B-Instruct-32K
        layer_range: [25, 45]
  - sources:
      - model: cognitivecomputations/dolphin-2.9.1-llama-3-70b
        layer_range: [30, 50]
  - sources:
      - model: migtissera/Llama-3-70B-Synthia-v3.5
        layer_range: [40, 60]
  - sources:
      - model: codellama/CodeLlama-70b-Instruct-hf
        layer_range: [50, 70]
  - sources:
      - model: abacusai/Smaug-Llama-3-70B-Instruct-32K
        layer_range: [55, 75]
  - sources:
      - model: cognitivecomputations/dolphin-2.9.1-llama-3-70b
        layer_range: [60, 79]
  - sources: # same as above, but for lm_head with the last layer
      - model: cognitivecomputations/dolphin-2.9.1-llama-3-70b
        layer_range: [79, 80]
      - model: migtissera/Llama-3-70B-Synthia-v3.5
        layer_range: [79, 80]
        parameters:
          weight: 0
dtype: float16
tokenizer_source: model:cognitivecomputations/dolphin-2.9.1-llama-3-70b 
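To reproduce the merge, this configuration can be saved as a YAML file and passed to mergekit's mergekit-yaml command line tool. The merged model then loads like any other Llama-architecture checkpoint; a minimal sketch using transformers, assuming accelerate is installed and there is enough memory for a ~156B-parameter model in float16:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "wassemgtk/mergekit-linear-tdzebun"  # this repository
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches dtype: float16 in the config above
    device_map="auto",          # assumption: accelerate handles device placement
)

prompt = "Explain model merging in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))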
Model size

The 11 slices above stack 180 transformer layers, versus 80 in a single Llama 3 70B, for roughly 156B parameters stored as float16 safetensors.
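The layer count can be verified directly from the slice ranges in the configuration:

slices = [(0, 1), (1, 20), (10, 30), (20, 40), (25, 45), (30, 50),
          (40, 60), (50, 70), (55, 75), (60, 79), (79, 80)]
print(sum(end - start for start, end in slices))  # prints 180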