
Grand-Story-V1-F32-Ultra-Quality-16_5B

This repo contains the model's full-precision source files in safetensors format, for generating GGUF, GPTQ, EXL2, AWQ, HQQ and other quantized formats. The source files can also be used directly.
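
Because the weights are standard safetensors, they load directly with Hugging Face transformers. A minimal sketch (the model id is this repo's; the prompt and generation settings are illustrative, not tuned recommendations):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DavidAU/Grand-Story-V1-F32-Ultra-Quality-16_5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float32,  # source weights are stored in F32
    device_map="auto",
)

prompt = "The old house at the end of the lane had been empty for years."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))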

For full information about this model, including:

  • Details about this model and its use case(s).
  • Context limits.
  • Special usage notes / settings.
  • Any model(s) used to create this model.
  • Template(s) used to access/use this model.
  • Example generation(s).
  • GGUF quants of this model.

Please go to:

[ https://huggingface.co/DavidAU/L3-SMB-Grand-STORY-F32-Ultra-Quality-16.5B-GGUF ]

For Ultra Quality GGUF NEO Imatrix:

[ https://huggingface.co/DavidAU/L3-SMB-Grand-STORY-F32-Ultra-Quality-16.5B-NEO-V2-IMATRIX-GGUF ]

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the linear merge method.
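
A linear merge is a per-tensor weighted average of the parent models. A toy Python sketch of the idea (not mergekit's actual implementation; linear_merge is a hypothetical helper):

import torch

def linear_merge(state_a, state_b, w_a=0.8, w_b=0.2):
    # Each tensor in the result is a weighted average of the
    # corresponding tensors in the two parents (weights sum to 1).
    return {name: w_a * state_a[name] + w_b * state_b[name]
            for name in state_a}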

Models Merged

The following models were included in the merge:

  • G:/13B/Grand-Horror-16.5B-V1
  • G:\13B\Grand-Story-V1-F32-Ultra-Quality-16_5B\part1

Configuration

The following YAML configuration was used to produce this model:

dtype: float32                # merge and save in full precision
merge_method: linear          # weighted average of the parent tensors
slices:
- sources:
  - layer_range: [0, 71]
    model: G:/13B/Grand-Horror-16.5B-V1
    parameters:
      weight: 0.8             # dominant contribution
  - layer_range: [0, 71]
    model: G:\13B\Grand-Story-V1-F32-Ultra-Quality-16_5B\part1
    parameters:
      weight: 0.2
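
To reproduce the merge, the YAML above can be fed to mergekit's command-line tool. A sketch, assuming mergekit is installed (pip install mergekit) and the config is saved as merge-config.yml (both the config filename and the output path here are placeholders):

import subprocess

subprocess.run(
    ["mergekit-yaml", "merge-config.yml", "./merged-model",
     "--cuda"],  # drop --cuda to merge on CPU only
    check=True,
)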