---
base_model:
- LeroyDyer/SpydazWeb_AI_LCARS_Humanization_003
library_name: transformers
tags:
- mergekit
- merge
- mistral
- Mixtral
- Mistral_Star
- Mistral_Quiet
- Mistral
- Question-Answer
- Token-Classification
- Sequence-Classification
- SpydazWeb-AI
- chemistry
- biology
- legal
- code
- climate
- medical
- LCARS_AI_StarTrek_Computer
- text-generation-inference
- chain-of-thought
- tree-of-knowledge
- forest-of-thoughts
- visual-spacial-sketchpad
- alpha-mind
- knowledge-graph
- entity-detection
- encyclopedia
- wikipedia
- stack-exchange
- Reddit
- Cyber-series
- MegaMind
- Cybertron
- SpydazWeb
- Spydaz
- LCARS
- star-trek
- mega-transformers
- Mulit-Mega-Merge
- Multi-Lingual
- Afro-Centric
- African-Model
- Ancient-One
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
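Once published, the merged checkpoint can be loaded like any other Mistral-family model through the `transformers` library. The sketch below is illustrative only: the repository id is a placeholder for wherever this merge is hosted, and `device_map="auto"` assumes `accelerate` is installed.

```python
# Minimal usage sketch — replace the placeholder repo id with the actual
# repository (or local path) that holds this merged model.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "path/to/this-merged-model"  # placeholder, not a real repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype="auto",   # keep the float16 weights produced by the merge
    device_map="auto",    # requires `accelerate`; drop for CPU-only loading
)

prompt = "Computer, summarise the last away mission."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```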
## Merge Details

### Merge Method

This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
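Conceptually, a linear merge is a weighted average of each model's parameter tensors. The snippet below is a minimal sketch of that idea, not mergekit's actual implementation (which also handles sharded checkpoints, tokenizer alignment, and dtype conversion):

```python
# Conceptual sketch of a linear merge: element-wise weighted average of the
# corresponding tensors in several state dicts with identical architectures.
import torch

def linear_merge(state_dicts, weights, normalize=True):
    if normalize:  # rescale the weights to sum to 1, as mergekit does by default
        total = sum(weights)
        weights = [w / total for w in weights]
    merged = {}
    for name in state_dicts[0]:
        merged[name] = sum(
            w * sd[name].float() for sd, w in zip(state_dicts, weights)
        ).to(torch.float16)  # match the dtype declared in the configuration below
    return merged
```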
### Models Merged

The following models were included in the merge:

* [LeroyDyer/SpydazWeb_AI_LCARS_Humanization_003](https://huggingface.co/LeroyDyer/SpydazWeb_AI_LCARS_Humanization_003)
* SpydazWeb_HumanAI_M2
* SpydazWeb_HumanAI_M3
* SpydazWeb_HumanAI_M1

### Configuration

The following YAML configuration was used to produce this model:
```yaml
models:
  - model: SpydazWeb_HumanAI_M1
    parameters:
      weight: 0.256
  - model: SpydazWeb_HumanAI_M2
    parameters:
      weight: 0.256
  - model: SpydazWeb_HumanAI_M3
    parameters:
      weight: 0.512
  - model: LeroyDyer/SpydazWeb_AI_LCARS_Humanization_003
    parameters:
      weight: 0.768
merge_method: linear
dtype: float16
```
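Note that the listed weights sum to 1.792 rather than 1; with the linear method's default weight normalization they act as relative proportions (roughly 0.14, 0.14, 0.29 and 0.43), so LeroyDyer/SpydazWeb_AI_LCARS_Humanization_003 contributes the largest share of the final parameters. To reproduce the merge, this configuration can be saved as `config.yml` and passed to the mergekit CLI, e.g. `mergekit-yaml config.yml ./merged-model`.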