
BigWeave v8 90B

The BigWeave models aim to identify merge settings that equal or surpass the performance of Goliath-120b. The version number merely tracks the various attempts; it is not a quality indicator. Only merges that perform well are retained and shared.

This version is a passthrough merge of Platypus2-70b-instruct + WinterGoddess-1.4x-70b.

The 90B size allows 4-bit quants to fit into 48 GB of VRAM.
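A quick back-of-envelope check of that claim: at 4 bits per weight, the 87.8B parameters listed on this card come out to roughly 41 GiB of weights, leaving headroom for KV cache and activations within 48 GB. The helper below is a sketch; the exact bits-per-weight depends on the quantization format (block-wise formats carry some overhead above 4.0).

```python
# Back-of-envelope VRAM estimate for a 4-bit quant of this model.
# 87.8e9 params comes from the card; bits_per_weight is format-dependent.

PARAMS = 87.8e9

def quant_size_gib(params: float, bits_per_weight: float) -> float:
    """Approximate weight size in GiB for a given quantization density."""
    return params * bits_per_weight / 8 / 2**30

print(f"4.0 bpw: {quant_size_gib(PARAMS, 4.0):.1f} GiB")
# KV cache and activations need additional headroom on top of the weights.
```

Formats that use slightly more than 4 bits per weight (scales, zero points) push this closer to the 48 GB limit, so context length may need to be constrained.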

Prompting Format

Vicuna and Alpaca.
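For reference, the two templates can be sketched as below. The exact system strings are assumptions (both formats tolerate variations); adjust them to your serving stack.

```python
# Sketches of the Vicuna and Alpaca prompt templates named above.
# The system-prompt wording is an assumption, not taken from this card.

def vicuna_prompt(user_message: str) -> str:
    """Vicuna-style single-turn prompt."""
    system = ("A chat between a curious user and an artificial intelligence "
              "assistant. The assistant gives helpful, detailed answers.")
    return f"{system} USER: {user_message} ASSISTANT:"

def alpaca_prompt(instruction: str) -> str:
    """Alpaca-style instruction prompt."""
    return ("Below is an instruction that describes a task. "
            "Write a response that appropriately completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n### Response:\n")
```

Generation should begin immediately after `ASSISTANT:` or `### Response:`, so avoid trailing content after those markers.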

Merge process

The merge is a mergekit passthrough (layer-interleave) merge of Platypus2-70b-instruct and WinterGoddess-1.4x-70b.
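A passthrough merge of this kind is typically expressed as a mergekit slice config. The sketch below is illustrative only: the repo IDs and layer ranges are assumptions, not the actual BigWeave v8 recipe.

```yaml
# Hypothetical mergekit passthrough config (layer ranges are illustrative,
# not the actual BigWeave v8 recipe).
slices:
  - sources:
      - model: garage-bAInd/Platypus2-70B-instruct
        layer_range: [0, 40]
  - sources:
      - model: Sao10K/WinterGoddess-1.4x-70B-L2
        layer_range: [20, 60]
  - sources:
      - model: garage-bAInd/Platypus2-70B-instruct
        layer_range: [40, 80]
merge_method: passthrough
dtype: float16
```

Overlapping layer ranges like these are how a pair of 70B (80-layer) models yields a deeper ~90B model; `passthrough` copies the selected layers verbatim rather than averaging weights.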

Acknowledgements

@garage-bAInd for creating Platypus2.

@Sao10K for creating WinterGoddess.

@alpindale for creating the original Goliath.

@chargoddard for developing mergekit.

Model size: 87.8B params (Safetensors, FP16)