Superkarakuri-lm-chat-70b-v0.1-GGUF

Description

This is the quantized GGUF version of Aratako/Superkarakuri-lm-chat-70b-v0.1. Please refer to the original model for license details and more information.

Currently, only Q4_K_M is available. If there is demand, other quantization levels may be provided as well.
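As a rough guide to disk and memory requirements, the expected GGUF file size can be estimated from the parameter count and the quantization bit-width. A minimal sketch, assuming Q4_K_M averages about 4.8 bits per weight (a commonly cited approximation; the exact figure varies by model):

```python
# Rough size estimate for a Q4_K_M quantized model.
# Assumption: Q4_K_M averages ~4.8 bits per weight (actual files vary slightly).
params = 69.2e9          # parameter count from this model card
bits_per_weight = 4.8    # assumed average for Q4_K_M
size_gb = params * bits_per_weight / 8 / 1e9  # bits -> bytes -> GB
print(f"~{size_gb:.0f} GB")  # → ~42 GB
```

This suggests the Q4_K_M file needs on the order of 40+ GB of disk space, and comparable memory (RAM plus VRAM) to load for inference.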

Model details

Format: GGUF
Model size: 69.2B params
Architecture: llama
Quantization: 4-bit (Q4_K_M)
