LightChatAssistant-4x7B

Click here for the GGUF version

Overview

This model uses exactly the same method as Sdff-Ltba/LightChatAssistant-2x7B, but extends it to four models, which are then combined with MoE (Mixture of Experts). Many thanks to @Sdff-Ltba for proposing this excellent method.

Although the model is larger, only two experts are active at a time during inference, so as long as the model can be offloaded to the GPU it runs at roughly the same speed as the original Sdff-Ltba/LightChatAssistant-2x7B.
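As a rough illustration, here is a minimal sketch of running the GGUF version fully offloaded to the GPU with llama-cpp-python. The quantized file name and the generation settings are assumptions for illustration, not something specified by this model card.

from llama_cpp import Llama

# Hypothetical quantized file from the GGUF repository linked above.
llm = Llama(
    model_path="./LightChatAssistant-4x7B.Q4_K_M.gguf",
    n_gpu_layers=-1,  # offload every layer to the GPU
    n_ctx=4096,
)

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "自己紹介してください。"}],
    max_tokens=256,
)
print(result["choices"][0]["message"]["content"])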

The four source models, each enhanced for dialogue by adding a Chat Vector, are the ones that appear as expert sources in the MoE config below (chatntq, japanese-stablelm-instruct-gamma, Hameln-japanese-mistral, and Antler).

The Chat Vector is applied in the same way as in Sdff-Ltba/LightChatAssistant-2x7B: the difference between Mistral-7B-Instruct-v0.2 and Mistral-7B-v0.1 is multiplied by 0.8 and added to each of the models.
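The sketch below shows one way this Chat Vector addition can be done with plain state_dict arithmetic in transformers. It is an illustration under assumptions: the chatntq repository id is a guess at the intended source model, and tensors whose shapes differ (such as embeddings with a different vocabulary size) are simply skipped rather than handled specially.

import torch
from transformers import AutoModelForCausalLM

inst = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.2", torch_dtype=torch.bfloat16)
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", torch_dtype=torch.bfloat16)
# One of the four Japanese source models; the repo id here is an assumption.
target = AutoModelForCausalLM.from_pretrained(
    "NTQAI/chatntq-ja-7b-v1.0", torch_dtype=torch.bfloat16)

inst_sd, base_sd = inst.state_dict(), base.state_dict()
ratio = 0.8  # scaling factor applied to the chat vector

with torch.no_grad():
    for name, param in target.state_dict().items():
        # Skip tensors that do not line up (e.g. embeddings with a different vocab size).
        if name in inst_sd and name in base_sd and inst_sd[name].shape == param.shape:
            param += ratio * (inst_sd[name] - base_sd[name])

target.save_pretrained("./chatntq-chatvector")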

The four models were then merged into an MoE model using mergekit. The MoE config is as follows.

base_model: ./chatntq-chatvector
gate_mode: hidden
dtype: bfloat16
experts:
  - source_model: ./chatntq-chatvector
    positive_prompts:
    - "roleplay"
    - "question"
    - "answer"
    - "chat"
    - "companion"
    - "character"
    - "math"
    - "code"
    - "[Mode: Roleplay]"
    - "[Mode: Chat]"
    negative_prompts:
    - "storywriting"
    - "book"
    - "story"
    - "chapter"
    - "tale"
    - "history"
    - "write"
    - "novel"
    - "[Mode: Writing]"
  - source_model: ./japanese-stablelm-instruct-gamma-chatvector
    positive_prompts:
    - "roleplay"
    - "question"
    - "answer"
    - "chat"
    - "companion"
    - "character"
    - "math"
    - "code"
    - "[Mode: Roleplay]"
    - "[Mode: Chat]"
    negative_prompts:
    - "storywriting"
    - "book"
    - "story"
    - "chapter"
    - "tale"
    - "history"
    - "write"
    - "novel"
    - "[Mode: Writing]"
  - source_model: ./Hameln-japanese-mistral-chatvector
    positive_prompts:
    - "sex"
    - "storywriting"
    - "erotic"
    - "fuck"
    - "orgasm"
    - "uncensored"
    - "book"
    - "story"
    - "chapter"
    - "tale"
    - "history"
    - "write"
    - "novel"
    - "[Mode: Writing]"
  - source_model: ./Antler-chatvector
    positive_prompts:
    - "sex"
    - "storywriting"
    - "erotic"
    - "fuck"
    - "orgasm"
    - "uncensored"
    - "book"
    - "story"
    - "chapter"
    - "tale"
    - "history"
    - "write"
    - "novel"
    - "[Mode: Writing]"
tokenizer_source: union
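
The config above is the input to mergekit's MoE merge (typically run with the mergekit-moe script, passing the config file and an output directory; the exact invocation depends on the mergekit version). Below is a minimal sketch of loading the merged result with transformers and generating a reply. The local output path and the Mistral-style [INST] prompt format are assumptions for illustration.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_dir = "./LightChatAssistant-4x7B"  # assumed output directory of the merge
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForCausalLM.from_pretrained(
    model_dir, torch_dtype=torch.bfloat16, device_map="auto")

prompt = "[INST] 自己紹介してください。 [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))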