
MoMo-70B-lora-1.8.6-DPO based model with gradient slerp

This is an English-language mixed (merged) model based on:

  • [moreh/MoMo-70B-lora-1.8.6-DPO]
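
For context, "gradient SLERP" merges the weights of two parent models with spherical linear interpolation (SLERP), where the interpolation factor changes gradually across layers instead of being a single constant. The sketch below only illustrates that idea on toy tensors, with a made-up layer-wise `t_schedule`; the actual merge was produced with a model-merging toolkit, not this code.

```python
import torch

def slerp(t, a, b, eps=1e-8):
    """Spherically interpolate between two weight tensors of the same shape."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    # Angle between the two weight vectors (computed on normalized copies).
    dot = torch.clamp(
        torch.dot(a_flat / (a_flat.norm() + eps), b_flat / (b_flat.norm() + eps)),
        -1.0, 1.0,
    )
    omega = torch.acos(dot)
    if omega.abs() < eps:  # nearly parallel: plain linear interpolation is fine
        return ((1 - t) * a + t * b).to(a.dtype)
    so = torch.sin(omega)
    blended = (torch.sin((1 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return blended.reshape(a.shape).to(a.dtype)

# "Gradient" schedule: early layers lean toward parent A, later layers toward parent B.
num_layers = 4  # toy size for illustration
t_schedule = torch.linspace(0.0, 1.0, num_layers)
layers_a = [torch.randn(8, 8) for _ in range(num_layers)]
layers_b = [torch.randn(8, 8) for _ in range(num_layers)]
merged = [slerp(float(t), wa, wb) for t, wa, wb in zip(t_schedule, layers_a, layers_b)]
print([m.shape for m in merged])
```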

GPU code example

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_path = "kodonho/Momo-70b-DPO-mixed"

tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False)

# Load in 4-bit so the 70B model fits in limited GPU memory; device_map="auto"
# spreads the layers across the available GPUs.
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    device_map="auto",
    local_files_only=False,
    load_in_4bit=True,
)
print(model)

# Simple interactive loop: an empty prompt exits.
prompt = input("please input prompt:")
while len(prompt) > 0:
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")

    generation_output = model.generate(
        input_ids=input_ids,
        max_new_tokens=500,
        repetition_penalty=1.2,
    )
    print(tokenizer.decode(generation_output[0]))
    prompt = input("please input prompt:")
```
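
Note: recent versions of `transformers` route 4-bit loading through `BitsAndBytesConfig` instead of the bare `load_in_4bit` argument. A minimal equivalent sketch, assuming `bitsandbytes` is installed:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Quantize weights to 4-bit but run the matmuls in fp16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "kodonho/Momo-70b-DPO-mixed",
    quantization_config=bnb_config,
    device_map="auto",
)
```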
