
Dulia 13B 8K (Alpha) (09082023)

Model Description

Dulia 13B is a long-conversation chat model with an 8K context size, trained on the Dolphin dataset (Dolphin) and (Chat) using the (OpenAssistant SFT Trainer).

Usage

pip install -q transformers accelerate sentencepiece scipy torch
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Check for bfloat16 support (Ampere or newer, compute capability >= 8).
# Older GPUs such as the T4 do not support bfloat16, so fall back to float16.
dtype = torch.bfloat16 if torch.cuda.get_device_capability()[0] >= 8 else torch.float16

model_id = "duliadotio/dulia-13b-8k-alpha"

tokenizer = AutoTokenizer.from_pretrained(model_id)

model = AutoModelForCausalLM.from_pretrained(
  model_id,
  torch_dtype=dtype,
  low_cpu_mem_usage=True,
  device_map="cuda"
)

system_message = "Dulia AI is a helpful and honest assistant designed by Dulia Inc. Take a step by step approach to answer user's query. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information."
system_prompt = f"<|system|>{system_message}</s>"

def infer(user_prompt, history = "", skip_special_tokens=False):
    prompt = ""
    if history == "":
        prompt += system_prompt
    prompt += history + f"<|prompter|>{user_prompt}</s><|assistant|>"
    inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
    output = model.generate(**inputs, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=512)
    
    return tokenizer.decode(output[0], skip_special_tokens=skip_special_tokens)

user_prompt = "What is your name?"

# This is the first message, so we don't need to pass the history.
response = infer(user_prompt)

user_prompt = "Can you write me an email?"
response = infer(user_prompt, response)
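
Because skip_special_tokens=False keeps the full prompt and template tokens in response, you may want only the newest assistant turn for display. A minimal sketch, assuming the conversation template described below; the helper name last_assistant_reply is illustrative and not part of the original code:

# Illustrative helper (not part of the original card): return only the newest
# assistant reply by splitting on the template markers.
def last_assistant_reply(decoded_output):
    reply = decoded_output.split("<|assistant|>")[-1]
    return reply.replace("</s>", "").strip()

print(last_assistant_reply(response))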

Long context (RoPE Scaling)

This model is fine-tuned with a context size of 8192 tokens using linear scaling of RoPE embeddings.
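
The released checkpoint's config should already carry these settings, but recent transformers versions (4.31+) also accept a rope_scaling override for Llama models when loading. A sketch under that assumption; the factor of 2.0 reflects Llama 2's native 4096-token window:

# Sketch: request linear RoPE scaling explicitly when loading (transformers >= 4.31).
# Llama 2's base context is 4096 tokens, so a factor of 2.0 gives 8192.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=dtype,
    low_cpu_mem_usage=True,
    device_map="cuda",
    rope_scaling={"type": "linear", "factor": 2.0},
)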

Conversation Template

The model is trained on the OpenAssistant chat prompt format:

<|system|>system message</s><|prompter|>user prompt</s><|assistant|>

For multi-turn conversations, use:

<|system|>system message</s><|prompter|>User Question 1</s><|assistant|>Answer 1</s><|prompter|>User Question 2</s><|assistant|>
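
If you keep the conversation history yourself rather than reusing the decoded output, a prompt for the next turn can be assembled directly from the template. A minimal sketch (the build_prompt helper is illustrative, not part of the card):

# Illustrative: assemble a multi-turn prompt following the template above.
# turns is a list of (user, assistant) pairs from earlier exchanges.
def build_prompt(system_message, turns, next_user_prompt):
    prompt = f"<|system|>{system_message}</s>"
    for user, assistant in turns:
        prompt += f"<|prompter|>{user}</s><|assistant|>{assistant}</s>"
    prompt += f"<|prompter|>{next_user_prompt}</s><|assistant|>"
    return prompt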

Ethical Considerations and Limitations

Dulia is based on Llama 2 and, as a new technology, carries risks with use. Testing conducted to date has been in English and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Dulia's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased, or otherwise objectionable responses to user prompts. Therefore, before deploying any applications of the model, developers should perform safety testing and tuning tailored to their specific applications.

Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/

License

  • Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.
  • Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials.

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

Metric                 Value
-------------------    -----
Avg.                   49.67
ARC (25-shot)          60.67
HellaSwag (10-shot)    82.0
MMLU (5-shot)          56.87
TruthfulQA (0-shot)    42.59
Winogrande (5-shot)    77.19
GSM8K (5-shot)         10.69
DROP (3-shot)          17.72