
Model Card for Sandiago21/falcon-40b-prompt-answering

This repository contains a falcon-40b model further fine-tuned on conversations and question-answering prompts.

I used falcon-40b (https://huggingface.co/tiiuae/falcon-40b) as the base model, so this model carries the same license as falcon-40b (Apache-2.0).

Model Details

Anyone can try out the model (i.e., send it prompts) using the pre-existing Jupyter Notebook in the notebooks folder. The notebook contains example code to load the model and query it, as well as example prompts to get you started.

Model Description

The tiiuae/falcon-40b model was fine-tuned on conversations and question-answering prompts.

Developed by: [More Information Needed]

Shared by: [More Information Needed]

Model type: Causal LM

Language(s) (NLP): English, multilingual

License: Apache-2.0

Finetuned from model: tiiuae/falcon-40b

Model Sources [optional]

Repository: [More Information Needed]

Paper: [More Information Needed]

Demo: [More Information Needed]

Uses

The model can be used for prompt answering.

Direct Use

The model can be used directly for prompt answering.

Downstream Use

Generating text and answering prompts.

Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.

Usage

Creating a prompt

The model was trained on prompts in the following format:

def generate_prompt(prompt: str) -> str:
    # Wrap the user prompt in the <human>/<assistant> format used during training
    return f"""
<human>: {prompt}
<assistant>:
""".strip()

How to Get Started with the Model

Use the code below to get started with the model.

  1. You can git clone the repo, which also contains the artifacts for the base model for simplicity and completeness, and run the following code snippet to load the model:
import torch
from peft import PeftConfig, PeftModel
from transformers import GenerationConfig, AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

MODEL_NAME = "."  # path to the local clone of this repo

config = PeftConfig.from_pretrained(MODEL_NAME)

compute_dtype = getattr(torch, "float16")  # dtype used for 4-bit computation

# Quantize the 40B base model to 4-bit NF4 so it fits in GPU memory
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=compute_dtype,
    bnb_4bit_use_double_quant=True,
)

model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

# Attach the fine-tuned PEFT adapters on top of the quantized base model
model = PeftModel.from_pretrained(model, MODEL_NAME)

# Generation settings shared by the examples below
generation_config = model.generation_config
generation_config.top_p = 0.7
generation_config.num_return_sequences = 1
generation_config.max_new_tokens = 32
generation_config.use_cache = False
generation_config.pad_token_id = tokenizer.eos_token_id
generation_config.eos_token_id = tokenizer.eos_token_id

model.eval()
if torch.__version__ >= "2":
    model = torch.compile(model)  # optional speed-up on PyTorch 2.x

Example of Usage

prompt = "What is the capital city of Greece and with which countries does Greece border?"

prompt = generate_prompt(prompt)
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
input_ids = input_ids.to(model.device)

with torch.no_grad():
    outputs = model.generate(
        input_ids=input_ids,
        generation_config=generation_config,
        return_dict_in_generate=True,
        output_scores=True,
    )

response = tokenizer.decode(outputs.sequences[0], skip_special_tokens=True)
print(response)

>>> The capital city of Greece is Athens and it borders Albania, Bulgaria, Macedonia, and Turkey.
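
The decoded text includes the prompt itself; to keep only the model's answer, you can split on the <assistant>: marker from the prompt template above (a minimal post-processing sketch):

assistant_reply = response.split("<assistant>:")[-1].strip()
print(assistant_reply)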
  2. You can also load the model directly from the Hugging Face Hub using the following code snippet:
import torch
from peft import PeftConfig, PeftModel
from transformers import GenerationConfig, AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

MODEL_NAME = "Sandiago21/falcon-40b-prompt-answering"
BASE_MODEL = "tiiuae/falcon-40b"

compute_dtype = getattr(torch, "float16")

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=compute_dtype,
    bnb_4bit_use_double_quant=True,
)

model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

model = PeftModel.from_pretrained(model, MODEL_NAME)

generation_config = model.generation_config
generation_config.top_p = 0.7
generation_config.num_return_sequences = 1
generation_config.max_new_tokens = 32
generation_config.use_cache = False
generation_config.pad_token_id = tokenizer.eos_token_id
generation_config.eos_token_id = tokenizer.eos_token_id

model.eval()
if torch.__version__ >= "2":
    model = torch.compile(model)

Example of Usage

prompt = "What is the capital city of Greece and with which countries does Greece border?"

prompt = generate_prompt(prompt)
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
input_ids = input_ids.to(model.device)

with torch.no_grad():
    outputs = model.generate(
        input_ids=input_ids,
        generation_config=generation_config,
        return_dict_in_generate=True,
        output_scores=True,
    )

response = tokenizer.decode(outputs.sequences[0], skip_special_tokens=True)
print(response)

>>> The capital city of Greece is Athens and it borders Albania, Bulgaria, Macedonia, and Turkey.

Training Details

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 4
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 8
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 50
  • num_epochs: 2
  • mixed_precision_training: Native AMP
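
For reference, here is a minimal sketch of how these hyperparameters map onto transformers TrainingArguments; this is a reconstruction for illustration, not the actual training script, and the output_dir is a placeholder:

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="falcon-40b-prompt-answering",  # illustrative
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # total train batch size: 8
    lr_scheduler_type="linear",
    warmup_steps=50,
    num_train_epochs=2,
    fp16=True,  # Native AMP mixed precision
)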

Framework versions

  • Transformers 4.28.1
  • Pytorch 2.0.0+cu117
  • Datasets 2.12.0
  • Tokenizers 0.12.1

Training Data

The tiiuae/falcon-40b model was fine-tuned on conversations and question-answering data.

Training Procedure

The tiiuae/falcon-40b model was further trained and fine-tuned on question-answering and prompt data for 1 epoch (approximately 10 hours of training on a single GPU).

Model Architecture and Objective

The model is based on tiiuae/falcon-40b, with PEFT adapters fine-tuned on top of the base model on conversations and question-answering data.
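
To confirm the adapter setup, you can inspect the PEFT config stored in this repo; the calls below are a minimal sketch (print_trainable_parameters is a standard PeftModel utility):

from peft import PeftConfig

# Read the adapter configuration shipped with this repo
config = PeftConfig.from_pretrained("Sandiago21/falcon-40b-prompt-answering")
print(config.base_model_name_or_path)  # tiiuae/falcon-40b
print(config.peft_type)                # adapter type used for the fine-tuning

# After loading with PeftModel.from_pretrained (as in the Usage section),
# this reports how many parameters the adapters add on top of the base model:
# model.print_trainable_parameters()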
