---
license: llama3
language:
  - en
tags:
  - finance
datasets:
  - Open-Orca/OpenOrca
  - GAIR/lima
  - WizardLM/WizardLM_evol_instruct_V2_196k
---

# Instruction Pre-Training: Language Models are Supervised Multitask Learners

This repo contains the finance model developed from Llama3-8B in our paper *Instruction Pre-Training: Language Models are Supervised Multitask Learners*.

We explore supervised multitask pre-training by proposing Instruction Pre-Training, a framework that scalably augments massive raw corpora with instruction-response pairs to pre-train language models. The instruction-response pairs are generated by an efficient instruction synthesizer built on open-source models. Instruction Pre-Training outperforms vanilla pre-training in both general pre-training from scratch and domain-adaptive continual pre-training. When pre-training from scratch, Instruction Pre-Training not only improves the pre-trained base models but also benefits more from further instruction tuning. In continual pre-training, Instruction Pre-Training enables Llama3-8B to be comparable to, or even outperform, Llama3-70B.
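
To make the framework concrete, here is a minimal sketch of how a raw document could be turned into an instruction-augmented pre-training example. `synthesize_pairs` is a hypothetical placeholder for the instruction synthesizer, and the Q/A concatenation template is our assumption for illustration, not the paper's exact format:

```python
# Minimal sketch of instruction-augmented pre-training data.
# `synthesize_pairs` is a hypothetical stand-in for the instruction
# synthesizer (a fine-tuned open-source LM); the Q/A template below is an
# illustrative assumption, not the paper's exact format.

def synthesize_pairs(raw_text):
    """Stand-in for the instruction synthesizer: given a raw document,
    return (instruction, response) pairs grounded in it."""
    raise NotImplementedError("replace with a call to the synthesizer model")

def build_pretraining_example(raw_text):
    # Concatenate the raw text with its synthesized pairs, so the model is
    # pre-trained on both the corpus and the supervised tasks derived from it.
    parts = [raw_text]
    for instruction, response in synthesize_pairs(raw_text):
        parts.append(f"Q: {instruction}\nA: {response}")
    return "\n\n".join(parts)
```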

## Resources

🤗 We share our data and models with example usages; feel free to open any issues or discussions! 🤗

## Domain-Adaptive Continued Pre-Training

Following AdaptLLM, we augment the domain-specific raw corpora with instruction-response pairs generated by our context-based instruction synthesizer.
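
The datasets listed in the metadata above (OpenOrca, LIMA, and WizardLM evol-instruct) suggest that general instruction data is blended with the augmented finance corpora during continued pre-training. Below is a minimal sketch of such a mixture; the upsampling factor, placeholder variables, and OpenOrca column names are assumptions, not the exact recipe used for this model:

```python
import random
from datasets import load_dataset  # pip install datasets

# Placeholder for finance documents already augmented by the synthesizer
# (see the sketch above).
augmented_finance_texts = ["<augmented finance doc 1>", "<augmented finance doc 2>"]

# One of the general instruction datasets listed in the metadata.
general = load_dataset("Open-Orca/OpenOrca", split="train").select(range(1000))
general_texts = [f"{ex['question']}\n{ex['response']}" for ex in general]

# Upsample the domain data relative to general instructions; the real
# proportions are an assumption and are not stated in this card.
mixture = augmented_finance_texts * 2 + general_texts
random.shuffle(mixture)
```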

For example, to chat with the finance-Llama3-8B model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("instruction-pretrain/finance-Llama3-8B")
tokenizer = AutoTokenizer.from_pretrained("instruction-pretrain/finance-Llama3-8B")

# Put your input here; NO prompt template is required
user_input = '''Use this fact to answer the question: Title of each class Trading Symbol(s) Name of each exchange on which registered
Common Stock, Par Value $.01 Per Share MMM New York Stock Exchange
MMM Chicago Stock Exchange, Inc.
1.500% Notes due 2026 MMM26 New York Stock Exchange
1.750% Notes due 2030 MMM30 New York Stock Exchange
1.500% Notes due 2031 MMM31 New York Stock Exchange

Which debt securities are registered to trade on a national securities exchange under 3M's name as of Q2 of 2023?'''

# Tokenize and move the input ids to the model's device
inputs = tokenizer(user_input, return_tensors="pt", add_special_tokens=True).input_ids.to(model.device)
outputs = model.generate(input_ids=inputs, max_new_tokens=400)[0]

# Decode only the newly generated tokens, skipping the prompt
answer_start = int(inputs.shape[-1])
pred = tokenizer.decode(outputs[answer_start:], skip_special_tokens=True)

print(pred)
```
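
For an 8B model, loading in half precision with automatic device placement keeps memory use manageable. This variant is a sketch using standard `transformers` arguments; it is an option, not a requirement stated by this card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load in bfloat16 and let accelerate place layers across available devices
# (requires the `accelerate` package).
model = AutoModelForCausalLM.from_pretrained(
    "instruction-pretrain/finance-Llama3-8B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("instruction-pretrain/finance-Llama3-8B")
```

The generation snippet above then works unchanged, since the inputs are moved to `model.device`.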

## Citation

If you find our work helpful, please cite us:

**AdaptLLM**

```bibtex
@inproceedings{cheng2024adapting,
  title={Adapting Large Language Models via Reading Comprehension},
  author={Daixuan Cheng and Shaohan Huang and Furu Wei},
  booktitle={The Twelfth International Conference on Learning Representations},
  year={2024},
  url={https://openreview.net/forum?id=y886UXPEZ0}
}
```