---
language: en
license: llama3
tags:
- large_language_model
- finance
- sec_data
- continual_pre_training
- model_merging
datasets:
- SEC_filings
---

<img src="https://i.ibb.co/kHtBmDN/w8m6-X4-HCQRa-IR86ar-Cm5gg.webp" width="600" />

# Llama-3-SEC-Base: A Domain-Specific Chat Agent for SEC Data Analysis

Llama-3-SEC-Base is a state-of-the-art domain-specific large language model trained on a vast corpus of SEC (Securities and Exchange Commission) data. Built upon the powerful Meta-Llama-3-70B-Instruct model, Llama-3-SEC-Base has been developed to provide unparalleled insights and analysis capabilities for financial professionals, investors, researchers, and anyone working with SEC filings and related financial data. This checkpoint does not include supervised fine-tuning (SFT); it is strictly our CPT model merged with Meta-Llama-3-70B-Instruct. For a variant that has been fine-tuned for chat, please see [Llama-3-SEC-Chat](https://huggingface.co/arcee-ai/Llama-3-SEC-Chat).

## Model Details

- **Base Model:** Meta-Llama-3-70B-Instruct
- **Training Data:** ***This is an intermediate checkpoint of our final model, which has seen 20B tokens so far; the full model is still in training.*** The final model is being trained on 72B tokens of SEC filings data, carefully mixed with 1B tokens of general data from Together AI's [RedPajama-Data-1T](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) dataset to maintain a balance between domain-specific knowledge and general language understanding.
- **Training Method:** Continual Pre-Training (CPT) using the Megatron-Core framework, followed by model merging with the base model using the state-of-the-art TIES merging technique in the Arcee Mergekit toolkit (a minimal merge sketch is shown after this list)
- **Training Infrastructure:** AWS SageMaker HyperPod cluster with 4 nodes, each equipped with 32 H100 GPUs, ensuring efficient and scalable training of this massive language model
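
For illustration, the snippet below sketches how a TIES merge of this kind can be driven through Mergekit's Python entry points. The CPT checkpoint path and the `density`/`weight` values are assumptions for the example, not the exact settings used to produce Llama-3-SEC-Base.

```python
# A minimal sketch of a TIES merge with Arcee's Mergekit. The CPT checkpoint
# path and the density/weight values are illustrative assumptions, not the
# exact settings used to build Llama-3-SEC-Base.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

ties_config = """
merge_method: ties
base_model: meta-llama/Meta-Llama-3-70B-Instruct
models:
  - model: path/to/llama-3-70b-sec-cpt   # hypothetical CPT checkpoint
    parameters:
      density: 0.5   # fraction of CPT delta parameters kept (assumed)
      weight: 0.5    # interpolation weight for the CPT deltas (assumed)
parameters:
  normalize: true
dtype: bfloat16
"""

# Validate the config and run the merge, writing the merged weights to disk
merge_config = MergeConfiguration.model_validate(yaml.safe_load(ties_config))
run_merge(
    merge_config,
    out_path="./llama-3-sec-merged",
    options=MergeOptions(copy_tokenizer=True, lazy_unpickle=True),
)
```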

## Use Cases

Llama-3-SEC-Base is designed to assist with a wide range of tasks related to SEC data analysis, including but not limited to:

- In-depth investment analysis and decision support
- Comprehensive risk management and assessment
- Ensuring regulatory compliance and identifying potential violations
- Studying corporate governance practices and promoting transparency
- Conducting market research and tracking industry trends

The model's deep understanding of SEC filings and related financial data makes it an invaluable tool for anyone working in the financial sector, providing powerful natural language processing capabilities tailored to the specific needs of this domain.

## Evaluation

To ensure the robustness and effectiveness of Llama-3-SEC-Base, the model has undergone rigorous evaluation on both domain-specific and general benchmarks. Key evaluation metrics include:

- Domain-specific perplexity, measuring the model's performance on SEC-related data (a minimal measurement sketch follows at the end of this section)

<img src="https://i.ibb.co/K5d0wMh/Screenshot-2024-06-11-at-10-23-18-PM.png" width="600">

- Extractive numerical reasoning tasks, using subsets of TAT-QA and ConvFinQA datasets

<img src="https://i.ibb.co/xGHRfLf/Screenshot-2024-06-11-at-10-23-59-PM.png" width="600">

- General benchmarks, such as BIG-bench, AGIEval, GPT4All, and TruthfulQA, to assess the model's performance on a wide range of tasks

<img src="https://i.ibb.co/2v6PdDx/Screenshot-2024-06-11-at-10-25-03-PM.png" width="600">

These results demonstrate significant improvements in domain-specific performance while maintaining strong general capabilities, thanks to the use of advanced CPT and model merging techniques.
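
For reference, domain-specific perplexity of the kind reported above can be measured with a sliding-window pass over held-out SEC text. The snippet below is a minimal sketch of that computation; the held-out file name, window length, and stride are assumptions for illustration, not our exact evaluation protocol.

```python
# Minimal sketch: sliding-window perplexity over held-out SEC text.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "arcee-ai/Llama-3-SEC-Base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype="auto", device_map="auto"
)

# Hypothetical file of held-out SEC filings text
text = open("sec_heldout.txt", encoding="utf-8").read()
encodings = tokenizer(text, return_tensors="pt")
seq_len = encodings.input_ids.size(1)

max_length, stride = 4096, 2048  # assumed window and stride
nlls, scored, prev_end = [], 0, 0
for begin in range(0, seq_len, stride):
    end = min(begin + max_length, seq_len)
    trg_len = end - prev_end  # number of new tokens scored in this window
    input_ids = encodings.input_ids[:, begin:end].to(model.device)
    target_ids = input_ids.clone()
    target_ids[:, :-trg_len] = -100  # mask context-only tokens from the loss

    with torch.no_grad():
        loss = model(input_ids, labels=target_ids).loss
    nlls.append(loss * trg_len)
    scored += trg_len
    prev_end = end
    if end == seq_len:
        break

# Perplexity is the exponential of the mean negative log-likelihood
ppl = torch.exp(torch.stack(nlls).sum() / scored)
print(f"SEC held-out perplexity: {ppl.item():.2f}")
```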


## Training and Inference

Llama-3-SEC-Base uses the llama3 chat template. Applying this template during both fine-tuning and inference ensures that the model maintains its strong conversational abilities while incorporating the domain-specific knowledge acquired during the CPT process.

To run inference with the Llama-3-SEC-Base model using the llama3 chat template, use the following code:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # device to move the tokenized inputs onto

model_name = "arcee-ai/Llama-3-SEC-Base"

# device_map="auto" shards the 70B model across all available GPUs
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "What are the key regulatory considerations for a company planning to conduct an initial public offering (IPO) in the United States?"
messages = [
    {"role": "system", "content": "You are an expert financial assistant specializing in governance and regulatory domains."},
    {"role": "user", "content": prompt}
]

# Render the conversation with the llama3 chat template
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

# Pass the attention mask along with the input ids
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated answer is decoded
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
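
Note that loading the full model in bf16 requires on the order of 140 GB of GPU memory for the weights alone (70B parameters at 2 bytes each). For more constrained hardware, the checkpoint can also be loaded in 4-bit via bitsandbytes; the configuration below is a minimal sketch of that option, not an officially validated setup.

```python
# Minimal sketch: 4-bit loading with bitsandbytes for constrained hardware.
# Illustrative only; quantization can slightly degrade output quality.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "arcee-ai/Llama-3-SEC-Base",
    quantization_config=bnb_config,
    device_map="auto",
)
```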

## Limitations and Future Work

This release represents the initial checkpoint of the Llama-3-SEC-Base model, trained on 20B tokens of SEC data. Additional checkpoints will be released as training on the full 72B-token dataset is completed. Future work will focus on further improvements to the CPT data processing layer, exploration of advanced model merging techniques, and alignment of CPT models with SFT, DPO, and other cutting-edge alignment methods to further enhance the model's performance and reliability.

## Usage

The model is available for both commercial and non-commercial use under the Llama-3 license. We encourage users to explore the model's capabilities and provide feedback to help us continuously improve its performance and usability. For more information, please see our detailed blog on Llama-3-SEC-Base.

## Citation

If you use this model in your research or applications, please cite:

```bibtex
@misc{Introducing_SEC_Data_Chat_Agent, 
      title={Introducing the Ultimate SEC Data Chat Agent: Revolutionizing Financial Insights}, 
      author={Shamane Siriwardhana and Luke Mayers and Thomas Gauthier and Jacob Solawetz and Tyler Odenthal and Anneketh Vij and Lucas Atkins and Charles Goddard and Mary MacCarthy and Mark McQuade},
      year={2024},
      note={Available at: \url{[email protected]}},
      url={URL after published}
}
```

For further information or inquiries, please contact the authors at their respective email addresses ([email protected]). We look forward to seeing the exciting applications and research that will emerge from the use of Llama-3-SEC-Base in the financial domain.