Text Generation
Transformers
PyTorch
Safetensors
English
rwkv
finance
Inference Endpoints
umuthopeyildirim committed
Commit 500efce
1 Parent(s): 621f507

Update README.md

Files changed (1)
  1. README.md +27 -1
README.md CHANGED
@@ -7,4 +7,30 @@ language:
  library_name: transformers
  tags:
  - finance
- ---
+ ---
+
+ # Fin-RWKV: Attention-Free Financial Expert (WIP)
+ Fin-RWKV is an attention-free model designed specifically for financial analysis and prediction. Developed as part of a MindsDB Hackathon, it leverages the simplicity and efficiency of the RWKV architecture to process financial data, delivering insights and forecasts at a fraction of the compute cost of attention-based models. Fin-RWKV is aimed at professionals and enthusiasts in the finance sector who want to integrate deep learning into their financial analyses.
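+
+ To make "attention-free" concrete, here is a minimal sketch of the linear-time recurrence at the heart of RWKV's time mixing. It is a simplified reading of the WKV computation from the RWKV paper (per-channel decay `w`, current-token bonus `u`) and omits the numerical-stability tricks used in the official implementation:
+
+ ```python
+ import numpy as np
+
+ def wkv_recurrence(k, v, w, u):
+     """Simplified RWKV time mixing: an O(T) recurrence over T tokens,
+     replacing the O(T^2) pairwise comparisons of attention.
+     k, v: (T, d) key/value projections; w, u: (d,) decay and bonus."""
+     T, d = k.shape
+     num = np.zeros(d)  # running exp-weighted sum of past values
+     den = np.zeros(d)  # running sum of the weights (normalizer)
+     out = np.zeros((T, d))
+     for t in range(T):
+         # the current token enters with a "bonus" weight exp(u + k_t)
+         out[t] = (num + np.exp(u + k[t]) * v[t]) / (den + np.exp(u + k[t]))
+         # decay the past state by exp(-w), then fold in the current token
+         num = np.exp(-w) * num + np.exp(k[t]) * v[t]
+         den = np.exp(-w) * den + np.exp(k[t])
+     return out
+ ```
+
+ Because the state is a fixed-size pair `(num, den)`, each new token costs O(d) regardless of context length, which is where the inference-cost advantage listed below comes from.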
+
+ ## Features
+ - Attention-Free Architecture: Uses the RWKV (Receptance Weighted Key Value) model, which bypasses the quadratic cost of attention mechanisms while maintaining strong performance.
+ - Lower Costs: 10x to 100x+ lower inference cost and 2x to 10x lower training cost than comparable attention-based models.
+ - Tinyyyy: Lightweight enough to run in real time on a CPU, bypassing the GPU entirely; it can run on your laptop today.
+ - Finance-Specific Training: Fine-tuned on the gbharti/finance-alpaca dataset, so the model is tuned for financial data analysis.
+ - Transformers Library Integration: Built on the popular `transformers` library for easy integration with existing ML pipelines and applications (see the usage sketch below).
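+
+ A minimal usage sketch, assuming the checkpoint loads through the standard `transformers` auto classes (which support RWKV models); the repo id and prompt below are placeholders, not confirmed by this card:
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ model_id = "umuthopeyildirim/fin-rwkv"  # assumed repo id; substitute the actual one
+
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id)
+
+ # Plain prompt; the card does not specify a prompt template.
+ prompt = "What are the main risks of holding long-duration bonds when rates rise?"
+ inputs = tokenizer(prompt, return_tensors="pt")
+ outputs = model.generate(**inputs, max_new_tokens=128)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```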
+
+ ## Competing Against
+ - [BloombergGPT](https://www.bloomberg.com/company/press/bloomberggpt-50-billion-parameter-llm-tuned-finance/)
+ - [FinGPT](https://huggingface.co/FinGPT)
+
+ | Architecture | Status | Compute Efficiency | Largest Model | Trained Tokens | Link |
+ |--------------|--------|--------------------|---------------|----------------|------|
+ | (Fin)RWKV | In Production | O(N) | 14B | 500B+ (The Pile and more) | [Paper](https://arxiv.org/abs/2305.13048) |
+ | RetNet (Microsoft) | Research | O(N) | 6.7B | 100B (mixed) | [Paper](https://arxiv.org/abs/2307.08621) |
+ | State Space (Stanford) | Prototype | O(N log N) | 355M | 15B (The Pile, subset) | [Paper](https://arxiv.org/abs/2302.10866) |
+ | Liquid (MIT) | Research | - | <1M | - | [Paper](https://arxiv.org/abs/2006.04439) |
+ | Transformer (included for contrast) | In Production | O(N^2) | 800B (est.) | 13T+ (est.) | - |
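+
+ To put the O(N) vs. O(N^2) column in perspective, a back-of-envelope comparison (illustrative only; real costs depend on hidden size, kernels, and hardware):
+
+ ```python
+ # Relative growth of per-sequence compute: attention (~N^2) vs. RWKV (~N).
+ for n in (1_024, 8_192, 65_536):
+     print(f"N={n:>6}  attention ~ {n**2:.1e} units   rwkv ~ {n:.1e} units   ratio {n}x")
+ ```
+
+ At a 65k-token context the quadratic term is 65,536x the linear one, which is what drives the inference-cost gap claimed above.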
+
+ <img src="https://cdn-uploads.huggingface.co/production/uploads/631ea4247beada30465fa606/7vAOYsXH1vhTyh22o6jYB.png" width="500" alt="Inference computational cost vs. number of tokens">
+
+ _Note: This model needs more data and training; it is for testing purposes only._