HachiML committed
Commit 3fdee60
1 Parent(s): 4a86b18

Update README.md

Files changed (1)
  1. README.md +41 -1
README.md CHANGED
@@ -13,4 +13,44 @@ Mists(**Mis**tral **T**ime **S**eries) model is a multimodal model that combines
  This model is based on the following models:
  - [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3)
  - [AutonLab/MOMENT-1-large](https://huggingface.co/AutonLab/MOMENT-1-large)
- <!-- Provide a quick summary of what the model is/does. -->
+
+ This model is experimental.
+ It still has some flaws and is not yet usable.
+
+ ## How to load model
+
+ ```Python
+ !pip install git+https://github.com/Hajime-Y/moment.git
+ !pip install -U transformers
+ !git clone https://github.com/Hajime-Y/Mists.git
+ ```
+
+ ```Python
+ import torch
+
+ from Mists.configuration_mists import MistsConfig
+ from Mists.modeling_mists import MistsForConditionalGeneration
+ from Mists.processing_mists import MistsProcessor
+
+ model_id = "HachiML/Mists-7B-v0.1-not-trained"
+ model = MistsForConditionalGeneration.from_pretrained(
+     model_id,
+     torch_dtype=torch.bfloat16,
+     low_cpu_mem_usage=True,
+ ).to("cuda")
+ processor = MistsProcessor.from_pretrained(model_id)
+ ```
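+
+ As a quick sanity check (a minimal sketch that relies only on standard `torch.nn.Module` attributes, nothing Mists-specific), you can confirm the weights were loaded in bfloat16 on the GPU before running inference:
+
+ ```Python
+ # Optional check: parameters should report torch.bfloat16 on cuda:0
+ param = next(model.parameters())
+ print(param.dtype, param.device)
+ ```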
+
+ ```Python
+ import pandas as pd
+
+ # NASDAQ price history (512 time steps) with Open/High/Low/Close/Volume columns
+ hist_ndaq_512 = pd.read_csv("nasdaq_price_history.csv")
+ time_series_data = torch.tensor(hist_ndaq_512[["Open", "High", "Low", "Close", "Volume"]].values, dtype=torch.float)
+ time_series_data = time_series_data.t().unsqueeze(0)  # shape: (batch=1, channels=5, seq_len=512)
+
+ prompt = "USER: <time_series>\nWhat are the features of this data?\nASSISTANT:"
+ inputs = processor(prompt, time_series_data, return_tensors='pt').to("cuda", torch.float32)  # move inputs to the model's device
+
+ output = model.generate(**inputs, max_new_tokens=200, do_sample=False)
+ print(processor.decode(output[0], skip_special_tokens=True))
+ ```
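+
+ If you do not have a CSV at hand, the same pipeline can be exercised with a random tensor. This is a hypothetical smoke test, not part of the original example: the 5-channel, 512-step shape simply mirrors the NASDAQ example above.
+
+ ```Python
+ # Hypothetical smoke test with synthetic data instead of a CSV file.
+ # Shape follows the example above: (batch=1, channels=5, seq_len=512).
+ dummy_series = torch.randn(1, 5, 512)
+ prompt = "USER: <time_series>\nWhat are the features of this data?\nASSISTANT:"
+ inputs = processor(prompt, dummy_series, return_tensors='pt').to("cuda", torch.float32)
+ output = model.generate(**inputs, max_new_tokens=50, do_sample=False)
+ print(processor.decode(output[0], skip_special_tokens=True))
+ ```
+
+ Since this checkpoint is untrained, the generated text is not expected to be meaningful; the check only verifies that the inputs flow through the processor and model without errors.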