Commit adb9ac2 by tanahhh (parent: 77d2c13)

Update README.md

Files changed (1): README.md (+119, -0)

README.md (after this commit):
---
language:
- ja
tags:
- heron
- vision
- image-captioning
- VQA
pipeline_tag: image-to-text
license:
- apache-2.0
inference: false
---

# Heron BLIP Japanese StableLM Base 7B

![heron](./heron_image.png)

## Model Details

Heron BLIP Japanese StableLM Base 7B is a vision-language model that can converse about input images.<br>
This model was trained using [the heron library](https://github.com/turingmotors/heron). Please refer to the code for details.

## Usage

First, follow [the installation guide](https://github.com/turingmotors/heron/tree/dev-0.0.1#1-clone-this-repository) to set up the heron library.

```python
import requests
from PIL import Image

import torch
from transformers import AutoProcessor
from heron.models.git_llm.git_llama import GitLlamaForCausalLM

device_id = 0

# prepare a pretrained model
model = GitLlamaForCausalLM.from_pretrained('turing-motors/heron-chat-git-ja-stablelm-base-7b-v0')
model.eval()
model.to(f"cuda:{device_id}")

# prepare a processor
processor = AutoProcessor.from_pretrained('turing-motors/heron-chat-git-ja-stablelm-base-7b-v0', additional_special_tokens=["▁▁"])

# prepare inputs
url = "https://www.barnorama.com/wp-content/uploads/2016/12/03-Confusing-Pictures.jpg"
image = Image.open(requests.get(url, stream=True).raw)

text = "##Instruction: Please answer the following question concretely. ##Question: What is unusual about this image? Explain precisely and concretely what he is doing. ##Answer: "

# do preprocessing
inputs = processor(
    text,
    image,
    return_tensors="pt",
    truncation=True,
)
inputs = {k: v.to(f"cuda:{device_id}") for k, v in inputs.items()}

# set the eos token ids used to stop generation
eos_token_id_list = [
    processor.tokenizer.pad_token_id,
    processor.tokenizer.eos_token_id,
]

# do inference (greedy decoding)
with torch.no_grad():
    out = model.generate(**inputs, max_length=256, do_sample=False, eos_token_id=eos_token_id_list)

# print result
print(processor.tokenizer.batch_decode(out))
```
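
The snippet above loads the weights in full precision and decodes the whole output sequence, prompt included. Below is a minimal, unverified variant that reduces GPU memory by loading in fp16 and prints only the generated answer; it assumes the model class forwards transformers' standard `torch_dtype` argument and that the sequence returned by `generate` begins with the prompt token ids.

```python
# Hypothetical variant (not from the model card): fp16 weights, answer-only decoding.
import torch

model = GitLlamaForCausalLM.from_pretrained(
    'turing-motors/heron-chat-git-ja-stablelm-base-7b-v0',
    torch_dtype=torch.float16,  # assumption: the class accepts transformers' standard torch_dtype kwarg
)
model.eval()
model.to(f"cuda:{device_id}")

# move tensors to the GPU, casting floating-point inputs (pixel values) to fp16 to match the weights
inputs = processor(text, image, return_tensors="pt", truncation=True)
inputs = {
    k: v.to(f"cuda:{device_id}", dtype=torch.float16) if v.dtype.is_floating_point else v.to(f"cuda:{device_id}")
    for k, v in inputs.items()
}

with torch.no_grad():
    out = model.generate(**inputs, max_length=256, do_sample=False, eos_token_id=eos_token_id_list)

# assumption: the generated sequence starts with the prompt ids, so slice them off before decoding
prompt_length = inputs["input_ids"].shape[1]
print(processor.tokenizer.batch_decode(out[:, prompt_length:], skip_special_tokens=True)[0])
```
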

## Model Details

* **Developed by**: [Turing Inc.](https://www.turing-motors.com/)
* **Adapter type**: [BLIP-2](https://arxiv.org/abs/2301.12597)
* **Language model**: [Japanese StableLM Base Alpha](https://huggingface.co/stabilityai/japanese-stablelm-base-alpha-7b)
* **Language(s)**: Japanese
* **License**: This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).

### Training

In the first phase, the BLIP-2 adapter was trained on Japanese STAIR Captions. In the second phase, the model was fine-tuned with LoRA on [LLaVA-Instruct-150K-JA](https://huggingface.co/datasets/turing-motors/LLaVA-Instruct-150K-JA) and the Japanese Visual Genome VQA dataset.
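
For the LoRA phase, fine-tuning of the language-model side is commonly configured with Hugging Face's `peft` library. The snippet below is only an illustrative sketch: the rank, scaling, dropout, and target module names are assumptions, not the settings actually used to train this checkpoint (those live in the heron training configs).

```python
# Illustrative LoRA setup with peft; every hyperparameter here is an assumption,
# not the configuration used for this model.
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=16,                                 # assumed low-rank dimension
    lora_alpha=32,                        # assumed scaling factor
    lora_dropout=0.05,                    # assumed dropout on the LoRA layers
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj"],  # placeholder; the real names depend on the underlying LM architecture
)

# `model` is the pretrained vision-language model loaded as in the Usage section
peft_model = get_peft_model(model, lora_config)
peft_model.print_trainable_parameters()   # only the injected LoRA matrices remain trainable
```
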
84
+
85
+ ### Training Dataset
86
+
87
+ - [LLaVA-Instruct-150K-JA](https://huggingface.co/datasets/turing-motors/LLaVA-Instruct-150K-JA)
88
+ - [Japanese STAIR Captions](http://captions.stair.center/)
89
+ - [Japanese Visual Genome VQA dataset](https://github.com/yahoojapan/ja-vg-vqa)
90
+
91
+ ## Use and Limitations
92
+
93
+ ### Intended Use
94
+
95
+ This model is intended for use in chat-like applications and for research purposes.
96
+
97
+ ### Limitations
98
+
99
+ The model may produce inaccurate or false information, and its accuracy is not guaranteed. It is still in the research and development stage.
100
+
101
+ ## How to cite
102
+ ```bibtex
103
+ @misc{GitJapaneseStableLM,
104
+ url = {[https://huggingface.co/turing-motors/heron-chat-blip-ja-stablelm-base-7b-v0](https://huggingface.co/turing-motors/heron-chat-blip-ja-stablelm-base-7b-v0)},
105
+ title = {Heron BLIP Japanese StableLM Base 7B},
106
+ author = {Kotaro Tanahashi, Yuichi Inoue, and Yu Yamaguchi}
107
+ }
108
+ ```
109
+
110
+ ## Citations
111
+
112
+ ```bibtex
113
+ @misc{JapaneseInstructBLIPAlpha,
114
+ url = {[https://huggingface.co/stabilityai/japanese-instructblip-alpha](https://huggingface.co/stabilityai/japanese-instructblip-alpha)},
115
+ title = {Japanese InstructBLIP Alpha},
116
+ author = {Shing, Makoto and Akiba, Takuya}
117
+ }
118
+ ```