UncleFish committed on
Commit
7a3362f
1 Parent(s): 2f509e0

add readme

---
license: cc-by-nc-4.0
language:
- en
pipeline_tag: image-text-to-text
---


# Model description

BLIP-3 consists of three components: a CLIP-like image encoder, a vision-language (VL) connector, and a large language model.
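As a rough illustration of how these three components fit together, here is a minimal, purely conceptual sketch of the data flow. All class and method names below are hypothetical stand-ins, not the actual BLIP-3 API:

```python
# Conceptual sketch only (hypothetical names, not the real BLIP-3 code):
# shows how an image and a text prompt flow through the three stages.

class ImageEncoder:
    """Stands in for the CLIP-like vision encoder: image -> patch features."""
    def encode(self, image):
        # A real encoder returns a sequence of patch embeddings; here we
        # just wrap the input to make the data flow visible.
        return {"patch_features": image}

class VLConnector:
    """Projects visual features into the language model's token space."""
    def project(self, visual_features):
        return {"visual_tokens": visual_features["patch_features"]}

class LanguageModel:
    """Stands in for the LLM: consumes visual tokens plus a text prompt."""
    def generate(self, visual_tokens, prompt):
        return f"answer conditioned on {prompt!r} and image tokens"

def blip3_forward(image, prompt):
    """Run the three stages in order: encode -> connect -> generate."""
    encoder, connector, llm = ImageEncoder(), VLConnector(), LanguageModel()
    visual = encoder.encode(image)
    tokens = connector.project(visual)
    return llm.generate(tokens, prompt)

print(blip3_forward("raw-image", "What is shown?"))
```

The point of the sketch is the interface between stages: the connector is the only piece that has to know both the encoder's output format and the LLM's input format.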
12
+
13
+ # Direct Use and Downstream Use
14
+
15
+
16
+ # Bias, Risks, Limitations, and Ethical Considerations
17
+
18
+ # How to use
19
+
> We require the development version (`4.41.0.dev0`) of the `transformers` library. As of 05/07/2024, you can get it with `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`.
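The one-liner above can also be run as two explicit steps:

```shell
# Remove any released transformers build first, then install the
# development version straight from the main branch on GitHub.
pip uninstall -y transformers
pip install git+https://github.com/huggingface/transformers
```

Installing from the repository's main branch is what pulls in a `.dev0` version; a plain `pip install transformers` would give the latest stable release instead.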


# License

Our code and weights are released under the Creative Commons Attribution-NonCommercial 4.0 [LICENSE](LICENSE.txt).

# Troubleshoot

1. If you are missing any packages, consider installing the following:

```
pip install torch==2.2.1 torchvision==0.17.1 torchaudio==2.2.1 --index-url https://download.pytorch.org/whl/cu121
pip install open_clip_torch==2.24.0
pip install einops
pip install einops-exts
```