---
license: llama3.1
datasets:
- liuhaotian/LLaVA-CC3M-Pretrain-595K
pipeline_tag: image-text-to-text
---
|
|
|
# llama-3.1-8B-vision-378
|
|
|
THIS IS A SLAPPED-TOGETHER RELEASE; IF IT WORKS, IT IS A MIRACLE OF LATENT SPACE
|
|
|
## usage
|
|
|
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from PIL import Image
import requests
from io import BytesIO

# Download the demo image
url = "https://huggingface.co/qresearch/llama-3-vision-alpha-hf/resolve/main/assets/demo-2.jpg"
response = requests.get(url)
image = Image.open(BytesIO(response.content))

# Load the model; trust_remote_code is required for the custom vision code
model_id = "qresearch/llama-3.1-8B-vision-378"
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    torch_dtype=torch.float16,
).to("cuda")

tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)

# Ask the model to briefly describe the downloaded image
print(
    model.answer_question(
        image, "Briefly describe the image", tokenizer, max_new_tokens=128, do_sample=True, temperature=0.3
    ),
)
```
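
If VRAM is tight, loading in 4-bit via bitsandbytes may also work; the sketch below is untested with this model's custom vision code and assumes `bitsandbytes` is installed. It only changes how the weights are loaded; `answer_question` is called the same way as above.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "qresearch/llama-3.1-8B-vision-378"

# 4-bit NF4 quantization (assumption: compatible with this model's remote code)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    quantization_config=bnb_config,
    device_map="auto",
)

tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)
```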
|
|
|
```
                                                                       .x+=:.
   z`    ^%                                                           .uef^"
      .   <k                    .u    .                             :d88E
    .u@u   .d88B :@8c        .u    .@8Ned8"      .u          u      .d88B :@8c        .   `888E
 .zWF8888bx ="8888f8888r   ud8888. .@^%8888"  ud8888.     us888u.  ="8888f8888r  .udR88N   888E .z8k
.888  9888   4888>'88"  :888'8888. x88: `)8b.:888'8888. .@88 "8888"  4888>'88"  <888'888k  888E~?888L
I888  9888   4888> '    d888 '88%" 8888N=*8888d888 '88%" 9888  9888   4888> '    9888 'Y"   888E  888E
I888  9888   4888>      8888.+"    %8"    R888888.+"     9888  9888   4888>      9888       888E  888E
I888  9888  .d888L .+   8888L     @8Wou 9%  8888L        9888  9888  .d888L .+   9888       888E  888E
`888Nx?888  ^"8888*"    '8888c. .+.888888P`  '8888c. .+  9888  9888  ^"8888*"    ?8888u../  888E  888E
 "88" '888     "Y"       "88888%  `   ^"F     "88888%    "888*""888"    "Y"       "8888P'  m888N= 888>
       88E                 "YP'                 "YP'      ^Y"   ^Y'               "P'       `Y"   888
       98>                                                                                       J88"
       '8                                                                                        @%
        `                                                                                       :"
```