iaooo-shivprasad committed on
Commit c63657d
1 Parent(s): 66c268e

Create README.md

Files changed (1)
  1. README.md +88 -0
README.md ADDED
 
---
inference: false
license: cc-by-4.0
---

# Model Card

<p align="center">
  <img src="./icon.png" alt="Logo" width="350">
</p>

This is Owlet-Phi-2-Audio.

Owlet is a family of lightweight but powerful multimodal models.

We provide Owlet-phi-2-audio, which is built upon [SigLIP](https://huggingface.co/google/siglip-so400m-patch14-384), [Phi-2](https://huggingface.co/microsoft/phi-2), and [Whisper](https://huggingface.co/openai/whisper-small).
This model supports both audio and visual signals from video data as input and performs competitively on the task of Video Question Answering (QA).
The training procedure and architecture details will be released in a technical report soon.

# Quickstart

Here we show a code snippet demonstrating how to use the model with transformers.
It accepts an MP4 video file and a WAV audio file as input, and generates an answer to the user query.

Before running the snippet, you need to install the following dependencies:

```shell
pip install torch transformers accelerate pillow decord librosa
```
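If you only have the video file, you can extract its audio track into a WAV file first, for example with ffmpeg. The 16 kHz mono format below is a reasonable default for Whisper-based audio encoders; the model's own preprocessing may resample the audio anyway, so treat this as an optional preparation step rather than a strict requirement:

```shell
# extract a 16 kHz mono WAV from the sample video (paths match the Python snippet below)
ffmpeg -i /data/sample_files/sample.mp4 -vn -ac 1 -ar 16000 /data/sample_files/sample.wav
```

The snippet below then loads the model, builds the prompt, and runs generation on the video/audio pair.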
```python
import torch
import transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
from PIL import Image
import warnings
import librosa

# disable some warnings
transformers.logging.set_verbosity_error()
transformers.logging.disable_progress_bar()
warnings.filterwarnings('ignore')

# set device
device = 'cuda'  # or 'cpu'
torch.set_default_device(device)

# create model
print('Loading the model...')
model = AutoModelForCausalLM.from_pretrained(
    'phronetic-ai/owlet-phi-2-audio',
    torch_dtype=torch.float16,  # float32 for cpu
    device_map='auto',
    trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(
    'phronetic-ai/owlet-phi-2-audio',
    trust_remote_code=True)

print('Model loaded. Processing the query...')
# text prompt
prompt = 'What is happening in the video?'
text = f"A chat between a curious user and an artificial intelligence assistant. \
The assistant gives helpful, detailed, and polite answers to the user's questions. \
USER: <audio>\n<image>\n{prompt} ASSISTANT:"
input_ids = tokenizer(text, return_tensors='pt').input_ids.to(model.device)

# video and audio file paths
video_file_path = '/data/sample_files/sample.mp4'
audio_file_path = '/data/sample_files/sample.wav'
image_tensor, audio_tensor = (tensor.to(model.device, dtype=model.dtype) for tensor in model.process(video_file_path, audio_file_path, model.config))

# token indices of the <image> and <audio> placeholder tokens
IMAGE_TOKEN_INDEX = tokenizer('<image>').input_ids[0]
AUDIO_TOKEN_INDEX = tokenizer('<audio>').input_ids[0]

# generate
output_ids = model.generate(
    input_ids,
    images=image_tensor,
    audio=audio_tensor,
    IMAGE_TOKEN_INDEX=IMAGE_TOKEN_INDEX,
    AUDIO_TOKEN_INDEX=AUDIO_TOKEN_INDEX,
    max_new_tokens=100,
    use_cache=True)[0]

print(f'Response: {tokenizer.decode(output_ids[input_ids.shape[1]:], skip_special_tokens=True).strip()}')
```
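To run on CPU, set `device = 'cpu'` and load the model with `torch_dtype=torch.float32`, as noted in the comments above; generation will be considerably slower than on a GPU. The decoded response contains only the newly generated tokens, i.e. everything after `input_ids.shape[1]`.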