leafspark committed
Commit 0d87d25 • 1 Parent(s): 881a401

Add model card

Files changed (1):
  1. README.md +38 -3

README.md CHANGED
---
license: mit
language:
- en
base_model: Qwen/Qwen2-0.5B
---

<p align="center"><strong style="font-size: 18px;">
Mini-Omni: Language Models Can Hear, Talk While Thinking in Streaming
</strong>
</p>

<p align="center">
🤗 <a href="">Hugging Face</a> | 📖 <a href="https://github.com/gpt-omni/mini-omni">GitHub</a>
| 📑 <a href="https://arxiv.org/abs/2408.16725">Technical report</a>
</p>

**This is a safetensors conversion of `gpt-omni/mini-omni`.**
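
The converted weights can be pulled and inspected with the standard `huggingface_hub` and `safetensors` APIs. A minimal sketch, assuming the repository exposes a single `model.safetensors` file (the repo id and filename below are illustrative; the full inference pipeline lives in the GitHub repo):

```python
# Download the converted checkpoint and load it as a plain state dict.
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

path = hf_hub_download(
    repo_id="leafspark/mini-omni-safetensors",  # illustrative repo id
    filename="model.safetensors",               # assumed filename
)

state_dict = load_file(path)  # dict: tensor name -> torch.Tensor
print(f"loaded {len(state_dict)} tensors")
```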

Mini-Omni is an open-source multimodal large language model that can **hear and talk while thinking**, featuring real-time end-to-end speech input and **streaming audio output** conversational capabilities.

<p align="center">
    <img src="frameworkv3.jpg" width="100%"/>
</p>


## Features

✅ **Real-time speech-to-speech** conversational capabilities. No extra ASR or TTS models required.

✅ **Talking while thinking**, with the ability to generate text and audio at the same time.

✅ **Streaming audio output** capabilities (see the sketch after this list).

✅ "Audio-to-Text" and "Audio-to-Audio" **batch inference** to further boost performance.
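
As a purely illustrative sketch of what consuming such a streaming interface looks like (the generator and event format below are hypothetical stand-ins, not the project's actual API):

```python
# Hypothetical consumer loop: text tokens and audio chunks arrive
# interleaved while generation is still in progress ("talk while thinking").
import numpy as np

def fake_stream():
    """Stand-in for a model's streaming generator (not the real API):
    yields partial text tokens and small PCM audio chunks as produced."""
    for i in range(3):
        yield {"type": "text", "token": f"tok{i} "}
        yield {"type": "audio", "pcm": np.zeros(2400, dtype=np.float32)}

chunks = []
for event in fake_stream():
    if event["type"] == "text":
        print(event["token"], end="", flush=True)  # show partial transcript
    else:
        chunks.append(event["pcm"])  # hand each chunk to an audio sink immediately

print(f"\nreceived {len(chunks)} audio chunks before generation finished")
```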

**NOTE**: Please refer to https://github.com/gpt-omni/mini-omni for more details.