clonefy committed
Commit 2d03c0f
1 Parent(s): 0b05cf2

Update README.md

Files changed (1)
  1. README.md +8 -8
README.md CHANGED
@@ -1,7 +1,7 @@
 ---
 license: other
 license_name: tongyi-qianwen
-license_link: https://huggingface.co/Qwen/Qwen2-beta-14B-Chat/blob/main/LICENSE
+license_link: https://huggingface.co/Qwen/Qwen1.5-14B-Chat/blob/main/LICENSE
 language:
 - en
 pipeline_tag: text-generation
@@ -9,12 +9,12 @@ tags:
 - chat
 ---
 
-# Qwen2-beta-14B-Chat
+# Qwen1.5-14B-Chat
 
 
 ## Introduction
 
-Qwen2-beta is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previous released Qwen, the improvements include:
+Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previously released Qwen, the improvements include:
 
 * 6 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, and 72B;
 * Significant performance improvement in human preference for chat models;
@@ -26,7 +26,7 @@ For more details, please refer to our blog post and GitHub repo.
 <br>
 
 ## Model Details
-Qwen2 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes. For the beta version, temporarily we did not include GQA and the mixture of SWA and full attention.
+Qwen1.5 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, and a mixture of sliding-window and full attention, among other features. Additionally, we have an improved tokenizer adaptive to multiple natural languages and code. For the beta version, we have temporarily excluded GQA and the mixture of SWA and full attention.
 
 
 ## Training details
@@ -34,7 +34,7 @@ We pretrained the models with a large amount of data, and we post-trained the mo
 
 
 ## Requirements
-The code of Qwen2 has been in the latest Hugging face transformers and we advise you to install `transformers>=4.37.0`, or you might encounter the following error:
+The code of Qwen1.5 has been merged into the latest Hugging Face Transformers, and we advise you to install `transformers>=4.37.0`; otherwise you might encounter the following error:
 ```
 KeyError: 'qwen2'
 ```
@@ -49,10 +49,10 @@ from transformers import AutoModelForCausalLM, AutoTokenizer
 device = "cuda" # the device to load the model onto
 
 model = AutoModelForCausalLM.from_pretrained(
-    "Qwen/Qwen2-beta-14B-Chat",
+    "Qwen/Qwen1.5-14B-Chat",
     device_map="auto"
 )
-tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-beta-14B-Chat")
+tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-14B-Chat")
 
 prompt = "Give me a short introduction to large language model."
 messages = [
@@ -77,7 +77,7 @@ generated_ids = [
 response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
 ```
 
-For quantized models, we advise you to use the GPTQ, AWQ, and GGUF correspondents, namely `Qwen-beta-14B-Chat-GPTQ`, `Qwen-beta-14B-Chat-AWQ`, and `Qwen-beta-14B-Chat-GGUF`.
+For quantized models, we advise you to use the GPTQ, AWQ, and GGUF correspondents, namely `Qwen1.5-14B-Chat-GPTQ`, `Qwen1.5-14B-Chat-AWQ`, and `Qwen1.5-14B-Chat-GGUF`.
 
 
 ## Limitations
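
A quick way to see the architecture notes from the Model Details hunk reflected in the shipped checkpoint is to inspect its config. A minimal sketch; the expected values in the comments are our reading of the README, not guaranteed:

```python
from transformers import AutoConfig

# Downloads only the small config.json, not the weights.
cfg = AutoConfig.from_pretrained("Qwen/Qwen1.5-14B-Chat")

print(cfg.model_type)   # "qwen2" -- the key that older transformers cannot resolve
print(cfg.hidden_act)   # "silu", the gating nonlinearity inside SwiGLU
# With GQA excluded in the beta, key/value heads should equal attention heads.
print(cfg.num_attention_heads, cfg.num_key_value_heads)
```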
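The `KeyError: 'qwen2'` in the Requirements hunk comes from older transformers releases not recognizing the `qwen2` model type. A hedged sketch of a runtime guard for the `transformers>=4.37.0` pin; the guard itself is our addition, not part of the README:

```python
# Fail fast with a clear message instead of the opaque KeyError: 'qwen2'.
from packaging import version  # shipped as a dependency of transformers

import transformers

if version.parse(transformers.__version__) < version.parse("4.37.0"):
    raise RuntimeError(
        f"transformers=={transformers.__version__} predates Qwen2 support; "
        "upgrade with: pip install -U 'transformers>=4.37.0'"
    )
```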
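The quickstart hunks excerpt only the load and decode steps; the middle of the snippet is elided by the diff. For orientation, here is a minimal end-to-end sketch of the same chat flow using the standard transformers chat-template API; the system prompt and `max_new_tokens` value are our assumptions, not taken from the diff:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen1.5-14B-Chat",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-14B-Chat")

prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "system", "content": "You are a helpful assistant."},  # assumed system prompt
    {"role": "user", "content": prompt}
]
# Render the messages with the model's chat template, then tokenize.
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(model_inputs.input_ids, max_new_tokens=512)  # assumed budget
# Strip the prompt tokens so only the newly generated reply is decoded.
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```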
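The GPTQ and AWQ correspondents named in the last hunk load through the same `AutoModelForCausalLM` entry point (with the matching quantization backend such as `auto-gptq` or `autoawq` installed), while the GGUF files target llama.cpp-style runtimes rather than transformers. A sketch under the assumption that the published repo id carries a quantization suffix; check the Hub for the exact name:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id: published GPTQ repos often append -Int4/-Int8;
# verify the exact name on the Hub before use.
quantized_id = "Qwen/Qwen1.5-14B-Chat-GPTQ-Int4"

model = AutoModelForCausalLM.from_pretrained(quantized_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(quantized_id)
```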