SearchUnify-ML committed
Commit d31444e
Parent(s): cc44d48

Update README.md

Files changed (1)
  1. README.md (+5 -78)
README.md CHANGED
@@ -7,96 +7,23 @@ language:
  pipeline_tag: text-generation
  ---

- # VMWare's XGEN 7B 8K Open Instruct GPTQ
+ # SearchUnify-ML/xgen-7b-8k-open-instruct-gptq

- These are GPTQ 4bit model files for VMWare's XGEN 7B 8K Open Instruct.
+ These are GPTQ 4bit model files for [VMWare's XGEN 7B 8K Open Instruct](https://huggingface.co/VMware/xgen-7b-8k-open-instruct).

  It is the result of quantising to 4bit using GPTQ-for-LLaMa.

  The model is open for COMMERCIAL USE.

- This model card aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).

- ## Model Details

- ### Model Description

- <!-- Provide a longer summary of what this model is. -->



- - **Developed by:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]
-
- ### Model Sources [optional]
-
- <!-- Provide the basic links for the model. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]
-
- ## Uses
-
- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-
- ### Direct Use
-
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
- [More Information Needed]
-
- ### Downstream Use [optional]
-
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
- [More Information Needed]
-
- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
- [More Information Needed]
-
- ## Bias, Risks, and Limitations
-
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]
-
- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
-
- ## How to Get Started with the Model
-
- Use the code below to get started with the model.
-
- [More Information Needed]
-
- ## Training Details
-
- ### Training Data
-
- <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
- [More Information Needed]
-
- ### Training Procedure
-
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
- #### Preprocessing [optional]
-
- [More Information Needed]
-
-
+
+ ## How to use this GPTQ model from Python code
+
+ First, make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:
+
+ #### pip install auto-gptq

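The new README stops at the install step in this diff. As a rough sketch of what the Python side typically looks like with AutoGPTQ, the snippet below loads the quantised checkpoint and runs one instruction; the `from_quantized` arguments, the Alpaca-style prompt template, and the generation settings are illustrative assumptions, not part of this commit.

```python
# Sketch only: load the 4-bit GPTQ checkpoint with AutoGPTQ and generate once.
# Prompt format and generation settings are assumptions for illustration.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_id = "SearchUnify-ML/xgen-7b-8k-open-instruct-gptq"

# XGen ships a custom tokenizer, so remote code must be trusted.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

# Load the quantised weights onto the first GPU; depending on how the
# checkpoint is stored, use_safetensors may also need to be set.
model = AutoGPTQForCausalLM.from_quantized(model_id, device="cuda:0", trust_remote_code=True)

# Alpaca-style instruction prompt (assumed format for the Open Instruct fine-tune).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nSummarise what GPTQ 4-bit quantisation does.\n\n"
    "### Response:"
)

inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

AutoGPTQ is used here because it is the library the new README points to; the `pip install auto-gptq` step above is the only extra dependency this sketch assumes beyond transformers and torch.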
 