MengniWang committed
Commit 65e2058
1 Parent(s): f030635

update readme

Files changed (1)
  1. README.md +6 -2
README.md CHANGED
@@ -20,13 +20,17 @@ tags:
 
 GPT-J 6B is a transformer model trained using Ben Wang's [Mesh Transformer JAX](https://github.com/kingoflolz/mesh-transformer-jax/). "GPT-J" refers to the class of model, while "6B" represents the number of trainable parameters.
 
-This int8 ONNX model is generated by [neural-compressor](https://github.com/intel/neural-compressor) and the fp32 model is from this [repo](https://huggingface.co/OWG/gpt-j-6B).
+This int8 ONNX model is generated by [neural-compressor](https://github.com/intel/neural-compressor) and the fp32 model can be exported with the command below:
+```shell
+python -m transformers.onnx --model=EleutherAI/gpt-j-6B onnx_gptj/ --framework pt --opset 13 --feature=causal-lm-with-past
+```
 
 ## Test result
 
 | |INT8|FP32|
 |---|:---:|:---:|
-| **Model size (GB)** |13|23|
+| **Lambada Acc** |0.7926|0.7954|
+| **Model size (GB)** |5.7|23|
 
 
 ## How to use
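
A note for anyone reproducing the export step in this commit: GPT-J 6B's fp32 weights (about 23 GB per the table) exceed ONNX's 2 GB protobuf limit, so the export writes the weights as external data files next to `model.onnx`. Below is a minimal sanity-check sketch, assuming the `onnx_gptj/` output path from the export command; it only verifies that the exported graph loads and passes the checker.

```python
import onnx

# Load the exported graph; the external weight files written by the export
# must stay in the same directory as model.onnx for this to resolve.
model = onnx.load("onnx_gptj/model.onnx")
print(f"{len(model.graph.node)} nodes, opset {model.opset_import[0].version}")

# For models over 2 GB, pass the path (not the loaded proto) to the checker
# so it can read the external data from disk.
onnx.checker.check_model("onnx_gptj/model.onnx")
```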
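For the quantization step itself, the commit only says the int8 model was generated by neural-compressor. The sketch below shows one plausible route via its post-training quantization API; the dynamic approach, the paths, and the config are assumptions, not the recipe actually used for the published model.

```python
import onnx
from neural_compressor import PostTrainingQuantConfig, quantization

# Dynamic post-training quantization needs no calibration dataloader;
# whether the published int8 model used a dynamic or static (calibrated)
# approach is an assumption here.
fp32_model = onnx.load("onnx_gptj/model.onnx")
conf = PostTrainingQuantConfig(approach="dynamic")
q_model = quantization.fit(fp32_model, conf)

# Hypothetical output path for the quantized model.
q_model.save("gpt-j-6B-int8/model.onnx")
```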