s3nh committed
Commit 852cced
1 Parent(s): 2fbffd9

Create README.md

Files changed (1)
  1. README.md +83 -0

README.md ADDED

---
license: openrail
language:
- en
pipeline_tag: text-generation
library_name: transformers
---

Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt="Buy me a coffee"></a>

#### Description

GGML format model files for [BigTranslate](https://huggingface.co/James-WYang/BigTranslate).

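To run the files locally you first need to download one of the GGML binaries from this repository. Below is a minimal sketch using `huggingface_hub`; the repo id and file name are placeholders, since the exact quantization file names are not listed here.

```python
# Sketch: fetch a GGML file from the Hub before loading it with ctransformers.
# Replace the placeholder repo id and file name with the ones from this repository.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="<this-repo-id>",         # placeholder
    filename="<ggml-model-file>.bin",  # placeholder
)
print(local_path)
```
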
### Inference

```python
from ctransformers import AutoModelForCausalLM

# Placeholder paths: point these at the downloaded GGML model file.
output_dir = "path/to/model-dir"
ggml_file = "ggml-model.bin"

llm = AutoModelForCausalLM.from_pretrained(
    output_dir,
    model_file=ggml_file,
    model_type="llama",
    gpu_layers=32,
)

manual_input: str = "Tell me about your last dream, please."

output = llm(
    manual_input,
    max_new_tokens=256,
    temperature=0.9,
    top_p=0.7,
)
print(output)
```
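
For interactive use you may prefer streaming output over a single blocking call. The sketch below assumes the `llm` object created above and the `stream=True` option documented for ctransformers; it is an illustration, not part of the original card.

```python
# Streaming sketch (assumes the `llm` object from the block above and
# ctransformers' `stream=True` option, which yields text chunks as a generator).
for chunk in llm(
    "Tell me about your last dream, please.",
    max_new_tokens=256,
    temperature=0.9,
    top_p=0.7,
    stream=True,
):
    print(chunk, end="", flush=True)
print()
```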

# Original model card

## Model Details

Vicuna is a chat assistant trained by fine-tuning Llama 2 on user-shared conversations collected from ShareGPT.

- **Developed by:** [LMSYS](https://lmsys.org/)
- **Model type:** An auto-regressive language model based on the transformer architecture
- **License:** Llama 2 Community License Agreement
- **Finetuned from model:** [Llama 2](https://arxiv.org/abs/2307.09288)

### Model Sources

- **Repository:** https://github.com/lm-sys/FastChat
- **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/
- **Paper:** https://arxiv.org/abs/2306.05685
- **Demo:** https://chat.lmsys.org/

## Uses

The primary use of Vicuna is research on large language models and chatbots.
The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.

## How to Get Started with the Model

- Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights
- APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api (a minimal `transformers` loading sketch follows below)

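Beyond the CLI and API pointers above, Vicuna v1.5 checkpoints can also be loaded directly with Hugging Face `transformers`. The sketch below is not from the original card: the repo id `lmsys/vicuna-7b-v1.5-16k`, the fp16/auto device settings, and the prompt template are assumptions based on the published v1.5 releases.

```python
# Minimal sketch: load a Vicuna v1.5 (16k) checkpoint with transformers.
# The repo id, dtype, and prompt template below are assumptions, not part of this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lmsys/vicuna-7b-v1.5-16k"  # assumed published checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Vicuna-style single-turn prompt (assumed template).
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: Tell me about your last dream, please. ASSISTANT:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
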
## Training Details

Vicuna v1.5 (16k) is fine-tuned from Llama 2 with supervised instruction fine-tuning and linear RoPE scaling.
The training data is around 125K conversations collected from ShareGPT.com. These conversations are packed into sequences that contain 16K tokens each.
See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf).

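To make the linear RoPE scaling above concrete: in the published v1.5 16k configs the 16K context is typically expressed as the 4K Llama 2 base context with a linear scaling factor of 4. The sketch below is an illustration only; the repo id and the expected values are assumptions, not quoted from this card.

```python
# Sketch: inspect how linear RoPE scaling shows up in the model config.
# Repo id and expected values are assumptions, not quoted from this card.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("lmsys/vicuna-7b-v1.5-16k")
print(config.max_position_embeddings)  # Llama 2 base context (expected 4096)
print(config.rope_scaling)             # expected: {"type": "linear", "factor": 4.0}
# Effective context length ~ max_position_embeddings * factor = 16K tokens.
```
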
## Evaluation

Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf) and [leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard).

## Differences between Vicuna versions

See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md).