eglym committed · Commit f96f25c (1 parent: bd59ced)

Update README.md

Files changed (1)
  1. README.md +57 -3
README.md CHANGED

---
license: cc-by-sa-4.0
base_model:
- codellama/CodeLlama-7b-Instruct-hf
tags:
- text-generation-inference
---

# Update notice
The model weights were updated at 8 AM UTC on Sep 12, 2024.

# Model Card for DR-TEXT2SQL-CodeLlama2-7B-Chinese-240913
A capable large language model for natural language to SQL generation.

# Language

Chinese

## Model Details

### Model Description
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** eglym
- **Model type:** Text to SQL
- **License:** CC-BY-SA-4.0
- **Finetuned from model:** codellama/CodeLlama-7b-Instruct-hf

## Uses
This model is intended to help non-technical users understand data inside their SQL databases. It is meant as an analytics tool, not as a database administration tool.

The model has not been trained to reject malicious requests from users with write access to databases, so it should only be used by users with read-only access.

## How to Get Started with the Model
Use the code below to get started with the model.

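A minimal loading sketch with 🤗 transformers is shown below. The repository id `eglym/DR-TEXT2SQL-CodeLlama2-7B-Chinese-240913` is an assumption based on the model name above; replace it with the actual Hub path. `device_map="auto"` additionally requires the `accelerate` package.

```python
# Minimal loading sketch (assumed repo id; adjust to the actual Hub path).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "eglym/DR-TEXT2SQL-CodeLlama2-7B-Chinese-240913"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,  # half precision keeps the 7B model on a single ~16 GB GPU
    device_map="auto",          # requires the `accelerate` package
)
model.eval()
```
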
## Prompt
For best results, use the following prompt template and generate with `do_sample=False` and `num_beams=4`.

### Task
Generate a SQL query to answer user_question.

### Answer
Given the database schema, here is the SQL query that answers user_question.

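The sketch below fills in the prompt template above and runs beam-search generation with the recommended settings. It reuses `model` and `tokenizer` from the loading sketch; the schema and question are made-up examples, and placing a "Database Schema" block inside the prompt is an assumption (the card only specifies the Task and Answer headers).

```python
# Generation sketch using the prompt template above.
# `model` and `tokenizer` come from the loading sketch in the previous section.
import torch

user_question = "查询销售额最高的前5个产品"  # example: "List the top 5 products by sales"
db_schema = "CREATE TABLE products (product_id INT, name TEXT, sales NUMERIC);"  # example schema

prompt = (
    "### Task\n"
    f"Generate a SQL query to answer {user_question}.\n\n"
    "### Database Schema\n"        # assumed placement of the schema in the prompt
    f"{db_schema}\n\n"
    "### Answer\n"
    f"Given the database schema, here is the SQL query that answers {user_question}."
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=256,
        do_sample=False,  # recommended above
        num_beams=4,      # recommended above
    )
# Keep only the newly generated tokens, i.e. the SQL completion.
generated_sql = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(generated_sql)
```
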
## Evaluation
This model was evaluated on SQL-Eval, a PostgreSQL-based evaluation framework developed by Defog for testing and alignment of model capabilities.

You can read more about the methodology behind SQL-Eval here.

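To make the metric concrete, the sketch below illustrates execution-based matching in the spirit of SQL-Eval: run the predicted and reference queries against the same PostgreSQL database and compare their result sets. This is an illustration only, not Defog's actual harness; the connection string and query pairs are placeholders.

```python
# Illustrative execution-accuracy check (not the actual SQL-Eval implementation).
from collections import Counter

import psycopg2


def results_match(conn, predicted_sql: str, gold_sql: str) -> bool:
    """True if both queries return the same multiset of rows on the same database."""
    try:
        with conn.cursor() as cur:
            cur.execute(predicted_sql)
            pred_rows = Counter(map(tuple, cur.fetchall()))
            cur.execute(gold_sql)
            gold_rows = Counter(map(tuple, cur.fetchall()))
        return pred_rows == gold_rows
    except psycopg2.Error:
        conn.rollback()  # invalid generated SQL counts as a miss
        return False


# Placeholder usage: `pairs` is a list of (predicted_sql, gold_sql) tuples.
# conn = psycopg2.connect("dbname=eval_db user=readonly_user")
# accuracy = sum(results_match(conn, p, g) for p, g in pairs) / len(pairs)
```
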
## Results
Each generated question was classified by difficulty (easy, medium, hard, extra). The table below reports execution accuracy, i.e. the fraction of questions for which the generated SQL executes to the correct result, broken down by difficulty.

|                    | easy  | medium | hard  | extra | all   |
|--------------------|-------|--------|-------|-------|-------|
| count              | 250   | 440    | 174   | 170   | 1034  |
| execution accuracy | 0.756 | 0.602  | 0.477 | 0.265 | 0.563 |
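
The overall score is consistent with the count-weighted average of the per-difficulty scores, as the short check below shows (numbers taken from the table above):

```python
# Reproduce the "all" column as the count-weighted average of the per-difficulty scores.
counts = {"easy": 250, "medium": 440, "hard": 174, "extra": 170}
accuracy = {"easy": 0.756, "medium": 0.602, "hard": 0.477, "extra": 0.265}

overall = sum(counts[k] * accuracy[k] for k in counts) / sum(counts.values())
print(round(overall, 3))  # 0.563
```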