---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
pipeline_tag: text-generation
datasets:
- wikisql
language:
- en
tags:
- mistral-7b
- lora
---

# AI2sql

AI2sql is a Mistral-7B model fine-tuned with LoRA to convert natural language questions into SQL queries.

## Model Details

### Model Description

This model is a fine-tune of Mistral-7B using the PEFT library, with bitsandbytes used to load the base model in 4-bit. The accompanying notebook demonstrates fine-tuning with Low-Rank Adapters (LoRA), so only the adapter weights are updated rather than the entire model. The result is a specialized tool for database querying and analysis, streamlining the extraction of information from databases through conversational inputs.
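
For reference, below is a minimal sketch of this kind of 4-bit LoRA setup with PEFT and bitsandbytes. The adapter hyperparameters (`r`, `lora_alpha`, `lora_dropout`, `target_modules`) are illustrative assumptions, not the exact values used for this model.

```python
# Sketch of a 4-bit LoRA fine-tuning setup with PEFT + bitsandbytes.
# Adapter hyperparameters below are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_model = "mistralai/Mistral-7B-v0.1"

# Load the base model in 4-bit (NF4 with double quantization).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Attach low-rank adapters; only these weights are trained.
lora_config = LoraConfig(
    r=16,                       # assumed rank
    lora_alpha=32,              # assumed scaling
    lora_dropout=0.05,          # assumed dropout
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # adapters are a small fraction of total weights
```

Because only the adapter matrices are trainable, this setup keeps memory requirements low enough to fine-tune a 7B-parameter model on a single GPU.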




- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** Causal language model with LoRA adapters
- **Language(s) (NLP):** English
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** mistralai/Mistral-7B-v0.1

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]



## Training procedure


The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
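
For illustration, the sketch below expresses the configuration above as a `BitsAndBytesConfig` and runs inference with the adapter attached. The adapter path and prompt format are placeholder assumptions, not confirmed by this card.

```python
# Sketch of inference with the 4-bit config listed above.
# The adapter id and prompt format are placeholder assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_model = "mistralai/Mistral-7B-v0.1"
adapter_id = "path/to/ai2sql-adapter"  # hypothetical placeholder

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()

# Example question-to-SQL prompt (format is an assumption).
prompt = (
    "Translate the question to SQL.\n"
    "Question: How many employees are in the Sales department?\n"
    "SQL:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64)
# Print only the newly generated tokens (the SQL query).
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```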

### Framework versions


- PEFT 0.6.3.dev0