---
license: apache-2.0
datasets:
- timdettmers/openassistant-guanaco
language:
- en
pipeline_tag: text-generation
---
## Anacondia
Anacondia-70m is an [EleutherAI/pythia-70m-deduped](https://huggingface.co/EleutherAI/pythia-70m-deduped) model fine-tuned with QLoRA on [timdettmers/openassistant-guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco).

## Usage
Anacondia is not intended for any downstream use; it was trained for educational purposes. Please fine-tune it for your downstream task, or consider a more capable model for inference if that does not fit your use case.
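
If you do want to fine-tune it, a minimal PEFT sketch follows. The hyperparameters and target modules below are illustrative assumptions, not the settings used to train Anacondia:

```python
# Attach a fresh LoRA adapter to Anacondia for downstream fine-tuning.
# r, lora_alpha, and lora_dropout are illustrative assumptions.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("UncleanCode/anacondia-70m")

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["query_key_value"],  # GPT-NeoX/Pythia attention projection
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```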

## Training procedure


The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
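
For reference, a minimal sketch of the equivalent `BitsAndBytesConfig` (assuming a recent `transformers` with `bitsandbytes` installed; the original training script is not part of this card):

```python
# Mirror of the quantization settings listed above as a BitsAndBytesConfig.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # 4-bit base weights (QLoRA)
    bnb_4bit_quant_type="nf4",              # NormalFloat4 quantization
    bnb_4bit_use_double_quant=True,         # also quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bfloat16
)
```

Passing `quantization_config=bnb_config` to `AutoModelForCausalLM.from_pretrained` loads the base model in 4-bit before LoRA adapters are attached.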
### Framework versions


- PEFT 0.4.0

## Inference

```python
# Load the tokenizer and model from the Hugging Face Hub
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "UncleanCode/anacondia-70m"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Tokenize a prompt and generate a continuation
inputs = tokenizer("This is a sentence", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=50)  # cap generation length

print(tokenizer.decode(output[0], skip_special_tokens=True))
```