---
base_model: openlm-research/open_llama_3b
datasets:
- mwitiderrick/AlpacaCode
inference: true
model_type: llama
prompt_template: |
  ### Instruction:
  {prompt}
  ### Response:
created_by: mwitiderrick
tags:
- transformers
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
model-index:
- name: mwitiderrick/open_llama_3b_instruct_v_0.2
  results:
  - task:
      type: text-generation
    dataset:
      name: hellaswag
      type: hellaswag
    metrics:
    - name: hellaswag (0-Shot)
      type: hellaswag (0-Shot)
      value: 0.4882
  - task:
      type: text-generation
    dataset:
      name: winogrande
      type: winogrande
    metrics:
    - name: winogrande (0-Shot)
      type: winogrande (0-Shot)
      value: 0.6133
  - task:
      type: text-generation
    dataset:
      name: arc_challenge
      type: arc_challenge
    metrics:
    - name: arc_challenge (0-Shot)
      type: arc_challenge (0-Shot)
      value: 0.3362
    source:
      name: open_llama_3b_instruct_v_0.2 model card
      url: https://huggingface.co/mwitiderrick/open_llama_3b_instruct_v_0.2
---

# OpenLLaMA Code Instruct: An Open Reproduction of LLaMA

This is an [OpenLLaMA model](https://huggingface.co/openlm-research/open_llama_3b) that has been fine-tuned for 1 epoch on the
[AlpacaCode](https://huggingface.co/datasets/mwitiderrick/AlpacaCode) dataset.

The modified version of the dataset can be found [here](https://huggingface.co/datasets/mwitiderrick/Open-Platypus).
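
For reference, the training data can be inspected directly from the Hub with the `datasets` library (a minimal sketch, assuming only the public dataset path listed above):

```python
from datasets import load_dataset

# Load the instruction-tuning data this model was fine-tuned on.
ds = load_dataset("mwitiderrick/AlpacaCode")
print(ds)  # shows the available splits and their columns
```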

## Prompt Template
```
### Instruction:

{query}

### Response:
<Leave new line for model to respond>
```

## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("mwitiderrick/open_llama_3b_instruct_v_0.2")
model = AutoModelForCausalLM.from_pretrained("mwitiderrick/open_llama_3b_instruct_v_0.2")

query = "Provide step-by-step instructions for making a sweet chicken burger"

# Wrap the model and tokenizer in a text-generation pipeline.
text_gen = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=500)

# Format the query with the model's prompt template, then generate.
output = text_gen(f"### Instruction:\n{query}\n### Response:\n")
print(output[0]['generated_text'])
"""

"""
```
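
The call above decodes greedily by default; the pipeline also forwards the usual `generate` keyword arguments for sampled output. The values below are illustrative choices, not settings from the original card:

```python
# Sampled generation; temperature and top_p here are illustrative.
output = text_gen(
    f"### Instruction:\n{query}\n### Response:\n",
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
)
print(output[0]['generated_text'])
```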

## Metrics
```
hellaswag (0-Shot):     0.4882
winogrande (0-Shot):    0.6133
arc_challenge (0-Shot): 0.3362
```
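
The card does not say which evaluation tool produced these numbers. As one way to re-measure them, here is a sketch using EleutherAI's lm-evaluation-harness (`pip install lm-eval`); treat the choice of harness as an assumption:

```python
import lm_eval

# Zero-shot evaluation on the three benchmarks reported above.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=mwitiderrick/open_llama_3b_instruct_v_0.2",
    tasks=["hellaswag", "winogrande", "arc_challenge"],
    num_fewshot=0,
)
for task, metrics in results["results"].items():
    print(task, metrics)
```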