
Code Explainer

The model explains Python code in plain language.

Model Details

Trained by: AllStax Technologies
Model type: CodeExplainer-7b-v0.1 is a language model based on mistralai/Mistral-7B-v0.1
Language(s): English
Fine-tuning data: generated by GPT-3.5 and other models

Prompting

Prompt template (Alpaca style)

### Instruction:

<prompt>

### Response:
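
For example, a full prompt for a small Python snippet could be assembled like this (a minimal sketch; the instruction wording and the example snippet are assumptions, only the ### Instruction / ### Response markers come from the template above):

code_snippet = "def add(a, b):\n    return a + b"
# Wrap the code in the Alpaca-style template shown above
prompt = (
    "### Instruction:\n"
    "Explain what the following Python code does:\n"
    f"{code_snippet}\n\n"
    "### Response:\n"
)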

Loading the model

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allstax/CodeExplainer-7b-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map='auto' lets accelerate place the weights across available GPUs/CPU
quant_model = AutoModelForCausalLM.from_pretrained(model_id, device_map='auto')
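
Once loaded, the model can be queried with the Alpaca-style prompt built above (a sketch using the standard transformers generate API; max_new_tokens and decoding only the newly generated tokens are illustrative choices, not part of the card):

inputs = tokenizer(prompt, return_tensors="pt").to(quant_model.device)
output_ids = quant_model.generate(**inputs, max_new_tokens=256)
# Decode only the new tokens, i.e. the explanation produced after "### Response:"
explanation = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(explanation)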
Model size: 7.24B params (Safetensors, F32 / I8 tensors)
