---
license: mit
language:
- en
pipeline_tag: image-to-text
---
# git_20
This model is fine-tuned from Microsoft's GIT on a single Nvidia A100-80G GPU. From a pool of 3 million student assignments, we extracted 100,000 assignments containing teacher feedback as training data; each training example pairs the image of a student assignment with the text of the teacher's feedback. git_20 consists of 18 layers and over 170 million parameters, occupying about 0.7 GB of disk space. The project aims to use multi-modal, multi-task deep learning models to build a machine learning pipeline that provides automatic diagnostic feedback on students' mathematical reasoning. Researchers can experiment with and fine-tune the model to help construct multimodal models that effectively provide such feedback.
Here is how to use the model with Hugging Face Transformers to generate feedback text from an assignment image:
```python
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model = AutoModelForCausalLM.from_pretrained("Fan21/git_20")
processor = AutoProcessor.from_pretrained("Fan21/git_20")

image_path = 'Please enter the image address here'
image = Image.open(image_path)

# Optionally preview the image in a notebook:
# display(image)

# Preprocess the image into the pixel values expected by GIT.
pixel_values = processor(images=image, return_tensors="pt").pixel_values

# Generate the feedback text and decode it with the processor's tokenizer.
with torch.no_grad():
    outputs = model.generate(pixel_values=pixel_values, max_length=50)
answer = processor.batch_decode(outputs, skip_special_tokens=True)[0]
print(answer)
```
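
Since the model card invites researchers to fine-tune the model further, here is a minimal fine-tuning sketch. It is not an official recipe: the file names, feedback strings, and hyperparameters are hypothetical placeholders, and it assumes the standard GIT causal-LM setup in which the tokenized feedback text also serves as the labels.

```python
# Minimal fine-tuning sketch (illustrative only): the file names, feedback
# strings, and hyperparameters below are hypothetical placeholders.
import torch
from PIL import Image
from torch.optim import AdamW
from transformers import AutoModelForCausalLM, AutoProcessor

model = AutoModelForCausalLM.from_pretrained("Fan21/git_20")
processor = AutoProcessor.from_pretrained("Fan21/git_20")

# Hypothetical training pairs: assignment image + teacher feedback text.
pairs = [
    ("assignment_001.png", "Check the sign when distributing the negative."),
    ("assignment_002.png", "The setup is correct, but the final step drops a factor of 2."),
]

optimizer = AdamW(model.parameters(), lr=5e-5)
model.train()
for image_path, feedback in pairs:
    image = Image.open(image_path).convert("RGB")
    # The processor tokenizes the feedback and preprocesses the image together.
    inputs = processor(images=image, text=feedback, return_tensors="pt")
    # For causal language modeling, the feedback tokens double as the labels.
    outputs = model(
        input_ids=inputs.input_ids,
        attention_mask=inputs.attention_mask,
        pixel_values=inputs.pixel_values,
        labels=inputs.input_ids,
    )
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

In practice you would wrap the pairs in a DataLoader with padding and train for multiple epochs; this loop only illustrates the input/label layout that GIT expects.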