---
license: apache-2.0
tags:
  - image-classification
  - vision
widget:
  - src: >-
      https://huggingface.co/pamixsun/swinv2_tiny_for_glaucoma_classification/blob/main/example.jpg
    example_title: fundus image
---

# Model Card for swinv2_tiny_for_glaucoma_classification

This is a Swin Transformer V2 model fine-tuned on the REFUGE challenge dataset. It classifies a retinal fundus image as either glaucoma or non-glaucoma.

## Model Details

### Model Description

- **Developed by:** Xu Sun
- **Shared by:** Xu Sun
- **Model type:** Image classification
- **License:** Apache-2.0

### Model Sources

- **Repository:** [More Information Needed]
- **Paper:** [More Information Needed]
- **Demo:** [More Information Needed]

## Uses

You can use the raw model for glaucoma classification based on retinal fundus images.

## Bias, Risks, and Limitations

The model is trained and fine-tuned on retinal fundus images only, and is intended solely to distinguish glaucoma from non-glaucoma images. Therefore, make sure to feed only retinal fundus images into the model to obtain reasonable results.
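
As a minimal illustration only (not part of the model), the sketch below assumes an OpenCV-loaded image, as in the example further down, and rejects inputs that are obviously not color photographs. It cannot verify that an image actually shows the retinal fundus; that check remains the caller's responsibility.

```python
import cv2
import numpy as np


def basic_input_check(path: str) -> np.ndarray:
    """Illustrative sanity check before inference (an assumption, not part of the model).

    Only catches grossly invalid inputs: missing files, non-color images,
    or images far smaller than typical fundus photographs.
    """
    image = cv2.imread(path, cv2.IMREAD_UNCHANGED)
    if image is None:
        raise ValueError(f"Could not read image: {path}")
    if image.ndim != 3 or image.shape[2] != 3:
        raise ValueError(f"Expected a 3-channel color image, got shape {image.shape}.")
    if min(image.shape[:2]) < 224:  # heuristic threshold; the processor resizes anyway
        raise ValueError("Image is unusually small for a fundus photograph.")
    # Convert from OpenCV's BGR layout to RGB, as expected by the image processor.
    return cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
```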

## How to Get Started with the Model

Use the code below to get started with the model.

```python
import cv2
import torch

from transformers import AutoImageProcessor, Swinv2ForImageClassification

# Load the example fundus image and convert from OpenCV's BGR layout to RGB.
image = cv2.imread('./example.jpg')
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

processor = AutoImageProcessor.from_pretrained("pamixsun/swinv2_tiny_for_glaucoma_classification")
model = Swinv2ForImageClassification.from_pretrained("pamixsun/swinv2_tiny_for_glaucoma_classification")

inputs = processor(image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# The model predicts either glaucoma or non-glaucoma.
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```
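
Continuing from the snippet above, a softmax over the logits yields class probabilities in addition to the hard label; this is standard PyTorch usage rather than anything specific to this model.

```python
# Convert logits to per-class probabilities (sketch; reuses `logits` and `model` from above).
probs = torch.softmax(logits, dim=-1)[0]
for idx, p in enumerate(probs.tolist()):
    print(f"{model.config.id2label[idx]}: {p:.3f}")
```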

## Citation

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Model Card Contact