# Health Vision AI

The Health Vision AI is a Flask-based web application powered by machine learning models and a large language model (LLM) that helps predict various diseases from medical images. The app also lets users access research papers and case reports related to their predictions and ask the LLM questions about any disease.

## Project Overview

The Health Vision AI web application uses machine learning models to classify medical images into different disease categories. It supports three main disease prediction categories:

- Gastrointestinal Disease
- Chest CT Disease
- Chest X-ray Disease

Additionally, users can query the built-in LLM for medical advice or information, and the app can surface research papers and case reports related to the predicted disease.


## Tech Stack

- **Backend:** Flask (Python)
- **Frontend:** HTML, CSS, JavaScript
- **Machine Learning:** TensorFlow, PyTorch, OpenCV, Scikit-learn
- **LLM Integration:** Fine-tuned GPT-2 model (trained on a custom dataset)
- **Database:** None (all data is dynamically served)

## Features

- **Medical Image Predictions:**
    Predict diseases using models trained on Gastrointestinal, Chest CT, and Chest X-ray images.
- **LLM-powered Assistant:**
    Query the assistant about any disease or condition for a text-based answer.
- **Research Papers and Case Reports:**
    Dynamically load related research papers and case reports for the predicted disease.
- **Modern UI with Tabbed Navigation:**
    Clean, user-friendly interface with multiple tabs for disease categories and interactive elements for prediction results.

## Architecture

The project follows a **Model-View-Controller (MVC)** pattern.

- **Models:** Machine learning models for image classification.
- **Views:** HTML templates with dynamic JavaScript interactivity.
- **Controllers:** Flask routes for handling predictions and LLM queries.
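As a rough sketch of how these layers might be wired together in `app.py` (the file and template names follow the project structure shown below; the model-loading call is illustrative, not the project's actual code):

```python
from flask import Flask, render_template

app = Flask(__name__)

# Model layer: classifiers would be loaded once at startup so every
# request reuses them (the load call below is illustrative only):
# gastro_model = tf.keras.models.load_model("models/gastro_model.h5")

# View layer: the single tabbed page rendered from templates/index.html.
@app.route("/")
def index():
    return render_template("index.html")

# Controller layer: the prediction and LLM routes described in the
# Backend section are registered on this same app object.
```
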
## Frontend

**Tabs:**
The main interface consists of four tabs:

1. Gastrointestinal Disease Prediction
2. Chest CT Disease Prediction
3. Chest X-ray Disease Prediction
4. LLM Chat

Each tab contains:

- A form to upload an image and submit it for prediction.
- A results section for displaying the prediction.
- Dropdowns for research papers and case reports.

**User Interaction:**
- **File Upload:** Users can upload an image for disease prediction.
- **Prediction Result Display:** The model's prediction is displayed after analysis.
- **Research Links & Case Reports:** These links are loaded dynamically based on the prediction.
- **LLM Prompt Bar:** At the bottom, users can type a question to ask the assistant and get a response.

**Styling:**
The UI has a clean, modern design: dark-mode and light-themed backgrounds with continuously rotating disease-specific images give it a professional appearance, and hover effects on the research and case-report links improve interactivity.

## Backend
The backend of the project is powered by `Flask`. Below are the key elements:

**Routes:**

1. **Prediction Routes:**

   - `/predict_gastrointestinal`
   - `/predict_chest_ct`
   - `/predict_chest_xray`

   Each route handles a POST request with an uploaded image, passes it to the corresponding model, and returns a prediction.

2. **LLM Query Route:**

   - `/ask_llm`

   This route accepts a user query and sends it to the LLM API to retrieve a text-based answer.


**Handling Predictions**

For each prediction:

1. **Image Preprocessing:** Uploaded images are processed (resizing, normalization, etc.).
2. **Model Inference:** The processed image is passed to the corresponding machine learning model for prediction.
3. **Prediction Result:** The result is sent back to the frontend and displayed in the UI.
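The three steps above can be sketched for one of the routes as follows. The 224Γ—224 input size, the `run_model` helper, and the label list are illustrative assumptions, not the project's actual code:

```python
import io

import numpy as np
from flask import Flask, jsonify, request
from PIL import Image

app = Flask(__name__)

TARGET_SIZE = (224, 224)  # assumed model input size

# Hypothetical class labels, matching the categories in the Models section.
GASTRO_LABELS = ["Diverticulosis", "Neoplasm", "Peritonitis", "Ureters"]

def preprocess_image(image_bytes: bytes) -> np.ndarray:
    """Step 1: decode, resize, and normalize the upload into a batch."""
    img = Image.open(io.BytesIO(image_bytes)).convert("RGB")
    arr = np.asarray(img.resize(TARGET_SIZE), dtype=np.float32) / 255.0
    return arr[np.newaxis, ...]  # add batch axis -> shape (1, 224, 224, 3)

def run_model(batch: np.ndarray) -> int:
    """Step 2: placeholder for real inference; returns a class index."""
    raise NotImplementedError  # e.g. int(np.argmax(model.predict(batch)))

@app.route("/predict_gastrointestinal", methods=["POST"])
def predict_gastrointestinal():
    # Reject requests that arrive without an uploaded image.
    if "file" not in request.files:
        return jsonify({"error": "no file uploaded"}), 400
    batch = preprocess_image(request.files["file"].read())
    class_idx = run_model(batch)
    # Step 3: the JSON result is rendered by the frontend.
    return jsonify({"prediction": GASTRO_LABELS[class_idx]})
```

The other two prediction routes would follow the same pattern with their own models and label sets.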

## Models

1. **Gastrointestinal Model:**
- **Task:** Classify images into four categories: Diverticulosis, Neoplasm, Peritonitis, and Ureters.
- **Dataset:** Gastrointestinal endoscopic images.
- **Architecture:** Transfer learning with a pre-trained Swin Transformer.

2. **Chest CT Model:**
- **Task:** Classify Chest CT-Scan images into four categories: Adenocarcinoma, Large cell carcinoma, Squamous cell carcinoma, and Normal.
- **Dataset:** Chest CT images (split into training, validation, and test sets).
- **Architecture:** Transfer learning with a pre-trained Swin Transformer.

3. **Chest X-ray Model:**
- **Task:** Classify Chest X-rays into two categories: Pneumonia and Normal.
- **Dataset:** Chest X-ray dataset.
- **Architecture:** Transfer learning with a pre-trained Vision Transformer (ViT).

Each model outputs a prediction for its respective image, which is then used to populate the research links and case reports.
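One simple way to populate those links is a lookup keyed on the predicted label. The table below is a hypothetical illustration; the URLs are placeholders, not real references:

```python
# Hypothetical lookup table mapping predicted labels to curated links.
RESOURCES = {
    "Pneumonia": {
        "papers": ["https://example.org/pneumonia-review"],
        "case_reports": ["https://example.org/pneumonia-case-report"],
    },
}

def resources_for(prediction: str) -> dict:
    # Fall back to empty lists so the frontend dropdowns still render
    # for labels without curated material (e.g. "Normal").
    return RESOURCES.get(prediction, {"papers": [], "case_reports": []})
```
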
## LLM Integration

The application integrates with the LLaMA 3.1 API to provide a conversational interface where users can ask medical questions.

**Workflow:**

1. Users enter their query in the prompt bar.
2. The query is sent via a POST request to `/ask_llm`.
3. The backend forwards the request to the LLM API.
4. The API response is sent back to the frontend and displayed as an answer.
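A sketch of the `/ask_llm` controller, with the actual LLM call stubbed out; the `query_llm` helper is a hypothetical name, and the real backend would call the LLM API at that point:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def query_llm(prompt: str) -> str:
    """Placeholder for the real LLM API call."""
    raise NotImplementedError

@app.route("/ask_llm", methods=["POST"])
def ask_llm():
    payload = request.get_json(silent=True) or {}
    query = (payload.get("query") or "").strip()
    if not query:
        # Guard against empty prompts before spending an LLM call.
        return jsonify({"error": "empty query"}), 400
    return jsonify({"answer": query_llm(query)})
```
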
## Project Structure

    Health Vision AI/
    β”œβ”€β”€ static/
    β”‚   β”œβ”€β”€ styles.css        # Custom CSS for styling
    β”‚   └── images/           # Background images (optional)
    β”œβ”€β”€ templates/
    β”‚   └── index.html        # Main HTML file
    β”œβ”€β”€ models/
    β”‚   β”œβ”€β”€ gastro_model.h5    # Gastrointestinal model
    β”‚   β”œβ”€β”€ chest_ct_model.h5  # Chest CT model
    β”‚   β”œβ”€β”€ chest_xray_model.h5  # Chest X-ray model
    β”‚   └── LLM                # LLM model
    β”œβ”€β”€ app.py                # Flask application
    β”œβ”€β”€ requirements.txt      # Required Python packages
    └── README.md             # Project documentation

## Usage

**Image Prediction:**

- Navigate to the appropriate tab (Gastrointestinal, Chest CT, Chest X-ray).
- Upload an image and click Predict.
- The prediction will be displayed along with links to research papers and case reports.

**LLM Query:**

- Enter a question in the prompt bar at the bottom.
- Click Ask to get an answer from the AI assistant.
## Future Improvements

- **Enhanced LLM:** Improve the LLM integration to offer more advanced medical advice based on user input.
- **Database Integration:** Store past predictions and questions for reference.
- **User Accounts:** Implement user authentication for storing personal data and history.
- **Performance Optimizations:** Speed up model inference for large images using optimized backend processing.