---
title: FastSAM & CLIP Inference with Gradio
emoji: 🏃
colorFrom: red
colorTo: blue
sdk: gradio
sdk_version: 3.47.1
app_file: app.py
pinned: false
license: mit
---

**GitHub Link:** https://github.com/RaviNaik/ERA-SESSION19/blob/main/README.md

### Tasks:
1. :heavy_check_mark: Build a CLIP or FastSAM application on Gradio/Spaces using open-source models.
2. :heavy_check_mark: Share the links to the GitHub repo and the Space on the assignment page.

### FastSAM (a fast variant of the Segment Anything Model, SAM)
**Architecture:**
![image](https://github.com/RaviNaik/ERA-SESSION19/assets/23289802/044d2f07-f8de-478f-b189-219dc0b0b52a)
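FastSAM returns one binary mask per detected object, and the Gradio demo renders them as coloured overlays on the input image. A minimal sketch of that overlay step is below; the commented inference lines assume the `ultralytics` FastSAM API and the `FastSAM-s.pt` checkpoint, which may differ from what this Space actually loads.

```python
import numpy as np

def overlay_masks(image, masks, alpha=0.5, seed=0):
    """Blend boolean segmentation masks onto an RGB uint8 image.

    `masks` is an (N, H, W) boolean array, e.g. FastSAM's per-object
    masks after thresholding; each mask is tinted with a random colour.
    """
    rng = np.random.default_rng(seed)
    out = image.astype(np.float32)
    for mask in masks:
        colour = rng.integers(0, 256, size=3).astype(np.float32)
        out[mask] = (1 - alpha) * out[mask] + alpha * colour
    return out.astype(np.uint8)

# With the ultralytics package (an assumption -- the Space may load
# FastSAM differently), inference would look roughly like:
#   from ultralytics import FastSAM
#   model = FastSAM("FastSAM-s.pt")
#   results = model(pil_image, retina_masks=True, conf=0.4, iou=0.9)
#   masks = results[0].masks.data.cpu().numpy() > 0.5
#   segmented = overlay_masks(np.array(pil_image), masks)
```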

#### Gradio Results
![image](https://github.com/RaviNaik/ERA-SESSION19/blob/main/sam_gradio.png?raw=true)

### CLIP (Contrastive Language–Image Pre-training)
**Architecture:**
![image](https://github.com/RaviNaik/ERA-SESSION19/assets/23289802/03758a42-8464-4663-849f-b90c8cf0c03f)
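CLIP scores an image against a set of text prompts by L2-normalising both embeddings, scaling their cosine similarities by a learned temperature (about 100 in the released checkpoints), and taking a softmax over the prompts. The scoring step is sketched below; the commented lines assume the Hugging Face `transformers` API and the public `openai/clip-vit-base-patch32` checkpoint, which may not be the one this Space uses.

```python
import numpy as np

def clip_zero_shot_probs(image_emb, text_embs, logit_scale=100.0):
    """Zero-shot label probabilities from CLIP embeddings.

    `image_emb` is a (D,) vector, `text_embs` is (N, D) -- one row per
    candidate text prompt. Returns an (N,) probability distribution.
    """
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = logit_scale * (txt @ img)   # scaled cosine similarity per prompt
    e = np.exp(logits - logits.max())    # numerically stable softmax
    return e / e.sum()

# With Hugging Face transformers, the equivalent end-to-end call is:
#   from transformers import CLIPModel, CLIPProcessor
#   model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
#   processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
#   inputs = processor(text=labels, images=image,
#                      return_tensors="pt", padding=True)
#   probs = model(**inputs).logits_per_image.softmax(dim=1)
```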

#### Gradio Results
![image](https://github.com/RaviNaik/ERA-SESSION19/blob/main/clip_gradio.png?raw=true)