SkalskiP committed on
Commit
ae5c0ef
1 Parent(s): 5ae5bca

markdown improvements

Files changed (1)
  1. app.py +20 -4
app.py CHANGED
@@ -15,10 +15,26 @@ from utils.sam import load_sam_model, run_sam_inference
 MARKDOWN = """
 # Florence2 + SAM2 🔥
 
-This demo integrates Florence2 and SAM2 models for detailed image captioning and object
-detection. Florence2 generates detailed captions that are then used to perform phrase
-grounding. The Segment Anything Model 2 (SAM2) converts these phrase-grounded boxes
-into masks.
+<div>
+<a href="https://github.com/facebookresearch/segment-anything-2">
+<img src="https://badges.aleen42.com/src/github.svg" alt="GitHub" style="display:inline-block;">
+</a>
+<a href="https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/how-to-segment-images-with-sam-2.ipynb">
+<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Colab" style="display:inline-block;">
+</a>
+<a href="https://blog.roboflow.com/what-is-segment-anything-2/">
+<img src="https://raw.githubusercontent.com/roboflow-ai/notebooks/main/assets/badges/roboflow-blogpost.svg" alt="Roboflow" style="display:inline-block;">
+</a>
+<a href="https://www.youtube.com/watch?v=Dv003fTyO-Y">
+<img src="https://badges.aleen42.com/src/youtube.svg" alt="YouTube" style="display:inline-block;">
+</a>
+</div>
+
+This demo integrates Florence2 and SAM2 by creating a two-stage inference pipeline. In
+the first stage, Florence2 performs tasks such as object detection, open-vocabulary
+object detection, image captioning, or phrase grounding. In the second stage, SAM2
+performs object segmentation on the image. **Video segmentation will be available
+soon.**
 """
 
 EXAMPLES = [
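
For context, a minimal sketch of the two-stage pipeline described in the new MARKDOWN text. `load_sam_model` and `run_sam_inference` match the `utils.sam` import visible in the hunk header, but their exact signatures, the `utils.florence` module, the `load_florence_model` / `run_florence_inference` helpers, and the `<OD>` task token are assumptions made for illustration, not the demo's confirmed API.

```python
# Hypothetical sketch, not the actual app.py implementation.
import torch
from PIL import Image

from utils.sam import load_sam_model, run_sam_inference          # names from the hunk header; signature assumed
from utils.florence import load_florence_model, run_florence_inference  # assumed module and helpers

DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load both models once at startup.
florence_model, florence_processor = load_florence_model(device=DEVICE)
sam_model = load_sam_model(device=DEVICE)

image = Image.open("example.jpg").convert("RGB")

# Stage 1: Florence2 produces boxes and labels (here: plain object detection).
_, detections = run_florence_inference(
    model=florence_model,
    processor=florence_processor,
    device=DEVICE,
    image=image,
    task="<OD>",
)

# Stage 2: SAM2 converts the Florence2 boxes into segmentation masks.
detections = run_sam_inference(sam_model, image, detections)
```

The same structure would apply to the other first-stage tasks mentioned in the description (open-vocabulary detection, captioning, phrase grounding): only the Florence2 task changes, while the SAM2 stage always consumes the resulting boxes.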