import streamlit as st
from streamlit_extras.switch_page_button import switch_page
st.title("RT-DETR")
st.success("""[Original tweet](https://twitter.com/mervenoyann/status/1807790959884665029) (July 1, 2024)""", icon="ℹ️")
st.markdown(""" """)
st.markdown("""Real-time DEtection Transformer (RT-DETR) landed in 🤗 Transformers with an Apache 2.0 license 🚀
Do DETRs beat YOLOs on real-time object detection? Keep reading 👀
""")
st.markdown(""" """)
st.video("pages/RT-DETR/video_1.mp4", format="video/mp4")
st.markdown(""" """)
st.markdown("""
Short answer: they do! 📖 [notebook](https://t.co/NNRpG9cAEa), 🔗 [models](https://t.co/ctwWQqNcEt), 🚀 [demo](https://t.co/VrmDDDjoNw)
YOLO models are known to be super fast for real-time computer vision, but they depend on non-maximum suppression (NMS) post-processing, which adds latency and makes end-to-end speed unstable 🥲
Transformer-based detectors, on the other hand, are not as computationally efficient 🥲
Isn't there something in between? Enter RT-DETR!
The authors combine a CNN backbone and an efficient hybrid encoder (mixing convolutions and attention across scales) with a transformer decoder ✅
""")
st.markdown(""" """)
st.image("pages/RT-DETR/image_1.jpg", use_column_width=True)
st.markdown(""" """)
st.markdown("""
In the paper, the authors also show that inference speed can be adjusted by using a different number of decoder layers at inference time, without any retraining.
They also run extensive ablation studies comparing different decoder configurations.
""")
st.markdown(""" """)
st.image("pages/RT-DETR/image_2.jpg", use_column_width=True)
st.markdown(""" """)
st.markdown("""
The authors find that the model beats the previous state of the art in both speed and accuracy 🤩
""")
st.markdown(""" """)
st.image("pages/RT-DETR/image_3.jpg", use_column_width=True)
st.markdown(""" """)
st.info("""
Resources:
[DETRs Beat YOLOs on Real-time Object Detection](https://arxiv.org/abs/2304.08069)
by Yian Zhao, Wenyu Lv, Shangliang Xu, Jinman Wei, Guanzhong Wang, Qingqing Dang, Yi Liu, Jie Chen (2023)
[GitHub](https://github.com/lyuwenyu/RT-DETR/)
[Hugging Face documentation](https://huggingface.co/docs/transformers/main/en/model_doc/rt_detr)""", icon="📚")
st.markdown(""" """)
st.markdown(""" """)
st.markdown(""" """)
col1, col2, col3 = st.columns(3)
with col1:
if st.button('Previous paper', use_container_width=True):
switch_page("4M-21")
with col2:
if st.button('Home', use_container_width=True):
switch_page("Home")
with col3:
if st.button('Next paper', use_container_width=True):
        switch_page("Llava-NeXT-Interleave")