import streamlit as st
from streamlit_extras.switch_page_button import switch_page
st.title("4M-21")
st.success("""[Original tweet](https://twitter.com/mervenoyann/status/1804138208814309626) (June 21, 2024)""", icon="ℹ️")
st.markdown(""" """)
st.markdown("""
EPFL and Apple just released 4M-21: a single any-to-any model that can do anything from text-to-image generation to generating depth masks! 🙀
Let's unpack 🧶
""")
st.markdown(""" """)
st.image("pages/4M-21/image_1.jpg", use_column_width=True)
st.markdown(""" """)
st.markdown("""4M is a multimodal training [framework](https://t.co/jztLublfSF) introduced by Apple and EPFL.
Resulting model takes image and text and output image and text 🤩
[Models](https://t.co/1LC0rAohEl) | [Demo](https://t.co/Ra9qbKcWeY)
""")
st.markdown(""" """)
st.video("pages/4M-21/video_1.mp4", format="video/mp4")
st.markdown(""" """)
st.markdown("""
This model consists of a transformer encoder and decoder, where the key to multimodality lies in the input and output data:
input and output tokens are decoded to generate bounding boxes, a generated image's pixels, captions and more!
""")
st.markdown(""" """)
st.image("pages/4M-21/image_2.jpg", use_column_width=True)
st.markdown(""" """)
st.markdown("""
This model also learned to generate Canny edge maps, SAM edges and other outputs for steerable text-to-image generation 🖼️
The authors only added image-to-all capabilities to the demo, but you can try this model for text-to-image generation as well ☺️
""")
st.markdown(""" """)
st.image("pages/4M-21/image_3.jpg", use_column_width=True)
st.markdown(""" """)
st.markdown("""
On the project page you can also see the model's text-to-image and steered generation capabilities, with the model's own outputs used as control masks!
""")
st.markdown(""" """)
st.video("pages/4M-21/video_2.mp4", format="video/mp4")
st.markdown(""" """)
st.info("""
Resources
[4M-21: An Any-to-Any Vision Model for Tens of Tasks and Modalities](https://arxiv.org/abs/2406.09406) by Roman Bachmann, Oğuzhan Fatih Kar, David Mizrahi, Ali Garjani, Mingfei Gao, David Griffiths, Jiaming Hu, Afshin Dehghan, Amir Zamir (2024)
[GitHub](https://github.com/apple/ml-4m/)""", icon="📚")
st.markdown(""" """)
st.markdown(""" """)
st.markdown(""" """)
col1, col2, col3 = st.columns(3)
with col1:
    if st.button('Previous paper', use_container_width=True):
        switch_page("Florence-2")
with col2:
    if st.button('Home', use_container_width=True):
        switch_page("Home")
with col3:
    if st.button('Next paper', use_container_width=True):
        switch_page("RT-DETR")