import streamlit as st
from utils.levels import complete_level, render_page, initialize_level
from utils.login import get_login, initialize_login
from utils.inference import query
import os
import time

initialize_login()
initialize_level()

LEVEL = 4
def infer(image):
    # Small pause so the UI has a moment to update before results render
    time.sleep(1)
    output = query(image)
    cols = st.columns(2)
    cols[0].image(image, use_column_width=True)
    with cols[1]:
        # Show each predicted emotion as a labeled progress bar
        for item in output:
            st.progress(item["score"], text=item["label"])
def step4_page():
    st.header("Trying It Out")
    st.markdown(
        """
### How Our Emotion Detection Application Works

Now that we have trained our emotion detection application, let's see how it works in action! Here's a simple explanation of how the application recognizes emotions:

1. **Looking at Faces**: When we use our emotion detection application, we can show it a picture of a face or use a camera to capture a real-time image. It's like giving our application a chance to see someone's expression.

2. **Observing the Features**: The application carefully looks at the face and pays attention to different parts, like the eyes, mouth, and eyebrows. It tries to understand the expressions by noticing how these parts look and how they are positioned. It's like the application is taking a close look at the face, just like we do when we try to understand someone's emotions.
"""
    )
    st.image(
        "https://camo.githubusercontent.com/3bb4e2eba7c8a91d71916496bc775e870222f19bb5098cb4bc514ed60078c1e7/68747470733a2f2f626c6f672e7161746573746c61622e636f6d2f77702d636f6e74656e742f75706c6f6164732f323032302f30312f4d4c5f6578616d706c652e6769663f7261773d74727565",
        use_column_width=True,
    )
    st.markdown(
        """
3. **Guessing the Emotion**: Based on what it observed, our application uses the knowledge it learned during training to make its best guess about the person's emotion. It remembers the patterns it saw before and tries to match them with the features it observed. It might think the person looks happy, sad, or maybe surprised!
"""
    )
    st.image(
        "https://miro.medium.com/v2/resize:fit:1358/1*KoHwRNZGrVrhdbye3BDEew.png",
        use_column_width=True,
    )
    st.markdown(
        """
4. **Providing a Result**: Finally, our emotion detection application tells us what emotion it thinks the person is feeling. It might say, "I think this person looks happy!" or "I think this person looks sad." It's like having a virtual friend who can give us their guess about someone's emotion.

By going through these steps, our emotion detection application can quickly analyze faces and give us an idea of how someone might be feeling. It's like having a special friend who can understand and guess emotions based on facial expressions!
"""
    )
    st.info(
        "Now that we know how our emotion detection application works, let's try it out!"
    )
    st.info("Select an image to analyze!")

    input_type = st.radio("Select the Input Type", ["Image", "Camera"])
    if input_type == "Camera":
        image = st.camera_input("Take a picture")
        byte_image = image.getvalue() if image else None
    else:
        image = st.file_uploader("Upload an image", type=["png", "jpg", "jpeg"])
        byte_image = image.read() if image else None

    # Save the captured/uploaded image into the user's session folder, then run inference
    try_img = os.path.join(".sessions", get_login()["username"], "try.jpg")
    if byte_image:
        with open(try_img, "wb") as f:
            f.write(byte_image)
        infer(try_img)

    st.info("Click on the button below to complete this level!")
    if st.button("Complete Level"):
        complete_level(LEVEL)


render_page(step4_page, LEVEL)
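
# `utils.inference.query` is imported above but not shown in this file. As a
# rough sketch only: `infer` relies on it returning a list of label/score
# dicts (the shape used by Hugging Face image-classification pipelines, with
# scores in [0, 1]). `_example_query` below is a hypothetical stand-in, not
# the real implementation, which would call an emotion-classification model.
def _example_query(image_path):
    # A real query() would send the image at `image_path` to a model endpoint
    # and return its predictions; here we return fixed illustrative values.
    return [
        {"label": "happy", "score": 0.82},
        {"label": "sad", "score": 0.11},
        {"label": "surprised", "score": 0.07},
    ]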