Zekun Wu committed
Commit • 1b9bc6d
Parent(s): 89d23ea
update

app.py CHANGED
@@ -5,29 +5,25 @@ st.set_page_config(
     page_icon="π",
 )
 
-st.title('
-st.write("
+st.title('Gender Bias Analysis in Text Generation')
+st.write("Welcome to the Gender Bias Analysis app. This application generates text using a GPT-2 model and compares the regard (perceived respect or opinion) in the generated texts for male and female prompts.")
 
 st.markdown(
     """
-    Description
+    ## Description
+    This demo showcases how language models can exhibit gender bias. We load a dataset of prompts associated with male and female American actors and generate continuations using a GPT-2 model. By analyzing the generated text, we can observe potential biases in the model's output. The regard (perceived respect or opinion) scores are computed and compared for both male and female continuations.
     """
 )
 
 # Sidebar content
-st.sidebar.title("
+st.sidebar.title("Gender Bias Analysis Demo")
 
-st.sidebar.subheader("
+st.sidebar.subheader("Instructions")
 st.sidebar.markdown(
     """
-
-
-
-
-st.sidebar.subheader("Demo2")
-st.sidebar.subheader("Instruction")
-st.sidebar.markdown(
-    """
-
+    1. Enter the password to access the demo.
+    2. The app will load the BOLD dataset and sample prompts for male and female actors.
+    3. It will generate text continuations using the GPT-2 model.
+    4. The regard scores will be computed and displayed for comparison.
     """
 )
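
The description and sidebar instructions added in this commit outline a four-step pipeline: load the BOLD dataset, sample prompts for male and female actors, generate continuations with GPT-2, and compare regard scores. Below is a minimal sketch of that pipeline using the Hugging Face datasets, transformers, and evaluate libraries; the dataset id AlexaAI/bold, the American_actors/American_actresses category names, and the sample size of 10 are illustrative assumptions, not taken from this commit (the password gate from step 1 is omitted).

# Hedged sketch of the pipeline the sidebar steps describe; assumed names
# are flagged in the comments below.
import random

import evaluate
from datasets import load_dataset
from transformers import pipeline

# 1. Load BOLD; "AlexaAI/bold" and the category names are assumptions.
bold = load_dataset("AlexaAI/bold", split="train")
male_prompts = [p for row in bold if row["category"] == "American_actors" for p in row["prompts"]]
female_prompts = [p for row in bold if row["category"] == "American_actresses" for p in row["prompts"]]

# 2. Sample a small, equal number of prompts per group (size is illustrative).
random.seed(0)
male_sample = random.sample(male_prompts, 10)
female_sample = random.sample(female_prompts, 10)

# 3. Generate continuations with GPT-2.
generator = pipeline("text-generation", model="gpt2")
male_texts = [out[0]["generated_text"] for out in generator(male_sample, max_new_tokens=30)]
female_texts = [out[0]["generated_text"] for out in generator(female_sample, max_new_tokens=30)]

# 4. Compute average regard per group and compare.
regard = evaluate.load("regard", module_type="measurement")
print("male:", regard.compute(data=male_texts, aggregation="average"))
print("female:", regard.compute(data=female_texts, aggregation="average"))

The regard measurement returns per-label scores (positive, negative, neutral, other); comparing the per-group averages is one way to surface the kind of gap the app's description refers to.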