nicholasKluge committed
Commit
ca1f04e
1 Parent(s): 33cc93c

Update app.py

Files changed (1):
  app.py +3 -3
app.py CHANGED
@@ -64,15 +64,15 @@ Aira is intended only for academic research. For more information, read our [mod
 
 ## How does this demo work?
 
-For this demo, we use the lighter model we have trained from the OPT series (`Aira-OPT-125M`). This demo employs a [`reward model`](https://huggingface.co/nicholasKluge/RewardModel) and a [`toxicity model`](https://huggingface.co/nicholasKluge/ToxicityModel) to evaluate the score of each candidate's response, considering its alignment with the user's message and its level of toxicity. The generation function arranges the candidate responses in order of their reward scores and eliminates any responses deemed toxic or harmful. Subsequently, the generation function returns the candidate response with the highest score that surpasses the safety threshold, or a default message if no safe candidates are identified.
+For this demo, we use the lighter model we have trained from the OPT series (Aira-OPT-125M). This demo employs a [reward model](https://huggingface.co/nicholasKluge/RewardModel) and a [toxicity model](https://huggingface.co/nicholasKluge/ToxicityModel) to evaluate the score of each candidate's response, considering its alignment with the user's message and its level of toxicity. The generation function arranges the candidate responses in order of their reward scores and eliminates any responses deemed toxic or harmful. Subsequently, the generation function returns the candidate response with the highest score that surpasses the safety threshold, or a default message if no safe candidates are identified.
 """
 
 search_intro ="""
 <h2><center>Explore Aira's Dataset 🔍</h2></center>
 
-Here, users can look for instances in Aira's fine-tuning dataset where a given prompt or completion resembles an instruction in its dataset. We use the Term Frequency-Inverse Document Frequency (TF-IDF) representation and cosine similarity to enable a fast search to explore the dataset. The pre-trained TF-IDF vectorizers and corresponding TF-IDF matrices are available in this repository. Below, we present the top five most similar instances in Aira's dataset for every search query.
+Here, users can look for instances in Aira's fine-tuning dataset. We use the Term Frequency-Inverse Document Frequency (TF-IDF) representation and cosine similarity to enable a fast search to explore the dataset. The pre-trained TF-IDF vectorizers and corresponding TF-IDF matrices are available in this repository. Below, we present the top ten most similar instances in Aira's dataset for every search query.
 
-Users can use this to explore how the model interpolates on the fine-tuning data and if it can follow instructions that are out of the fine-tuning distribution.
+Users can use this tool to explore how the model interpolates on the fine-tuning data and if it can follow instructions that are out of the fine-tuning distribution.
 """
 
 disclaimer = """
 