bluebalam committed
Commit 5b553bb • 1 Parent(s): 414fda8

adding title, description, and examples to the app.

Files changed (1)
  1. app.py +27 -6
app.py CHANGED
@@ -40,16 +40,37 @@ def recommend(txt):
     return recs_output
 
 
-def inputs():
-    pass
-
-
-title = "Interactive demo: paper-rec"
-description = "Demo that recommends you what recent papers in AI/ML to read next based on what you like."
+title = "Interactive demo: paper-rec"
+description = """What paper in ML/AI should I read next? It is difficult to choose from all great research publications
+published daily. This demo gives you a personalized selection of papers from the latest scientific contributions
+available in arXiv – https://arxiv.org/.
+
+You just input the title or abstract (or both) of paper(s) you liked in the past or you can also use keywords of topics
+of interest and get the top-10 article recommendations tailored to your taste.
+
+Enjoy!"""
+
+examples = ["""Attention Is All You Need – The dominant sequence transduction models are based on complex recurrent or
+convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder
+and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely
+on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation
+tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time
+to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing
+best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model
+establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small
+fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to
+other tasks by applying it successfully to English constituency parsing both with large and limited training data.""",
+            "GANs, Diffusion Models, Art"]
 
 iface = gr.Interface(fn=recommend,
-                     inputs=[Textbox(lines=10, placeholder="Titles and abstracts from papers you like", default="", label="Sample of what I like <3")],
+                     inputs=[Textbox(lines=10, placeholder="Titles and abstracts from papers you like", default="",
+                                     label="""Sample of what I like: title(s) or abstract(s) of papers you love or a set
+                                     of keywords about your interests (e.g., Transformers, GANs, Recommender Systems):
+                                     """)],
                      outputs="json",
-                     layout='vertical'
+                     layout='vertical',
+                     title=title,
+                     description=description,
+                     examples=examples
                      )
 iface.launch()
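
For readers skimming the hunk, below is a minimal, self-contained sketch of what app.py plausibly looks like after this commit. It assumes the pre-3.0 Gradio API the diff itself uses (gradio.inputs.Textbox with a default argument, and the layout keyword on Interface, both removed in Gradio 3.0). The recommend() body is a hypothetical stub, since the real recommender code lies outside this hunk, and the long description/examples strings are abbreviated.

# Minimal sketch of app.py after commit 5b553bb; assumptions noted in comments.
import gradio as gr
from gradio.inputs import Textbox  # pre-3.0 Gradio API, as used in the diff


def recommend(txt):
    # Hypothetical stub: the real function (outside this hunk) returns the
    # top-10 arXiv paper recommendations for the pasted titles/abstracts.
    recs_output = {"input sample": txt[:80], "recommendations": []}
    return recs_output


title = "Interactive demo: paper-rec"
description = "Personalized reading list of recent arXiv ML/AI papers."  # abbreviated
examples = ["Attention Is All You Need",  # abbreviated; full abstract is in the diff
            "GANs, Diffusion Models, Art"]

iface = gr.Interface(fn=recommend,
                     inputs=[Textbox(lines=10,
                                     placeholder="Titles and abstracts from papers you like",
                                     default="",
                                     label="Sample of what I like")],
                     outputs="json",
                     layout='vertical',  # accepted by Gradio < 3.0 only
                     title=title,
                     description=description,
                     examples=examples)
iface.launch()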