Spaces: Running on CPU Upgrade
Update src/app.py
src/app.py CHANGED (+3 -17)
@@ -19,22 +19,7 @@ def get_results(model_name: str, library: str, options: list, access_token: str)
 with gr.Blocks() as demo:
     with gr.Column():
         gr.Markdown(
-            """
-
-            This tool will help you calculate how much vRAM is needed to train and perform big model inference
-            on a model hosted on the 🤗 Hugging Face Hub. The minimum recommended vRAM needed for a model
-            is denoted as the size of the "largest layer", and training of a model is roughly 4x its size (for Adam).
-
-            These calculations are accurate within a few percent at most, such as `bert-base-cased` being 413.68 MB and the calculator estimating 413.18 MB.
-
-            When performing inference, expect to add up to an additional 20% to this as found by [EleutherAI](https://blog.eleuther.ai/transformer-math/).
-            More tests will be performed in the future to get a more accurate benchmark for each model.
-
-            Currently this tool supports all models hosted that use `transformers` and `timm`.
-
-            To use this tool pass in the URL or model name of the model you want to calculate the memory usage for,
-            select which framework it originates from ("auto" will try and detect it from the model metadata), and
-            what precisions you want to use."""
+            "..."
         )
         out_text = gr.Markdown()
         out = gr.DataFrame(

@@ -62,9 +47,10 @@ with gr.Blocks() as demo:
         get_results,
         inputs=[inp, library, options, access_token],
         outputs=[out_text, out, post_to_hub],
+        api_name=False,
     )

-    post_to_hub.click(lambda: gr.Button.update(visible=False), outputs=post_to_hub).then(
+    post_to_hub.click(lambda: gr.Button.update(visible=False), outputs=post_to_hub, api_name=False).then(
         report_results, inputs=[inp, library, access_token]
     )
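The removed description encodes two rules of thumb: training with Adam needs roughly 4x the model's size in vRAM, and inference can add up to an additional 20% on top of the model size (per EleutherAI's transformer-math post). A quick sketch of that arithmetic, using the `bert-base-cased` figure quoted in the removed text (413.68 MB); these are rough multipliers from the description, not exact measurements:

```python
def rough_vram_mb(model_size_mb: float) -> dict:
    """Rule-of-thumb vRAM estimates from the tool's description.

    - inference: up to ~20% on top of the model size
    - training with Adam: roughly 4x the model size
    """
    return {
        "model": model_size_mb,
        "inference_max": round(model_size_mb * 1.20, 2),
        "training_adam": round(model_size_mb * 4, 2),
    }

# bert-base-cased, per the removed docstring
est = rough_vram_mb(413.68)
print(est)
```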