echarlaix (HF staff) committed on
Commit
0aa1075
1 Parent(s): 3a01f1b

rephrase export steps

Files changed (1): app.py +11 -8
app.py CHANGED
@@ -78,21 +78,24 @@ TITLE = """
     "
   >
   <h1 style="font-weight: 900; margin-bottom: 10px; margin-top: 10px;">
-    Export your Transformers and Diffusers model to OpenVINO with 🤗 Optimum Intel (experimental)
+    Export your model to OpenVINO with Optimum Intel
   </h1>
 </div>
 """
 
 DESCRIPTION = """
-This Space allows you to automatically export to the OpenVINO format various 🤗 Transformers and Diffusers PyTorch models hosted on the Hugging Face Hub.
+This Space allows you to automatically export your model to the OpenVINO format.
 
-Once exported, you will be able to load the resulting model using the [🤗 Optimum Intel](https://huggingface.co/docs/optimum/intel/inference).
-
-To export your model, the steps are as following:
-- Paste a read-access token from [https://huggingface.co/settings/tokens](https://huggingface.co/settings/tokens). Read access is enough given that we will open a PR against the source repo.
-- Input a model id from the Hub (for example: [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english))
-- Click "Export"
-- That’s it! You’ll get feedback if it works or not, and if it worked, you’ll get the URL of the opened PR 🔥
+To export your model you need:
+- A read-access token from [https://huggingface.co/settings/tokens](https://huggingface.co/settings/tokens).
+Read access is enough given that we will open a PR against the source repo.
+- A model id of the model you'd like to export (for example: [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english))
+- A [task](https://huggingface.co/docs/optimum/main/en/exporters/task_manager#pytorch) that will be used to load the model before exporting it. If set to "auto", the task will be automatically inferred.
 
+That's it ! 🔥
+
+After the model conversion, we will open a PR against the source repo.
+You will then be able to load the resulting model and run inference using [Optimum Intel](https://huggingface.co/docs/optimum/intel/inference).
 """
 
 with gr.Blocks() as demo: