sayakpaul (HF staff) committed
Commit
e548249
1 Parent(s): 0554219

fix: create_repo call.

Files changed (2)
  1. app.py +2 -2
  2. hub_utils/repo.py +2 -2
app.py CHANGED
@@ -11,9 +11,9 @@ This Space lets you convert KerasCV Stable Diffusion weights to a format compati
 
 * The Space downloads a couple of pre-trained weights and runs a dummy inference. Depending on the machine type, the entire process can take anywhere between 2 - 5 minutes.
 * Only Stable Diffusion (v1) is supported as of now. In particular, this checkpoint: [`"CompVis/stable-diffusion-v1-4"`](https://huggingface.co/CompVis/stable-diffusion-v1-4).
- * Only the text encoder and the UNet parameters converted since only these two elements are generally fine-tuned.
+ * Only the text encoder and UNet parameters are converted since only these two elements are generally fine-tuned.
 * [This Colab Notebook](https://colab.research.google.com/drive/1RYY077IQbAJldg8FkK8HSEpNILKHEwLb?usp=sharing) was used to develop the conversion utilities initially.
- * You can choose not to provide `text_encoder_weights` and `unet_weights` in case you don't have any fine-tuned weights. In that case, the original parameters of the respective models (text encoder and UNet) from KerasCV will be used.
+ * You can choose NOT to provide `text_encoder_weights` and `unet_weights` in case you don't have any fine-tuned weights. In that case, the original parameters of the respective models (text encoder and UNet) from KerasCV will be used.
 * You can provide only `text_encoder_weights` or `unet_weights` or both.
 * When providing the weights' links, ensure they're directly downloadable. Internally, the Space uses [`tf.keras.utils.get_file()`](https://www.tensorflow.org/api_docs/python/tf/keras/utils/get_file) to retrieve the weights locally.
 * If you don't provide `your_hf_token`, the converted pipeline won't be pushed.
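The bullets in the diff above require the weights' links to be directly downloadable before `tf.keras.utils.get_file()` fetches them. A minimal pre-flight sketch of such a check — the helper name and the `.h5` suffix test are assumptions for illustration, not part of the Space:

```python
from urllib.parse import urlparse


def looks_directly_downloadable(url: str) -> bool:
    """Hypothetical sanity check for a weights URL.

    tf.keras.utils.get_file() expects a plain HTTP(S) link that serves the
    file bytes directly, not an HTML landing page. A cheap heuristic is to
    require an http(s) scheme and a Keras-style `.h5` file suffix.
    """
    parsed = urlparse(url)
    return parsed.scheme in ("http", "https") and parsed.path.endswith(".h5")
```

A Google Drive "view" page, for example, would fail this check even though it points at a file.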
hub_utils/repo.py CHANGED
@@ -13,9 +13,9 @@ def push_to_hub(hf_token: str, push_dir: str, repo_prefix: None) -> str:
             if repo_prefix == ""
             else f"{user}/{repo_prefix}-{push_dir}"
         )
-        _ = create_repo(repo_id=repo_id, token=hf_token)
+        _ = create_repo(repo_id=repo_id, token=hf_token, exist_ok=True)
         url = hf_api.upload_folder(
-            folder_path=push_dir, repo_id=repo_id, exist_ok=True
+            folder_path=push_dir, repo_id=repo_id
         )
         return f"Model successfully pushed: [{url}]({url})"
     except Exception as e:
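The fix moves `exist_ok=True` from the `upload_folder()` call, which does not accept that keyword in `huggingface_hub`, to `create_repo()`, where it makes repo creation idempotent when the repo already exists. A minimal sketch of the surrounding repo-id logic — `build_repo_id` is a hypothetical helper mirroring the conditional expression in the hunk:

```python
def build_repo_id(user: str, push_dir: str, repo_prefix: str = "") -> str:
    """Mirror the naming in push_to_hub: '<user>/<dir>' with no prefix,
    otherwise '<user>/<prefix>-<dir>'."""
    if repo_prefix == "":
        return f"{user}/{push_dir}"
    return f"{user}/{repo_prefix}-{push_dir}"


# The fixed call sites then look like (token and hf_api elided,
# requires huggingface_hub):
#   create_repo(repo_id=repo_id, token=hf_token, exist_ok=True)
#   hf_api.upload_folder(folder_path=push_dir, repo_id=repo_id)
```

With `exist_ok=True`, rerunning the Space against an existing repo no longer raises on `create_repo`.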