Below is the documentation for the HfApi class, a Python wrapper for the Hugging Face Hub's API.
All methods of HfApi are also accessible directly from the package's root. Both approaches are described in detail below.
Using the root methods is simpler, but the HfApi class gives you more flexibility.
In particular, you can pass a token that will be reused across all HTTP calls. This differs from huggingface-cli login
or login() in that the token is not persisted on the machine.
You can also point to a different endpoint or configure a custom user agent.

from huggingface_hub import HfApi, list_models

# Use the root method
models = list_models()

# Or configure an HfApi client
hf_api = HfApi(
    endpoint="https://huggingface.co",  # Can be a private Hub endpoint.
    token="hf_xxx",  # Token is not persisted on the machine.
)
models = hf_api.list_models()
HfApi( endpoint: Optional[str] = None token: Union[str, bool, None] = None library_name: Optional[str] = None library_version: Optional[str] = None user_agent: Union[Dict, str, None] = None headers: Optional[Dict[str, str]] = None )
accept_access_request( repo_id: str user: str repo_type: Optional[str] = None token: Union[bool, str, None] = None )
Parameters
repo_id (str) — The id of the repo to accept access request for.
user (str) — The username of the user which access request should be accepted.
repo_type (str, optional) — The type of the repo to accept access request for. Must be one of model, dataset or space. Defaults to model.
token (Union[bool, str, None], optional) — A valid user access token. Defaults to the locally saved token. Pass False to disable authentication.
Raises
HTTPError — HTTP 400 if the repo is not gated.
HTTPError — HTTP 403 if you only have read-only access to the repo. This can be the case if you don't have write or admin role in the organization the repo belongs to or if you passed a read token.
HTTPError — HTTP 404 if the user does not exist on the Hub.
HTTPError — HTTP 404 if the user access request cannot be found.
HTTPError — HTTP 404 if the user access request is already in the accepted list.
Accept an access request from a user for a given gated repo.
Once the request is accepted, the user will be able to download any file of the repo and access the community tab. If the approval mode is automatic, you don’t have to accept requests manually. An accepted request can be cancelled or rejected at any time using cancel_access_request() and reject_access_request().
For more info about gated repos, see https://huggingface.co/docs/hub/models-gated.
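A minimal usage sketch, assuming you administer a gated repo and a user has a pending request (the repo id and username below are placeholders):
>>> from huggingface_hub import list_pending_access_requests, accept_access_request
>>> # Inspect the pending requests, then accept one of them
>>> requests = list_pending_access_requests("my-org/my-gated-model")
>>> accept_access_request("my-org/my-gated-model", user="some-user")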
add_collection_item( collection_slug: str item_id: str item_type: CollectionItemType_T note: Optional[str] = None exists_ok: bool = False token: Union[bool, str, None] = None )
Parameters
collection_slug (str) — Slug of the collection to update. Example: "TheBloke/recent-models-64f9a55bb3115b4f513ec026".
item_id (str) — ID of the item to add to the collection. It can be the ID of a repo on the Hub (e.g. "facebook/bart-large-mnli") or a paper id (e.g. "2307.09288").
item_type (str) — Type of the item to add. Can be one of "model", "dataset", "space" or "paper".
note (str, optional) — A note to attach to the item in the collection. The maximum size for a note is 500 characters.
exists_ok (bool, optional) — If True, do not raise an error if item already exists.
token (Union[bool, str, None], optional) — A valid user access token. Defaults to the locally saved token. Pass False to disable authentication.
Raises
HTTPError — HTTP 403 if you only have read-only access to the repo. This can be the case if you don't have write or admin role in the organization the repo belongs to or if you passed a read token.
HTTPError — HTTP 404 if the item you try to add to the collection does not exist on the Hub.
HTTPError — HTTP 409 if the item you try to add to the collection is already in the collection (and exists_ok=False).
Add an item to a collection on the Hub.
Returns: Collection
Example:
>>> from huggingface_hub import add_collection_item
>>> collection = add_collection_item(
... collection_slug="davanstrien/climate-64f99dc2a5067f6b65531bab",
... item_id="pierre-loic/climate-news-articles",
... item_type="dataset"
... )
>>> collection.items[-1].item_id
"pierre-loic/climate-news-articles"
# ^item got added to the collection on last position
# Add item with a note
>>> add_collection_item(
... collection_slug="davanstrien/climate-64f99dc2a5067f6b65531bab",
... item_id="datasets/climate_fever",
... item_type="dataset",
... note="This dataset adopts the FEVER methodology that consists of 1,535 real-world claims regarding climate-change collected on the internet."
... )
(...)
add_space_secret( repo_id: str key: str value: str description: Optional[str] = None token: Union[bool, str, None] = None )
Parameters
repo_id (str) — ID of the repo to update. Example: "bigcode/in-the-stack".
key (str) — Secret key. Example: "GITHUB_API_KEY"
value (str) — Secret value. Example: "your_github_api_key".
description (str, optional) — Secret description. Example: "Github API key to access the Github API".
token (Union[bool, str, None], optional) — A valid user access token. Defaults to the locally saved token. Pass False to disable authentication.
Adds or updates a secret in a Space.
Secrets allow you to set secret keys or tokens in a Space without hardcoding them. For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets.
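A minimal sketch (the Space id, key and value below are placeholders):
>>> from huggingface_hub import add_space_secret
>>> add_space_secret("user/my-space", key="HF_TOKEN", value="hf_xxx", description="Token used by the Space at runtime")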
add_space_variable( repo_id: str key: str value: str description: Optional[str] = None token: Union[bool, str, None] = None )
Parameters
repo_id (str) — ID of the repo to update. Example: "bigcode/in-the-stack".
key (str) — Variable key. Example: "MODEL_REPO_ID"
value (str) — Variable value. Example: "the_model_repo_id".
description (str, optional) — Description of the variable. Example: "Model Repo ID of the implemented model".
token (Union[bool, str, None], optional) — A valid user access token. Defaults to the locally saved token. Pass False to disable authentication.
Adds or updates a variable in a Space.
Variables allow you to set environment variables in a Space without hardcoding them. For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets-and-environment-variables
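A minimal sketch (the Space id and values below are placeholders):
>>> from huggingface_hub import add_space_variable
>>> add_space_variable("user/my-space", key="MODEL_REPO_ID", value="user/my-model")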
cancel_access_request( repo_id: str user: str repo_type: Optional[str] = None token: Union[bool, str, None] = None )
Parameters
repo_id (str) — The id of the repo to cancel access request for.
user (str) — The username of the user which access request should be cancelled.
repo_type (str, optional) — The type of the repo to cancel access request for. Must be one of model, dataset or space. Defaults to model.
token (Union[bool, str, None], optional) — A valid user access token. Defaults to the locally saved token. Pass False to disable authentication.
Raises
HTTPError — HTTP 400 if the repo is not gated.
HTTPError — HTTP 403 if you only have read-only access to the repo. This can be the case if you don't have write or admin role in the organization the repo belongs to or if you passed a read token.
HTTPError — HTTP 404 if the user does not exist on the Hub.
HTTPError — HTTP 404 if the user access request cannot be found.
HTTPError — HTTP 404 if the user access request is already in the pending list.
Cancel an access request from a user for a given gated repo.
A cancelled request will go back to the pending list and the user will lose access to the repo.
For more info about gated repos, see https://huggingface.co/docs/hub/models-gated.
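A minimal sketch, assuming a previously accepted request exists (repo id and username are placeholders):
>>> from huggingface_hub import cancel_access_request
>>> cancel_access_request("my-org/my-gated-model", user="some-user")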
change_discussion_status( repo_id: str discussion_num: int new_status: Literal['open', 'closed'] token: Union[bool, str, None] = None comment: Optional[str] = None repo_type: Optional[str] = None ) → DiscussionStatusChange
Parameters
repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
discussion_num (int) — The number of the Discussion or Pull Request. Must be a strictly positive integer.
new_status (str) — The new status for the discussion, either "open" or "closed".
comment (str, optional) — An optional comment to post with the status change.
repo_type (str, optional) — Set to "dataset" or "space" if uploading to a dataset or space, None or "model" if uploading to a model. Default is None.
token (Union[bool, str, None], optional) — A valid user access token. Defaults to the locally saved token. Pass False to disable authentication.
Returns
DiscussionStatusChange: the status change event
Closes or re-opens a Discussion or Pull Request.
Examples:
>>> comment = "Closing this discussion, the typo has been fixed."
>>> HfApi().change_discussion_status(
...     repo_id="username/repo_name",
...     discussion_num=34,
...     new_status="closed",
...     comment=comment,
... )
# DiscussionStatusChange(id='deadbeef0000000', type='status-change', ...)
Raises the following errors:
HTTPError if the HuggingFace API returned an error
ValueError if some parameter value is invalid
RepositoryNotFoundError if the repository cannot be found. This may be because it doesn't exist, or because it is set to private and you do not have access.
comment_discussion( repo_id: str discussion_num: int comment: str token: Union[bool, str, None] = None repo_type: Optional[str] = None ) → DiscussionComment
Parameters
repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
discussion_num (int) — The number of the Discussion or Pull Request. Must be a strictly positive integer.
comment (str) — The content of the comment to create. Comments support markdown formatting.
repo_type (str, optional) — Set to "dataset" or "space" if uploading to a dataset or space, None or "model" if uploading to a model. Default is None.
token (Union[bool, str, None], optional) — A valid user access token. Defaults to the locally saved token. Pass False to disable authentication.
Returns
DiscussionComment: the newly created comment
Creates a new comment on the given Discussion.
Examples:
>>> comment = """
... Hello @otheruser!
...
... # This is a title
...
... **This is bold**, *this is italic* and ~this is strikethrough~
... And [this](http://url) is a link
... """
>>> HfApi().comment_discussion(
... repo_id="username/repo_name",
...     discussion_num=34,
... comment=comment
... )
# DiscussionComment(id='deadbeef0000000', type='comment', ...)
Raises the following errors:
HTTPError if the HuggingFace API returned an error
ValueError if some parameter value is invalid
RepositoryNotFoundError if the repository cannot be found. This may be because it doesn't exist, or because it is set to private and you do not have access.
create_branch( repo_id: str branch: str revision: Optional[str] = None token: Union[bool, str, None] = None repo_type: Optional[str] = None exist_ok: bool = False )
Parameters
repo_id (str) — The repository in which the branch will be created. Example: "user/my-cool-model".
branch (str) — The name of the branch to create.
revision (str, optional) — The git revision to create the branch from. It can be a branch name or the OID/SHA of a commit, as a hexadecimal string. Defaults to the head of the "main" branch.
token (Union[bool, str, None], optional) — A valid user access token. Defaults to the locally saved token. Pass False to disable authentication.
repo_type (str, optional) — Set to "dataset" or "space" if creating a branch on a dataset or space, None or "model" if creating a branch on a model. Default is None.
exist_ok (bool, optional, defaults to False) — If True, do not raise an error if branch already exists.
Raises
RepositoryNotFoundError or BadRequestError or HfHubHTTPError
BadRequestError — If an invalid reference is passed for the branch, e.g. refs/pr/5 or 'refs/foo/bar'.
HfHubHTTPError — If the branch already exists on the repo and exist_ok is set to False.
Create a new branch for a repo on the Hub, starting from the specified revision (defaults to main).
To find a revision suiting your needs, you can use list_repo_refs() or list_repo_commits().
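A minimal sketch (repo id, branch name and revision are placeholders):
>>> from huggingface_hub import create_branch
>>> create_branch("user/my-cool-model", branch="experiment", exist_ok=True)
>>> # Branch off a specific commit instead of the head of "main"
>>> create_branch("user/my-cool-model", branch="from-sha", revision="6c0e6080953db56375760c0471a8c5f2929baf11", exist_ok=True)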
create_collection( title: str namespace: Optional[str] = None description: Optional[str] = None private: bool = False exists_ok: bool = False token: Union[bool, str, None] = None )
Parameters
title (str) — Title of the collection to create. Example: "Recent models".
namespace (str, optional) — Namespace of the collection to create (username or org). Will default to the owner name.
description (str, optional) — Description of the collection to create.
private (bool, optional) — Whether the collection should be private or not. Defaults to False (i.e. public collection).
exists_ok (bool, optional) — If True, do not raise an error if collection already exists.
token (Union[bool, str, None], optional) — A valid user access token. Defaults to the locally saved token. Pass False to disable authentication.
Create a new Collection on the Hub.
Returns: Collection
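A minimal sketch; the title and description are placeholders and the printed slug is only illustrative:
>>> from huggingface_hub import create_collection
>>> collection = create_collection(title="Recent models", description="Models I am watching this month")
>>> collection.slug
'username/recent-models-64f9a55bb3115b4f513ec026'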
create_commit( repo_id: str operations: Iterable[CommitOperation] commit_message: str commit_description: Optional[str] = None token: Union[str, bool, None] = None repo_type: Optional[str] = None revision: Optional[str] = None create_pr: Optional[bool] = None num_threads: int = 5 parent_commit: Optional[str] = None run_as_future: bool = False ) → CommitInfo or Future
Parameters
repo_id (str) — The repository in which the commit will be created, for example: "username/custom_transformers"
operations (Iterable of CommitOperation()) — An iterable of operations to include in the commit. Operation objects will be mutated to include information relative to the upload. Do not reuse the same objects for multiple commits.
commit_message (str) — The summary (first line) of the commit that will be created.
commit_description (str, optional) — The description of the commit that will be created.
token (Union[str, bool, None], optional) — A valid user access token. Defaults to the locally saved token. Pass False to disable authentication.
repo_type (str, optional) — Set to "dataset" or "space" if uploading to a dataset or space, None or "model" if uploading to a model. Default is None.
revision (str, optional) — The git revision to commit from. Defaults to the head of the "main" branch.
create_pr (boolean, optional) — Whether or not to create a Pull Request with that commit. Defaults to False. If revision is not set, PR is opened against the "main" branch. If revision is set and is a branch, PR is opened against this branch. If revision is set and is not a branch name (example: a commit oid), a RevisionNotFoundError is returned by the server.
num_threads (int, optional) — Number of concurrent threads for uploading files. Defaults to 5. Setting it to 2 means at most 2 files will be uploaded concurrently.
parent_commit (str, optional) — The OID / SHA of the parent commit, as a hexadecimal string. Shorthands (7 first characters) are also supported. If specified and create_pr is False, the commit will fail if revision does not point to parent_commit. If specified and create_pr is True, the pull request will be created from parent_commit. Specifying parent_commit ensures the repo has not changed before committing the changes, and can be especially useful if the repo is updated / committed to concurrently.
run_as_future (bool, optional) — Whether or not to run this method in the background. Background jobs are run sequentially without blocking the main thread. Passing run_as_future=True will return a Future object. Defaults to False.
Returns
CommitInfo or Future
Instance of CommitInfo containing information about the newly created commit (commit hash, commit url, pr url, commit message, …). If run_as_future=True is passed, returns a Future object which will contain the result when executed.
Raises
ValueError or RepositoryNotFoundError
ValueError — If commit message is empty.
ValueError — If parent commit is not a valid commit OID.
ValueError — If a README.md file with an invalid metadata section is committed. In this case, the commit will fail early, before trying to upload any file.
ValueError — If create_pr is True and revision is neither None nor "main".
Creates a commit in the given repo, deleting & uploading files as needed.
The input list of CommitOperation will be mutated during the commit process. Do not reuse the same objects for multiple commits.
create_commit assumes that the repo already exists on the Hub. If you get a Client error 404, please make sure you are authenticated and that repo_id and repo_type are set correctly. If the repo does not exist, create it first using create_repo().
create_commit is limited to 25k LFS files and a 1GB payload for regular files.
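A minimal sketch combining an upload and a deletion in a single commit (repo id and file paths are placeholders):
>>> from huggingface_hub import HfApi, CommitOperationAdd, CommitOperationDelete
>>> api = HfApi()
>>> operations = [
...     CommitOperationAdd(path_in_repo="weights.bin", path_or_fileobj="./local/weights.bin"),
...     CommitOperationDelete(path_in_repo="old_weights.bin"),
... ]
>>> api.create_commit(
...     repo_id="username/my-model",
...     operations=operations,
...     commit_message="Replace old weights",
... )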
create_commits_on_pr( repo_id: str addition_commits: List[List[CommitOperationAdd]] deletion_commits: List[List[CommitOperationDelete]] commit_message: str commit_description: Optional[str] = None token: Union[str, bool, None] = None repo_type: Optional[str] = None merge_pr: bool = True num_threads: int = 5 verbose: bool = False ) → str
Parameters
repo_id (str) — The repository in which the commits will be pushed. Example: "username/my-cool-model".
addition_commits (List of List of CommitOperationAdd) — A list containing lists of CommitOperationAdd. Each sublist will result in a commit on the PR.
deletion_commits (List of List of CommitOperationDelete) — A list containing lists of CommitOperationDelete. Each sublist will result in a commit on the PR. Deletion commits are pushed before addition commits.
commit_message (str) — The summary (first line) of the commit that will be created. Will also be the title of the PR.
commit_description (str, optional) — The description of the commit that will be created. The description will be added to the PR.
token (Union[str, bool, None], optional) — A valid user access token. Defaults to the locally saved token. Pass False to disable authentication.
repo_type (str, optional) — Set to "dataset" or "space" if uploading to a dataset or space, None or "model" if uploading to a model. Default is None.
merge_pr (bool) — If set to True, the Pull Request is merged at the end of the process. Defaults to True.
num_threads (int, optional) — Number of concurrent threads for uploading files. Defaults to 5.
verbose (bool) — If set to True, the process will run in verbose mode, i.e. print information about the ongoing tasks. Defaults to False.
Returns
str
URL to the created PR.
Raises
MultiCommitException — If an unexpected issue occurs in the process: empty commits, unexpected commits in a PR, unexpected PR description, etc.
Push changes to the Hub in multiple commits.
Commits are pushed to a draft PR branch. If the upload fails or gets interrupted, it can be resumed. Progress is tracked in the PR description. At the end of the process, the PR is set as open and the title is updated to match the initial commit message. If merge_pr=True is passed, the PR is merged automatically.
All deletion commits are pushed first, followed by the addition commits. The order of the commits is not guaranteed as we might implement parallel commits in the future. Make sure you are not updating the same file several times.
create_commits_on_pr is experimental. Its API and behavior is subject to change in the future without prior notice.
create_commits_on_pr assumes that the repo already exists on the Hub. If you get a Client error 404, please make sure you are authenticated and that repo_id and repo_type are set correctly. If the repo does not exist, create it first using create_repo().
Example:
>>> from huggingface_hub import HfApi, plan_multi_commits, CommitOperationAdd, CommitOperationDelete
>>> addition_commits, deletion_commits = plan_multi_commits(
... operations=[
... CommitOperationAdd(...),
... CommitOperationAdd(...),
... CommitOperationDelete(...),
... CommitOperationDelete(...),
... CommitOperationAdd(...),
... ],
... )
>>> HfApi().create_commits_on_pr(
... repo_id="my-cool-model",
... addition_commits=addition_commits,
... deletion_commits=deletion_commits,
... (...)
... verbose=True,
... )
create_discussion( repo_id: str title: str token: Union[bool, str, None] = None description: Optional[str] = None repo_type: Optional[str] = None pull_request: bool = False )
Parameters
repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
title (str) — The title of the discussion. It can be up to 200 characters long, and must be at least 3 characters long. Leading and trailing whitespaces will be stripped.
token (Union[bool, str, None], optional) — A valid user access token. Defaults to the locally saved token. Pass False to disable authentication.
description (str, optional) — An optional description for the Pull Request. Defaults to "Discussion opened with the huggingface_hub Python library"
pull_request (bool, optional) — Whether to create a Pull Request or discussion. If True, creates a Pull Request. If False, creates a discussion. Defaults to False.
repo_type (str, optional) — Set to "dataset" or "space" if uploading to a dataset or space, None or "model" if uploading to a model. Default is None.
Creates a Discussion or Pull Request.
Pull Requests created programmatically will be in "draft" status.
Creating a Pull Request with changes can also be done at once with HfApi.create_commit().
Returns: DiscussionWithDetails
Raises the following errors:
HTTPError if the HuggingFace API returned an error
ValueError if some parameter value is invalid
RepositoryNotFoundError if the repository cannot be found. This may be because it doesn't exist, or because it is set to private and you do not have access.
create_inference_endpoint( name: str repository: str framework: str accelerator: str instance_size: str instance_type: str region: str vendor: str account_id: Optional[str] = None min_replica: int = 0 max_replica: int = 1 revision: Optional[str] = None task: Optional[str] = None custom_image: Optional[Dict] = None type: InferenceEndpointType = <InferenceEndpointType.PROTECTED: 'protected'> namespace: Optional[str] = None token: Union[bool, str, None] = None ) → InferenceEndpoint
Parameters
name (str) — The unique name for the new Inference Endpoint.
repository (str) — The name of the model repository associated with the Inference Endpoint (e.g. "gpt2").
framework (str) — The machine learning framework used for the model (e.g. "custom").
accelerator (str) — The hardware accelerator to be used for inference (e.g. "cpu").
instance_size (str) — The size or type of the instance to be used for hosting the model (e.g. "large").
instance_type (str) — The cloud instance type where the Inference Endpoint will be deployed (e.g. "c6i").
region (str) — The cloud region in which the Inference Endpoint will be created (e.g. "us-east-1").
vendor (str) — The cloud provider or vendor where the Inference Endpoint will be hosted (e.g. "aws").
account_id (str, optional) — The account ID used to link a VPC to a private Inference Endpoint (if applicable).
min_replica (int, optional) — The minimum number of replicas (instances) to keep running for the Inference Endpoint. Defaults to 0.
max_replica (int, optional) — The maximum number of replicas (instances) to scale to for the Inference Endpoint. Defaults to 1.
revision (str, optional) — The specific model revision to deploy on the Inference Endpoint (e.g. "6c0e6080953db56375760c0471a8c5f2929baf11").
task (str, optional) — The task on which to deploy the model (e.g. "text-classification").
custom_image (Dict, optional) — A custom Docker image to use for the Inference Endpoint. This is useful if you want to deploy an Inference Endpoint running on the text-generation-inference (TGI) framework (see examples).
type (InferenceEndpointType, optional) — The type of the Inference Endpoint, which can be "protected" (default), "public" or "private".
namespace (str, optional) — The namespace where the Inference Endpoint will be created. Defaults to the current user's namespace.
token (Union[bool, str, None], optional) — A valid user access token. Defaults to the locally saved token. Pass False to disable authentication.
Returns
InferenceEndpoint
information about the new Inference Endpoint.
Create a new Inference Endpoint.
Example:
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> endpoint = api.create_inference_endpoint(
... "my-endpoint-name",
... repository="gpt2",
... framework="pytorch",
... task="text-generation",
... accelerator="cpu",
... vendor="aws",
... region="us-east-1",
... type="protected",
... instance_size="medium",
... instance_type="c6i",
... )
>>> endpoint
InferenceEndpoint(name='my-endpoint-name', status="pending",...)
# Run inference on the endpoint
>>> endpoint.client.text_generation(...)
"..."
# Start an Inference Endpoint running Zephyr-7b-beta on TGI
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> endpoint = api.create_inference_endpoint(
... "aws-zephyr-7b-beta-0486",
... repository="HuggingFaceH4/zephyr-7b-beta",
... framework="pytorch",
... task="text-generation",
... accelerator="gpu",
... vendor="aws",
... region="us-east-1",
... type="protected",
... instance_size="medium",
... instance_type="g5.2xlarge",
... custom_image={
... "health_route": "/health",
... "env": {
... "MAX_BATCH_PREFILL_TOKENS": "2048",
... "MAX_INPUT_LENGTH": "1024",
... "MAX_TOTAL_TOKENS": "1512",
... "MODEL_ID": "/repository"
... },
... "url": "ghcr.io/huggingface/text-generation-inference:1.1.0",
... },
... )
create_pull_request( repo_id: str title: str token: Union[bool, str, None] = None description: Optional[str] = None repo_type: Optional[str] = None )
Parameters
repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
title (str) — The title of the discussion. It can be up to 200 characters long, and must be at least 3 characters long. Leading and trailing whitespaces will be stripped.
token (Union[bool, str, None], optional) — A valid user access token. Defaults to the locally saved token. Pass False to disable authentication.
description (str, optional) — An optional description for the Pull Request. Defaults to "Discussion opened with the huggingface_hub Python library"
repo_type (str, optional) — Set to "dataset" or "space" if uploading to a dataset or space, None or "model" if uploading to a model. Default is None.
Creates a Pull Request. Pull Requests created programmatically will be in "draft" status.
Creating a Pull Request with changes can also be done at once with HfApi.create_commit();
This is a wrapper around HfApi.create_discussion().
Returns: DiscussionWithDetails
Raises the following errors:
HTTPError if the HuggingFace API returned an error
ValueError if some parameter value is invalid
RepositoryNotFoundError if the repository cannot be found. This may be because it doesn't exist, or because it is set to private and you do not have access.
create_repo( repo_id: str token: Union[str, bool, None] = None private: bool = False repo_type: Optional[str] = None exist_ok: bool = False space_sdk: Optional[str] = None space_hardware: Optional[SpaceHardware] = None space_storage: Optional[SpaceStorage] = None space_sleep_time: Optional[int] = None space_secrets: Optional[List[Dict[str, str]]] = None space_variables: Optional[List[Dict[str, str]]] = None ) → RepoUrl
Parameters
repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
token (Union[str, bool, None], optional) — A valid user access token. Defaults to the locally saved token. Pass False to disable authentication.
private (bool, optional, defaults to False) — Whether the model repo should be private.
repo_type (str, optional) — Set to "dataset" or "space" if uploading to a dataset or space, None or "model" if uploading to a model. Default is None.
exist_ok (bool, optional, defaults to False) — If True, do not raise an error if repo already exists.
space_sdk (str, optional) — Choice of SDK to use if repo_type is "space". Can be "streamlit", "gradio", "docker", or "static".
space_hardware (SpaceHardware or str, optional) — Choice of Hardware if repo_type is "space". See SpaceHardware for a complete list.
space_storage (SpaceStorage or str, optional) — Choice of persistent storage tier. Example: "small". See SpaceStorage for a complete list.
space_sleep_time (int, optional) — Number of seconds of inactivity to wait before a Space is put to sleep. Set to -1 if you don't want your Space to sleep (default behavior for upgraded hardware). For free hardware, you can't configure the sleep time (value is fixed to 48 hours of inactivity). See https://huggingface.co/docs/hub/spaces-gpus#sleep-time for more details.
space_secrets (List[Dict[str, str]], optional) — A list of secret keys to set in your Space. Each item is in the form {"key": ..., "value": ..., "description": ...} where description is optional. For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets.
space_variables (List[Dict[str, str]], optional) — A list of public environment variables to set in your Space. Each item is in the form {"key": ..., "value": ..., "description": ...} where description is optional. For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets-and-environment-variables.
Returns
RepoUrl
URL to the newly created repo. Value is a subclass of str containing attributes like endpoint, repo_type and repo_id.
Create an empty repo on the HuggingFace Hub.
create_tag( repo_id: str tag: str tag_message: Optional[str] = None revision: Optional[str] = None token: Union[bool, str, None] = None repo_type: Optional[str] = None exist_ok: bool = False )
Parameters
repo_id (str) — The repository in which a commit will be tagged. Example: "user/my-cool-model".
tag (str) — The name of the tag to create.
tag_message (str, optional) — The description of the tag to create.
revision (str, optional) — The git revision to tag. It can be a branch name or the OID/SHA of a commit, as a hexadecimal string. Shorthands (7 first characters) are also supported. Defaults to the head of the "main" branch.
token (Union[bool, str, None], optional) — A valid user access token. Defaults to the locally saved token. Pass False to disable authentication.
repo_type (str, optional) — Set to "dataset" or "space" if tagging a dataset or space, None or "model" if tagging a model. Default is None.
exist_ok (bool, optional, defaults to False) — If True, do not raise an error if tag already exists.
Raises
RepositoryNotFoundError or RevisionNotFoundError or HfHubHTTPError
HfHubHTTPError — If the tag already exists on the repo and exist_ok is set to False.
Tag a given commit of a repo on the Hub.
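A minimal sketch (repo id, tag name and message are placeholders):
>>> from huggingface_hub import create_tag
>>> # Tag the current head of "main"
>>> create_tag("user/my-cool-model", tag="v1.0", tag_message="First stable release", exist_ok=True)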
dataset_info( repo_id: str revision: Optional[str] = None timeout: Optional[float] = None files_metadata: bool = False token: Union[bool, str, None] = None ) → hf_api.DatasetInfo
Parameters
repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
revision (str, optional) — The revision of the dataset repository from which to get the information.
timeout (float, optional) — Whether to set a timeout for the request to the Hub.
files_metadata (bool, optional) — Whether or not to retrieve metadata for files in the repository (size, LFS metadata, etc). Defaults to False.
token (Union[bool, str, None], optional) — A valid user access token. Defaults to the locally saved token. Pass False to disable authentication.
Returns
hf_api.DatasetInfo
The dataset repository information.
Get info on one specific dataset on huggingface.co.
Dataset can be private if you pass an acceptable token.
Raises the following errors:
RepositoryNotFoundError if the repository cannot be found. This may be because it doesn't exist, or because it is set to private and you do not have access.
delete_branch( repo_id: str branch: str token: Union[bool, str, None] = None repo_type: Optional[str] = None )
Parameters
repo_id (str) — The repository in which a branch will be deleted. Example: "user/my-cool-model".
branch (str) — The name of the branch to delete.
token (Union[bool, str, None], optional) — A valid user access token. Defaults to the locally saved token. Pass False to disable authentication.
repo_type (str, optional) — Set to "dataset" or "space" if deleting a branch on a dataset or space, None or "model" if deleting a branch on a model. Default is None.
Raises
HfHubHTTPError — If trying to delete a protected branch. Ex: main cannot be deleted.
Delete a branch from a repo on the Hub.
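A minimal sketch cleaning up a temporary branch (repo id and branch name are placeholders):
>>> from huggingface_hub import create_branch, delete_branch
>>> create_branch("user/my-cool-model", branch="temp-experiment", exist_ok=True)
>>> delete_branch("user/my-cool-model", branch="temp-experiment")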
delete_collection( collection_slug: str missing_ok: bool = False token: Union[bool, str, None] = None )
Parameters
collection_slug (str) — Slug of the collection to delete. Example: "TheBloke/recent-models-64f9a55bb3115b4f513ec026".
missing_ok (bool, optional) — If True, do not raise an error if collection doesn't exist.
token (Union[bool, str, None], optional) — A valid user access token. Defaults to the locally saved token. Pass False to disable authentication.
Delete a collection on the Hub.
Example:
>>> from huggingface_hub import delete_collection
>>> collection = delete_collection("username/useless-collection-64f9a55bb3115b4f513ec026", missing_ok=True)
This is a non-revertible action. A deleted collection cannot be restored.
delete_collection_item( collection_slug: str item_object_id: str missing_ok: bool = False token: Union[bool, str, None] = None )
Parameters
collection_slug (str) — Slug of the collection to update. Example: "TheBloke/recent-models-64f9a55bb3115b4f513ec026".
item_object_id (str) — ID of the item in the collection. This is not the id of the item on the Hub (repo_id or paper id). It must be retrieved from a CollectionItem object. Example: collection.items[0]._id.
missing_ok (bool, optional) — If True, do not raise an error if item doesn't exist.
token (Union[bool, str, None], optional) — A valid user access token. Defaults to the locally saved token. Pass False to disable authentication.
Delete an item from a collection.
Example:
>>> from huggingface_hub import get_collection, delete_collection_item
# Get collection first
>>> collection = get_collection("TheBloke/recent-models-64f9a55bb3115b4f513ec026")
# Delete item based on its ID
>>> delete_collection_item(
... collection_slug="TheBloke/recent-models-64f9a55bb3115b4f513ec026",
... item_object_id=collection.items[-1].item_object_id,
... )
delete_file( path_in_repo: str repo_id: str token: Union[str, bool, None] = None repo_type: Optional[str] = None revision: Optional[str] = None commit_message: Optional[str] = None commit_description: Optional[str] = None create_pr: Optional[bool] = None parent_commit: Optional[str] = None )
Parameters
path_in_repo (str) — Relative filepath in the repo, for example: "checkpoints/1fec34a/weights.bin"
repo_id (str) — The repository from which the file will be deleted, for example: "username/custom_transformers"
token (Union[str, bool, None], optional) — A valid user access token. Defaults to the locally saved token. Pass False to disable authentication.
repo_type (str, optional) — Set to "dataset" or "space" if the file is in a dataset or space, None or "model" if in a model. Default is None.
revision (str, optional) — The git revision to commit from. Defaults to the head of the "main" branch.
commit_message (str, optional) — The summary / title / first line of the generated commit. Defaults to f"Delete {path_in_repo} with huggingface_hub".
commit_description (str, optional) — The description of the generated commit.
create_pr (boolean, optional) — Whether or not to create a Pull Request with that commit. Defaults to False. If revision is not set, PR is opened against the "main" branch. If revision is set and is a branch, PR is opened against this branch. If revision is set and is not a branch name (example: a commit oid), a RevisionNotFoundError is returned by the server.
parent_commit (str, optional) — The OID / SHA of the parent commit, as a hexadecimal string. Shorthands (7 first characters) are also supported. If specified and create_pr is False, the commit will fail if revision does not point to parent_commit. If specified and create_pr is True, the pull request will be created from parent_commit. Specifying parent_commit ensures the repo has not changed before committing the changes, and can be especially useful if the repo is updated / committed to concurrently.
Deletes a file in the given repo.
Raises the following errors:
HTTPError if the HuggingFace API returned an error
ValueError if some parameter value is invalid
RepositoryNotFoundError if the repository cannot be found. This may be because it doesn't exist, or because it is set to private and you do not have access.
delete_folder( path_in_repo: str repo_id: str token: Union[bool, str, None] = None repo_type: Optional[str] = None revision: Optional[str] = None commit_message: Optional[str] = None commit_description: Optional[str] = None create_pr: Optional[bool] = None parent_commit: Optional[str] = None )
Parameters
path_in_repo (str) — Relative folder path in the repo, for example: "checkpoints/1fec34a".
repo_id (str) — The repository from which the folder will be deleted, for example: "username/custom_transformers"
token (Union[bool, str, None], optional) — A valid user access token. Defaults to the locally saved token. Pass False to disable authentication.
repo_type (str, optional) — Set to "dataset" or "space" if the folder is in a dataset or space, None or "model" if in a model. Default is None.
revision (str, optional) — The git revision to commit from. Defaults to the head of the "main" branch.
commit_message (str, optional) — The summary / title / first line of the generated commit. Defaults to f"Delete folder {path_in_repo} with huggingface_hub".
commit_description (str, optional) — The description of the generated commit.
create_pr (boolean, optional) — Whether or not to create a Pull Request with that commit. Defaults to False. If revision is not set, PR is opened against the "main" branch. If revision is set and is a branch, PR is opened against this branch. If revision is set and is not a branch name (example: a commit oid), a RevisionNotFoundError is returned by the server.
parent_commit (str, optional) — The OID / SHA of the parent commit, as a hexadecimal string. Shorthands (7 first characters) are also supported. If specified and create_pr is False, the commit will fail if revision does not point to parent_commit. If specified and create_pr is True, the pull request will be created from parent_commit. Specifying parent_commit ensures the repo has not changed before committing the changes, and can be especially useful if the repo is updated / committed to concurrently.
Deletes a folder in the given repo.
Simple wrapper around create_commit() method.
delete_inference_endpoint( name: str namespace: Optional[str] = None token: Union[bool, str, None] = None )
Parameters
name (str) — The name of the Inference Endpoint to delete.
namespace (str, optional) — The namespace in which the Inference Endpoint is located. Defaults to the current user.
token (Union[bool, str, None], optional) — A valid user access token. Defaults to the locally saved token. Pass False to disable authentication.
Delete an Inference Endpoint.
This operation is not reversible. If you don’t want to be charged for an Inference Endpoint, it is preferable to pause it with pause_inference_endpoint() or scale it to zero with scale_to_zero_inference_endpoint().
For convenience, you can also delete an Inference Endpoint using InferenceEndpoint.delete().
delete_repo( repo_id: str token: Union[str, bool, None] = None repo_type: Optional[str] = None missing_ok: bool = False )
Parameters
repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
token (Union[str, bool, None], optional) — A valid user access token. Defaults to the locally saved token. Pass False to disable authentication.
repo_type (str, optional) — Set to "dataset" or "space" if uploading to a dataset or space, None or "model" if uploading to a model.
missing_ok (bool, optional, defaults to False) — If True, do not raise an error if repo does not exist.
Raises
RepositoryNotFoundError — If the repository to delete cannot be found and missing_ok is set to False (default).
Delete a repo from the HuggingFace Hub. CAUTION: this is irreversible.
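A minimal sketch (the repo id is a placeholder; missing_ok avoids an error if the repo was already deleted):
>>> from huggingface_hub import delete_repo
>>> delete_repo("username/scratch-model", missing_ok=True)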
delete_space_secret( repo_id: str key: str token: Union[bool, str, None] = None )
Parameters
repo_id (str) — ID of the repo to update. Example: "bigcode/in-the-stack".
key (str) — Secret key. Example: "GITHUB_API_KEY".
token (Union[bool, str, None], optional) — A valid user access token. Defaults to the locally saved token. Pass False to disable authentication.
Deletes a secret from a Space.
Secrets allow you to set secret keys or tokens in a Space without hardcoding them. For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets.
delete_space_storage( repo_id: str token: Union[bool, str, None] = None ) → SpaceRuntime
Parameters
repo_id (str) — ID of the Space to update. Example: "HuggingFaceH4/open_llm_leaderboard".
token (Union[bool, str, None], optional) — A valid user access token. Defaults to the locally saved token. Pass False to disable authentication.
Returns
SpaceRuntime
Runtime information about a Space including Space stage and hardware.
Raises
BadRequestError — If space has no persistent storage.
Delete persistent storage for a Space.
delete_space_variable( repo_id: str key: str token: Union[bool, str, None] = None )
Parameters
repo_id (str) — ID of the repo to update. Example: "bigcode/in-the-stack".
key (str) — Variable key. Example: "MODEL_REPO_ID"
token (Union[bool, str, None], optional) — A valid user access token. Defaults to the locally saved token. Pass False to disable authentication.
Deletes a variable from a Space.
Variables allow you to set environment variables in a Space without hardcoding them. For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets-and-environment-variables
delete_tag( repo_id: str tag: str token: Union[bool, str, None] = None repo_type: Optional[str] = None )
Parameters
repo_id (str) — The repository in which a tag will be deleted. Example: "user/my-cool-model".
tag (str) — The name of the tag to delete.
token (Union[bool, str, None], optional) — A valid user access token. Defaults to the locally saved token. Pass False to disable authentication.
repo_type (str, optional) — Set to "dataset" or "space" if tagging a dataset or space, None or "model" if tagging a model. Default is None.
Raises
Delete a tag from a repo on the Hub.
duplicate_space( from_id: str to_id: Optional[str] = None private: Optional[bool] = None token: Union[bool, str, None] = None exist_ok: bool = False hardware: Optional[SpaceHardware] = None storage: Optional[SpaceStorage] = None sleep_time: Optional[int] = None secrets: Optional[List[Dict[str, str]]] = None variables: Optional[List[Dict[str, str]]] = None ) → RepoUrl
Parameters
from_id (str) — ID of the Space to duplicate. Example: "pharma/CLIP-Interrogator".
to_id (str, optional) — ID of the new Space. Example: "dog/CLIP-Interrogator". If not provided, the new Space will have the same name as the original Space, but in your account.
private (bool, optional) — Whether the new Space should be private or not. Defaults to the same privacy as the original Space.
token (Union[bool, str, None], optional) — A valid user access token. Defaults to the locally saved token. Pass False to disable authentication.
exist_ok (bool, optional, defaults to False) — If True, do not raise an error if repo already exists.
hardware (SpaceHardware or str, optional) — Choice of Hardware. Example: "t4-medium". See SpaceHardware for a complete list.
storage (SpaceStorage or str, optional) — Choice of persistent storage tier. Example: "small". See SpaceStorage for a complete list.
sleep_time (int, optional) — Number of seconds of inactivity to wait before a Space is put to sleep. Set to -1 if you don't want your Space to sleep (default behavior for upgraded hardware). For free hardware, you can't configure the sleep time (value is fixed to 48 hours of inactivity). See https://huggingface.co/docs/hub/spaces-gpus#sleep-time for more details.
secrets (List[Dict[str, str]], optional) — A list of secret keys to set in your Space. Each item is in the form {"key": ..., "value": ..., "description": ...} where description is optional. For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets.
variables (List[Dict[str, str]], optional) — A list of public environment variables to set in your Space. Each item is in the form {"key": ..., "value": ..., "description": ...} where description is optional. For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets-and-environment-variables.
Returns
RepoUrl
URL to the newly created repo. Value is a subclass of str containing attributes like endpoint, repo_type and repo_id.
Raises
HTTPError — if the HuggingFace API returned an error
RepositoryNotFoundError — if one of from_id or to_id cannot be found. This may be because it doesn't exist, or because it is set to private and you do not have access.
Duplicate a Space.
Programmatically duplicate a Space. The new Space will be created in your account and will be in the same state as the original Space (running or paused). You can duplicate a Space no matter the current state of a Space.
Example:
>>> from huggingface_hub import duplicate_space
# Duplicate a Space to your account
>>> duplicate_space("multimodalart/dreambooth-training")
RepoUrl('https://huggingface.co/spaces/nateraw/dreambooth-training',...)
# Can set custom destination id and visibility flag.
>>> duplicate_space("multimodalart/dreambooth-training", to_id="my-dreambooth", private=True)
RepoUrl('https://huggingface.co/spaces/nateraw/my-dreambooth',...)
edit_discussion_comment( repo_id: str discussion_num: int comment_id: str new_content: str token: Union[bool, str, None] = None repo_type: Optional[str] = None ) → DiscussionComment
Parameters
repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
discussion_num (int) — The number of the Discussion or Pull Request. Must be a strictly positive integer.
comment_id (str) — The ID of the comment to edit.
new_content (str) — The new content of the comment. Comments support markdown formatting.
repo_type (str, optional) — Set to "dataset" or "space" if uploading to a dataset or space, None or "model" if uploading to a model. Default is None.
token (Union[bool, str, None], optional) — A valid user access token. Defaults to the locally saved token. Pass False to disable authentication.
Returns
DiscussionComment: the edited comment
Edits a comment on a Discussion / Pull Request.
Raises the following errors:
HTTPError if the HuggingFace API returned an error
ValueError if some parameter value is invalid
RepositoryNotFoundError if the repository cannot be found. This may be because it doesn't exist, or because it is set to private and you do not have access.
file_exists( repo_id: str filename: str repo_type: Optional[str] = None revision: Optional[str] = None token: Union[str, bool, None] = None )
Parameters
repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
filename (str) — The name of the file to check, for example: "config.json"
repo_type (str, optional) — Set to "dataset" or "space" if getting repository info from a dataset or a space, None or "model" if getting repository info from a model. Default is None.
revision (str, optional) — The revision of the repository from which to get the information. Defaults to "main" branch.
token (Union[str, bool, None], optional) — A valid user access token. Defaults to the locally saved token. Pass False to disable authentication.
Checks if a file exists in a repository on the Hugging Face Hub.
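A minimal sketch; the returned values are illustrative:
>>> from huggingface_hub import file_exists
>>> file_exists("bigcode/starcoder", "config.json")
True
>>> file_exists("bigcode/starcoder", "not-a-file")
False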
get_collection( collection_slug: str token: Union[bool, str, None] = None )
Parameters
collection_slug (str) — Slug of the collection on the Hub. Example: "TheBloke/recent-models-64f9a55bb3115b4f513ec026".
token (Union[bool, str, None], optional) — A valid user access token. Defaults to the locally saved token. Pass False to disable authentication.
Gets information about a Collection on the Hub.
Returns: Collection
Example:
>>> from huggingface_hub import get_collection
>>> collection = get_collection("TheBloke/recent-models-64f9a55bb3115b4f513ec026")
>>> collection.title
'Recent models'
>>> len(collection.items)
37
>>> collection.items[0]
CollectionItem(
item_object_id='651446103cd773a050bf64c2',
item_id='TheBloke/U-Amethyst-20B-AWQ',
item_type='model',
position=88,
note=None
)
get_dataset_tags( )
List all valid dataset tags as a nested namespace object.
get_discussion_details( repo_id: str discussion_num: int repo_type: Optional[str] = None token: Union[bool, str, None] = None )
Parameters
repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
discussion_num (int) — The number of the Discussion or Pull Request. Must be a strictly positive integer.
repo_type (str, optional) — Set to "dataset" or "space" if uploading to a dataset or space, None or "model" if uploading to a model. Default is None.
token (Union[bool, str, None], optional) — A valid user access token. Defaults to the locally saved token. Pass False to disable authentication.
Fetches a Discussion's / Pull Request's details from the Hub.
Returns: DiscussionWithDetails
Raises the following errors:
HTTPError if the HuggingFace API returned an error
ValueError if some parameter value is invalid
RepositoryNotFoundError if the repository cannot be found. This may be because it doesn't exist, or because it is set to private and you do not have access.
get_full_repo_name( model_id: str organization: Optional[str] = None token: Union[bool, str, None] = None ) → str
Parameters
model_id (str) — The name of the model.
organization (str, optional) — If passed, the repository name will be in the organization namespace instead of the user namespace.
token (Union[bool, str, None], optional) — A valid user access token. Defaults to the locally saved token. Pass False to disable authentication.
Returns
str
The repository name in the user's namespace ({username}/{model_id}) if no organization is passed, and under the organization namespace ({organization}/{model_id}) otherwise.
Returns the repository name for a given model ID and optional organization.
get_hf_file_metadata( url: str token: Union[bool, str, None] = None proxies: Optional[Dict] = None timeout: Optional[float] = 10 )
Parameters
url (str) — File url, for example returned by hf_hub_url().
token (Union[bool, str, None], optional) — A valid user access token. Defaults to the locally saved token. Pass False to disable authentication.
proxies (dict, optional) — Dictionary mapping protocol to the URL of the proxy passed to requests.request.
timeout (float, optional, defaults to 10) — How many seconds to wait for the server to send metadata before giving up.
Fetch metadata of a file versioned on the Hub for a given url.
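A minimal sketch combining hf_hub_url() and get_hf_file_metadata():
>>> from huggingface_hub import hf_hub_url, get_hf_file_metadata
>>> url = hf_hub_url("gpt2", filename="config.json")
>>> metadata = get_hf_file_metadata(url)
>>> metadata.commit_hash, metadata.size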
get_inference_endpoint( name: str namespace: Optional[str] = None token: Union[bool, str, None] = None ) → InferenceEndpoint
Parameters
name (str) — The name of the Inference Endpoint to retrieve information about.
namespace (str, optional) — The namespace in which the Inference Endpoint is located. Defaults to the current user.
token (Union[bool, str, None], optional) — A valid user access token. Defaults to the locally saved token. Pass False to disable authentication.
Returns
InferenceEndpoint
information about the requested Inference Endpoint.
Get information about an Inference Endpoint.
Example:
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> endpoint = api.get_inference_endpoint("my-text-to-image")
>>> endpoint
InferenceEndpoint(name='my-text-to-image', ...)
# Get status
>>> endpoint.status
'running'
>>> endpoint.url
'https://my-text-to-image.region.vendor.endpoints.huggingface.cloud'
# Run inference
>>> endpoint.client.text_to_image(...)
get_model_tags( )
List all valid model tags as a nested namespace object.
get_paths_info( repo_id: str paths: Union[List[str], str] expand: bool = False revision: Optional[str] = None repo_type: Optional[str] = None token: Union[str, bool, None] = None ) → List[Union[RepoFile, RepoFolder]]
Parameters
repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
paths (Union[List[str], str], optional) — The paths to get information about. If a path does not exist, it is ignored without raising an exception.
expand (bool, optional, defaults to False) — Whether to fetch more information about the paths (e.g. last commit and files' security scan results). This operation is more expensive for the server so only 50 results are returned per page (instead of 1000). As pagination is implemented in huggingface_hub, this is transparent for you except for the time it takes to get the results.
revision (str, optional) — The revision of the repository from which to get the information. Defaults to "main" branch.
repo_type (str, optional) — The type of the repository from which to get the information ("model", "dataset" or "space"). Defaults to "model".
token (Union[str, bool, None], optional) — A valid user access token. Defaults to the locally saved token. Pass False to disable authentication.
Returns
List[Union[RepoFile, RepoFolder]]
The information about the paths, as a list of RepoFile and RepoFolder objects.
Raises
Get information about a repo's paths.
Example:
>>> from huggingface_hub import get_paths_info
>>> paths_info = get_paths_info("allenai/c4", ["README.md", "en"], repo_type="dataset")
>>> paths_info
[
RepoFile(path='README.md', size=2379, blob_id='f84cb4c97182890fc1dbdeaf1a6a468fd27b4fff', lfs=None, last_commit=None, security=None),
RepoFolder(path='en', tree_id='dc943c4c40f53d02b31ced1defa7e5f438d5862e', last_commit=None)
]
get_repo_discussions( repo_id: str author: Optional[str] = None discussion_type: Optional[DiscussionTypeFilter] = None discussion_status: Optional[DiscussionStatusFilter] = None repo_type: Optional[str] = None token: Union[bool, str, None] = None ) → Iterator[Discussion]
Parameters
repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
author (str, optional) — Pass a value to filter by discussion author. None means no filter. Default is None.
discussion_type (str, optional) — Set to "pull_request" to fetch only pull requests, "discussion" to fetch only discussions. Set to "all" or None to fetch both. Default is None.
discussion_status (str, optional) — Set to "open" (respectively "closed") to fetch only open (respectively closed) discussions. Set to "all" or None to fetch both. Default is None.
repo_type (str, optional) — Set to "dataset" or "space" if fetching from a dataset or space, None or "model" if fetching from a model. Default is None.
token (Union[bool, str, None], optional) — A valid user access token. Defaults to the locally saved token. Pass False to disable authentication.
Returns
Iterator[Discussion]
An iterator of Discussion objects.
Fetches Discussions and Pull Requests for the given repo.
Example:
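A minimal sketch iterating over open discussions (the repo id is a placeholder):
>>> from huggingface_hub import get_repo_discussions
>>> for discussion in get_repo_discussions(repo_id="username/repo_name", discussion_status="open"):
...     print(discussion.num, discussion.title)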
get_safetensors_metadata( repo_id: str repo_type: Optional[str] = None revision: Optional[str] = None token: Union[bool, str, None] = None ) → SafetensorsRepoMetadata
Parameters
repo_id (str) — A user or an organization name and a repo name separated by a /.
repo_type (str, optional) — Set to "dataset" or "space" if the file is in a dataset or space, None or "model" if in a model. Default is None.
revision (str, optional) — The git revision to fetch the file from. Can be a branch name, a tag, or a commit hash. Defaults to the head of the "main" branch.
token (Union[bool, str, None], optional) — A valid user access token. Defaults to the locally saved token. Pass False to disable authentication.
Returns
SafetensorsRepoMetadata
information related to safetensors repo.
Raises
NotASafetensorsRepoError — if the repo is not a safetensors repo i.e. doesn't have either a model.safetensors or a model.safetensors.index.json file.
SafetensorsParsingError — if a safetensors file header couldn't be parsed correctly.
Parse metadata for a safetensors repo on the Hub.
We first check if the repo has a single safetensors file or a sharded safetensors repo. If it’s a single safetensors file, we parse the metadata from this file. If it’s a sharded safetensors repo, we parse the metadata from the index file and then parse the metadata from each shard.
To parse metadata from a single safetensors file, use parse_safetensors_file_metadata().
For more details regarding the safetensors format, check out https://huggingface.co/docs/safetensors/index#format.
Example:
# Parse repo with single weights file
>>> metadata = get_safetensors_metadata("bigscience/bloomz-560m")
>>> metadata
SafetensorsRepoMetadata(
metadata=None,
sharded=False,
weight_map={'h.0.input_layernorm.bias': 'model.safetensors', ...},
files_metadata={'model.safetensors': SafetensorsFileMetadata(...)}
)
>>> metadata.files_metadata["model.safetensors"].metadata
{'format': 'pt'}
# Parse repo with sharded model
>>> metadata = get_safetensors_metadata("bigscience/bloom")
Parse safetensors files: 100%|██████████████████████████████████████████| 72/72 [00:12<00:00, 5.78it/s]
>>> metadata
SafetensorsRepoMetadata(metadata={'total_size': 352494542848}, sharded=True, weight_map={...}, files_metadata={...})
>>> len(metadata.files_metadata)
72 # All safetensors files have been fetched
# Parse repo with sharded model
>>> get_safetensors_metadata("runwayml/stable-diffusion-v1-5")
NotASafetensorsRepoError: 'runwayml/stable-diffusion-v1-5' is not a safetensors repo. Couldn't find 'model.safetensors.index.json' or 'model.safetensors' files.
get_space_runtime( repo_id: str token: Union[bool, str, None] = None ) → SpaceRuntime
Parameters
repo_id (str) — ID of the Space. Example: "bigcode/in-the-stack".
token (Union[bool, str, None], optional) — A valid user access token. Defaults to the locally saved token. Pass False to disable authentication.
Returns
SpaceRuntime
Runtime information about a Space including Space stage and hardware.
Gets runtime information about a Space.
get_space_variables( repo_id: str token: Union[bool, str, None] = None )
Parameters
repo_id (str) — ID of the repo to query. Example: "bigcode/in-the-stack".
token (Union[bool, str, None], optional) — A valid user access token. Defaults to the locally saved token. Pass False to disable authentication.
Gets all variables from a Space.
Variables allow you to set environment variables in a Space without hardcoding them. For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets-and-environment-variables
get_token_permission( token: Union[bool, str, None] = None ) → Literal["read", "write", None]
Parameters
token (Union[bool, str, None], optional) — A valid user access token. Defaults to the locally saved token. Pass False to disable authentication.
Returns
Literal["read", "write", None]
Permission granted by the token ("read" or "write"). Returns None if no token passed or token is invalid.
Check if a given token is valid and return its permissions.
For more details about tokens, please refer to https://huggingface.co/docs/hub/security-tokens#what-are-user-access-tokens.
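A minimal sketch; the returned value is illustrative and depends on the token you use:
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.get_token_permission()  # uses the locally saved token
'write'
>>> api.get_token_permission("hf_invalid_token")  # returns None for an invalid token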
get_user_overview( username: str ) → User
Parameters
username (str) — Username of the user to get an overview of.
Returns
User
A User object with the user's overview.
Raises
HTTPError — HTTP 404 if the user does not exist on the Hub.
Get an overview of a user on the Hub.
grant_access( repo_id: str user: str repo_type: Optional[str] = None token: Union[bool, str, None] = None )
Parameters
repo_id (str) — The id of the repo to grant access to.
user (str) — The username of the user to grant access.
repo_type (str, optional) — The type of the repo to grant access to. Must be one of model, dataset or space. Defaults to model.
token (Union[bool, str, None], optional) — A valid user access token. Defaults to the locally saved token. Pass False to disable authentication.
Raises
HTTPError — HTTP 400 if the repo is not gated.
HTTPError — HTTP 400 if the user already has access to the repo.
HTTPError — HTTP 403 if you only have read-only access to the repo. This can be the case if you don't have write or admin role in the organization the repo belongs to or if you passed a read token.
HTTPError — HTTP 404 if the user does not exist on the Hub.
Grant access to a user for a given gated repo.
Granting access does not require the user to send an access request themselves. The user is automatically added to the accepted list, meaning they can download the files of the repo. You can revoke the granted access at any time using cancel_access_request() or reject_access_request().
For more info about gated repos, see https://huggingface.co/docs/hub/models-gated.
hf_hub_download( repo_id: str filename: str subfolder: Optional[str] = None repo_type: Optional[str] = None revision: Optional[str] = None cache_dir: Union[str, Path, None] = None local_dir: Union[str, Path, None] = None force_download: bool = False proxies: Optional[Dict] = None etag_timeout: float = 10 token: Union[bool, str, None] = None local_files_only: bool = False resume_download: Optional[bool] = None legacy_cache_layout: bool = False force_filename: Optional[str] = None local_dir_use_symlinks: Union[bool, Literal['auto']] = 'auto' ) → str
Parameters
repo_id (str) — A user or an organization name and a repo name separated by a /.
filename (str) — The name of the file in the repo.
subfolder (str, optional) — An optional value corresponding to a folder inside the model repo.
repo_type (str, optional) — Set to "dataset" or "space" if downloading from a dataset or space, None or "model" if downloading from a model. Default is None.
revision (str, optional) — An optional Git revision id which can be a branch name, a tag, or a commit hash.
cache_dir (str, Path, optional) — Path to the folder where cached files are stored.
local_dir (str or Path, optional) — If provided, the downloaded file will be placed under this directory.
force_download (bool, optional, defaults to False) — Whether the file should be downloaded even if it already exists in the local cache.
proxies (dict, optional) — Dictionary mapping protocol to the URL of the proxy passed to requests.request.
etag_timeout (float, optional, defaults to 10) — When fetching ETag, how many seconds to wait for the server to send data before giving up which is passed to requests.request.
token (Union[bool, str, None], optional) — A valid user access token. Defaults to the locally saved token. Pass False to disable authentication.
local_files_only (bool, optional, defaults to False) — If True, avoid downloading the file and return the path to the local cached file if it exists.
Returns
str
Local path of file or if networking is off, last version of file cached on disk.
Raises the following errors:
EnvironmentError — if token=True and the token cannot be found.
OSError — if ETag cannot be determined.
ValueError — if some parameter value is invalid.
RepositoryNotFoundError — if the repository to download from cannot be found. This may be because it doesn't exist, or because it is set to private and you do not have access.
RevisionNotFoundError — if the revision to download from cannot be found.
EntryNotFoundError — if the file to download cannot be found.
LocalEntryNotFoundError — if network is disabled or unavailable and file is not found in cache.
Download a given file if it's not already present in the local cache.
The new cache file layout looks like this:
[ 96] .
└── [ 160] models--julien-c--EsperBERTo-small
├── [ 160] blobs
│ ├── [321M] 403450e234d65943a7dcf7e05a771ce3c92faa84dd07db4ac20f592037a1e4bd
│ ├── [ 398] 7cb18dc9bafbfcf74629a4b760af1b160957a83e
│ └── [1.4K] d7edf6bd2a681fb0175f7735299831ee1b22b812
├── [ 96] refs
│ └── [ 40] main
└── [ 128] snapshots
├── [ 128] 2439f60ef33a0d46d85da5001d52aeda5b00ce9f
│ ├── [ 52] README.md -> ../../blobs/d7edf6bd2a681fb0175f7735299831ee1b22b812
│ └── [ 76] pytorch_model.bin -> ../../blobs/403450e234d65943a7dcf7e05a771ce3c92faa84dd07db4ac20f592037a1e4bd
└── [ 128] bbc77c8132af1cc5cf678da3f1ddf2de43606d48
├── [ 52] README.md -> ../../blobs/7cb18dc9bafbfcf74629a4b760af1b160957a83e
└── [ 76] pytorch_model.bin -> ../../blobs/403450e234d65943a7dcf7e05a771ce3c92faa84dd07db4ac20f592037a1e4bd
If local_dir
is provided, the file structure from the repo will be replicated in this location. When using this
option, the cache_dir
will not be used and a .huggingface/
folder will be created at the root of local_dir
to store some metadata related to the downloaded files. While this mechanism is not as robust as the main
cache-system, it’s optimized for regularly pulling the latest version of a repository.
hide_discussion_comment( repo_id: str discussion_num: int comment_id: str token: Union[bool, str, None] = None repo_type: Optional[str] = None ) → DiscussionComment
Parameters
repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
discussion_num (int) — The number of the Discussion or Pull Request. Must be a strictly positive integer.
comment_id (str) — The ID of the comment to hide.
repo_type (str, optional) — Set to "dataset" or "space" if uploading to a dataset or space, None or "model" if uploading to a model. Default is None.
token (Union[bool, str, None], optional) — A valid user access token. Defaults to the locally saved token. Pass False to disable authentication.
Returns
DiscussionComment: the hidden comment
Hides a comment on a Discussion / Pull Request.
Raises the following errors:
HTTPError if the HuggingFace API returned an error
ValueError if some parameter value is invalid
RepositoryNotFoundError if the repository cannot be found. This may be because it doesn't exist, or because it is set to private and you do not have access.
like( repo_id: str token: Union[bool, str, None] = None repo_type: Optional[str] = None )
Parameters
repo_id (str) — The repository to like. Example: "user/my-cool-model".
token (Union[bool, str, None], optional) — A valid user access token. Defaults to the locally saved token. Pass False to disable authentication.
repo_type (str, optional) — Set to "dataset" or "space" if liking a dataset or space, None or "model" if liking a model. Default is None.
Raises
Like a given repo on the Hub (e.g. set as favorite).
See also unlike() and list_liked_repos().
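A minimal sketch; the printed value is illustrative:
>>> from huggingface_hub import like, list_liked_repos
>>> like("gpt2")
>>> "gpt2" in list_liked_repos().models
True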
list_accepted_access_requests( repo_id: str repo_type: Optional[str] = None token: Union[bool, str, None] = None ) → List[AccessRequest]
Parameters
repo_id (str) — The id of the repo to get access requests for.
repo_type (str, optional) — The type of the repo to get access requests for. Must be one of model, dataset or space. Defaults to model.
token (Union[bool, str, None], optional) — A valid user access token. Defaults to the locally saved token. Pass False to disable authentication.
Returns
List[AccessRequest]
A list of AccessRequest objects. Each one contains a username, email, status and timestamp attribute. If the gated repo has a custom form, the fields attribute will be populated with the user's answers.
Raises
HTTPError — HTTP 400 if the repo is not gated.
HTTPError — HTTP 403 if you only have read-only access to the repo. This can be the case if you don't have write or admin role in the organization the repo belongs to or if you passed a read token.
Get accepted access requests for a given gated repo.
An accepted request means the user has requested access to the repo and the request has been accepted. The user can download any file of the repo. If the approval mode is automatic, this list should contain all requests by default. Accepted requests can be cancelled or rejected at any time using cancel_access_request() and reject_access_request(). A cancelled request will go back to the pending list while a rejected request will go to the rejected list. In both cases, the user will lose access to the repo.
For more info about gated repos, see https://huggingface.co/docs/hub/models-gated.
Example:
>>> from huggingface_hub import list_accepted_access_requests
>>> requests = list_accepted_access_requests("meta-llama/Llama-2-7b")
>>> len(requests)
411
>>> requests[0]
[
AccessRequest(
username='clem',
fullname='Clem 🤗',
email='***',
timestamp=datetime.datetime(2023, 11, 23, 18, 4, 53, 828000, tzinfo=datetime.timezone.utc),
status='accepted',
fields=None,
),
...
]
( owner: Union[List[str], str, None] = None item: Union[List[str], str, None] = None sort: Optional[Literal['lastModified', 'trending', 'upvotes']] = None limit: Optional[int] = None token: Union[bool, str, None] = None ) → Iterable[Collection]
Parameters
List[str]
or str
, optional) —
Filter by owner’s username. List[str]
or str
, optional) —
Filter collections containing a particular item. Example: "models/teknium/OpenHermes-2.5-Mistral-7B"
, "datasets/squad"
or "papers/2311.12983"
. Literal["lastModified", "trending", "upvotes"]
, optional) —
Sort collections by last modified, trending or upvotes. int
, optional) —
Maximum number of collections to be returned. False
. Returns
Iterable[Collection]
an iterable of Collection objects.
List collections on the Huggingface Hub, given some filters.
When listing collections, the item list per collection is truncated to 4 items maximum. To retrieve all items from a collection, you must use get_collection().
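Example (a hedged sketch; the owner name is illustrative and the printed fields depend on the Hub's current content):
>>> from huggingface_hub import list_collections
# List the 5 trending collections from a given owner
>>> collections = list_collections(owner="TheBloke", sort="trending", limit=5)
>>> for collection in collections:
...     print(collection.slug, len(collection.items))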
( filter: Union[DatasetFilter, str, Iterable[str], None] = None author: Optional[str] = None benchmark: Optional[Union[str, List[str]]] = None dataset_name: Optional[str] = None language_creators: Optional[Union[str, List[str]]] = None language: Optional[Union[str, List[str]]] = None multilinguality: Optional[Union[str, List[str]]] = None size_categories: Optional[Union[str, List[str]]] = None task_categories: Optional[Union[str, List[str]]] = None task_ids: Optional[Union[str, List[str]]] = None search: Optional[str] = None sort: Optional[Union[Literal['last_modified'], str]] = None direction: Optional[Literal[-1]] = None limit: Optional[int] = None full: Optional[bool] = None token: Union[bool, str, None] = None ) → Iterable[DatasetInfo]
Parameters
str
or Iterable
, optional) —
A string or DatasetFilter which can be used to identify
datasets on the hub. str
, optional) —
A string which identifies the author of the returned datasets. str
or List
, optional) —
A string or list of strings that can be used to identify datasets on
the Hub by their official benchmark. str
, optional) —
A string or list of strings that can be used to identify datasets on
the Hub by their name, such as SQAC
or wikineural
str
or List
, optional) —
A string or list of strings that can be used to identify datasets on
the Hub with how the data was curated, such as crowdsourced
or
machine_generated
. str
or List
, optional) —
A string or list of strings representing a two-character language to
filter datasets by on the Hub. str
or List
, optional) —
A string or list of strings representing a filter for datasets that
contain multiple languages. str
or List
, optional) —
A string or list of strings that can be used to identify datasets on
the Hub by the size of the dataset such as 100K<n<1M
or
1M<n<10M
. str
or List
, optional) —
A string or list of strings that can be used to identify datasets on
the Hub by the designed task, such as audio_classification
or
named_entity_recognition
. str
or List
, optional) —
A string or list of strings that can be used to identify datasets on
the Hub by the specific task such as speech_emotion_recognition
or
paraphrase
. str
, optional) —
A string that will be contained in the returned datasets. Literal["last_modified"]
or str
, optional) —
The key with which to sort the resulting datasets. Possible
values are the properties of the huggingface_hub.hf_api.DatasetInfo class. Literal[-1]
or int
, optional) —
Direction in which to sort. The value -1
sorts by descending
order while all other values sort by ascending order. int
, optional) —
The limit on the number of datasets fetched. Leaving this option
to None
fetches all datasets. bool
, optional) —
Whether to fetch all dataset data, including the last_modified
,
the card_data
and the files. Can contain useful information such as the
PapersWithCode ID. False
. Returns
Iterable[DatasetInfo]
an iterable of huggingface_hub.hf_api.DatasetInfo objects.
List datasets hosted on the Huggingface Hub, given some filters.
Example usage with the filter
argument:
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> # List all datasets
>>> api.list_datasets()
>>> # List only the text classification datasets
>>> api.list_datasets(filter="task_categories:text-classification")
>>> # List only the datasets in russian for language modeling
>>> api.list_datasets(
... filter=("language:ru", "task_ids:language-modeling")
... )
Example usage with the search
argument:
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> # List all datasets with "text" in their name
>>> api.list_datasets(search="text")
>>> # List all datasets with "text" in their name made by google
>>> api.list_datasets(search="text", author="google")
( namespace: Optional[str] = None token: Union[bool, str, None] = None ) → ListInferenceEndpoint
Parameters
str
, optional) —
The namespace to list endpoints for. Defaults to the current user. Set to "*"
to list all endpoints
from all namespaces (i.e. personal namespace and all orgs the user belongs to). False
. Returns
A list of all inference endpoints for the given namespace.
Lists all inference endpoints for the given namespace.
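Example (a minimal sketch; whether any endpoints are returned depends on your account):
>>> from huggingface_hub import HfApi
>>> api = HfApi()
# Endpoints from your personal namespace
>>> api.list_inference_endpoints()
# Endpoints from all namespaces you belong to (personal and organizations)
>>> api.list_inference_endpoints(namespace="*")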
( user: Optional[str] = None token: Union[bool, str, None] = None ) → UserLikes
Parameters
str
, optional) —
Name of the user for which you want to fetch the likes. False
. Returns
object containing the user name and 3 lists of repo ids (1 for models, 1 for datasets and 1 for Spaces).
Raises
ValueError
ValueError
—
If user
is not passed and no token found (either from argument or from machine).List all public repos liked by a user on huggingface.co.
This list is public so token is optional. If user
is not passed, it defaults to
the logged in user.
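Example (a minimal sketch; "julien-c" stands for any public username and the counts depend on that user's actual likes):
>>> from huggingface_hub import list_liked_repos
>>> likes = list_liked_repos("julien-c")
>>> likes.user
'julien-c'
>>> len(likes.models), len(likes.datasets), len(likes.spaces)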
( ) → List[MetricInfo]
Returns
List[MetricInfo]
a list of MetricInfo
objects.
Get the public list of all the metrics on huggingface.co
( filter: Union[ModelFilter, str, Iterable[str], None] = None author: Optional[str] = None library: Optional[Union[str, List[str]]] = None language: Optional[Union[str, List[str]]] = None model_name: Optional[str] = None task: Optional[Union[str, List[str]]] = None trained_dataset: Optional[Union[str, List[str]]] = None tags: Optional[Union[str, List[str]]] = None search: Optional[str] = None emissions_thresholds: Optional[Tuple[float, float]] = None sort: Union[Literal['last_modified'], str, None] = None direction: Optional[Literal[-1]] = None limit: Optional[int] = None full: Optional[bool] = None cardData: bool = False fetch_config: bool = False token: Union[bool, str, None] = None pipeline_tag: Optional[str] = None ) → Iterable[ModelInfo]
Parameters
str
or Iterable
, optional) —
A string or ModelFilter which can be used to identify models
on the Hub. str
, optional) —
A string which identifies the author (user or organization) of the
returned models str
or List
, optional) —
A string or list of strings of foundational libraries models were
originally trained from, such as pytorch, tensorflow, or allennlp. str
or List
, optional) —
A string or list of strings of languages, both by name and country
code, such as “en” or “English” str
, optional) —
A string that contains complete or partial names for models on the
Hub, such as “bert” or “bert-base-cased” str
or List
, optional) —
A string or list of strings of tasks models were designed for, such
as: “fill-mask” or “automatic-speech-recognition” str
or List
, optional) —
A string tag or a list of string tags of the trained dataset for a
model on the Hub. str
or List
, optional) —
A string tag or a list of tags to filter models on the Hub by, such
as text-generation
or spacy
. str
, optional) —
A string that will be contained in the returned model ids. Tuple
, optional) —
A tuple of two ints or floats representing a minimum and maximum
carbon footprint to filter the resulting models with in grams. Literal["last_modified"]
or str
, optional) —
The key with which to sort the resulting models. Possible values
are the properties of the huggingface_hub.hf_api.ModelInfo class. Literal[-1]
or int
, optional) —
Direction in which to sort. The value -1
sorts by descending
order while all other values sort by ascending order. int
, optional) —
The limit on the number of models fetched. Leaving this option
to None
fetches all models. bool
, optional) —
Whether to fetch all model data, including the last_modified
,
the sha
, the files and the tags
. This is set to True
by
default when using a filter. bool
, optional) —
Whether to grab the metadata for the model as well. Can contain
useful information such as carbon emissions, metrics, and
datasets trained on. bool
, optional) —
Whether to fetch the model configs as well. This is not included
in full
due to its size. False
. str
, optional) —
A string pipeline tag to filter models on the Hub by, such as summarization
Returns
Iterable[ModelInfo]
an iterable of huggingface_hub.hf_api.ModelInfo objects.
List models hosted on the Huggingface Hub, given some filters.
Example usage with the filter
argument:
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> # List all models
>>> api.list_models()
>>> # List only the text classification models
>>> api.list_models(filter="text-classification")
>>> # List only models from the AllenNLP library
>>> api.list_models(filter="allennlp")
( organization: str ) → Iterable[User]
Parameters
Returns
Iterable[User]
A list of User objects with the members of the organization.
Raises
HTTPError
HTTPError
—
HTTP 404 If the organization does not exist on the Hub.List of members of an organization on the Hub.
( repo_id: str repo_type: Optional[str] = None token: Union[bool, str, None] = None ) → List[AccessRequest]
Parameters
str
) —
The id of the repo to get access requests for. str
, optional) —
The type of the repo to get access requests for. Must be one of model
, dataset
or space
.
Defaults to model
. False
. Returns
List[AccessRequest]
A list of AccessRequest
objects. Each request contains a username
, email
,
status
and timestamp
attribute. If the gated repo has a custom form, the fields
attribute will
be populated with user’s answers.
Raises
HTTPError
HTTPError
—
HTTP 400 if the repo is not gated.HTTPError
—
HTTP 403 if you only have read-only access to the repo. This can be the case if you don’t have write
or admin
role in the organization the repo belongs to or if you passed a read
token.Get pending access requests for a given gated repo.
A pending request means the user has requested access to the repo but the request has not been processed yet. If the approval mode is automatic, this list should be empty. Pending requests can be accepted or rejected using accept_access_request() and reject_access_request().
For more info about gated repos, see https://huggingface.co/docs/hub/models-gated.
Example:
>>> from huggingface_hub import list_pending_access_requests, accept_access_request
# List pending requests
>>> requests = list_pending_access_requests("meta-llama/Llama-2-7b")
>>> len(requests)
411
>>> requests
[
AccessRequest(
username='clem',
fullname='Clem 🤗',
email='***',
timestamp=datetime.datetime(2023, 11, 23, 18, 4, 53, 828000, tzinfo=datetime.timezone.utc),
status='pending',
fields=None,
),
...
]
# Accept Clem's request
>>> accept_access_request("meta-llama/Llama-2-7b", "clem")
( repo_id: str repo_type: Optional[str] = None token: Union[bool, str, None] = None ) → List[AccessRequest]
Parameters
str
) —
The id of the repo to get access requests for. str
, optional) —
The type of the repo to get access requests for. Must be one of model
, dataset
or space
.
Defaults to model
. False
. Returns
List[AccessRequest]
A list of AccessRequest
objects. Each request contains a username
, email
,
status
and timestamp
attribute. If the gated repo has a custom form, the fields
attribute will
be populated with user’s answers.
Raises
HTTPError
HTTPError
—
HTTP 400 if the repo is not gated.HTTPError
—
HTTP 403 if you only have read-only access to the repo. This can be the case if you don’t have write
or admin
role in the organization the repo belongs to or if you passed a read
token.Get rejected access requests for a given gated repo.
A rejected request means the user has requested access to the repo and the request has been explicitly rejected by a repo owner (either you or another user from your organization). The user cannot download any file of the repo. Rejected requests can be accepted or cancelled at any time using accept_access_request() and cancel_access_request(). A cancelled request will go back to the pending list while an accepted request will go to the accepted list.
For more info about gated repos, see https://huggingface.co/docs/hub/models-gated.
Example:
>>> from huggingface_hub import list_rejected_access_requests
>>> requests = list_rejected_access_requests("meta-llama/Llama-2-7b")
>>> len(requests)
411
>>> requests
[
AccessRequest(
username='clem',
fullname='Clem 🤗',
email='***',
timestamp=datetime.datetime(2023, 11, 23, 18, 4, 53, 828000, tzinfo=datetime.timezone.utc),
status='rejected',
fields=None,
),
...
]
( repo_id: str repo_type: Optional[str] = None token: Union[bool, str, None] = None revision: Optional[str] = None formatted: bool = False ) → List[GitCommitInfo]
Parameters
str
) —
A namespace (user or an organization) and a repo name separated by a /
. str
, optional) —
Set to "dataset"
or "space"
if listing commits from a dataset or a Space, None
or "model"
if
listing from a model. Default is None
. False
. str
, optional) —
The git revision to commit from. Defaults to the head of the "main"
branch. bool
) —
Whether to return the HTML-formatted title and description of the commits. Defaults to False. Returns
List[GitCommitInfo]
list of objects containing information about the commits for a repo on the Hub.
Raises
Get the list of commits of a given revision for a repo on the Hub.
Commits are sorted by date (last commit first).
Example:
>>> from huggingface_hub import HfApi
>>> api = HfApi()
# Commits are sorted by date (last commit first)
>>> initial_commit = api.list_repo_commits("gpt2")[-1]
# Initial commit is always a system commit containing the `.gitattributes` file.
>>> initial_commit
GitCommitInfo(
commit_id='9b865efde13a30c13e0a33e536cf3e4a5a9d71d8',
authors=['system'],
created_at=datetime.datetime(2019, 2, 18, 10, 36, 15, tzinfo=datetime.timezone.utc),
title='initial commit',
message='',
formatted_title=None,
formatted_message=None
)
# Create an empty branch by deriving from initial commit
>>> api.create_branch("gpt2", "new_empty_branch", revision=initial_commit.commit_id)
( repo_id: str revision: Optional[str] = None repo_type: Optional[str] = None token: Union[str, bool, None] = None ) → List[str]
Parameters
str
) —
A namespace (user or an organization) and a repo name separated by a /
. str
, optional) —
The revision of the model repository from which to get the information. str
, optional) —
Set to "dataset"
or "space"
if uploading to a dataset or space, None
or "model"
if uploading to
a model. Default is None
. False
. Returns
List[str]
the list of files in a given repository.
Get the list of files in a given repo.
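Example (a minimal sketch; the listed filenames are illustrative and truncated):
>>> from huggingface_hub import list_repo_files
>>> list_repo_files("gpt2")
['.gitattributes', 'README.md', 'config.json', ...]
>>> list_repo_files("squad", repo_type="dataset")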
( repo_id: str repo_type: Optional[str] = None token: Union[bool, str, None] = None ) → List[User]
Parameters
str
) —
The repository to retrieve. Example: "user/my-cool-model"
. False
. str
, optional) —
Set to "dataset"
or "space"
if uploading to a dataset or
space, None
or "model"
if uploading to a model. Default is
None
. Returns
List[User]
a list of User objects.
List all users who liked a given repo on the Hugging Face Hub.
See also like() and list_liked_repos().
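Example (a minimal sketch; the repo id is illustrative and only the first few likers are shown):
>>> from huggingface_hub import list_repo_likers
>>> likers = list_repo_likers("gpt2")
>>> [user.username for user in likers][:3]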
( repo_id: str repo_type: Optional[str] = None include_pull_requests: bool = False token: Union[str, bool, None] = None ) → GitRefs
Parameters
str
) —
A namespace (user or an organization) and a repo name separated
by a /
. str
, optional) —
Set to "dataset"
or "space"
if listing refs from a dataset or a Space,
None
or "model"
if listing from a model. Default is None
. bool
, optional) —
Whether to include refs from pull requests in the list. Defaults to False
. False
. Returns
object containing all information about branches and tags for a repo on the Hub.
Get the list of refs of a given repo (both tags and branches).
Example:
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.list_repo_refs("gpt2")
GitRefs(branches=[GitRefInfo(name='main', ref='refs/heads/main', target_commit='e7da7f221d5bf496a48136c0cd264e630fe9fcc8')], converts=[], tags=[])
>>> api.list_repo_refs("bigcode/the-stack", repo_type='dataset')
GitRefs(
branches=[
GitRefInfo(name='main', ref='refs/heads/main', target_commit='18edc1591d9ce72aa82f56c4431b3c969b210ae3'),
GitRefInfo(name='v1.1.a1', ref='refs/heads/v1.1.a1', target_commit='f9826b862d1567f3822d3d25649b0d6d22ace714')
],
converts=[],
tags=[
GitRefInfo(name='v1.0', ref='refs/tags/v1.0', target_commit='c37a8cd1e382064d8aced5e05543c5f7753834da')
]
)
( repo_id: str path_in_repo: Optional[str] = None recursive: bool = False expand: bool = False revision: Optional[str] = None repo_type: Optional[str] = None token: Union[str, bool, None] = None ) → Iterable[Union[RepoFile, RepoFolder]]
Parameters
str
) —
A namespace (user or an organization) and a repo name separated by a /
. str
, optional) —
Relative path of the tree (folder) in the repo, for example:
"checkpoints/1fec34a/results"
. Will default to the root tree (folder) of the repository. bool
, optional, defaults to False
) —
Whether to list tree’s files and folders recursively. bool
, optional, defaults to False
) —
Whether to fetch more information about the tree’s files and folders (e.g. last commit and files’ security scan results). This
operation is more expensive for the server so only 50 results are returned per page (instead of 1000).
As pagination is implemented in huggingface_hub
, this is transparent for you except for the time it
takes to get the results. str
, optional) —
The revision of the repository from which to get the tree. Defaults to "main"
branch. str
, optional) —
The type of the repository from which to get the tree ("model"
, "dataset"
or "space"
.
Defaults to "model"
. False
. Returns
Iterable[Union[RepoFile, RepoFolder]]
The information about the tree’s files and folders, as an iterable of RepoFile
and RepoFolder
objects. The order of the files and folders is
not guaranteed.
List a repo tree’s files and folders and get information about them.
Examples:
Get information about a repo’s tree.
>>> from huggingface_hub import list_repo_tree
>>> repo_tree = list_repo_tree("lysandre/arxiv-nlp")
>>> repo_tree
<generator object HfApi.list_repo_tree at 0x7fa4088e1ac0>
>>> list(repo_tree)
[
RepoFile(path='.gitattributes', size=391, blob_id='ae8c63daedbd4206d7d40126955d4e6ab1c80f8f', lfs=None, last_commit=None, security=None),
RepoFile(path='README.md', size=391, blob_id='43bd404b159de6fba7c2f4d3264347668d43af25', lfs=None, last_commit=None, security=None),
RepoFile(path='config.json', size=554, blob_id='2f9618c3a19b9a61add74f70bfb121335aeef666', lfs=None, last_commit=None, security=None),
RepoFile(
path='flax_model.msgpack', size=497764107, blob_id='8095a62ccb4d806da7666fcda07467e2d150218e',
lfs={'size': 497764107, 'sha256': 'd88b0d6a6ff9c3f8151f9d3228f57092aaea997f09af009eefd7373a77b5abb9', 'pointer_size': 134}, last_commit=None, security=None
),
RepoFile(path='merges.txt', size=456318, blob_id='226b0752cac7789c48f0cb3ec53eda48b7be36cc', lfs=None, last_commit=None, security=None),
RepoFile(
path='pytorch_model.bin', size=548123560, blob_id='64eaa9c526867e404b68f2c5d66fd78e27026523',
lfs={'size': 548123560, 'sha256': '9be78edb5b928eba33aa88f431551348f7466ba9f5ef3daf1d552398722a5436', 'pointer_size': 134}, last_commit=None, security=None
),
RepoFile(path='vocab.json', size=898669, blob_id='b00361fece0387ca34b4b8b8539ed830d644dbeb', lfs=None, last_commit=None, security=None)
]
Get even more information about a repo’s tree (last commit and files’ security scan results)
>>> from huggingface_hub import list_repo_tree
>>> repo_tree = list_repo_tree("prompthero/openjourney-v4", expand=True)
>>> list(repo_tree)
[
RepoFolder(
path='feature_extractor',
tree_id='aa536c4ea18073388b5b0bc791057a7296a00398',
last_commit={
'oid': '47b62b20b20e06b9de610e840282b7e6c3d51190',
'title': 'Upload diffusers weights (#48)',
'date': datetime.datetime(2023, 3, 21, 9, 5, 27, tzinfo=datetime.timezone.utc)
}
),
RepoFolder(
path='safety_checker',
tree_id='65aef9d787e5557373fdf714d6c34d4fcdd70440',
last_commit={
'oid': '47b62b20b20e06b9de610e840282b7e6c3d51190',
'title': 'Upload diffusers weights (#48)',
'date': datetime.datetime(2023, 3, 21, 9, 5, 27, tzinfo=datetime.timezone.utc)
}
),
RepoFile(
path='model_index.json',
size=582,
blob_id='d3d7c1e8c3e78eeb1640b8e2041ee256e24c9ee1',
lfs=None,
last_commit={
'oid': 'b195ed2d503f3eb29637050a886d77bd81d35f0e',
'title': 'Fix deprecation warning by changing `CLIPFeatureExtractor` to `CLIPImageProcessor`. (#54)',
'date': datetime.datetime(2023, 5, 15, 21, 41, 59, tzinfo=datetime.timezone.utc)
},
security={
'safe': True,
'av_scan': {'virusFound': False, 'virusNames': None},
'pickle_import_scan': None
}
)
...
]
( filter: Union[str, Iterable[str], None] = None author: Optional[str] = None search: Optional[str] = None sort: Union[Literal['last_modified'], str, None] = None direction: Optional[Literal[-1]] = None limit: Optional[int] = None datasets: Union[str, Iterable[str], None] = None models: Union[str, Iterable[str], None] = None linked: bool = False full: Optional[bool] = None token: Union[bool, str, None] = None ) → Iterable[SpaceInfo]
Parameters
str
or Iterable
, optional) —
A string tag or list of tags that can be used to identify Spaces on the Hub. str
, optional) —
A string which identify the author of the returned Spaces. str
, optional) —
A string that will be contained in the returned Spaces. Literal["last_modified"]
or str
, optional) —
The key with which to sort the resulting Spaces. Possible
values are the properties of the huggingface_hub.hf_api.SpaceInfo class. Literal[-1]
or int
, optional) —
Direction in which to sort. The value -1
sorts by descending
order while all other values sort by ascending order. int
, optional) —
The limit on the number of Spaces fetched. Leaving this option
to None
fetches all Spaces. str
or Iterable
, optional) —
Whether to return Spaces that make use of a dataset.
The name of a specific dataset can be passed as a string. str
or Iterable
, optional) —
Whether to return Spaces that make use of a model.
The name of a specific model can be passed as a string. bool
, optional) —
Whether to return Spaces that make use of either a model or a dataset. bool
, optional) —
Whether to fetch all Spaces data, including the last_modified
, siblings
and card_data
fields. False
. Returns
Iterable[SpaceInfo]
an iterable of huggingface_hub.hf_api.SpaceInfo objects.
List spaces hosted on the Huggingface Hub, given some filters.
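Example (a hedged sketch; the search term, author and model id are illustrative):
>>> from huggingface_hub import HfApi
>>> api = HfApi()
# List Spaces from a given author matching a search term
>>> api.list_spaces(search="chatbot", author="HuggingFaceH4", limit=10)
# List Spaces that make use of a specific model
>>> api.list_spaces(models="stabilityai/stable-diffusion-2-1")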
( username: str ) → Iterable[User]
Parameters
Returns
Iterable[User]
A list of User objects with the followers of the user.
Raises
HTTPError
HTTPError
—
HTTP 404 If the user does not exist on the Hub.Get the list of followers of a user on the Hub.
( username: str ) → Iterable[User]
Parameters
Returns
Iterable[User]
A list of User objects with the users followed by the user.
Raises
HTTPError
HTTPError
—
HTTP 404 If the user does not exist on the Hub.Get the list of users followed by a user on the Hub.
( repo_id: str discussion_num: int token: Union[bool, str, None] = None comment: Optional[str] = None repo_type: Optional[str] = None ) → DiscussionStatusChange
Parameters
str
) —
A namespace (user or an organization) and a repo name separated
by a /
. int
) —
The number of the Discussion or Pull Request . Must be a strictly positive integer. str
, optional) —
An optional comment to post with the status change. str
, optional) —
Set to "dataset"
or "space"
if uploading to a dataset or
space, None
or "model"
if uploading to a model. Default is
None
. False
. Returns
DiscussionStatusChange
the status change event
Merges a Pull Request.
Raises the following errors:
HTTPError
if the HuggingFace API returned an errorValueError
if some parameter value is invalidprivate
and you do not have access.( repo_id: str revision: Optional[str] = None timeout: Optional[float] = None securityStatus: Optional[bool] = None files_metadata: bool = False token: Union[bool, str, None] = None ) → huggingface_hub.hf_api.ModelInfo
Parameters
str
) —
A namespace (user or an organization) and a repo name separated
by a /
. str
, optional) —
The revision of the model repository from which to get the
information. float
, optional) —
Whether to set a timeout for the request to the Hub. bool
, optional) —
Whether to retrieve the security status from the model
repository as well. bool
, optional) —
Whether or not to retrieve metadata for files in the repository
(size, LFS metadata, etc). Defaults to False
. False
. Returns
The model repository information.
Get info on one specific model on huggingface.co
Model can be private if you pass an acceptable token or are logged in.
Raises the following errors:
private
and you do not have access.( from_id: str to_id: str repo_type: Optional[str] = None token: Union[str, bool, None] = None )
Parameters
str
) —
A namespace (user or an organization) and a repo name separated
by a /
. Original repository identifier. str
) —
A namespace (user or an organization) and a repo name separated
by a /
. Final repository identifier. str
, optional) —
Set to "dataset"
or "space"
if uploading to a dataset or
space, None
or "model"
if uploading to a model. Default is
None
. False
. Moving a repository from namespace1/repo_name1 to namespace2/repo_name2
Note there are certain limitations. For more information about moving repositories, please see https://hf.co/docs/hub/repositories-settings#renaming-or-transferring-a-repo.
Raises the following errors:
private
and you do not have access.( repo_id: str filename: str repo_type: Optional[str] = None revision: Optional[str] = None token: Union[bool, str, None] = None ) → SafetensorsFileMetadata
Parameters
str
) —
A user or an organization name and a repo name separated by a /
. str
) —
The name of the file in the repo. str
, optional) —
Set to "dataset"
or "space"
if the file is in a dataset or space, None
or "model"
if in a
model. Default is None
. str
, optional) —
The git revision to fetch the file from. Can be a branch name, a tag, or a commit hash. Defaults to the
head of the "main"
branch. False
. Returns
SafetensorsFileMetadata
information related to a safetensors file.
Raises
NotASafetensorsRepoError
: if the repo is not a safetensors repo i.e. doesn’t have either a
model.safetensors
or a model.safetensors.index.json
file.SafetensorsParsingError
: if a safetensors file header couldn’t be parsed correctly.Parse metadata from a safetensors file on the Hub.
To parse metadata from all safetensors files in a repo at once, use get_safetensors_metadata().
For more details regarding the safetensors format, check out https://huggingface.co/docs/safetensors/index#format.
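Example (a minimal sketch; "username/my-safetensors-model" and the filename are placeholders for a repo that actually hosts a .safetensors file):
>>> from huggingface_hub import parse_safetensors_file_metadata
>>> metadata = parse_safetensors_file_metadata("username/my-safetensors-model", "model.safetensors")
>>> metadata.metadata  # free-form header metadata, if any
>>> sorted({tensor.dtype for tensor in metadata.tensors.values()})  # dtypes used in the file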
( name: str namespace: Optional[str] = None token: Union[bool, str, None] = None ) → InferenceEndpoint
Parameters
str
) —
The name of the Inference Endpoint to pause. str
, optional) —
The namespace in which the Inference Endpoint is located. Defaults to the current user. False
. Returns
information about the paused Inference Endpoint.
Pause an Inference Endpoint.
A paused Inference Endpoint will not be charged. It can be resumed at any time using resume_inference_endpoint(). This is different than scaling the Inference Endpoint to zero with scale_to_zero_inference_endpoint(), which would be automatically restarted when a request is made to it.
For convenience, you can also pause an Inference Endpoint using InferenceEndpoint.pause().
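Example (a minimal sketch; "my-endpoint-name" is a placeholder for one of your Inference Endpoints):
>>> from huggingface_hub import pause_inference_endpoint, resume_inference_endpoint
>>> endpoint = pause_inference_endpoint("my-endpoint-name")
>>> endpoint.status  # expected to report a paused state
# Later, bring it back up
>>> resume_inference_endpoint("my-endpoint-name")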
( repo_id: str token: Union[bool, str, None] = None ) → SpaceRuntime
Parameters
str
) —
ID of the Space to pause. Example: "Salesforce/BLIP2"
. False
. Returns
Runtime information about your Space including stage=PAUSED
and requested hardware.
Raises
RepositoryNotFoundError or HfHubHTTPError or BadRequestError
Pause your Space.
A paused Space stops executing until manually restarted by its owner. This is different from the sleeping state in which free Spaces go after 48h of inactivity. Paused time is not billed to your account, no matter the hardware you’ve selected. To restart your Space, use restart_space() and go to your Space settings page.
For more details, please visit the docs.
( repo_id: str additions: Iterable[CommitOperationAdd] token: Union[str, bool, None] = None repo_type: Optional[str] = None revision: Optional[str] = None create_pr: Optional[bool] = None num_threads: int = 5 free_memory: bool = True gitignore_content: Optional[str] = None )
Parameters
str
) —
The repository in which you will commit the files, for example: "username/custom_transformers"
. Iterable
of CommitOperationAdd) —
The list of files to upload. Warning: the objects in this list will be mutated to include information
relative to the upload. Do not reuse the same objects for multiple commits. False
. str
, optional) —
The type of repository to upload to (e.g. "model"
-default-, "dataset"
or "space"
). str
, optional) —
The git revision to commit from. Defaults to the head of the "main"
branch. boolean
, optional) —
Whether or not you plan to create a Pull Request with that commit. Defaults to False
. int
, optional) —
Number of concurrent threads for uploading files. Defaults to 5.
Setting it to 2 means at most 2 files will be uploaded concurrently. str
, optional) —
The content of the .gitignore
file to know which files should be ignored. The order of priority
is to first check if gitignore_content
is passed, then check if the .gitignore
file is present
in the list of files to commit and finally default to the .gitignore
file already hosted on the Hub
(if any). Pre-upload LFS files to S3 in preparation for a future commit.
This method is useful if you are generating the files to upload on-the-fly and you don’t want to store them in memory before uploading them all at once.
This is a power-user method. You shouldn’t need to call it directly to make a normal commit. Use create_commit() directly instead.
Commit operations will be mutated during the process. In particular, the attached path_or_fileobj
will be
removed after the upload to save memory (and replaced by an empty bytes
object). Do not reuse the same
objects except to pass them to create_commit(). If you don’t want to remove the attached content from the
commit operation object, pass free_memory=False
.
Example:
>>> from huggingface_hub import CommitOperationAdd, preupload_lfs_files, create_commit, create_repo
>>> repo_id = create_repo("test_preupload").repo_id
# Generate and preupload LFS files one by one
>>> operations = [] # List of all `CommitOperationAdd` objects that will be generated
>>> for i in range(5):
... content = ... # generate binary content
... addition = CommitOperationAdd(path_in_repo=f"shard_{i}_of_5.bin", path_or_fileobj=content)
... preupload_lfs_files(repo_id, additions=[addition]) # upload + free memory
... operations.append(addition)
# Create commit
>>> create_commit(repo_id, operations=operations, commit_message="Commit all shards")
( repo_id: str user: str repo_type: Optional[str] = None token: Union[bool, str, None] = None )
Parameters
str
) —
The id of the repo to reject access request for. str
) —
The username of the user which access request should be rejected. str
, optional) —
The type of the repo to reject access request for. Must be one of model
, dataset
or space
.
Defaults to model
. False
. Raises
HTTPError
HTTPError
—
HTTP 400 if the repo is not gated.HTTPError
—
HTTP 403 if you only have read-only access to the repo. This can be the case if you don’t have write
or admin
role in the organization the repo belongs to or if you passed a read
token.HTTPError
—
HTTP 404 if the user does not exist on the Hub.HTTPError
—
HTTP 404 if the user access request cannot be found.HTTPError
—
HTTP 404 if the user access request is already in the rejected list.Reject an access request from a user for a given gated repo.
A rejected request will go to the rejected list. The user cannot download any file of the repo. Rejected requests can be accepted or cancelled at any time using accept_access_request() and cancel_access_request(). A cancelled request will go back to the pending list while an accepted request will go to the accepted list.
For more info about gated repos, see https://huggingface.co/docs/hub/models-gated.
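Example (a minimal sketch; the gated repo id is a placeholder and the username is taken from the pending list):
>>> from huggingface_hub import list_pending_access_requests, reject_access_request
>>> requests = list_pending_access_requests("my-org/my-gated-model")
>>> reject_access_request("my-org/my-gated-model", requests[0].username)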
( repo_id: str discussion_num: int new_title: str token: Union[bool, str, None] = None repo_type: Optional[str] = None ) → DiscussionTitleChange
Parameters
str
) —
A namespace (user or an organization) and a repo name separated
by a /
. int
) —
The number of the Discussion or Pull Request . Must be a strictly positive integer. str
) —
The new title for the discussion str
, optional) —
Set to "dataset"
or "space"
if uploading to a dataset or
space, None
or "model"
if uploading to a model. Default is
None
. False
. Returns
DiscussionTitleChange
the title change event
Renames a Discussion.
Examples:
>>> new_title = "New title, fixing a typo"
>>> HfApi().rename_discussion(
... repo_id="username/repo_name",
...     discussion_num=34,
... new_title=new_title
... )
# DiscussionTitleChange(id='deadbeef0000000', type='title-change', ...)
Raises the following errors:
HTTPError
if the HuggingFace API returned an errorValueError
if some parameter value is invalidprivate
and you do not have access.( repo_id: str repo_type: Optional[str] = None token: Union[str, bool, None] = None )
Parameters
str
) —
A namespace (user or an organization) and a repo name separated
by a /
. str
, optional) —
Set to "dataset"
or "space"
if getting repository info from a dataset or a space,
None
or "model"
if getting repository info from a model. Default is None
. False
. Checks if a repository exists on the Hugging Face Hub.
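Example (a minimal sketch; "gpt2" and "squad" are well-known public repos, the last id is deliberately made up):
>>> from huggingface_hub import repo_exists
>>> repo_exists("gpt2")
True
>>> repo_exists("squad", repo_type="dataset")
True
>>> repo_exists("username/not-a-real-repo")
False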
( repo_id: str revision: Optional[str] = None repo_type: Optional[str] = None timeout: Optional[float] = None files_metadata: bool = False token: Union[bool, str, None] = None ) → Union[SpaceInfo, DatasetInfo, ModelInfo]
Parameters
str
) —
A namespace (user or an organization) and a repo name separated
by a /
. str
, optional) —
The revision of the repository from which to get the
information. str
, optional) —
Set to "dataset"
or "space"
if getting repository info from a dataset or a space,
None
or "model"
if getting repository info from a model. Default is None
. float
, optional) —
Whether to set a timeout for the request to the Hub. bool
, optional) —
Whether or not to retrieve metadata for files in the repository
(size, LFS metadata, etc). Defaults to False
. False
. Returns
Union[SpaceInfo, DatasetInfo, ModelInfo]
The repository information, as a huggingface_hub.hf_api.DatasetInfo, huggingface_hub.hf_api.ModelInfo or huggingface_hub.hf_api.SpaceInfo object.
Get the info object for a given repo of a given type.
Raises the following errors:
private
and you do not have access.( repo_id: str hardware: SpaceHardware token: Union[bool, str, None] = None sleep_time: Optional[int] = None ) → SpaceRuntime
Parameters
str
) —
ID of the repo to update. Example: "bigcode/in-the-stack"
. str
or SpaceHardware) —
Hardware on which to run the Space. Example: "t4-medium"
. False
. int
, optional) —
Number of seconds of inactivity to wait before a Space is put to sleep. Set to -1
if you don’t want
your Space to sleep (default behavior for upgraded hardware). For free hardware, you can’t configure
the sleep time (value is fixed to 48 hours of inactivity).
See https://huggingface.co/docs/hub/spaces-gpus#sleep-time for more details. Returns
Runtime information about a Space including Space stage and hardware.
Request new hardware for a Space.
It is also possible to request hardware directly when creating the Space repo! See create_repo() for details.
( repo_id: str storage: SpaceStorage token: Union[bool, str, None] = None ) → SpaceRuntime
Parameters
str
) —
ID of the Space to update. Example: "HuggingFaceH4/open_llm_leaderboard"
. str
or SpaceStorage) —
Storage tier. Either ‘small’, ‘medium’, or ‘large’. False
. Returns
Runtime information about a Space including Space stage and hardware.
Request persistent storage for a Space.
It is not possible to decrease persistent storage after it's granted. To do so, you must delete it via delete_space_storage().
( repo_id: str token: Union[bool, str, None] = None factory_reboot: bool = False ) → SpaceRuntime
Parameters
str
) —
ID of the Space to restart. Example: "Salesforce/BLIP2"
. False
. bool
, optional) —
If True
, the Space will be rebuilt from scratch without caching any requirements. Returns
Runtime information about your Space.
Raises
RepositoryNotFoundError or HfHubHTTPError or BadRequestError
Restart your Space.
This is the only way to programmatically restart a Space if you’ve put it on pause (see pause_space()). You must be the owner of the Space to restart it. If you are using upgraded hardware, your account will be billed as soon as the Space is restarted. You can trigger a restart no matter the current state of a Space.
For more details, please visit the docs.
( name: str namespace: Optional[str] = None token: Union[bool, str, None] = None ) → InferenceEndpoint
Parameters
str
) —
The name of the Inference Endpoint to resume. str
, optional) —
The namespace in which the Inference Endpoint is located. Defaults to the current user. False
. Returns
information about the resumed Inference Endpoint.
Resume an Inference Endpoint.
For convenience, you can also resume an Inference Endpoint using InferenceEndpoint.resume().
( repo_id: str revision: str repo_type: Optional[str] = None token: Union[str, bool, None] = None )
Parameters
str
) —
A namespace (user or an organization) and a repo name separated
by a /
. str
) —
The revision of the repository to check. str
, optional) —
Set to "dataset"
or "space"
if getting repository info from a dataset or a space,
None
or "model"
if getting repository info from a model. Default is None
. False
. Checks if a specific revision exists on a repo on the Hugging Face Hub.
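Example (a minimal sketch; "gpt2" is an illustrative public repo):
>>> from huggingface_hub import revision_exists
>>> revision_exists("gpt2", "main")
True
>>> revision_exists("gpt2", "not-a-branch")
False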
( fn: Callable[..., R] *args **kwargs ) → Future
Parameters
Callable
) —
The method to run in the background. Returns
Future
a Future instance to get the result of the task.
Run a method in the background and return a Future instance.
The main goal is to run methods without blocking the main thread (e.g. to push data during training). Background jobs are queued to preserve order but are not run in parallel. If you need to speed up your scripts by parallelizing lots of calls to the API, you must set up and use your own ThreadPoolExecutor.
Note: Most-used methods like upload_file(), upload_folder() and create_commit() have a run_as_future: bool
argument to directly call them in the background. This is equivalent to calling api.run_as_future(...)
on them
but less verbose.
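Example (a minimal sketch; whoami() is used only as a cheap call to demonstrate the Future interface):
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> future = api.run_as_future(api.whoami)  # returns immediately
>>> future.done()
False
>>> future.result()  # blocks until the call has finished
{'name': 'your-username', ...}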
( name: str namespace: Optional[str] = None token: Union[bool, str, None] = None ) → InferenceEndpoint
Parameters
str
) —
The name of the Inference Endpoint to scale to zero. str
, optional) —
The namespace in which the Inference Endpoint is located. Defaults to the current user. False
. Returns
information about the scaled-to-zero Inference Endpoint.
Scale Inference Endpoint to zero.
An Inference Endpoint scaled to zero will not be charged. It will be resumed on the next request to it, with a cold start delay. This is different from pausing the Inference Endpoint with pause_inference_endpoint(), which would require a manual resume with resume_inference_endpoint().
For convenience, you can also scale an Inference Endpoint to zero using InferenceEndpoint.scale_to_zero().
( repo_id: str sleep_time: int token: Union[bool, str, None] = None ) → SpaceRuntime
Parameters
str
) —
ID of the repo to update. Example: "bigcode/in-the-stack"
. int
, optional) —
Number of seconds of inactivity to wait before a Space is put to sleep. Set to -1
if you don’t want
your Space to pause (default behavior for upgraded hardware). For free hardware, you can’t configure
the sleep time (value is fixed to 48 hours of inactivity).
See https://huggingface.co/docs/hub/spaces-gpus#sleep-time for more details. False
. Returns
Runtime information about a Space including Space stage and hardware.
Set a custom sleep time for a Space running on upgraded hardware.
Your Space will go to sleep after X seconds of inactivity. You are not billed when your Space is in “sleep” mode. If a new visitor lands on your Space, it will “wake it up”. Only upgraded hardware can have a configurable sleep time. To know more about the sleep stage, please refer to https://huggingface.co/docs/hub/spaces-gpus#sleep-time.
It is also possible to set a custom sleep time when requesting hardware with request_space_hardware().
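Example (a minimal sketch; "username/my-space" is a placeholder for a Space running on upgraded hardware):
>>> from huggingface_hub import set_space_sleep_time
# Put the Space to sleep after 1 hour of inactivity
>>> set_space_sleep_time("username/my-space", sleep_time=3600)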
( repo_id: str repo_type: Optional[str] = None revision: Optional[str] = None cache_dir: Union[str, Path, None] = None local_dir: Union[str, Path, None] = None proxies: Optional[Dict] = None etag_timeout: float = 10 force_download: bool = False token: Union[bool, str, None] = None local_files_only: bool = False allow_patterns: Optional[Union[List[str], str]] = None ignore_patterns: Optional[Union[List[str], str]] = None max_workers: int = 8 tqdm_class: Optional[base_tqdm] = None local_dir_use_symlinks: Union[bool, Literal['auto']] = 'auto' resume_download: Optional[bool] = None ) → str
Parameters
str
) —
A user or an organization name and a repo name separated by a /
. str
, optional) —
Set to "dataset"
or "space"
if downloading from a dataset or space,
None
or "model"
if downloading from a model. Default is None
. str
, optional) —
An optional Git revision id which can be a branch name, a tag, or a
commit hash. str
, Path
, optional) —
Path to the folder where cached files are stored. str
or Path
, optional) —
If provided, the downloaded files will be placed under this directory. dict
, optional) —
Dictionary mapping protocol to the URL of the proxy passed to
requests.request
. float
, optional, defaults to 10
) —
When fetching ETag, how many seconds to wait for the server to send
data before giving up which is passed to requests.request
. bool
, optional, defaults to False
) —
Whether the file should be downloaded even if it already exists in the local cache. False
. bool
, optional, defaults to False
) —
If True
, avoid downloading the file and return the path to the
local cached file if it exists. List[str]
or str
, optional) —
If provided, only files matching at least one pattern are downloaded. List[str]
or str
, optional) —
If provided, files matching any of the patterns are not downloaded. int
, optional) —
Number of concurrent threads to download files (1 thread = 1 file download).
Defaults to 8. tqdm
, optional) —
If provided, overwrites the default behavior for the progress bar. Passed
argument must inherit from tqdm.auto.tqdm
or at least mimic its behavior.
Note that the tqdm_class
is not passed to each individual download.
Defaults to the custom HF progress bar that can be disabled by setting
HF_HUB_DISABLE_PROGRESS_BARS
environment variable. Returns
str
folder path of the repo snapshot.
Raises
EnvironmentError
—
if token=True and the token cannot be found.OSError
—
if the ETag cannot be determined.ValueError
—
if some parameter value is invalid.Download repo files.
Download a whole snapshot of a repo’s files at the specified revision. This is useful when you want all files from
a repo, because you don’t know which ones you will need a priori. All files are nested inside a folder in order
to keep their actual filename relative to that folder. You can also filter which files to download using
allow_patterns
and ignore_patterns
.
If local_dir
is provided, the file structure from the repo will be replicated in this location. When using this
option, the cache_dir
will not be used and a .huggingface/
folder will be created at the root of local_dir
to store some metadata related to the downloaded files. While this mechanism is not as robust as the main
cache-system, it’s optimized for regularly pulling the latest version of a repository.
An alternative would be to clone the repo but this requires git and git-lfs to be installed and properly configured. It is also not possible to filter which files to download when cloning a repository using git.
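Example (a minimal sketch; "gpt2" is an illustrative repo and the patterns assume it hosts .safetensors and .json files):
>>> from huggingface_hub import snapshot_download
# Download the whole repo into the cache and get the local folder path
>>> snapshot_download(repo_id="gpt2")
# Download only the weights and configs into a local directory
>>> snapshot_download(
...     repo_id="gpt2",
...     allow_patterns=["*.safetensors", "*.json"],
...     local_dir="./gpt2-local",
... )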
( repo_id: str revision: Optional[str] = None timeout: Optional[float] = None files_metadata: bool = False token: Union[bool, str, None] = None ) → SpaceInfo
Parameters
str
) —
A namespace (user or an organization) and a repo name separated
by a /
. str
, optional) —
The revision of the space repository from which to get the
information. float
, optional) —
Whether to set a timeout for the request to the Hub. bool
, optional) —
Whether or not to retrieve metadata for files in the repository
(size, LFS metadata, etc). Defaults to False
. False
. Returns
The space repository information.
Get info on one specific Space on huggingface.co.
Space can be private if you pass an acceptable token.
Raises the following errors:
private
and you do not have access.( repo_id: str branch: Optional[str] = None commit_message: Optional[str] = None repo_type: Optional[str] = None token: Union[str, bool, None] = None )
Parameters
str
) —
A namespace (user or an organization) and a repo name separated by a /
. str
, optional) —
The branch to squash. Defaults to the head of the "main"
branch. str
, optional) —
The commit message to use for the squashed commit. str
, optional) —
Set to "dataset"
or "space"
if listing commits from a dataset or a Space, None
or "model"
if
listing from a model. Default is None
. False
. Raises
RepositoryNotFoundError or RevisionNotFoundError or BadRequestError
Squash commit history on a branch for a repo on the Hub.
Squashing the repo history is useful when you know you’ll make hundreds of commits and you don’t want to clutter the history. Squashing commits can only be performed from the head of a branch.
Once squashed, the commit history cannot be retrieved. This is a non-revertible operation.
Once the history of a branch has been squashed, it is not possible to merge it back into another branch since their history will have diverged.
Example:
>>> from huggingface_hub import HfApi
>>> api = HfApi()
# Create repo
>>> repo_id = api.create_repo("test-squash").repo_id
# Make a lot of commits.
>>> api.upload_file(repo_id=repo_id, path_in_repo="file.txt", path_or_fileobj=b"content")
>>> api.upload_file(repo_id=repo_id, path_in_repo="lfs.bin", path_or_fileobj=b"content")
>>> api.upload_file(repo_id=repo_id, path_in_repo="file.txt", path_or_fileobj=b"another_content")
# Squash history
>>> api.super_squash_history(repo_id=repo_id)
( repo_id: str token: Union[bool, str, None] = None repo_type: Optional[str] = None )
Parameters
str
) —
The repository to unlike. Example: "user/my-cool-model"
. False
. str
, optional) —
Set to "dataset"
or "space"
if unliking a dataset or space, None
or
"model"
if unliking a model. Default is None
. Raises
Unlike a given repo on the Hub (e.g. remove from favorite list).
See also like() and list_liked_repos().
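Example (a minimal sketch; "gpt2" stands for any repo you previously liked):
>>> from huggingface_hub import unlike, list_liked_repos
>>> unlike("gpt2")
>>> "gpt2" in list_liked_repos().models
False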
( collection_slug: str item_object_id: str note: Optional[str] = None position: Optional[int] = None token: Union[bool, str, None] = None )
Parameters
str
) —
Slug of the collection to update. Example: "TheBloke/recent-models-64f9a55bb3115b4f513ec026"
. str
) —
ID of the item in the collection. This is not the id of the item on the Hub (repo_id or paper id).
It must be retrieved from a CollectionItem object. Example: collection.items[0].item_object_id
. str
, optional) —
A note to attach to the item in the collection. The maximum size for a note is 500 characters. int
, optional) —
New position of the item in the collection. False
. Update an item in a collection.
Example:
>>> from huggingface_hub import get_collection, update_collection_item
# Get collection first
>>> collection = get_collection("TheBloke/recent-models-64f9a55bb3115b4f513ec026")
# Update item based on its ID (add note + update position)
>>> update_collection_item(
... collection_slug="TheBloke/recent-models-64f9a55bb3115b4f513ec026",
... item_object_id=collection.items[-1].item_object_id,
...     note="Newly updated model!",
... position=0,
... )
( collection_slug: str title: Optional[str] = None description: Optional[str] = None position: Optional[int] = None private: Optional[bool] = None theme: Optional[str] = None token: Union[bool, str, None] = None )
Parameters
str
) —
Slug of the collection to update. Example: "TheBloke/recent-models-64f9a55bb3115b4f513ec026"
. str
) —
Title of the collection to update. str
, optional) —
Description of the collection to update. int
, optional) —
New position of the collection in the list of collections of the user. bool
, optional) —
Whether the collection should be private or not. str
, optional) —
Theme of the collection on the Hub. False
. Update metadata of a collection on the Hub.
All arguments are optional. Only provided metadata will be updated.
Returns: Collection
Example:
>>> from huggingface_hub import update_collection_metadata
>>> collection = update_collection_metadata(
... collection_slug="username/iccv-2023-64f9a55bb3115b4f513ec026",
...     title="ICCV Oct. 2023",
... description="Portfolio of models, datasets, papers and demos I presented at ICCV Oct. 2023",
... private=False,
... theme="pink",
... )
>>> collection.slug
"username/iccv-oct-2023-64f9a55bb3115b4f513ec026"
# ^collection slug got updated but not the trailing ID
( name: str accelerator: Optional[str] = None instance_size: Optional[str] = None instance_type: Optional[str] = None min_replica: Optional[int] = None max_replica: Optional[int] = None repository: Optional[str] = None framework: Optional[str] = None revision: Optional[str] = None task: Optional[str] = None namespace: Optional[str] = None token: Union[bool, str, None] = None ) → InferenceEndpoint
Parameters
str
) —
The name of the Inference Endpoint to update. str
, optional) —
The hardware accelerator to be used for inference (e.g. "cpu"
). str
, optional) —
The size or type of the instance to be used for hosting the model (e.g. "large"
). str
, optional) —
The cloud instance type where the Inference Endpoint will be deployed (e.g. "c6i"
). int
, optional) —
The minimum number of replicas (instances) to keep running for the Inference Endpoint. int
, optional) —
The maximum number of replicas (instances) to scale to for the Inference Endpoint. str
, optional) —
The name of the model repository associated with the Inference Endpoint (e.g. "gpt2"
). str
, optional) —
The machine learning framework used for the model (e.g. "custom"
). str
, optional) —
The specific model revision to deploy on the Inference Endpoint (e.g. "6c0e6080953db56375760c0471a8c5f2929baf11"
). str
, optional) —
The task on which to deploy the model (e.g. "text-classification"
). str
, optional) —
The namespace where the Inference Endpoint will be updated. Defaults to the current user’s namespace. False
. Returns
information about the updated Inference Endpoint.
Update an Inference Endpoint.
This method allows the update of either the compute configuration, the deployed model, or both. All arguments are optional but at least one must be provided.
For convenience, you can also update an Inference Endpoint using InferenceEndpoint.update().
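Example (a minimal sketch; "my-endpoint-name" is a placeholder and the replica values are arbitrary):
>>> from huggingface_hub import update_inference_endpoint
# Allow the endpoint to scale between 0 and 2 replicas
>>> endpoint = update_inference_endpoint("my-endpoint-name", min_replica=0, max_replica=2)
>>> endpoint.status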
( repo_id: str private: bool = False token: Union[str, bool, None] = None organization: Optional[str] = None repo_type: Optional[str] = None name: Optional[str] = None )
Parameters
str
, optional) —
A namespace (user or an organization) and a repo name separated
by a /
. bool
, optional, defaults to False
) —
Whether the model repo should be private. False
. str
, optional) —
Set to "dataset"
or "space"
if uploading to a dataset or
space, None
or "model"
if uploading to a model. Default is
None
. Update the visibility setting of a repository.
Raises the following errors:
private
and you do not have access.( path_or_fileobj: Union[str, Path, bytes, BinaryIO] path_in_repo: str repo_id: str token: Union[str, bool, None] = None repo_type: Optional[str] = None revision: Optional[str] = None commit_message: Optional[str] = None commit_description: Optional[str] = None create_pr: Optional[bool] = None parent_commit: Optional[str] = None run_as_future: bool = False ) → CommitInfo or Future
Parameters
str
, Path
, bytes
, or IO
) —
Path to a file on the local machine or binary data stream /
fileobj / buffer. str
) —
Relative filepath in the repo, for example:
"checkpoints/1fec34a/weights.bin"
str
) —
The repository to which the file will be uploaded, for example:
"username/custom_transformers"
False
. str
, optional) —
Set to "dataset"
or "space"
if uploading to a dataset or
space, None
or "model"
if uploading to a model. Default is
None
. str
, optional) —
The git revision to commit from. Defaults to the head of the "main"
branch. str
, optional) —
The summary / title / first line of the generated commit str
optional) —
The description of the generated commit boolean
, optional) —
Whether or not to create a Pull Request with that commit. Defaults to False
.
If revision
is not set, PR is opened against the "main"
branch. If
revision
is set and is a branch, PR is opened against this branch. If
revision
is set and is not a branch name (example: a commit oid), an
RevisionNotFoundError
is returned by the server. str
, optional) —
The OID / SHA of the parent commit, as a hexadecimal string. Shorthands (7 first characters) are also supported.
If specified and create_pr
is False
, the commit will fail if revision
does not point to parent_commit
.
If specified and create_pr
is True
, the pull request will be created from parent_commit
.
Specifying parent_commit
ensures the repo has not changed before committing the changes, and can be
especially useful if the repo is updated / committed to concurrently. bool
, optional) —
Whether or not to run this method in the background. Background jobs are run sequentially without
blocking the main thread. Passing run_as_future=True
will return a Future
object. Defaults to False
. Returns
CommitInfo or Future
Instance of CommitInfo containing information about the newly created commit (commit hash, commit
url, pr url, commit message,…). If run_as_future=True
is passed, returns a Future object which will
contain the result when executed.
Upload a local file (up to 50 GB) to the given repo. The upload is done through an HTTP POST request, and doesn’t require git or git-lfs to be installed.
Raises the following errors:
HTTPError
if the HuggingFace API returned an errorValueError
if some parameter value is invalidprivate
and you do not have access.upload_file
assumes that the repo already exists on the Hub. If you get a
Client error 404, please make sure you are authenticated and that repo_id
and
repo_type
are set correctly. If repo does not exist, create it first using
create_repo().
Example:
>>> from huggingface_hub import upload_file
>>> with open("./local/filepath", "rb") as fobj:
... upload_file(
...     path_or_fileobj=fobj,
... path_in_repo="remote/file/path.h5",
... repo_id="username/my-dataset",
... repo_type="dataset",
... token="my_token",
... )
"https://huggingface.co/datasets/username/my-dataset/blob/main/remote/file/path.h5"
>>> upload_file(
... path_or_fileobj=".\\local\\file\\path",
... path_in_repo="remote/file/path.h5",
... repo_id="username/my-model",
... token="my_token",
... )
"https://huggingface.co/username/my-model/blob/main/remote/file/path.h5"
>>> upload_file(
... path_or_fileobj=".\\local\\file\\path",
... path_in_repo="remote/file/path.h5",
... repo_id="username/my-model",
... token="my_token",
... create_pr=True,
... )
"https://huggingface.co/username/my-model/blob/refs%2Fpr%2F1/remote/file/path.h5"
( repo_id: str folder_path: Union[str, Path] path_in_repo: Optional[str] = None commit_message: Optional[str] = None commit_description: Optional[str] = None token: Union[str, bool, None] = None repo_type: Optional[str] = None revision: Optional[str] = None create_pr: Optional[bool] = None parent_commit: Optional[str] = None allow_patterns: Optional[Union[List[str], str]] = None ignore_patterns: Optional[Union[List[str], str]] = None delete_patterns: Optional[Union[List[str], str]] = None multi_commits: bool = False multi_commits_verbose: bool = False run_as_future: bool = False ) → CommitInfo or Future
Parameters
str
) —
The repository to which the file will be uploaded, for example:
"username/custom_transformers"
str
or Path
) —
Path to the folder to upload on the local file system str
, optional) —
Relative path of the directory in the repo, for example:
"checkpoints/1fec34a/results"
. Will default to the root folder of the repository. False
. str
, optional) —
Set to "dataset"
or "space"
if uploading to a dataset or
space, None
or "model"
if uploading to a model. Default is
None
. str
, optional) —
The git revision to commit from. Defaults to the head of the "main"
branch. str
, optional) —
The summary / title / first line of the generated commit. Defaults to:
f"Upload {path_in_repo} with huggingface_hub"
str
optional) —
The description of the generated commit boolean
, optional) —
Whether or not to create a Pull Request with that commit. Defaults to False
. If revision
is not
set, PR is opened against the "main"
branch. If revision
is set and is a branch, PR is opened
against this branch. If revision
is set and is not a branch name (example: a commit oid), a
RevisionNotFoundError
is returned by the server. If both multi_commits
and create_pr
are True,
the PR created in the multi-commit process is kept opened. str
, optional) —
The OID / SHA of the parent commit, as a hexadecimal string. Shorthands (7 first characters) are also supported.
If specified and create_pr
is False
, the commit will fail if revision
does not point to parent_commit
.
If specified and create_pr
is True
, the pull request will be created from parent_commit
.
Specifying parent_commit
ensures the repo has not changed before committing the changes, and can be
especially useful if the repo is updated / committed to concurrently. List[str]
or str
, optional) —
If provided, only files matching at least one pattern are uploaded. List[str]
or str
, optional) —
If provided, files matching any of the patterns are not uploaded. List[str]
or str
, optional) —
If provided, remote files matching any of the patterns will be deleted from the repo while committing
new files. This is useful if you don’t know which files have already been uploaded.
Note: to avoid discrepancies the .gitattributes
file is not deleted even if it matches the pattern. bool
) —
If True, changes are pushed to a PR using a multi-commit process. Defaults to False
. bool
) —
If True and multi_commits
is used, more information will be displayed to the user. bool
, optional) —
Whether or not to run this method in the background. Background jobs are run sequentially without
blocking the main thread. Passing run_as_future=True
will return a Future
object. Defaults to False
. Returns
CommitInfo or Future
Instance of CommitInfo containing information about the newly created commit (commit hash, commit
url, pr url, commit message,…). If run_as_future=True
is passed, returns a Future object which will
contain the result when executed.
str
or Future
:
If multi_commits=True
, returns the url of the PR created to push the changes. If run_as_future=True
is passed, returns a Future object which will contain the result when executed.
Upload a local folder to the given repo. The upload is done through HTTP requests, and doesn’t require git or git-lfs to be installed.
The structure of the folder will be preserved. Files with the same name already present in the repository will be overwritten. Others will be left untouched.
Use the allow_patterns
and ignore_patterns
arguments to specify which files to upload. These parameters
accept either a single pattern or a list of patterns. Patterns are Standard Wildcards (globbing patterns) as
documented here. If both allow_patterns
and
ignore_patterns
are provided, both constraints apply. By default, all files from the folder are uploaded.
Use the delete_patterns
argument to specify remote files you want to delete. Input type is the same as for
allow_patterns
(see above). If path_in_repo
is also provided, the patterns are matched against paths
relative to this folder. For example, upload_folder(..., path_in_repo="experiment", delete_patterns="logs/*")
will delete any remote file under ./experiment/logs/
. Note that the .gitattributes
file will not be deleted
even if it matches the patterns.
Any .git/
folder present in any subdirectory will be ignored. However, please be aware that the .gitignore
file is not taken into account.
Uses HfApi.create_commit
under the hood.
Raises the following errors:
HTTPError if the HuggingFace API returned an error
ValueError if some parameter value is invalid
upload_folder assumes that the repo already exists on the Hub. If you get a Client error 404, please make sure you are authenticated and that repo_id and repo_type are set correctly. If repo does not exist, create it first using create_repo().
multi_commits is experimental. Its API and behavior are subject to change in the future without prior notice.
Example:
# Upload checkpoints folder except the log files
>>> upload_folder(
... folder_path="local/checkpoints",
... path_in_repo="remote/experiment/checkpoints",
... repo_id="username/my-dataset",
... repo_type="datasets",
... token="my_token",
... ignore_patterns="**/logs/*.txt",
... )
# "https://huggingface.co/datasets/username/my-dataset/tree/main/remote/experiment/checkpoints"
# Upload checkpoints folder including logs while deleting existing logs from the repo
# Useful if you don't know exactly which log files have already been pushed
>>> upload_folder(
... folder_path="local/checkpoints",
... path_in_repo="remote/experiment/checkpoints",
... repo_id="username/my-dataset",
... repo_type="datasets",
... token="my_token",
... delete_patterns="**/logs/*.txt",
... )
"https://huggingface.co/datasets/username/my-dataset/tree/main/remote/experiment/checkpoints"
# Upload checkpoints folder while creating a PR
>>> upload_folder(
... folder_path="local/checkpoints",
... path_in_repo="remote/experiment/checkpoints",
... repo_id="username/my-dataset",
... repo_type="datasets",
... token="my_token",
... create_pr=True,
... )
"https://huggingface.co/datasets/username/my-dataset/tree/refs%2Fpr%2F1/remote/experiment/checkpoints"
( token: Union[bool, str, None] = None )
Parameters
False
. Call HF API to know “whoami”.
( operations: Iterable max_operations_per_commit: int = 50 max_upload_size_per_commit: int = 2147483648 ) → Tuple[List[List[CommitOperationAdd]], List[List[CommitOperationDelete]]]
Parameters
List
of CommitOperation()
) —
The list of operations to split into commits. int
) —
Maximum number of operations in a single commit. Defaults to 50. int
) —
Maximum size to upload (in bytes) in a single commit. Defaults to 2GB. Files bigger than this limit are
uploaded, 1 per commit. Returns
Tuple[List[List[CommitOperationAdd]], List[List[CommitOperationDelete]]]
a tuple. First item is a list of lists of CommitOperationAdd representing the addition commits to push. The second item is a list of lists of CommitOperationDelete representing the deletion commits.
Split a list of operations in a list of commits to perform.
Implementation follows a sub-optimal (yet simple) algorithm:
Delete operations are grouped into commits of at most max_operations_per_commit operations.
Additions larger than max_upload_size_per_commit are committed 1 by 1.
Remaining additions are grouped together, splitting each time the max_operations_per_commit or the max_upload_size_per_commit limit is reached. We do not try to optimize the splitting to get the lowest number of commits as this is a NP-hard problem (see bin packing problem). For our use case, it is not problematic to use a sub-optimal solution so we favored an easy-to-explain implementation.
plan_multi_commits
is experimental. Its API and behavior are subject to change in the future without prior notice.
Example:
>>> from huggingface_hub import HfApi, plan_multi_commits
>>> addition_commits, deletion_commits = plan_multi_commits(
... operations=[
... CommitOperationAdd(...),
... CommitOperationAdd(...),
... CommitOperationDelete(...),
... CommitOperationDelete(...),
... CommitOperationAdd(...),
... ],
... )
>>> HfApi().create_commits_on_pr(
... repo_id="my-cool-model",
... addition_commits=addition_commits,
... deletion_commits=deletion_commits,
... (...)
... verbose=True,
... )
The initial order of the operations is not guaranteed! All deletions will be performed before additions. If you are not updating the same file multiple times, you are fine.
( username: str fullname: str email: str timestamp: datetime status: Literal['pending', 'accepted', 'rejected'] fields: Optional[Dict[str, Any]] = None )
Parameters
str
) —
Username of the user who requested access. str
) —
Fullname of the user who requested access. str
) —
Email of the user who requested access. datetime
) —
Timestamp of the request. Literal["pending", "accepted", "rejected"]
) —
Status of the request. Can be one of ["pending", "accepted", "rejected"]
. Dict[str, Any]
, optional) —
Additional fields filled by the user in the gate form. Data structure containing information about a user access request.
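AccessRequest objects are what the access-request helpers return. A hedged sketch, assuming list_pending_access_requests() is available in your version (the repo name is a placeholder):
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> for request in api.list_pending_access_requests("username/my-gated-model"):
...     print(request.username, request.timestamp, request.status)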
( *args commit_url: str _url: Optional[str] = None **kwargs )
Parameters
str
) —
Url where to find the commit. str
) —
The summary (first line) of the commit that has been created. str
) —
Description of the commit that has been created. Can be empty. str
) —
Commit hash id. Example: "91c54ad1727ee830252e457677f467be0bfd8a57"
. str
, optional) —
Url to the PR that has been created, if any. Populated when create_pr=True
is passed. str
, optional) —
Revision of the PR that has been created, if any. Populated when
create_pr=True
is passed. Example: "refs/pr/1"
. int
, optional) —
Number of the PR discussion that has been created, if any. Populated when
create_pr=True
is passed. Can be passed as discussion_num
in
get_discussion_details(). Example: 1
. str
, optional) —
Legacy url for str
compatibility. Can be the url to the uploaded file on the Hub (if returned by
upload_file()), to the uploaded folder on the Hub (if returned by upload_folder()) or to the commit on
the Hub (if returned by create_commit()). Defaults to commit_url
. It is deprecated to use this
attribute. Please use commit_url
instead. Data structure containing information about a newly created commit.
Returned by any method that creates a commit on the Hub: create_commit(), upload_file(), upload_folder(),
delete_file(), delete_folder(). It inherits from str
for backward compatibility but using methods specific
to str
is deprecated.
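For example, the CommitInfo returned by upload_file() can be inspected directly (repo name and content are placeholders):
>>> from huggingface_hub import upload_file
>>> commit_info = upload_file(
...     path_or_fileobj=b"hello",
...     path_in_repo="hello.txt",
...     repo_id="username/my-model",
...     create_pr=True,
... )
>>> commit_info.commit_url
>>> commit_info.pr_url       # populated because create_pr=True
>>> commit_info.pr_revision  # e.g. "refs/pr/1"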
( **kwargs )
Parameters
str
) —
ID of dataset. str
) —
Author of the dataset. str
) —
Repo SHA at this particular revision. datetime
, optional) —
Date of creation of the repo on the Hub. Note that the lowest value is 2022-03-02T23:29:04.000Z
,
corresponding to the date when we began to store creation dates. datetime
, optional) —
Date of last commit to the repo. bool
) —
Is the repo private. bool
, optional) —
Is the repo disabled. Literal["auto", "manual", False]
, optional) —
Is the repo gated.
If so, whether there is manual or automatic approval. int
) —
Number of downloads of the dataset over the last 30 days. int
) —
Number of likes of the dataset. List[str]
) —
List of tags of the dataset. DatasetCardData
, optional) —
Dataset Card Metadata as a huggingface_hub.repocard_data.DatasetCardData
object. List[RepoSibling]
) —
List of huggingface_hub.hf_api.RepoSibling objects that constitute the dataset. Contains information about a dataset on the Hub.
Most attributes of this class are optional. This is because the data returned by the Hub depends on the query made. In general, the more specific the query, the more information is returned. On the contrary, when listing datasets using list_datasets() only a subset of the attributes are returned.
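A typical way to obtain a fully populated DatasetInfo is dataset_info(); a minimal sketch (attribute access shown is illustrative):
>>> from huggingface_hub import dataset_info
>>> info = dataset_info("squad")
>>> info.id, info.downloads, info.likes
>>> info.card_data.license if info.card_data else None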
( name: str ref: str target_commit: str )
Contains information about a git reference for a repo on the Hub.
( commit_id: str authors: List[str] created_at: datetime title: str message: str formatted_title: Optional[str] formatted_message: Optional[str] )
Parameters
str
) —
OID of the commit (e.g. "e7da7f221d5bf496a48136c0cd264e630fe9fcc8"
) List[str]
) —
List of authors of the commit. datetime
) —
Datetime when the commit was created. str
) —
Title of the commit. This is a free-text value entered by the authors. str
) —
Description of the commit. This is a free-text value entered by the authors. str
) —
Title of the commit formatted as HTML. Only returned if formatted=True
is set. str
) —
Description of the commit formatted as HTML. Only returned if formatted=True
is set. Contains information about a git commit for a repo on the Hub. Check out list_repo_commits() for more details.
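GitCommitInfo objects are returned by list_repo_commits(); a minimal sketch:
>>> from huggingface_hub import HfApi
>>> for commit in HfApi().list_repo_commits("gpt2"):
...     print(commit.commit_id[:7], commit.created_at, commit.title)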
( branches: List[GitRefInfo] converts: List[GitRefInfo] tags: List[GitRefInfo] pull_requests: Optional[List[GitRefInfo]] = None )
Parameters
List[GitRefInfo]
) —
A list of GitRefInfo containing information about branches on the repo. List[GitRefInfo]
) —
A list of GitRefInfo containing information about “convert” refs on the repo.
Converts are refs used (internally) to push preprocessed data in Dataset repos. List[GitRefInfo]
) —
A list of GitRefInfo containing information about tags on the repo. List[GitRefInfo]
, optional) —
A list of GitRefInfo containing information about pull requests on the repo.
Only returned if include_prs=True
is set. Contains information about all git references for a repo on the Hub.
Object is returned by list_repo_refs().
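A minimal sketch of reading these references (outputs are illustrative):
>>> from huggingface_hub import list_repo_refs
>>> refs = list_repo_refs("gpt2")
>>> [branch.name for branch in refs.branches]
['main']
>>> [tag.name for tag in refs.tags]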
( **kwargs )
Parameters
str
) —
ID of model. str
, optional) —
Author of the model. str
, optional) —
Repo SHA at this particular revision. datetime
, optional) —
Date of creation of the repo on the Hub. Note that the lowest value is 2022-03-02T23:29:04.000Z
,
corresponding to the date when we began to store creation dates. datetime
, optional) —
Date of last commit to the repo. bool
) —
Is the repo private. bool
, optional) —
Is the repo disabled. Literal["auto", "manual", False]
, optional) —
Is the repo gated.
If so, whether there is manual or automatic approval. int
) —
Number of downloads of the model over the last 30 days. int
) —
Number of likes of the model. str
, optional) —
Library associated with the model. List[str]
) —
List of tags of the model. Compared to card_data.tags
, contains extra tags computed by the Hub
(e.g. supported libraries, model’s arXiv). str
, optional) —
Pipeline tag associated with the model. str
, optional) —
Mask token used by the model. Any
, optional) —
Widget data associated with the model. Dict
, optional) —
Model index for evaluation. Dict
, optional) —
Model configuration. TransformersInfo
, optional) —
Transformers-specific info (auto class, processor, etc.) associated with the model. ModelCardData
, optional) —
Model Card Metadata as a huggingface_hub.repocard_data.ModelCardData
object. List[RepoSibling]
) —
List of huggingface_hub.hf_api.RepoSibling objects that constitute the model. List[str]
, optional) —
List of spaces using the model. SafeTensorsInfo
, optional) —
Model’s safetensors information. Contains information about a model on the Hub.
Most attributes of this class are optional. This is because the data returned by the Hub depends on the query made. In general, the more specific the query, the more information is returned. On the contrary, when listing models using list_models() only a subset of the attributes are returned.
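For a fully populated ModelInfo, query a single repo with model_info(); a minimal sketch (output is indicative):
>>> from huggingface_hub import model_info
>>> info = model_info("bert-base-uncased")
>>> info.pipeline_tag
'fill-mask'
>>> len(info.siblings)  # files in the repo, as RepoSibling objects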
( rfilename: str size: Optional[int] = None blob_id: Optional[str] = None lfs: Optional[BlobLfsInfo] = None )
Parameters
int
, optional) —
The file’s size, in bytes. This attribute is defined when files_metadata
argument of repo_info() is set
to True
. It’s None
otherwise. str
, optional) —
The file’s git OID. This attribute is defined when files_metadata
argument of repo_info() is set to
True
. It’s None
otherwise. BlobLfsInfo
, optional) —
The file’s LFS metadata. This attribute is defined when files_metadata
argument of repo_info() is set to
True
and the file is stored with Git LFS. It’s None
otherwise. Contains basic information about a repo file inside a repo on the Hub.
All attributes of this class are optional except rfilename
. This is because only the file names are returned when
listing repositories on the Hub (with list_models(), list_datasets() or list_spaces()). If you need more
information like file size, blob id or lfs details, you must request them specifically from one repo at a time
(using model_info(), dataset_info() or space_info()) as it adds more constraints on the backend server to
retrieve these.
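A hedged sketch of requesting this optional metadata (repo name is real, output shape is indicative):
>>> from huggingface_hub import model_info
>>> info = model_info("bert-base-uncased", files_metadata=True)
>>> for sibling in info.siblings:
...     print(sibling.rfilename, sibling.size, sibling.lfs is not None)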
( **kwargs )
Parameters
int
) —
The file’s size, in bytes. str
) —
The file’s git OID. BlobLfsInfo
) —
The file’s LFS metadata. LastCommitInfo
, optional) —
The file’s last commit metadata. Only defined if list_repo_tree() and get_paths_info()
are called with expand=True
. BlobSecurityInfo
, optional) —
The file’s security scan metadata. Only defined if list_repo_tree() and get_paths_info()
are called with expand=True
. Contains information about a file on the Hub.
( url: Any endpoint: Optional[str] = None )
Parameters
Any
) —
String value of the repo url. str
, optional) —
Endpoint of the Hub. Defaults to https://huggingface.co. Raises
ValueError
If URL cannot be parsed.ValueError
If repo_type
is unknown.Subclass of str
describing a repo URL on the Hub.
RepoUrl
is returned by HfApi.create_repo
. It inherits from str
for backward
compatibility. At initialization, the URL is parsed to populate properties:
str
)Optional[str]
)str
)str
)Literal["model", "dataset", "space"]
)str
)Example:
>>> RepoUrl('https://huggingface.co/gpt2')
RepoUrl('https://huggingface.co/gpt2', endpoint='https://huggingface.co', repo_type='model', repo_id='gpt2')
>>> RepoUrl('https://hub-ci.huggingface.co/datasets/dummy_user/dummy_dataset', endpoint='https://hub-ci.huggingface.co')
RepoUrl('https://hub-ci.huggingface.co/datasets/dummy_user/dummy_dataset', endpoint='https://hub-ci.huggingface.co', repo_type='dataset', repo_id='dummy_user/dummy_dataset')
>>> RepoUrl('hf://datasets/my-user/my-dataset')
RepoUrl('hf://datasets/my-user/my-dataset', endpoint='https://huggingface.co', repo_type='dataset', repo_id='my-user/my-dataset')
>>> HfApi.create_repo("dummy_model")
RepoUrl('https://huggingface.co/Wauplin/dummy_model', endpoint='https://huggingface.co', repo_type='model', repo_id='Wauplin/dummy_model')
( metadata: Optional sharded: bool weight_map: Dict files_metadata: Dict )
Parameters
Dict
, optional) —
The metadata contained in the ‘model.safetensors.index.json’ file, if it exists. Only populated for sharded
models. bool
) —
Whether the repo contains a sharded model or not. Dict[str, str]
) —
A map of all weights. Keys are tensor names and values are filenames of the files containing the tensors. Dict[str, SafetensorsFileMetadata]
) —
A map of all files metadata. Keys are filenames and values are the metadata of the corresponding file, as
a SafetensorsFileMetadata
object. Dict[str, int]
) —
A map of the number of parameters per data type. Keys are data types and values are the number of parameters
of that data type. Metadata for a Safetensors repo.
A repo is considered to be a Safetensors repo if it contains either a ‘model.safetensors’ weight file (non-sharded model) or a ‘model.safetensors.index.json’ index file (sharded model) at its root.
This class is returned by get_safetensors_metadata().
For more details regarding the safetensors format, check out https://huggingface.co/docs/safetensors/index#format.
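A minimal sketch of reading this metadata for a safetensors model (output values are indicative):
>>> from huggingface_hub import get_safetensors_metadata
>>> metadata = get_safetensors_metadata("bigscience/bloomz-560m")
>>> metadata.sharded
False
>>> metadata.parameter_count  # number of parameters per data type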
( metadata: Dict tensors: Dict )
Parameters
Dict
) —
The metadata contained in the file. Dict[str, TensorInfo]
) —
A map of all tensors. Keys are tensor names and values are information about the corresponding tensor, as a
TensorInfo
object. Dict[str, int]
) —
A map of the number of parameters per data type. Keys are data types and values are the number of parameters
of that data type. Metadata for a Safetensors file hosted on the Hub.
This class is returned by parse_safetensors_file_metadata().
For more details regarding the safetensors format, check out https://huggingface.co/docs/safetensors/index#format.
( **kwargs )
Parameters
str
) —
ID of the Space. str
, optional) —
Author of the Space. str
, optional) —
Repo SHA at this particular revision. datetime
, optional) —
Date of creation of the repo on the Hub. Note that the lowest value is 2022-03-02T23:29:04.000Z
,
corresponding to the date when we began to store creation dates. datetime
, optional) —
Date of last commit to the repo. bool
) —
Is the repo private. Literal["auto", "manual", False]
, optional) —
Is the repo gated.
If so, whether there is manual or automatic approval. bool
, optional) —
Is the Space disabled. str
, optional) —
Host URL of the Space. str
, optional) —
Subdomain of the Space. int
) —
Number of likes of the Space. List[str]
) —
List of tags of the Space. List[RepoSibling]
) —
List of huggingface_hub.hf_api.RepoSibling objects that constitute the Space. SpaceCardData
, optional) —
Space Card Metadata as a huggingface_hub.repocard_data.SpaceCardData
object. SpaceRuntime
, optional) —
Space runtime information as a huggingface_hub.hf_api.SpaceRuntime object. str
, optional) —
SDK used by the Space. List[str]
, optional) —
List of models used by the Space. List[str]
, optional) —
List of datasets used by the Space. Contains information about a Space on the Hub.
Most attributes of this class are optional. This is because the data returned by the Hub depends on the query made. In general, the more specific the query, the more information is returned. On the contrary, when listing spaces using list_spaces() only a subset of the attributes are returned.
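A minimal sketch querying a single Space (the Space id is only an example):
>>> from huggingface_hub import space_info
>>> info = space_info("HuggingFaceH4/open_llm_leaderboard")
>>> info.sdk, info.likes
>>> info.runtime.stage if info.runtime else None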
( dtype: Literal shape: List data_offsets: Tuple )
Parameters
str
) —
The data type of the tensor (“F64”, “F32”, “F16”, “BF16”, “I64”, “I32”, “I16”, “I8”, “U8”, “BOOL”). List[int]
) —
The shape of the tensor. Tuple[int, int]
) —
The offsets of the data in the file as a tuple [BEGIN, END]
. int
) —
The number of parameters in the tensor. Information about a tensor.
For more details regarding the safetensors format, check out https://huggingface.co/docs/safetensors/index#format.
( **kwargs )
Parameters
str
) —
URL of the user’s avatar. str
) —
Name of the user on the Hub (unique). str
) —
User’s full name. bool
, optional) —
Whether the user is a pro user. int
, optional) —
Number of models created by the user. int
, optional) —
Number of datasets created by the user. int
, optional) —
Number of spaces created by the user. int
, optional) —
Number of discussions initiated by the user. int
, optional) —
Number of papers authored by the user. int
, optional) —
Number of upvotes received by the user. int
, optional) —
Number of likes given by the user. bool
, optional) —
Whether the authenticated user is following this user. str
, optional) —
User’s details. Contains information about a user on the Hub.
( user: str total: int datasets: List[str] models: List[str] spaces: List[str] )
Parameters
str
) —
Name of the user for which we fetched the likes. int
) —
Total number of likes. List[str]
) —
List of datasets liked by the user (as repo_ids). List[str]
) —
List of models liked by the user (as repo_ids). List[str]
) —
List of spaces liked by the user (as repo_ids). Contains information about a user's likes on the Hub.
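UserLikes is returned by list_liked_repos(); a minimal sketch (the username is only an example):
>>> from huggingface_hub import list_liked_repos
>>> likes = list_liked_repos("julien-c")
>>> likes.total
>>> likes.models[:3]  # repo_ids of liked models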
The supported values for CommitOperation() are as follows:
( path_in_repo: str path_or_fileobj: Union )
Parameters
str
) —
Relative filepath in the repo, for example: "checkpoints/1fec34a/weights.bin"
str
, Path
, bytes
, or BinaryIO
) —
Either: a path to a file on the local machine (as str or pathlib.Path) to upload; raw content (bytes) holding the content of the file to upload; or a “file object” (subclass of io.BufferedIOBase), typically obtained with open(path, "rb"). It must support seek() and tell() methods. Raises
ValueError
ValueError
—
If path_or_fileobj
is not one of str
, Path
, bytes
or io.BufferedIOBase
.ValueError
—
If path_or_fileobj
is a str
or Path
but not a path to an existing file.ValueError
—
If path_or_fileobj
is a io.BufferedIOBase
but it doesn’t support both
seek()
and tell()
.Data structure holding necessary info to upload a file to a repository on the Hub.
( with_tqdm: bool = False )
A context manager that yields a file-like object allowing you to read the underlying
data behind path_or_fileobj
.
Example:
>>> operation = CommitOperationAdd(
... path_in_repo="remote/dir/weights.h5",
... path_or_fileobj="./local/weights.h5",
... )
CommitOperationAdd(path_in_repo='remote/dir/weights.h5', path_or_fileobj='./local/weights.h5')
>>> with operation.as_file() as file:
... content = file.read()
>>> with operation.as_file(with_tqdm=True) as file:
... while True:
... data = file.read(1024)
... if not data:
... break
config.json: 100%|█████████████████████████| 8.19k/8.19k [00:02<00:00, 3.72kB/s]
>>> with operation.as_file(with_tqdm=True) as file:
... requests.put(..., data=file)
config.json: 100%|█████████████████████████| 8.19k/8.19k [00:02<00:00, 3.72kB/s]
( path_in_repo: str is_folder: Union = 'auto' )
Parameters
str
) —
Relative filepath in the repo, for example: "checkpoints/1fec34a/weights.bin"
for a file or "checkpoints/1fec34a/"
for a folder. bool
or Literal["auto"]
, optional) —
Whether the Delete Operation applies to a folder or not. If “auto”, the path
type (file or folder) is guessed automatically by checking whether the path ends with
a ”/” (folder) or not (file). To explicitly set the path type, you can set
is_folder=True
or is_folder=False
. Data structure holding necessary info to delete a file or a folder from a repository on the Hub.
( src_path_in_repo: str path_in_repo: str src_revision: Optional = None )
Parameters
str
) —
Relative filepath in the repo of the file to be copied, e.g. "checkpoints/1fec34a/weights.bin"
. str
) —
Relative filepath in the repo where to copy the file, e.g. "checkpoints/1fec34a/weights_copy.bin"
. str
, optional) —
The git revision of the file to be copied. Can be any valid git revision.
Defaults to the target commit revision. Data structure holding necessary info to copy a file in a repository on the Hub.
Limitations:
Note: you can combine a CommitOperationCopy and a CommitOperationDelete to rename an LFS file on the Hub.
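The three operation types can be mixed in a single create_commit() call; for instance, the rename trick mentioned in the note above amounts to a copy followed by a delete (paths and repo name are placeholders):
>>> from huggingface_hub import HfApi, CommitOperationAdd, CommitOperationCopy, CommitOperationDelete
>>> api = HfApi()
>>> operations = [
...     CommitOperationAdd(path_in_repo="weights.bin", path_or_fileobj="./local/weights.bin"),
...     # Rename an LFS file: copy it to the new path, then delete the old one
...     CommitOperationCopy(src_path_in_repo="old_name.safetensors", path_in_repo="new_name.safetensors"),
...     CommitOperationDelete(path_in_repo="old_name.safetensors"),
... ]
>>> api.create_commit(repo_id="username/my-model", operations=operations, commit_message="Add weights and rename a file")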
( repo_id: str folder_path: Union every: Union = 5 path_in_repo: Optional = None repo_type: Optional = None revision: Optional = None private: bool = False token: Optional = None allow_patterns: Union = None ignore_patterns: Union = None squash_history: bool = False hf_api: Optional = None )
Parameters
str
) —
The id of the repo to commit to. str
or Path
) —
Path to the local folder to upload regularly. int
or float
, optional) —
The number of minutes between each commit. Defaults to 5 minutes. str
, optional) —
Relative path of the directory in the repo, for example: "checkpoints/"
. Defaults to the root folder
of the repository. str
, optional) —
The type of the repo to commit to. Defaults to model
. str
, optional) —
The revision of the repo to commit to. Defaults to main
. bool
, optional) —
Whether to make the repo private. Defaults to False
. This value is ignored if the repo already exists. str
, optional) —
The token to use to commit to the repo. Defaults to the token saved on the machine. List[str]
or str
, optional) —
If provided, only files matching at least one pattern are uploaded. List[str]
or str
, optional) —
If provided, files matching any of the patterns are not uploaded. bool
, optional) —
Whether to squash the history of the repo after each commit. Defaults to False
. Squashing commits is
useful to avoid degraded performance on the repo when it grows too large. HfApi
, optional) —
The HfApi client to use to commit to the Hub. Can be set with custom settings (user agent, token,…). Scheduler to upload a local folder to the Hub at regular intervals (e.g. push to hub every 5 minutes).
The scheduler is started when instantiated and runs indefinitely. At the end of your script, a last commit is triggered. Check out the upload guide to learn more about how to use it.
Example:
>>> from pathlib import Path
>>> from huggingface_hub import CommitScheduler
# Scheduler uploads every 10 minutes
>>> csv_path = Path("watched_folder/data.csv")
>>> CommitScheduler(repo_id="test_scheduler", repo_type="dataset", folder_path=csv_path.parent, every=10)
>>> with csv_path.open("a") as f:
... f.write("first line")
# Some time later (...)
>>> with csv_path.open("a") as f:
... f.write("second line")
Push folder to the Hub and return the commit info.
This method is not meant to be called directly. It is run in the background by the scheduler, respecting a queue mechanism to avoid concurrent commits. Making a direct call to the method might lead to concurrency issues.
The default behavior of push_to_hub
is to assume an append-only folder. It lists all files in the folder and
uploads only changed files. If no changes are found, the method returns without committing anything. If you want
to change this behavior, you can inherit from CommitScheduler and override this method. This can be useful
for example to compress data together in a single file before committing. For more details and examples, check
out our integration guide.
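A hedged sketch of such a subclass, loosely following the integration guide (attribute names like self.api, self.folder_path, self.repo_id and self.repo_type mirror the constructor arguments and may differ slightly across versions):
>>> import shutil
>>> from huggingface_hub import CommitScheduler
>>> class ZipScheduler(CommitScheduler):
...     def push_to_hub(self):
...         # Hypothetical override: bundle the watched folder into a single archive
...         # before uploading it, instead of pushing every changed file.
...         archive = shutil.make_archive("data", "zip", root_dir=self.folder_path)
...         return self.api.upload_file(
...             path_or_fileobj=archive,
...             path_in_repo="data.zip",
...             repo_id=self.repo_id,
...             repo_type=self.repo_type,
...         )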
Stop the scheduler.
A stopped scheduler cannot be restarted. Mostly for testing purposes.
Trigger a push_to_hub
and return a future.
This method is automatically called every every
minutes. You can also call it manually to trigger a commit
immediately, without waiting for the next scheduled commit.
The huggingface_hub package includes tools to help you filter repositories on the Hub.
( author: Optional = None benchmark: Union = None dataset_name: Optional = None language_creators: Union = None language: Union = None multilinguality: Union = None size_categories: Union = None task_categories: Union = None task_ids: Union = None )
Parameters
str
, optional) —
A string that can be used to identify datasets on
the Hub by the original uploader (author or organization), such as
facebook
or huggingface
. str
or List
, optional) —
A string or list of strings that can be used to identify datasets on
the Hub by their official benchmark. str
, optional) —
A string or list of strings that can be used to identify datasets on
the Hub by their name, such as SQAC
or wikineural
str
or List
, optional) —
A string or list of strings that can be used to identify datasets on
the Hub with how the data was curated, such as crowdsourced
or
machine_generated
. str
or List
, optional) —
A string or list of strings representing a two-character language to
filter datasets by on the Hub. str
or List
, optional) —
A string or list of strings representing a filter for datasets that
contain multiple languages. str
or List
, optional) —
A string or list of strings that can be used to identify datasets on
the Hub by the size of the dataset such as 100K<n<1M
or
1M<n<10M
. str
or List
, optional) —
A string or list of strings that can be used to identify datasets on
the Hub by the designed task, such as audio_classification
or
named_entity_recognition
. str
or List
, optional) —
A string or list of strings that can be used to identify datasets on
the Hub by the specific task such as speech_emotion_recognition
or
paraphrase
. A class that converts human-readable dataset search parameters into ones compatible with the REST API. For all parameters capitalization does not matter.
The DatasetFilter
class is deprecated and will be removed in huggingface_hub>=0.24. Please pass the filter parameters as keyword arguments directly to list_datasets().
Examples:
>>> from huggingface_hub import DatasetFilter
>>> # Using author
>>> new_filter = DatasetFilter(author="facebook")
>>> # Using benchmark
>>> new_filter = DatasetFilter(benchmark="raft")
>>> # Using dataset_name
>>> new_filter = DatasetFilter(dataset_name="wikineural")
>>> # Using language_creators
>>> new_filter = DatasetFilter(language_creators="crowdsourced")
>>> # Using language
>>> new_filter = DatasetFilter(language="en")
>>> # Using multilinguality
>>> new_filter = DatasetFilter(multilinguality="multilingual")
>>> # Using size_categories
>>> new_filter = DatasetFilter(size_categories="100K<n<1M")
>>> # Using task_categories
>>> new_filter = DatasetFilter(task_categories="audio_classification")
>>> # Using task_ids
>>> new_filter = DatasetFilter(task_ids="paraphrase")
( author: Optional = None library: Union = None language: Union = None model_name: Optional = None task: Union = None trained_dataset: Union = None tags: Union = None )
Parameters
str
, optional) —
A string that can be used to identify models on the Hub by the
original uploader (author or organization), such as facebook
or
huggingface
. str
or List
, optional) —
A string or list of strings of foundational libraries models were
originally trained from, such as pytorch, tensorflow, or allennlp. str
or List
, optional) —
A string or list of strings of languages, both by name and country
code, such as “en” or “English” str
, optional) —
A string that contains complete or partial names for models on the
Hub, such as “bert” or “bert-base-cased” str
or List
, optional) —
A string or list of strings of tasks models were designed for, such
as: “fill-mask” or “automatic-speech-recognition” str
or List
, optional) —
A string tag or a list of tags to filter models on the Hub by, such
as text-generation
or spacy
. str
or List
, optional) —
A string tag or a list of string tags of the trained dataset for a
model on the Hub. A class that converts human-readable model search parameters into ones compatible with the REST API. For all parameters capitalization does not matter.
The ModelFilter
class is deprecated and will be removed in huggingface_hub>=0.24. Please pass the filter parameters as keyword arguments directly to list_models().
Examples:
>>> from huggingface_hub import ModelFilter
>>> # For the author_or_organization
>>> new_filter = ModelFilter(author="facebook")
>>> # For the library
>>> new_filter = ModelFilter(library="pytorch")
>>> # For the language
>>> new_filter = ModelFilter(language="french")
>>> # For the model_name
>>> new_filter = ModelFilter(model_name="bert")
>>> # For the task
>>> new_filter = ModelFilter(task="text-classification")
>>> from huggingface_hub import HfApi
>>> api = HfApi()  # an HfApi client can also be used to list the tags available on the Hub
>>> # For the tags
>>> new_filter = ModelFilter(tags="benchmark:raft")
>>> # Related to the dataset
>>> new_filter = ModelFilter(trained_dataset="common_voice")