This page lists the types (e.g. dataclasses) available for each task supported on the Hugging Face Hub. Each task is specified using a JSON schema, and the types are generated from these schemas, with some customization to account for Python requirements. Visit @huggingface.js/tasks to find the JSON schema for each task.
This part of the library is still under development and will be improved in future releases.
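For illustration, the generated types are plain Python dataclasses, so they can be inspected and instantiated like any other dataclass. A minimal sketch, assuming a recent `huggingface_hub` release that re-exports the generated types at the top level:

```python
# Sketch only: assumes the generated types are re-exported at the top level
# of huggingface_hub, as in recent releases.
from dataclasses import fields, is_dataclass

from huggingface_hub import ChatCompletionOutput, ChatCompletionOutputUsage

assert is_dataclass(ChatCompletionOutput)
print([f.name for f in fields(ChatCompletionOutputUsage)])
# ['completion_tokens', 'prompt_tokens', 'total_tokens']
```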
( inputs: Any parameters: Optional = None )
Inputs for Audio Classification inference
Outputs for Audio Classification inference
( function_to_apply: Optional = None top_k: Optional = None )
Additional inference parameters for Audio Classification
Inputs for Audio to Audio inference
( blob: Any content_type: str label: str )
Outputs of inference for the Audio To Audio task: a generated audio file with its label.
( do_sample: Optional = None early_stopping: Union = None epsilon_cutoff: Optional = None eta_cutoff: Optional = None max_length: Optional = None max_new_tokens: Optional = None min_length: Optional = None min_new_tokens: Optional = None num_beam_groups: Optional = None num_beams: Optional = None penalty_alpha: Optional = None temperature: Optional = None top_k: Optional = None top_p: Optional = None typical_p: Optional = None use_cache: Optional = None )
Ad-hoc parametrization of the text generation process
( inputs: Any parameters: Optional = None )
Inputs for Automatic Speech Recognition inference
( text: str chunks: Optional = None )
Outputs of inference for the Automatic Speech Recognition task
( text: str timestamps: List )
( generate: Optional = None return_timestamps: Optional = None )
Additional inference parameters for Automatic Speech Recognition
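As an illustration, these input and output types typically surface through an `InferenceClient` call. A hedged sketch (the model id and audio file below are placeholders, and an access token may be required):

```python
# Sketch only: "openai/whisper-large-v3" and "sample.flac" are placeholders.
from huggingface_hub import InferenceClient

client = InferenceClient(model="openai/whisper-large-v3")
output = client.automatic_speech_recognition("sample.flac")
print(output.text)  # the transcribed text, as described by the output type above
```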
( messages: List frequency_penalty: Optional = None logit_bias: Optional = None logprobs: Optional = None max_tokens: Optional = None model: Optional = None n: Optional = None presence_penalty: Optional = None response_format: Optional = None seed: Optional = None stop: Optional = None stream: Optional = None temperature: Optional = None tool_choice: Union = None tool_prompt: Optional = None tools: Optional = None top_logprobs: Optional = None top_p: Optional = None )
Chat Completion Input. Auto-generated from TGI specs. For more details, check out https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.
( arguments: Any name: str description: Optional = None )
( type: ChatCompletionInputGrammarTypeType value: Any )
( content: Union role: str name: Optional = None )
( type: ChatCompletionInputMessageChunkType image_url: Optional = None text: Optional = None )
( function: ChatCompletionInputFunctionDefinition type: str )
( choices: List created: int id: str model: str system_fingerprint: str usage: ChatCompletionOutputUsage )
Chat Completion Output. Auto-generated from TGI specs. For more details, check out https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.
( finish_reason: str index: int message: ChatCompletionOutputMessage logprobs: Optional = None )
( arguments: Any name: str description: Optional = None )
( logprob: float token: str top_logprobs: List )
( role: str content: Optional = None tool_calls: Optional = None )
( function: ChatCompletionOutputFunctionDefinition id: str type: str )
( completion_tokens: int prompt_tokens: int total_tokens: int )
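To show how the Chat Completion output fields fit together, here is a hedged sketch of a non-streaming call; the model id is a placeholder and an access token may be required:

```python
# Sketch only: the model id is a placeholder.
from huggingface_hub import InferenceClient

client = InferenceClient(model="meta-llama/Meta-Llama-3-8B-Instruct")
response = client.chat_completion(
    messages=[{"role": "user", "content": "What is deep learning?"}],
    max_tokens=100,
)
print(response.choices[0].message.content)  # assistant reply
print(response.usage.total_tokens)          # token accounting from the usage field above
```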
( choices: List created: int id: str model: str system_fingerprint: str )
Chat Completion Stream Output. Auto-generated from TGI specs. For more details, check out https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.
( delta: ChatCompletionStreamOutputDelta index: int finish_reason: Optional = None logprobs: Optional = None )
( role: str content: Optional = None tool_calls: Optional = None )
( function: ChatCompletionStreamOutputFunction id: str index: int type: str )
( arguments: str name: Optional = None )
( logprob: float token: str top_logprobs: List )
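When `stream=True` is passed, the same call yields Chat Completion Stream Output chunks whose delta carries the incremental content. A minimal sketch (model id is a placeholder):

```python
# Sketch only: the model id is a placeholder.
from huggingface_hub import InferenceClient

client = InferenceClient(model="meta-llama/Meta-Llama-3-8B-Instruct")
for chunk in client.chat_completion(
    messages=[{"role": "user", "content": "Tell me a joke."}],
    max_tokens=50,
    stream=True,
):
    delta = chunk.choices[0].delta
    if delta.content:
        print(delta.content, end="")
```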
Inputs for Depth Estimation inference
Outputs of inference for the Depth Estimation task
( inputs: DocumentQuestionAnsweringInputData parameters: Optional = None )
Inputs for Document Question Answering inference
One (document, question) pair to answer
( answer: str end: int score: float start: int words: List )
Outputs of inference for the Document Question Answering task
( doc_stride: Optional = None handle_impossible_answer: Optional = None lang: Optional = None max_answer_len: Optional = None max_question_len: Optional = None max_seq_len: Optional = None top_k: Optional = None word_boxes: Optional = None )
Additional inference parameters for Document Question Answering
( inputs: str normalize: Optional = None prompt_name: Optional = None truncate: Optional = None truncation_direction: Optional = None )
Feature Extraction Input. Auto-generated from TEI specs. For more details, check out https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tei-import.ts.
Inputs for Fill Mask inference
( score: float sequence: str token: int token_str: Any fill_mask_output_token_str: Optional = None )
Outputs of inference for the Fill Mask task
( targets: Optional = None top_k: Optional = None )
Additional inference parameters for Fill Mask
( inputs: Any parameters: Optional = None )
Inputs for Image Classification inference
Outputs of inference for the Image Classification task
( function_to_apply: Optional = None top_k: Optional = None )
Additional inference parameters for Image Classification
Inputs for Image Segmentation inference
( label: str mask: Any score: Optional = None )
Outputs of inference for the Image Segmentation task: a predicted mask / segment.
( mask_threshold: Optional = None overlap_mask_area_threshold: Optional = None subtask: Optional = None threshold: Optional = None )
Additional inference parameters for Image Segmentation
Inputs for Image To Image inference
Outputs of inference for the Image To Image task
( guidance_scale: Optional = None negative_prompt: Optional = None num_inference_steps: Optional = None target_size: Optional = None )
Additional inference parameters for Image To Image
The size in pixels of the output image
( do_sample: Optional = None early_stopping: Union = None epsilon_cutoff: Optional = None eta_cutoff: Optional = None max_length: Optional = None max_new_tokens: Optional = None min_length: Optional = None min_new_tokens: Optional = None num_beam_groups: Optional = None num_beams: Optional = None penalty_alpha: Optional = None temperature: Optional = None top_k: Optional = None top_p: Optional = None typical_p: Optional = None use_cache: Optional = None )
Ad-hoc parametrization of the text generation process
Inputs for Image To Text inference
( generated_text: Any image_to_text_output_generated_text: Optional = None )
Outputs of inference for the Image To Text task
( generate: Optional = None max_new_tokens: Optional = None )
Additional inference parameters for Image To Text
( xmax: int xmin: int ymax: int ymin: int )
The predicted bounding box. Coordinates are relative to the top left corner of the input image.
Inputs for Object Detection inference
( box: ObjectDetectionBoundingBox label: str score: float )
Outputs of inference for the Object Detection task
Additional inference parameters for Object Detection
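As an example of how the bounding box above is consumed, here is a hedged sketch of an object detection call; the model id and image path are placeholders:

```python
# Sketch only: "facebook/detr-resnet-50" and "street.jpg" are placeholders.
from huggingface_hub import InferenceClient

client = InferenceClient(model="facebook/detr-resnet-50")
for detection in client.object_detection("street.jpg"):
    box = detection.box  # bounding box with xmin/ymin/xmax/ymax as described above
    print(detection.label, round(detection.score, 3), (box.xmin, box.ymin, box.xmax, box.ymax))
```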
( inputs: QuestionAnsweringInputData parameters: Optional = None )
Inputs for Question Answering inference
One (context, question) pair to answer
( answer: str end: int score: float start: int )
Outputs of inference for the Question Answering task
( align_to_words: Optional = None doc_stride: Optional = None handle_impossible_answer: Optional = None max_answer_len: Optional = None max_question_len: Optional = None max_seq_len: Optional = None top_k: Optional = None )
Additional inference parameters for Question Answering
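To illustrate the answer span fields above, a hedged sketch of a question answering call (the model id is a placeholder):

```python
# Sketch only: the model id is a placeholder.
from huggingface_hub import InferenceClient

client = InferenceClient(model="deepset/roberta-base-squad2")
result = client.question_answering(
    question="Where does Alice live?",
    context="Alice lives in Paris and works in Lyon.",
)
print(result.answer, result.score, result.start, result.end)
```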
( inputs: SentenceSimilarityInputData parameters: Optional = None )
Inputs for Sentence similarity inference
( sentences: List source_sentence: str )
( clean_up_tokenization_spaces: Optional = None generate_parameters: Optional = None truncation: Optional = None )
Additional inference parameters for Text2text Generation
Inputs for Summarization inference
Outputs of inference for the Summarization task
( inputs: TableQuestionAnsweringInputData parameters: Optional = None )
Inputs for Table Question Answering inference
One (table, question) pair to answer
( answer: str cells: List coordinates: List aggregator: Optional = None )
Outputs of inference for the Table Question Answering task
( inputs: str parameters: Optional = None )
Inputs for Text2text Generation inference
( generated_text: Any text2_text_generation_output_generated_text: Optional = None )
Outputs of inference for the Text2text Generation task
( clean_up_tokenization_spaces: Optional = None generate_parameters: Optional = None truncation: Optional = None )
Additional inference parameters for Text2text Generation
( inputs: str parameters: Optional = None )
Inputs for Text Classification inference
Outputs of inference for the Text Classification task
( function_to_apply: Optional = None top_k: Optional = None )
Additional inference parameters for Text Classification
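As an illustration, the Text Classification output is a list of label/score elements; a hedged sketch (the model id is a placeholder, and the element field names are assumed to be `label` and `score`):

```python
# Sketch only: the model id is a placeholder; label/score fields are assumed.
from huggingface_hub import InferenceClient

client = InferenceClient(model="distilbert-base-uncased-finetuned-sst-2-english")
for prediction in client.text_classification("I really enjoyed this movie!"):
    print(prediction.label, prediction.score)
```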
( inputs: str parameters: Optional = None stream: Optional = None )
Text Generation Input. Auto-generated from TGI specs. For more details, check out https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.
( adapter_id: Optional = None best_of: Optional = None decoder_input_details: Optional = None details: Optional = None do_sample: Optional = None frequency_penalty: Optional = None grammar: Optional = None max_new_tokens: Optional = None repetition_penalty: Optional = None return_full_text: Optional = None seed: Optional = None stop: Optional = None temperature: Optional = None top_k: Optional = None top_n_tokens: Optional = None top_p: Optional = None truncate: Optional = None typical_p: Optional = None watermark: Optional = None )
( generated_text: str details: Optional = None )
Text Generation Output. Auto-generated from TGI specs. For more details, check out https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.
( finish_reason: TextGenerationOutputFinishReason generated_text: str generated_tokens: int prefill: List tokens: List seed: Optional = None top_tokens: Optional = None )
( finish_reason: TextGenerationOutputFinishReason generated_tokens: int prefill: List tokens: List best_of_sequences: Optional = None seed: Optional = None top_tokens: Optional = None )
( id: int logprob: float text: str )
( id: int logprob: float special: bool text: str )
( index: int token: TextGenerationStreamOutputToken details: Optional = None generated_text: Optional = None top_tokens: Optional = None )
Text Generation Stream Output. Auto-generated from TGI specs. For more details, check out https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.
( finish_reason: TextGenerationOutputFinishReason generated_tokens: int seed: Optional = None )
( id: int logprob: float special: bool text: str )
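To show when the detailed Text Generation Output above is returned rather than a bare string, here is a hedged sketch with `details=True` (the model id is a placeholder):

```python
# Sketch only: the model id is a placeholder.
from huggingface_hub import InferenceClient

client = InferenceClient(model="mistralai/Mistral-7B-Instruct-v0.2")
output = client.text_generation(
    "The huggingface_hub library is ",
    max_new_tokens=20,
    details=True,  # return the full output type instead of only the generated text
)
print(output.generated_text)
print(output.details.generated_tokens, output.details.finish_reason)
```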
( do_sample: Optional = None early_stopping: Union = None epsilon_cutoff: Optional = None eta_cutoff: Optional = None max_length: Optional = None max_new_tokens: Optional = None min_length: Optional = None min_new_tokens: Optional = None num_beam_groups: Optional = None num_beams: Optional = None penalty_alpha: Optional = None temperature: Optional = None top_k: Optional = None top_p: Optional = None typical_p: Optional = None use_cache: Optional = None )
Ad-hoc parametrization of the text generation process
Inputs for Text To Audio inference
( audio: Any sampling_rate: Any text_to_audio_output_sampling_rate: Optional = None )
Outputs of inference for the Text To Audio task
Additional inference parameters for Text To Audio
Inputs for Text To Image inference
Outputs of inference for the Text To Image task
( guidance_scale: Optional = None negative_prompt: Optional = None num_inference_steps: Optional = None scheduler: Optional = None target_size: Optional = None )
Additional inference parameters for Text To Image
The size in pixels of the output image
( inputs: str parameters: Optional = None )
Inputs for Token Classification inference
( label: Any score: float end: Optional = None entity_group: Optional = None start: Optional = None word: Optional = None )
Outputs of inference for the Token Classification task
( aggregation_strategy: Optional = None ignore_labels: Optional = None stride: Optional = None )
Additional inference parameters for Token Classification
( clean_up_tokenization_spaces: Optional = None generate_parameters: Optional = None truncation: Optional = None )
Additional inference parameters for Text2text Generation
Inputs for Translation inference
Outputs of inference for the Translation task
( inputs: Any parameters: Optional = None )
Inputs for Video Classification inference
Outputs of inference for the Video Classification task
( frame_sampling_rate: Optional = None function_to_apply: Optional = None num_frames: Optional = None top_k: Optional = None )
Additional inference parameters for Video Classification
( inputs: VisualQuestionAnsweringInputData parameters: Optional = None )
Inputs for Visual Question Answering inference
One (image, question) pair to answer
( label: Any score: float answer: Optional = None )
Outputs of inference for the Visual Question Answering task
Additional inference parameters for Visual Question Answering
( inputs: ZeroShotClassificationInputData parameters: Optional = None )
Inputs for Zero Shot Classification inference
( candidate_labels: List text: str )
The input text data, with candidate labels
Outputs of inference for the Zero Shot Classification task
( hypothesis_template: Optional = None multi_label: Optional = None )
Additional inference parameters for Zero Shot Classification
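To illustrate the text-plus-candidate-labels input above, here is a hedged sketch of a zero-shot classification call; the model id is a placeholder and the candidate labels are passed positionally:

```python
# Sketch only: the model id is a placeholder.
from huggingface_hub import InferenceClient

client = InferenceClient(model="facebook/bart-large-mnli")
results = client.zero_shot_classification(
    "I have a problem with my iPhone that needs to be resolved asap!",
    ["urgent", "not urgent", "phone", "tablet"],
)
for element in results:
    print(element.label, element.score)
```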
( inputs: ZeroShotImageClassificationInputData parameters: Optional = None )
Inputs for Zero Shot Image Classification inference
( candidate_labels: List image: Any )
The input image data, with candidate labels
( label: str score: float )
Outputs of inference for the Zero Shot Image Classification task
( hypothesis_template: Optional = None )
Additional inference parameters for Zero Shot Image Classification
( xmax: int xmin: int ymax: int ymin: int )
The predicted bounding box. Coordinates are relative to the top left corner of the input image.
( inputs: ZeroShotObjectDetectionInputData parameters: Optional = None )
Inputs for Zero Shot Object Detection inference
( candidate_labels: List image: Any )
The input image data, with candidate labels
( box: ZeroShotObjectDetectionBoundingBox label: str score: float )
Outputs of inference for the Zero Shot Object Detection task