prompteus committed
Commit
4cad71c
1 Parent(s): e61873f

Update README.md

Files changed (1):
  1. README.md +2 -3
README.md CHANGED
@@ -58,11 +58,10 @@ Minimal example:
 
  ```python
  # Load model
- architecture = "openai/whisper-large-v2"
  checkpoint = "MU-NLPC/whisper-large-v2-audio-captioning"
- model = audiocap.WhisperForAudioCaptioning.from_pretrained(checkpoint)
+ model = WhisperForAudioCaptioning.from_pretrained(checkpoint)
  tokenizer = transformers.WhisperTokenizer.from_pretrained(checkpoint, language="en", task="transcribe")
- feature_extractor = transformers.WhisperFeatureExtractor.from_pretrained(architecture)
+ feature_extractor = transformers.WhisperFeatureExtractor.from_pretrained(checkpoint)
 
  # Load and preprocess audio
  input_file = "..."
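
For context, here is a hedged sketch of how the objects loaded in the changed hunk could be wired into a full captioning call. It is not part of this commit: librosa, the `audiocap` import path (inferred from the pre-change `audiocap.` prefix), the placeholder file name `audio.wav`, and the assumption that `WhisperForAudioCaptioning` keeps the standard Hugging Face `generate()` interface are all illustrative, and any model-specific generation arguments (such as a caption-style prefix) documented elsewhere in the README are omitted here.

```python
# Sketch only: assumes the audiocap package provides WhisperForAudioCaptioning
# and that it follows the usual Hugging Face generate() interface.
import librosa
import transformers
from audiocap import WhisperForAudioCaptioning  # assumed import path

checkpoint = "MU-NLPC/whisper-large-v2-audio-captioning"
model = WhisperForAudioCaptioning.from_pretrained(checkpoint)
tokenizer = transformers.WhisperTokenizer.from_pretrained(checkpoint, language="en", task="transcribe")
feature_extractor = transformers.WhisperFeatureExtractor.from_pretrained(checkpoint)

# Load and resample audio to the rate the feature extractor expects (16 kHz for Whisper)
audio, sr = librosa.load("audio.wav", sr=feature_extractor.sampling_rate)  # "audio.wav" is a placeholder
features = feature_extractor(audio, sampling_rate=sr, return_tensors="pt").input_features

# Generate a caption and decode it back to text
output_ids = model.generate(input_features=features, max_length=100)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0])
```

The sketch follows the post-change version of the README, where the feature extractor is loaded from the model checkpoint rather than from the base `openai/whisper-large-v2` architecture, which is exactly what this commit changes.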