nielsr (HF staff) and fcakyon committed
Commit c080598
1 Parent(s): d88ecd6

fix a typo in code snippet and processor config (#2)


- fix a typo in code snippet (d4a091673f1e222362b66e76cd12503485811488)
- Update README.md (283d3dadb4278dff272703e1e49660120ac9ee32)
- Update README.md (3d47cc1abbe7e66e6e1508588b094529329c99a0)
- fix processor config (5c99ed640fbd5953e8c10441c808bbb1d4eedca4)


Co-authored-by: Fatih <[email protected]>

Files changed (2)
  1. README.md +4 -4
  2. preprocessor_config.json +3 -3
README.md CHANGED
@@ -20,16 +20,16 @@ You can use the raw model for video classification into one of the 600 possible
 Here is how to use this model to classify a video:
 
 ```python
-from transformers import TimesformerFeatureExtractor, TimesformerForVideoClassification
+from transformers import AutoImageProcessor, TimesformerForVideoClassification
 import numpy as np
 import torch
 
-video = list(np.random.randn(8, 3, 224, 224))
+video = list(np.random.randn(16, 3, 448, 448))
 
-feature_extractor = TimesformerFeatureExtractor.from_pretrained("facebook/timesformer-hr-finetuned-k600")
+processor = AutoImageProcessor.from_pretrained("facebook/timesformer-hr-finetuned-k600")
 model = TimesformerForVideoClassification.from_pretrained("facebook/timesformer-hr-finetuned-k600")
 
-inputs = feature_extractor(video, return_tensors="pt")
+inputs = processor(images=video, return_tensors="pt")
 
 with torch.no_grad():
     outputs = model(**inputs)
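The README fix changes both the frame count and the input resolution to match this HR checkpoint (16 frames at 448×448 instead of 8 at 224×224). As a quick sanity sketch of what that means for the model's sequence length, assuming TimeSformer's usual 16×16 patch size (an assumption; the patch size is not stated in this diff):

```python
# Shape bookkeeping for the corrected snippet: 16 frames of 3x448x448.
num_frames, channels, height, width = 16, 3, 448, 448
patch_size = 16  # assumption: TimeSformer's standard 16x16 patches

# Each 448x448 frame splits into a 28x28 grid of patches.
patches_per_frame = (height // patch_size) * (width // patch_size)
total_patches = patches_per_frame * num_frames

print(patches_per_frame)  # 784 patches per frame
print(total_patches)      # 12544 patch tokens across the clip
```

This is why the old 8-frame, 224×224 snippet failed against the updated processor config: the HR variant expects four times the spatial patches per frame and twice the frames.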
preprocessor_config.json CHANGED
@@ -1,7 +1,7 @@
 {
   "crop_size": {
-    "height": 224,
-    "width": 224
+    "height": 448,
+    "width": 448
   },
   "do_center_crop": true,
   "do_normalize": true,
@@ -21,6 +21,6 @@
   "resample": 2,
   "rescale_factor": 0.00392156862745098,
   "size": {
-    "shortest_edge": 224
+    "shortest_edge": 448
   }
 }
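A minimal consistency check that the updated config values agree with the example video shape in the README fix — a local sketch that parses the changed fragment directly rather than loading the hosted file:

```python
import json

# The updated values from the diff, reproduced as a local fragment.
config = json.loads("""
{
  "crop_size": {"height": 448, "width": 448},
  "size": {"shortest_edge": 448}
}
""")

# Frame shape from the corrected README snippet: (channels, height, width).
frame_shape = (3, 448, 448)

assert config["crop_size"]["height"] == frame_shape[1]
assert config["crop_size"]["width"] == frame_shape[2]
assert config["size"]["shortest_edge"] == 448
print("config matches the 448x448 example frames")
```

Keeping `crop_size` and `size.shortest_edge` in lockstep means the resize step never produces an edge shorter than the center crop, so the processor emits exactly 448×448 frames.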