pruned-transducer-stateless7-streaming-id/exp/modified_beam_search/log-decode-epoch-30-avg-9-streaming-chunk-size-32-modified_beam_search-beam-size-4-use-averaged-model-2023-06-21-09-41-35
2023-06-21 09:41:35,276 INFO [decode.py:654] Decoding started
2023-06-21 09:41:35,276 INFO [decode.py:660] Device: cuda:0
2023-06-21 09:41:35,277 INFO [lexicon.py:168] Loading pre-compiled data/lang_phone/Linv.pt
2023-06-21 09:41:35,280 INFO [decode.py:668] {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'warm_step': 2000, 'env_info': {'k2-version': '1.23.4', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': '9426c9f730820d291f5dcb06be337662595fa7b4', 'k2-git-date': 'Sun Feb 5 17:35:01 2023', 'lhotse-version': '1.15.0.dev+git.00d3e36.clean', 'torch-version': '1.13.1+cu117', 'torch-cuda-available': True, 'torch-cuda-version': '11.7', 'python-version': '3.1', 'icefall-git-branch': 'master', 'icefall-git-sha1': 'd3f5d01-dirty', 'icefall-git-date': 'Wed May 31 04:15:45 2023', 'icefall-path': '/root/icefall', 'k2-path': '/usr/local/lib/python3.10/dist-packages/k2/__init__.py', 'lhotse-path': '/root/lhotse/lhotse/__init__.py', 'hostname': 'bookbot-k2', 'IP address': '127.0.0.1'}, 'epoch': 30, 'iter': 0, 'avg': 9, 'use_averaged_model': True, 'exp_dir': PosixPath('pruned_transducer_stateless7_streaming/exp'), 'lang_dir': 'data/lang_phone', 'decoding_method': 'modified_beam_search', 'beam_size': 4, 'beam': 20.0, 'ngram_lm_scale': 0.01, 'max_contexts': 8, 'max_states': 64, 'context_size': 2, 'max_sym_per_frame': 1, 'num_paths': 200, 'nbest_scale': 0.5, 'num_encoder_layers': '2,4,3,2,4', 'feedforward_dims': '1024,1024,2048,2048,1024', 'nhead': '8,8,8,8,8', 'encoder_dims': '384,384,384,384,384', 'attention_dims': '192,192,192,192,192', 'encoder_unmasked_dims': '256,256,256,256,256', 'zipformer_downsampling_factors': '1,2,4,8,2', 'cnn_module_kernels': '31,31,31,31,31', 'decoder_dim': 512, 'joiner_dim': 512, 'short_chunk_size': 50, 'num_left_chunks': 4, 'decode_chunk_len': 32, 'manifest_dir': PosixPath('data/fbank'), 'max_duration': 600, 'bucketing_sampler': True, 'num_buckets': 30, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'drop_last': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'input_strategy': 'PrecomputedFeatures', 'res_dir': PosixPath('pruned_transducer_stateless7_streaming/exp/modified_beam_search'), 'suffix': 'epoch-30-avg-9-streaming-chunk-size-32-modified_beam_search-beam-size-4-use-averaged-model', 'blank_id': 0, 'unk_id': 7, 'vocab_size': 33}
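The configuration above pins down the streaming granularity: decode_chunk_len=32 feature frames per chunk with subsampling_factor=4. A minimal sketch of the implied per-chunk arithmetic, assuming the conventional 10 ms frame shift for the 80-dim fbank features (the frame shift itself is not printed in this log):

```python
# Sketch only: relates the logged config values to per-chunk latency.
# The 10 ms frame shift is an assumption, not a value taken from this log.
decode_chunk_len = 32     # feature frames consumed per streaming chunk
subsampling_factor = 4    # encoder subsampling, from the config dump above
frame_shift_ms = 10       # assumed fbank frame shift

chunk_ms = decode_chunk_len * frame_shift_ms                        # 320 ms of audio per chunk
encoder_frames_per_chunk = decode_chunk_len // subsampling_factor   # 8 encoder output frames

print(chunk_ms, encoder_frames_per_chunk)  # 320 8
```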
2023-06-21 09:41:35,281 INFO [decode.py:670] About to create model
2023-06-21 09:41:35,838 INFO [zipformer.py:405] At encoder stack 4, which has downsampling_factor=2, we will combine the outputs of layers 1 and 3, with downsampling_factors=2 and 8.
2023-06-21 09:41:35,843 INFO [decode.py:741] Calculating the averaged model over epoch range from 21 (excluded) to 30
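With epoch=30 and avg=9, the range "from 21 (excluded) to 30" covers the nine checkpoints for epochs 22 through 30. The sketch below shows plain parameter averaging over those checkpoints to convey the idea only; icefall's use_averaged_model path actually reconstructs the average from the running averages stored in the epoch-21 and epoch-30 checkpoints rather than loading all nine files, and the checkpoint file names below are assumptions based on the usual exp_dir layout.

```python
import torch

# Illustrative only: straightforward averaging of the epoch-22..epoch-30
# checkpoints (9 checkpoints, matching avg=9). Not the exact icefall code path.
exp_dir = "pruned_transducer_stateless7_streaming/exp"  # from the config dump
epochs = range(22, 31)

avg_state = None
for epoch in epochs:
    # Hypothetical file names following the usual icefall checkpoint layout.
    state = torch.load(f"{exp_dir}/epoch-{epoch}.pt", map_location="cpu")["model"]
    if avg_state is None:
        avg_state = {k: v.clone().float() for k, v in state.items()}
    else:
        for k, v in state.items():
            avg_state[k] += v.float()

avg_state = {k: v / len(epochs) for k, v in avg_state.items()}
# model.load_state_dict(avg_state) would then yield the averaged model.
```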
2023-06-21 09:41:39,380 INFO [decode.py:774] Number of model parameters: 69471350
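The reported count (69,471,350) is the standard PyTorch parameter tally. A small helper reproducing that kind of figure for any module:

```python
import torch.nn as nn

def count_parameters(model: nn.Module) -> int:
    # Total number of elements across all parameter tensors,
    # the same tally as the "Number of model parameters" line above.
    return sum(p.numel() for p in model.parameters())

# Usage with a stand-in module (the real model here is the streaming
# Zipformer transducer with ~69.5M parameters):
print(count_parameters(nn.Linear(384, 512)))  # 197120 = 384*512 + 512
```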
2023-06-21 09:41:39,380 INFO [multidataset.py:122] About to get LibriVox test cuts
2023-06-21 09:41:39,380 INFO [multidataset.py:124] Loading LibriVox in lazy mode
2023-06-21 09:41:39,381 INFO [multidataset.py:133] About to get FLEURS test cuts
2023-06-21 09:41:39,381 INFO [multidataset.py:135] Loading FLEURS in lazy mode
2023-06-21 09:41:39,381 INFO [multidataset.py:144] About to get Common Voice test cuts
2023-06-21 09:41:39,381 INFO [multidataset.py:146] Loading Common Voice in lazy mode
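All three test sets are opened in lazy mode, i.e. the cut manifests are streamed from disk instead of being materialized in memory. A minimal sketch of what that typically looks like with lhotse; the manifest file names under data/fbank are assumptions, not values taken from this log:

```python
from lhotse import load_manifest_lazy

# Lazy CutSets iterate the .jsonl.gz manifests on demand rather than loading
# them up front; the file names below are hypothetical examples.
librivox_test = load_manifest_lazy("data/fbank/librivox_cuts_test.jsonl.gz")
fleurs_test = load_manifest_lazy("data/fbank/fleurs_cuts_test.jsonl.gz")
commonvoice_test = load_manifest_lazy("data/fbank/commonvoice_cuts_test.jsonl.gz")

for cuts in (librivox_test, fleurs_test, commonvoice_test):
    print(type(cuts))  # lhotse CutSet backed by a lazy iterator
```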
2023-06-21 09:41:43,886 INFO [decode.py:565] batch 0/?, cuts processed until now is 44
2023-06-21 09:41:46,269 INFO [zipformer.py:2441] attn_weights_entropy = tensor([1.3801, 1.7156, 1.0930, 1.5632, 1.3604, 1.3437, 1.7393, 0.6970],
device='cuda:0'), covar=tensor([0.4497, 0.2012, 0.2669, 0.2689, 0.2707, 0.2909, 0.1440, 0.5122],
device='cuda:0'), in_proj_covar=tensor([0.0074, 0.0053, 0.0059, 0.0067, 0.0065, 0.0064, 0.0051, 0.0077],
device='cuda:0'), out_proj_covar=tensor([5.5637e-05, 3.5992e-05, 4.1115e-05, 4.8266e-05, 4.8700e-05, 4.4501e-05,
3.4417e-05, 7.3250e-05], device='cuda:0')
2023-06-21 09:42:00,403 INFO [decode.py:579] The transcripts are stored in pruned_transducer_stateless7_streaming/exp/modified_beam_search/recogs-test-librivox-epoch-30-avg-9-streaming-chunk-size-32-modified_beam_search-beam-size-4-use-averaged-model.txt
2023-06-21 09:42:00,449 INFO [utils.py:561] [test-librivox-beam_size_4] %WER 4.71% [1725 / 36594, 309 ins, 836 del, 580 sub ]
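The %WER figure is insertions + deletions + substitutions over the number of reference words: (309 + 836 + 580) / 36594 = 1725 / 36594 ≈ 4.71%. A quick check of that arithmetic, using the error counts logged here and for FLEURS and Common Voice further down:

```python
# Reproduces the %WER values from the logged insertion/deletion/substitution counts.
def wer_percent(ins: int, dels: int, subs: int, ref_words: int) -> float:
    return 100.0 * (ins + dels + subs) / ref_words

print(f"{wer_percent(309, 836, 580, 36594):.2f}")      # 4.71  (test-librivox)
print(f"{wer_percent(1811, 3811, 4903, 93580):.2f}")   # 11.25 (test-fleurs)
print(f"{wer_percent(3318, 7575, 8109, 132787):.2f}")  # 14.31 (test-commonvoice)
```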
2023-06-21 09:42:00,531 INFO [decode.py:590] Wrote detailed error stats to pruned_transducer_stateless7_streaming/exp/modified_beam_search/errs-test-librivox-epoch-30-avg-9-streaming-chunk-size-32-modified_beam_search-beam-size-4-use-averaged-model.txt
2023-06-21 09:42:00,531 INFO [decode.py:604]
For test-librivox, WER of different settings are:
beam_size_4 4.71 best for test-librivox
2023-06-21 09:42:01,464 INFO [zipformer.py:2441] attn_weights_entropy = tensor([2.1911, 1.2934, 2.0949, 2.2245, 2.1813, 2.1569, 1.7841, 1.7188],
device='cuda:0'), covar=tensor([0.1696, 0.4060, 0.1661, 0.1975, 0.1970, 0.2132, 0.1748, 0.3224],
device='cuda:0'), in_proj_covar=tensor([0.0029, 0.0040, 0.0028, 0.0028, 0.0029, 0.0030, 0.0027, 0.0034],
device='cuda:0'), out_proj_covar=tensor([1.8266e-05, 3.2097e-05, 1.7461e-05, 1.6755e-05, 1.8651e-05, 1.9838e-05,
1.5794e-05, 2.3433e-05], device='cuda:0')
2023-06-21 09:42:04,999 INFO [decode.py:565] batch 0/?, cuts processed until now is 38
2023-06-21 09:43:09,460 INFO [decode.py:579] The transcripts are stored in pruned_transducer_stateless7_streaming/exp/modified_beam_search/recogs-test-fleurs-epoch-30-avg-9-streaming-chunk-size-32-modified_beam_search-beam-size-4-use-averaged-model.txt
2023-06-21 09:43:09,552 INFO [utils.py:561] [test-fleurs-beam_size_4] %WER 11.25% [10525 / 93580, 1811 ins, 3811 del, 4903 sub ]
2023-06-21 09:43:09,853 INFO [decode.py:590] Wrote detailed error stats to pruned_transducer_stateless7_streaming/exp/modified_beam_search/errs-test-fleurs-epoch-30-avg-9-streaming-chunk-size-32-modified_beam_search-beam-size-4-use-averaged-model.txt
2023-06-21 09:43:09,853 INFO [decode.py:604]
For test-fleurs, WER of different settings are:
beam_size_4 11.25 best for test-fleurs
2023-06-21 09:43:14,023 INFO [decode.py:565] batch 0/?, cuts processed until now is 121
2023-06-21 09:43:47,394 INFO [zipformer.py:2441] attn_weights_entropy = tensor([2.5738, 2.5492, 3.0284, 2.4510, 1.3782, 3.0004, 2.8027, 1.4081],
device='cuda:0'), covar=tensor([0.1153, 0.1301, 0.0459, 0.0990, 0.4808, 0.0525, 0.0757, 0.4425],
device='cuda:0'), in_proj_covar=tensor([0.0071, 0.0071, 0.0055, 0.0070, 0.0106, 0.0057, 0.0058, 0.0105],
device='cuda:0'), out_proj_covar=tensor([5.9638e-05, 6.0235e-05, 4.2007e-05, 5.4275e-05, 1.0845e-04, 4.2491e-05,
4.5487e-05, 9.8369e-05], device='cuda:0')
2023-06-21 09:44:30,935 INFO [decode.py:565] batch 20/?, cuts processed until now is 2809
2023-06-21 09:44:57,467 INFO [decode.py:579] The transcripts are stored in pruned_transducer_stateless7_streaming/exp/modified_beam_search/recogs-test-commonvoice-epoch-30-avg-9-streaming-chunk-size-32-modified_beam_search-beam-size-4-use-averaged-model.txt
2023-06-21 09:44:57,589 INFO [utils.py:561] [test-commonvoice-beam_size_4] %WER 14.31% [19002 / 132787, 3318 ins, 7575 del, 8109 sub ]
2023-06-21 09:44:57,887 INFO [decode.py:590] Wrote detailed error stats to pruned_transducer_stateless7_streaming/exp/modified_beam_search/errs-test-commonvoice-epoch-30-avg-9-streaming-chunk-size-32-modified_beam_search-beam-size-4-use-averaged-model.txt
2023-06-21 09:44:57,888 INFO [decode.py:604]
For test-commonvoice, WER of different settings are:
beam_size_4 14.31 best for test-commonvoice
2023-06-21 09:44:57,888 INFO [decode.py:809] Done!