myinfer-v2-0528.py
Using:
runtime\python.exe myinfer-v2-0528.py 0 "I:\tmp\1950922385_1261.wav" "I:\Retrieval-based-Voice-Conversion-WebUI\weights\added_IVF725_Flat_nprobe_1.index" harvest "test.wav" "I:\Retrieval-based-Voice-Conversion-WebUI\weights\nmodel2333333.pth" 0.66 cuda:0 True 3 0 1 0.33
I am getting this error:
Traceback (most recent call last):
  File "I:\Retrieval-based-Voice-Conversion-WebUI\myinfer-v2-0528.py", line 167, in <module>
    wav_opt=vc_single(0,input_path,f0up_key,None,f0method,index_path,index_rate)
  File "I:\Retrieval-based-Voice-Conversion-WebUI\myinfer-v2-0528.py", line 133, in vc_single
    audio_opt=vc.pipeline(hubert_model,net_g,sid,audio,input_audio,times,f0_up_key,f0_method,file_index,index_rate,if_f0,filter_radius,tgt_sr,resample_sr,rms_mix_rate,version,protect,f0_file=f0_file)
  File "I:\Retrieval-based-Voice-Conversion-WebUI\vc_infer_pipeline.py", line 382, in pipeline
    self.vc(
  File "I:\Retrieval-based-Voice-Conversion-WebUI\vc_infer_pipeline.py", line 185, in vc
    logits = model.extract_features(**inputs)
  File "I:\Retrieval-based-Voice-Conversion-WebUI\runtime\lib\site-packages\fairseq\models\hubert\hubert.py", line 535, in extract_features
    res = self.forward(
  File "I:\Retrieval-based-Voice-Conversion-WebUI\runtime\lib\site-packages\fairseq\models\hubert\hubert.py", line 437, in forward
    features = self.forward_features(source)
  File "I:\Retrieval-based-Voice-Conversion-WebUI\runtime\lib\site-packages\fairseq\models\hubert\hubert.py", line 392, in forward_features
    features = self.feature_extractor(source)
  File "I:\Retrieval-based-Voice-Conversion-WebUI\runtime\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "I:\Retrieval-based-Voice-Conversion-WebUI\runtime\lib\site-packages\fairseq\models\wav2vec\wav2vec2.py", line 895, in forward
    x = conv(x)
  File "I:\Retrieval-based-Voice-Conversion-WebUI\runtime\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "I:\Retrieval-based-Voice-Conversion-WebUI\runtime\lib\site-packages\torch\nn\modules\container.py", line 217, in forward
    input = module(input)
  File "I:\Retrieval-based-Voice-Conversion-WebUI\runtime\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "I:\Retrieval-based-Voice-Conversion-WebUI\runtime\lib\site-packages\torch\nn\modules\conv.py", line 313, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "I:\Retrieval-based-Voice-Conversion-WebUI\runtime\lib\site-packages\torch\nn\modules\conv.py", line 309, in _conv_forward
    return F.conv1d(input, weight, bias, self.stride,
RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.cuda.HalfTensor) should be the same
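As far as I understand the message, the HuBERT feature extractor's conv weights are in half precision (torch.cuda.HalfTensor) while the tensor being fed into it is still float32 (torch.cuda.FloatTensor). A minimal stand-alone sketch that reproduces the same mismatch in plain PyTorch (none of this is RVC code, just an illustration, and it needs a CUDA GPU):

    import torch
    import torch.nn as nn

    # A conv layer whose weights were cast to fp16 receives an fp32 input
    # -> exactly the RuntimeError from the traceback above.
    conv = nn.Conv1d(1, 4, kernel_size=3).cuda().half()   # weights: torch.cuda.HalfTensor
    x = torch.randn(1, 1, 16000, device="cuda")           # input:   torch.cuda.FloatTensor

    try:
        conv(x)
    except RuntimeError as e:
        print(e)      # Input type (...FloatTensor) and weight type (...HalfTensor) should be the same

    conv(x.half())    # casting the input to fp16 makes it run
    conv.float()(x)   # ...or keep the weights in fp32 instead

So it looks like one side of the pipeline is honouring the half-precision setting while the other is not.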
I tried changing the arguments from
cuda:0 True 3
to
cuda:0 False 3
but I get exactly the same error.
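In case it helps narrow this down, here is a rough debugging idea. The names (hubert_model, feats) are only my guesses at what the pipeline passes around, not identifiers taken from the actual code; the point is just to print both dtypes right before extract_features is called and cast the input to whatever dtype the model parameters really have:

    import torch

    def cast_to_model_dtype(x: torch.Tensor, model: torch.nn.Module) -> torch.Tensor:
        """Print both dtypes and cast x to the dtype of the model's parameters."""
        target = next(model.parameters()).dtype
        print(f"model dtype: {target}, input dtype: {x.dtype}")
        return x if x.dtype == target else x.to(target)

    # Stand-in demo; in the real pipeline `model` would be the loaded HuBERT model
    # and `x` the tensor passed to extract_features (my assumption).
    demo_model = torch.nn.Linear(8, 8).half()
    demo_input = torch.randn(2, 8)
    demo_input = cast_to_model_dtype(demo_input, demo_model)
    print(demo_input.dtype)   # torch.float16

That should at least show whether flipping True/False on the command line is reaching the model, the input tensor, or neither.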