Problem using single_file_inference.sh #34

Open
vpurandara opened this issue Aug 26, 2022 · 0 comments
I installed vakyansh-wav2vec2-experimentation as described in README.md.

I also downloaded the Vakyansh Open Source Models.

No GPUs are available, so I am running on CPU.

From scripts/inference I ran: bash single_file_inference.sh
(Screenshot from 2022-08-26 17-21-19 attached.)

While executing:

  1. If the single model for inference is given as custom_model_path, the following error occurs (a rough workaround sketch is included after both traces):

(vwav2vec2) purandara@purandara:/media/purandara/harddisk/VWAV2VEC2/vakyansh-wav2vec2-experimentation/scripts/inference$ bash single_file_inference.sh
Traceback (most recent call last):
  File "../../utils/inference/single_file_inference.py", line 380, in <module>
    result = parse_transcription(args_local.model, args_local.dict, args_local.wav, args_local.cuda, args_local.decoder, args_local.lexicon, args_local.lm_path, args_local.half)
  File "../../utils/inference/single_file_inference.py", line 363, in parse_transcription
    result = get_results(wav_path=wav_path, dict_path=dict_path, generator=generator, use_cuda=cuda, model=model, half=half)
  File "../../utils/inference/single_file_inference.py", line 315, in get_results
    hypo = generator.generate(model, sample, prefix_tokens=None)
  File "../../utils/inference/single_file_inference.py", line 113, in generate
    emissions = self.get_emissions(models, encoder_input)
  File "../../utils/inference/single_file_inference.py", line 119, in get_emissions
    encoder_out = model(**encoder_input)
  File "/home/purandara/anaconda3/envs/vwav2vec2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "../../utils/inference/single_file_inference.py", line 70, in forward
    x = self.w2v_encoder(**kwargs)
  File "/home/purandara/anaconda3/envs/vwav2vec2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/media/purandara/harddisk/VWAV2VEC2/vakyansh-wav2vec2-experimentation/fairseq/fairseq/models/wav2vec/wav2vec2_asr.py", line 484, in forward
    res = self.w2v_model.extract_features(**w2v_args)
  File "/media/purandara/harddisk/VWAV2VEC2/vakyansh-wav2vec2-experimentation/fairseq/fairseq/models/wav2vec/wav2vec2.py", line 778, in extract_features
    res = self.forward(
  File "/media/purandara/harddisk/VWAV2VEC2/vakyansh-wav2vec2-experimentation/fairseq/fairseq/models/wav2vec/wav2vec2.py", line 599, in forward
    features = self.feature_extractor(source)
  File "/home/purandara/anaconda3/envs/vwav2vec2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/media/purandara/harddisk/VWAV2VEC2/vakyansh-wav2vec2-experimentation/fairseq/fairseq/models/wav2vec/wav2vec2.py", line 895, in forward
    x = conv(x)
  File "/home/purandara/anaconda3/envs/vwav2vec2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/purandara/anaconda3/envs/vwav2vec2/lib/python3.8/site-packages/torch/nn/modules/container.py", line 139, in forward
    input = module(input)
  File "/home/purandara/anaconda3/envs/vwav2vec2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/purandara/anaconda3/envs/vwav2vec2/lib/python3.8/site-packages/torch/nn/modules/activation.py", line 681, in forward
    return F.gelu(input, approximate=self.approximate)
  File "/home/purandara/anaconda3/envs/vwav2vec2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1207, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'GELU' object has no attribute 'approximate'

  2. If the finetuned model is given as custom_model_path, the following error occurs (a checkpoint-inspection snippet is included further below):

(vwav2vec2) purandara@purandara:/media/purandara/harddisk/VWAV2VEC2/vakyansh-wav2vec2-experimentation/scripts/inference$ bash single_file_inference.sh
Traceback (most recent call last):
  File "../../utils/inference/single_file_inference.py", line 380, in <module>
    result = parse_transcription(args_local.model, args_local.dict, args_local.wav, args_local.cuda, args_local.decoder, args_local.lexicon, args_local.lm_path, args_local.half)
  File "../../utils/inference/single_file_inference.py", line 363, in parse_transcription
    result = get_results(wav_path=wav_path, dict_path=dict_path, generator=generator, use_cuda=cuda, model=model, half=half)
  File "../../utils/inference/single_file_inference.py", line 301, in get_results
    model.eval()
AttributeError: 'dict' object has no attribute 'eval'

These errors happen with both decoders, i.e. Viterbi and KenLM.
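
For the first error, the last frames are in torch/nn/modules/activation.py, where F.gelu is called with approximate=self.approximate. My guess is that the downloaded checkpoint pickles nn.GELU modules from a PyTorch older than 1.12 (the release that added the approximate attribute), while the vwav2vec2 env has a newer torch. A rough, untested sketch of a workaround would be to fill in the missing attribute once the model object has been built; patch_gelu_modules below is only an illustrative helper, not something from the repo:

    from torch import nn


    def patch_gelu_modules(model: nn.Module) -> None:
        """Add the 'approximate' attribute that torch >= 1.12 expects
        but that GELU modules pickled by older torch versions lack."""
        for module in model.modules():
            if isinstance(module, nn.GELU) and not hasattr(module, "approximate"):
                # "none" reproduces the exact GELU that older torch used by default.
                module.approximate = "none"

Pinning torch to a version below 1.12 in the environment would presumably avoid the attribute lookup altogether.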

I don't know where I went wrong. Can anyone help me with it?
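
In case it helps with the second error: model.eval() fails because whatever was loaded from custom_model_path is a plain dict rather than a module. Inspecting the finetuned checkpoint directly (the path below is just a placeholder) shows what the file actually contains:

    import torch

    # Load the finetuned checkpoint on CPU and check whether it is a full
    # pickled model or just a dict carrying the weights and config.
    ckpt = torch.load("path/to/finetuned_model.pt", map_location="cpu")
    print(type(ckpt))
    if isinstance(ckpt, dict):
        # fairseq-style checkpoints usually expose keys such as 'model' and 'cfg'/'args'.
        print(list(ckpt.keys()))

If it is only a dict, I assume it has to go through the fairseq checkpoint loading utilities rather than being passed to get_results directly, but I am not sure what single_file_inference.py expects here.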

Note: I installed flashlight using git clone https://github.com/flashlight/flashlight.git --branch v0.3.2 for this reason.
