❓ Questions & Help
Distilling GPT-2 gives an OOM error. What is the best way to fit both the teacher and the student on a single GPU and train?
I tried reducing the batch size, but that by itself results in an error:
File "train.py", line 285, in main
distiller.train()
File "trabsformersexamples\distillation\distiller.py", line 340, in train
self.step(input_ids=token_ids, attention_mask=attn_mask, lm_labels=lm_labels)
File "trabsformersexamples\distillation\distiller.py", line 378, in step
s_logits, _, s_hidden_states = self.student(input_ids=input_ids, attention_mask=None) # (bs, seq_length, voc_size)
File "conda\conda\envs\pytorch\lib\site-packages\torch\nn\modules\module.py", line 541, in call
result = self.forward(*input, **kwargs)
File "conda\conda\envs\pytorch\lib\site-packages\transformers\modeling_gpt2.py", line 549, in forward
inputs_embeds=inputs_embeds)
File "conda\conda\envs\pytorch\lib\site-packages\torch\nn\modules\module.py", line 541, in call
result = self.forward(*input, **kwargs)
File "conda\conda\envs\pytorch\lib\site-packages\transformers\modeling_gpt2.py", line 439, in forward
inputs_embeds = self.wte(input_ids)
File "conda\conda\envs\pytorch\lib\site-packages\torch\nn\modules\module.py", line 541, in call
result = self.forward(*input, **kwargs)
File "conda\conda\envs\pytorch\lib\site-packages\torch\nn\modules\sparse.py", line 114, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse)
File "conda\conda\envs\pytorch\lib\site-packages\torch\nn\functional.py", line 1484, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected tensor for argument #1 'indices' to have scalar type Long; but got torch.cuda.IntTensor instead (while checking arguments for embedding)
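There seem to be two separate problems here: the embedding layer rejecting int32 indices, and the teacher plus student not fitting in GPU memory. Below is a minimal sketch of the usual fixes, not the examples/distillation code itself; it assumes the standard transformers GPT2LMHeadModel API, and the model names, temperature, and loss are illustrative assumptions. Casting the batch to torch.long addresses the RuntimeError, and running the teacher in eval mode under torch.no_grad() (optionally combined with gradient accumulation or fp16) is the usual way to fit both models on one GPU.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
teacher = GPT2LMHeadModel.from_pretrained("gpt2").to(device)        # teacher
student = GPT2LMHeadModel.from_pretrained("distilgpt2").to(device)  # student (example choice)

# Fix for the RuntimeError: nn.Embedding only accepts int64 (Long) indices,
# so cast the batch before the forward pass instead of feeding int32 tensors.
batch = tokenizer("A short distillation test sentence.", return_tensors="pt")
token_ids = batch["input_ids"].to(device).long()
attn_mask = batch["attention_mask"].to(device)

# Fix for the OOM: the teacher only produces soft targets, so keep it in eval
# mode and run it under torch.no_grad() so none of its activations are stored
# for backprop.
teacher.eval()
with torch.no_grad():
    t_logits = teacher(input_ids=token_ids, attention_mask=attn_mask)[0]

s_logits = student(input_ids=token_ids, attention_mask=attn_mask)[0]

# Illustrative distillation loss (KL divergence between softened distributions).
# If memory is still tight, divide the loss by an accumulation constant and
# call backward() on several small batches before each optimizer step, or
# train in fp16.
temperature = 2.0
loss = torch.nn.functional.kl_div(
    torch.log_softmax(s_logits / temperature, dim=-1),
    torch.softmax(t_logits / temperature, dim=-1),
    reduction="batchmean",
) * (temperature ** 2)
loss.backward()
```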
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.