ValueError: The following model_kwargs are not used by the model: ['encoder_outputs'] while training the model on Cordv2 #53

Closed
deepanshudashora opened this issue Sep 19, 2022 · 4 comments

Comments

@deepanshudashora
Epoch 0:   0%|          | 0/793 [00:00<?, ?it/s]
/root/miniconda3/envs/donut/lib/python3.7/site-packages/torch/optim/lr_scheduler.py:136: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
Epoch 2: 100%|██████████| 793/793 [09:23<00:00, 1.41it/s, loss=0.364, v_num=ment]
/root/miniconda3/envs/donut/lib/python3.7/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:241: PossibleUserWarning: The dataloader, val_dataloader 0, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument (try 12 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
Traceback (most recent call last):
  File "train.py", line 149, in <module>
    train(config)
  File "train.py", line 133, in train
    trainer.fit(model_module, data_module)
  File "/root/miniconda3/envs/donut/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 697, in fit
    self._fit_impl, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path
  File "/root/miniconda3/envs/donut/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 648, in _call_and_handle_interrupt
    return self.strategy.launcher.launch(trainer_fn, *args, trainer=self, **kwargs)
  File "/root/miniconda3/envs/donut/lib/python3.7/site-packages/pytorch_lightning/strategies/launchers/subprocess_script.py", line 93, in launch
    return function(*args, **kwargs)
  File "/root/miniconda3/envs/donut/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 735, in _fit_impl
    results = self._run(model, ckpt_path=self.ckpt_path)
  File "/root/miniconda3/envs/donut/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1166, in _run
    results = self._run_stage()
  File "/root/miniconda3/envs/donut/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1252, in _run_stage
    return self._run_train()
  File "/root/miniconda3/envs/donut/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1283, in _run_train
    self.fit_loop.run()
  File "/root/miniconda3/envs/donut/lib/python3.7/site-packages/pytorch_lightning/loops/loop.py", line 200, in run
    self.advance(*args, **kwargs)
  File "/root/miniconda3/envs/donut/lib/python3.7/site-packages/pytorch_lightning/loops/fit_loop.py", line 271, in advance
    self._outputs = self.epoch_loop.run(self._data_fetcher)
  File "/root/miniconda3/envs/donut/lib/python3.7/site-packages/pytorch_lightning/loops/loop.py", line 201, in run
    self.on_advance_end()
  File "/root/miniconda3/envs/donut/lib/python3.7/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 241, in on_advance_end
    self._run_validation()
  File "/root/miniconda3/envs/donut/lib/python3.7/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 299, in _run_validation
    self.val_loop.run()
  File "/root/miniconda3/envs/donut/lib/python3.7/site-packages/pytorch_lightning/loops/loop.py", line 200, in run
    self.advance(*args, **kwargs)
  File "/root/miniconda3/envs/donut/lib/python3.7/site-packages/pytorch_lightning/loops/dataloader/evaluation_loop.py", line 155, in advance
    dl_outputs = self.epoch_loop.run(self._data_fetcher, dl_max_batches, kwargs)
  File "/root/miniconda3/envs/donut/lib/python3.7/site-packages/pytorch_lightning/loops/loop.py", line 200, in run
    self.advance(*args, **kwargs)
  File "/root/miniconda3/envs/donut/lib/python3.7/site-packages/pytorch_lightning/loops/epoch/evaluation_epoch_loop.py", line 143, in advance
    output = self._evaluation_step(**kwargs)
  File "/root/miniconda3/envs/donut/lib/python3.7/site-packages/pytorch_lightning/loops/epoch/evaluation_epoch_loop.py", line 240, in _evaluation_step
    output = self.trainer._call_strategy_hook(hook_name, *kwargs.values())
  File "/root/miniconda3/envs/donut/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1704, in _call_strategy_hook
    output = fn(*args, **kwargs)
  File "/root/miniconda3/envs/donut/lib/python3.7/site-packages/pytorch_lightning/strategies/ddp.py", line 358, in validation_step
    return self.model(*args, **kwargs)
  File "/root/miniconda3/envs/donut/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/miniconda3/envs/donut/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 1008, in forward
    output = self._run_ddp_forward(*inputs, **kwargs)
  File "/root/miniconda3/envs/donut/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 969, in _run_ddp_forward
    return module_to_run(*inputs[0], **kwargs[0])
  File "/root/miniconda3/envs/donut/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/miniconda3/envs/donut/lib/python3.7/site-packages/pytorch_lightning/overrides/base.py", line 90, in forward
    return self.module.validation_step(*inputs, **kwargs)
  File "/root/deepanshu/donut/lightning_module.py", line 72, in validation_step
    return_attentions=False,
  File "/root/deepanshu/donut/donut/model.py", line 477, in inference
    output_attentions=return_attentions,
  File "/root/miniconda3/envs/donut/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/root/miniconda3/envs/donut/lib/python3.7/site-packages/transformers/generation_utils.py", line 1146, in generate
    self._validate_model_kwargs(model_kwargs.copy())
  File "/root/miniconda3/envs/donut/lib/python3.7/site-packages/transformers/generation_utils.py", line 862, in _validate_model_kwargs
    f"The following `model_kwargs` are not used by the model: {unused_model_args} (note: typos in the"
ValueError: The following `model_kwargs` are not used by the model: ['encoder_outputs'] (note: typos in the generate arguments will also show up in this list)
Epoch 2: 100%|██████████| 793/793 [09:27<00:00,  1.40it/s, loss=0.364, v_num=ment]         
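The failure originates in transformers' `_validate_model_kwargs` (visible in the last frames of the traceback above): newer releases check that every keyword passed to `generate()` is actually consumed by the model, and raise otherwise. The sketch below mimics that check with a simplified, hypothetical `validate_model_kwargs` helper; the real implementation inspects the model's `forward` and `prepare_inputs_for_generation` signatures.

```python
# Simplified stand-in for the kwarg validation that transformers'
# generate() performs in newer releases. Names here are hypothetical;
# the real check lives in GenerationMixin._validate_model_kwargs.

def validate_model_kwargs(model_kwargs: dict, accepted: set) -> None:
    """Raise if any kwarg is not in the set the model actually consumes."""
    unused = [key for key in model_kwargs if key not in accepted]
    if unused:
        raise ValueError(
            f"The following `model_kwargs` are not used by the model: {unused} "
            "(note: typos in the generate arguments will also show up in this list)"
        )

# Donut passes `encoder_outputs` to generate(); if the installed version
# does not recognise it, the call fails exactly like the traceback above:
try:
    validate_model_kwargs({"encoder_outputs": object()}, accepted={"attention_mask"})
except ValueError as err:
    print(err)
```

This is why the error disappears when pinning to a transformers release whose validation still accepts Donut's `encoder_outputs` argument.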
@atharvjairath
same!

@SamSamhuns
Contributor

Change "transformers>=4.11.3" to "transformers==4.21.1" inside setup.py. Newer releases of the transformers library added stricter validation of the `model_kwargs` passed to `generate()`, which rejects the `encoder_outputs` argument Donut supplies and causes this error.

    ),
    python_requires=">=3.7",
    install_requires=[
-      "transformers>=4.11.3",
+      "transformers==4.21.1",
        "timm",
        "datasets[vision]",
        "pytorch-lightning>=1.6.4",
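For illustration, the loose `>=` specifier is what lets a breaking release slip in, while an exact `==` pin does not. A minimal sketch of the comparison (simplified `X.Y.Z` parsing, a stand-in for `packaging.version`, which handles the general case):

```python
# Sketch: why ">=4.11.3" admits a breaking transformers release
# while "==4.21.1" does not. Only plain "X.Y.Z" versions are handled.

def parse(version: str) -> tuple:
    """Turn "4.21.1" into (4, 21, 1) so tuples compare numerically."""
    return tuple(int(part) for part in version.split("."))

def satisfies(installed: str, specifier: str) -> bool:
    """Check a single ">=X.Y.Z" or "==X.Y.Z" requirement specifier."""
    if specifier.startswith(">="):
        return parse(installed) >= parse(specifier[2:])
    if specifier.startswith("=="):
        return parse(installed) == parse(specifier[2:])
    raise ValueError(f"unsupported specifier: {specifier}")

# The loose pin accepts later releases with the stricter generate() check:
print(satisfies("4.22.0", ">=4.11.3"))   # a later release slips in
# The exact pin keeps only the known-good release:
print(satisfies("4.22.0", "==4.21.1"))
print(satisfies("4.21.1", "==4.21.1"))
```

The same reasoning applies to any dependency whose minor releases can change behavior: exact pins trade flexibility for reproducibility.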

@atharvjairath
Thanks @SamSamhuns! It worked.

@SamSamhuns
Contributor

PR #56 created to support the new transformers version.
