
EarlyStopping callback not working on a multi-worker distributed training job #88

Open
taoyun951753 opened this issue Nov 15, 2022 · 0 comments


taoyun951753 commented Nov 15, 2022

Current behavior

With a single worker, training with the EarlyStopping callback works fine. But when multiple workers run a distributed training job with the EarlyStopping callback, all workers hang, waiting to synchronize.
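The hang described above is consistent with each worker evaluating the stopping condition on its own local metric: if workers see even slightly different `val_loss` values (e.g. non-identical validation shards), they reach different stop decisions, and the workers that keep training block on collective ops that the stopped workers never join. A minimal pure-Python simulation of that divergence (the worker count, loss histories, and `should_stop` helper are illustrative assumptions, not code from this project):

```python
# Hypothetical per-worker early-stopping logic, mirroring what EarlyStopping
# does independently on each worker.
def should_stop(history, patience=2, min_delta=0.0):
    """Return True if val_loss has not improved for `patience` epochs."""
    best = min(history[:-patience]) if len(history) > patience else float("inf")
    recent = history[-patience:]
    return all(loss > best - min_delta for loss in recent)

# Made-up val_loss histories: each worker validates on a different shard.
worker_losses = {
    0: [0.9, 0.7, 0.71, 0.72],  # worker 0: no improvement for 2 epochs -> stops
    1: [0.9, 0.7, 0.68, 0.66],  # worker 1: still improving -> keeps training
}

decisions = {w: should_stop(h) for w, h in worker_losses.items()}
# Divergent decisions: worker 0 exits the training loop while worker 1 enters
# the next step and blocks forever on a collective op -> the observed hang.
print(decisions)  # {0: True, 1: False}
```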

(screenshot attached showing all workers hanging)

Expected behavior

I want the EarlyStopping callback to work correctly not only on a single-worker task but also on multi-worker distributed training jobs.

System information

  • GPU model and memory:
  • OS Platform:
  • Docker version:
  • GCC/CUDA/cuDNN version:
  • Python/conda version:
  • TensorFlow/PyTorch version:

Code to reproduce

....
callbacks_list.append(EarlyStopping(monitor="val_loss",
                                    min_delta=self.ctx.min_delta,
                                    patience=self.ctx.patience,
                                    verbose=verbose,
                                    mode="min",
                                    baseline=None,
                                    restore_best_weights=True))

....

keras_model.fit(x=None,
                y=None,
                validation_data=valid_ds,
                steps_per_epoch=self.ctx.steps_per_epoch,
                validation_steps=self.ctx.valid_steps_per_epoch,
                epochs=self.ctx.callback_num,
                callbacks=callbacks_list,
                checkpoint_dir=self.ctx.model_save_path,
                keep_checkpoint_max=1,
                verbose=0)
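One common mitigation for this class of hang (a hedged sketch, not a fix from this project) is to make the stop decision collective: each worker computes its local decision, then all workers agree via an all-reduce (e.g. logical OR) before leaving the training loop, so every worker exits on the same step. The `allreduce_or` helper below is a pure-Python stand-in; in a real job it would be a cross-worker collective such as a `tf.distribute` reduction:

```python
def allreduce_or(local_decisions):
    """Stand-in for a collective all-reduce: every worker receives the same
    global decision (stop as soon as ANY worker wants to stop)."""
    global_stop = any(local_decisions)
    return [global_stop] * len(local_decisions)

# Divergent local decisions are reconciled before anyone exits the loop:
local = [True, False]          # worker 0 wants to stop, worker 1 does not
synced = allreduce_or(local)   # [True, True] -> both stop together, no hang
```

The same idea works with logical AND (stop only when all workers agree); OR is the safer default because it guarantees no worker trains past the point where another has left the collective.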

Willing to contribute

Yes
