First, I want to say thanks for the example; it is very helpful to those of us just getting started with distributed TensorFlow.
I had an issue where only the chief worker would finish, and once the chief worker finished, the other workers would hang.
I fixed the issue based on some advice I found in another post, but I thought I would make you aware and, if possible, get some clarification on whether it is a system problem on my end or truly a bug in your code.
The fix was as simple as changing the hook to this:
from this:
With that change in place, all workers are synchronized for each step of each epoch and they all finish.
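The exact snippets referenced by "to this" and "from this" aren't shown above, so for anyone hitting the same hang, here is a hypothetical sketch of the kind of hook-based stopping condition being discussed. It assumes the example drives training with `tf.train.MonitoredTrainingSession` in TF 1.x; the hook choice (`tf.train.StopAtStepHook`) and the step count are illustrative assumptions on my part, not necessarily the exact change from the original snippets.

```python
# Hypothetical sketch (TF 1.x): stop every worker based on the shared global
# step, so non-chief workers do not keep waiting after the chief exits.
import tensorflow as tf

def train(server, is_chief, train_op, total_steps=10000):
    # StopAtStepHook watches the *global* step, which all workers share, so
    # each worker's session reports should_stop() once the cluster as a whole
    # has taken total_steps steps, instead of hanging after the chief is done.
    hooks = [tf.train.StopAtStepHook(last_step=total_steps)]

    with tf.train.MonitoredTrainingSession(master=server.target,
                                           is_chief=is_chief,
                                           hooks=hooks) as sess:
        while not sess.should_stop():
            sess.run(train_op)
```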