Some general clone cleanup that should be addressed:

- Better location for the `wandb` dir (see the sketch after this list).
- The warning `logger.warning("Length of on-the-fly transformed dataset is not really fixed.")` is popping up too much. Decide whether it is strictly necessary, or at least only emit it once (sketch below).
- The complete run config (i.e. the result of `... --cfg job --resolve`) is not being logged to wandb. It should be (sketch below).
  - The problem seems to be more of a wandb bug. When setting subconfigs (e.g. `optimizer=class-recon`) for single runs, it correctly fills out the dictionary on wandb.ai, but during sweeps the subconfig dicts somehow get overwritten with key names. Seems like a bug that may resolve itself.
- Torchinfo output should be dumped to a file on initialization (sketch below).
- Fix `command=clean` not properly detecting the existence of the dir (sketch below).
  - The actual problem is that even `scan` creates a directory, which isn't great but may not be worth avoiding.
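
A minimal sketch of one way to relocate the `wandb` dir, assuming Hydra drives the run and Hydra >= 1.2 is available; the project name is a placeholder:

```python
# Point wandb's files at the Hydra run directory instead of the default ./wandb
# created next to the current working directory.
import wandb
from hydra.core.hydra_config import HydraConfig

run_dir = HydraConfig.get().runtime.output_dir  # resolved Hydra output dir for this run
wandb.init(project="my-project", dir=run_dir)   # "my-project" is a placeholder name
```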
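If the warning turns out to be necessary, a minimal sketch for throttling it to once per process (the function name and module-level flag are illustrative, not the repo's actual code):

```python
import logging

logger = logging.getLogger(__name__)
_warned_about_length = False  # emit the warning at most once

def dataset_length(base_len: int) -> int:
    """Return the nominal length, warning once that it is not really fixed."""
    global _warned_about_length
    if not _warned_about_length:
        logger.warning("Length of on-the-fly transformed dataset is not really fixed.")
        _warned_about_length = True
    return base_len
```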
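A minimal sketch for logging the fully resolved job config to wandb, assuming Hydra/OmegaConf; this mirrors what `... --cfg job --resolve` prints:

```python
import wandb
from omegaconf import DictConfig, OmegaConf

def log_resolved_config(cfg: DictConfig) -> None:
    # Resolve all interpolations so wandb stores concrete values, not "${...}" strings.
    resolved = OmegaConf.to_container(cfg, resolve=True)
    wandb.config.update(resolved, allow_val_change=True)
```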
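A minimal sketch for dumping the torchinfo summary to a file at initialization; the `input_size` and output path are placeholders:

```python
from pathlib import Path

import torch.nn as nn
from torchinfo import summary

def dump_model_summary(model: nn.Module, out_path: Path) -> None:
    # verbose=0 suppresses printing; str() of the returned stats is the summary table.
    stats = summary(model, input_size=(1, 3, 224, 224), verbose=0)
    out_path.write_text(str(stats))
```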
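A minimal sketch of the intended behaviour, not the repo's actual `clean`/`scan` commands: check for the run directory before acting, and keep `scan` read-only so it no longer creates the dir that `clean` then always finds. The function names and `run_dir` argument are hypothetical:

```python
import shutil
from pathlib import Path

def clean(run_dir: Path) -> None:
    """Remove the run directory only if it actually exists."""
    if not run_dir.exists():
        print(f"Nothing to clean: {run_dir} does not exist.")
        return
    shutil.rmtree(run_dir)

def scan(run_dir: Path) -> None:
    """List run contents without creating the directory as a side effect."""
    if not run_dir.exists():
        print(f"{run_dir} does not exist.")
        return
    for path in sorted(run_dir.iterdir()):
        print(path)
```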