Sorry to disturb you again.
I have been trying to reproduce the results on the Tox21 dataset with your code and the shared weights, but unfortunately did not manage to do so. I ran:

python finetune.py --dataset tox21

and obtained the results attached, where the log printed at the end of the eval loop is shown. The best result is at epoch 88 (corresponding to line 88 of the file):

[INFO|finetune.py:299] 2024-06-17T15:47:55+0900 > train: 0.000000 val: 0.786658 test: 0.757151

This is quite far from the 76.5 reported in the paper. I know that number is a mean over 10 runs, but I am wondering whether the finetuning code is up to date. Which 10 seeds did you use to obtain your results?
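For reference, the protocol I assumed when comparing against the paper is a mean over 10 independent runs; a minimal sketch of that loop is below. The `--runseed` flag name and the idea of parsing the `test:` value from stdout are my assumptions, not the repo's documented interface:

```python
import re
import statistics
import subprocess

# Hypothetical multi-seed driver: the --runseed flag is a guess and may not
# match finetune.py's actual arguments.
test_scores = []
for seed in range(10):
    out = subprocess.run(
        ["python", "finetune.py", "--dataset", "tox21", "--runseed", str(seed)],
        capture_output=True, text=True, check=True,
    ).stdout
    # Grab the last "test: <value>" entry, matching the log format shown above.
    test_scores.append(float(re.findall(r"test:\s*([0-9.]+)", out)[-1]))

print(f"test ROC-AUC over {len(test_scores)} runs: "
      f"{statistics.mean(test_scores) * 100:.1f} ± {statistics.stdev(test_scores) * 100:.1f}")
```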
included in the paper, I know it is a mean over 10 runs, but I am wondering if the finetuning code is up-to-date ? What were the 10 seed you used for obtaining your results ?There is also a small mistake at
MoAMa-dev/loader.py
Line 301 in 77c7857
Where it should be
keys()
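For context, I suspect this is the torch_geometric change where `Data.keys` went from a property to a method (in PyG 2.4), though I have not verified that line 301 touches a `Data` object. A minimal sketch of the failure mode, under that assumption:

```python
import torch
from torch_geometric.data import Data

# Minimal sketch, assuming loader.py line 301 iterates over a PyG Data object
# (not verified against the actual code).
data = Data(x=torch.zeros(3, 4), edge_index=torch.empty(2, 0, dtype=torch.long))

# Pre-2.4 attribute-style access: `for key in data.keys: ...`
# In recent torch_geometric versions, keys is a method and must be called:
for key in data.keys():
    print(key)  # e.g. 'x', 'edge_index'
```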
Full result of the run:
finetuning.txt