Update default net to nn-1ee1aba5ed4c.nnue #4782
Closed
Conversation
Created by retraining the master net on a dataset composed by:
- adding Leela data from T60 jul-dec 2020, T77 nov 2021, T80 jun-jul 2023
- deduplicating and unminimizing parts of the dataset before interleaving

Trained initially with max epoch 800, then increased near the end of training twice: first to 960, then 1200. After training, post-processing involved:
- greedy permuting L1 weights with official-stockfish#4620
- greedy 2- and 3-cycle permuting with official-stockfish#4640

```bash
python3 easy_train.py \
  --experiment-name 2048-retrain-S6-sk28 \
  --training-dataset /data/S6.binpack \
  --early-fen-skipping 28 \
  --start-from-engine-test-net True \
  --max_epoch 1200 \
  --lr 4.375e-4 \
  --gamma 0.995 \
  --start-lambda 1.0 \
  --end-lambda 0.7 \
  --tui False \
  --seed $RANDOM \
  --gpus 0
```

In the list of datasets below, periods in the filename represent the sequence of steps applied to arrive at the particular binpack. For example, test77-dec2021-16tb7p.filter-v6-dd.min-mar2023.unminimized.binpack was:
1. test77 dec2021 data rescored with 16 TB of syzygy tablebases during data conversion
2. filtered with csv_filter_v6_dd.py - v6 filtering and deduplication in one step
3. minimized with the original mar2023 implementation of `minimize_binpack` in the tools branch
4. unminimized by removing all positions with score == 32002 (`VALUE_NONE`)

Binpacks were:
- filtered with: https://github.com/linrock/nnue-data
- unminimized with: https://github.com/linrock/Stockfish/tree/tools-unminify
- deduplicated with: https://github.com/linrock/Stockfish/tree/tools-dd

```bash
DATASETS=(
  leela96-filt-v2.min.unminimized.binpack
  dfrc99-16tb7p-eval-filt-v2.min.unminimized.binpack

  # most of the 0dd1cebea57 v6-dd dataset (without test80-jul2022)
  # official-stockfish#4606
  test60-novdec2021-12tb7p.filter-v6-dd.min-mar2023.unminimized.binpack
  test77-dec2021-16tb7p.filter-v6-dd.min-mar2023.unminimized.binpack
  test78-jantomay2022-16tb7p.filter-v6-dd.min-mar2023.unminimized.binpack
  test78-juntosep2022-16tb7p.filter-v6-dd.min-mar2023.unminimized.binpack
  test79-apr2022-16tb7p.filter-v6-dd.min-mar2023.unminimized.binpack
  test79-may2022-16tb7p.filter-v6-dd.min-mar2023.unminimized.binpack
  test80-jun2022-16tb7p.filter-v6-dd.min-mar2023.unminimized.binpack
  test80-aug2022-16tb7p.filter-v6-dd.min-mar2023.unminimized.binpack
  test80-sep2022-16tb7p.filter-v6-dd.min-mar2023.unminimized.binpack
  test80-oct2022-16tb7p.filter-v6-dd.min.binpack
  test80-nov2022-16tb7p.filter-v6-dd.min.binpack
  test80-jan2023-3of3-16tb7p.filter-v6-dd.min-mar2023.unminimized.binpack
  test80-feb2023-16tb7p.filter-v6-dd.min-mar2023.unminimized.binpack

  # older Leela data, recently converted
  test60-octnovdec2020-2tb7p.min.unminimized.binpack
  test60-julaugsep2020-2tb7p.min.binpack
  test77-nov2021-2tb7p.min.dd.binpack

  # newer Leela data
  test80-mar2023-2tb7p.min.unminimized.binpack
  test80-apr2023-2tb7p.filter-v6-sk16.min.unminimized.binpack
  test80-may2023-2tb7p.min.dd.binpack
  test80-jun2023-2tb7p.min.binpack
  test80-jul2023-2tb7p.binpack
)
python3 interleave_binpacks.py ${DATASETS[@]} /data/S6.binpack
```

Training data can be found at: https://robotmoon.com/nnue-training-data/

Local elo at 25k nodes per move:
nn-epoch1059 : 2.7 +/- 1.6

Passed STC: https://tests.stockfishchess.org/tests/view/64fc8d705dab775b5359db42
LLR: 2.93 (-2.94,2.94) <0.00,2.00> Total: 168352 W: 43216 L: 42704 D: 82432
Ptnml(0-2): 599, 19672, 43134, 20160, 611

Passed LTC: https://tests.stockfishchess.org/tests/view/64fd44a75dab775b5359f065
LLR: 2.94 (-2.94,2.94) <0.50,2.50> Total: 154194 W: 39436 L: 38881 D: 75877
Ptnml(0-2): 78, 16577, 43238, 17120, 84

bench 1672490
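For a rough sense of what W/L/D totals like the ones above mean in Elo terms, the raw game counts can be converted into a plain logistic Elo estimate. This is only an illustrative sketch: fishtest's reported figures use the paired pentanomial statistics, so the numbers will not match its output exactly.

```python
import math

def logistic_elo(wins: int, losses: int, draws: int) -> float:
    """Plain logistic Elo difference implied by aggregate game counts."""
    games = wins + losses + draws
    score = (wins + 0.5 * draws) / games        # scoring rate of the test engine
    return 400.0 * math.log10(score / (1.0 - score))

# W/L/D from the STC run quoted above
print(f"{logistic_elo(43216, 42704, 82432):+.2f} Elo")   # roughly +1.1 Elo
```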
linrock added a commit to linrock/Stockfish that referenced this pull request on Sep 21, 2023
Creating this net involved:
- a 6-stage training process from scratch
- permuting L1 weights with official-stockfish/nnue-pytorch#254

A strong epoch after each training stage was chosen for the next. The 6 stages were:

1. 400 epochs, lambda 1.0, default LR and gamma
   UHOx2-wIsRight-multinet-dfrc-n5000 (135G)
     nodes5000pv2_UHO.binpack data_pv-2_diff-100_nodes-5000.binpack
     wrongIsRight_nodes5000pv2.binpack multinet_pv-2_diff-100_nodes-5000.binpack
     dfrc_n5000.binpack

2. 800 epochs, end-lambda 0.75, LR 4.375e-4, gamma 0.995, skip 12
   LeelaFarseer-T78juntoaugT79marT80dec.binpack (141G)
     T60T70wIsRightFarseerT60T74T75T76.binpack
     test78-junjulaug2022-16tb7p.no-db.min.binpack test79-mar2022-16tb7p.no-db.min.binpack
     test80-dec2022-16tb7p.no-db.min.binpack

3. 800 epochs, end-lambda 0.7, LR 4.375e-4, gamma 0.995, skip 20
   leela93-v1-dfrc99-v2-T78juntosepT80jan-v6dd-T78janfebT79aprT80aprmay.min.binpack
     leela93-filt-v1.min.binpack dfrc99-16tb7p-filt-v2.min.binpack
     test78-juntosep2022-16tb7p-filter-v6-dd.min-mar2023.min.binpack
     test80-jan2023-16tb7p.v6-sk20.min.binpack
     test78-janfeb2022-16tb7p.min.binpack test79-apr2022-16tb7p.min.binpack
     test80-apr2022-16tb7p.min.binpack test80-may2022-16tb7p.min.binpack

4. 800 epochs, end-lambda 0.7, LR 4.375e-4, gamma 0.995, skip 24
   leela96-dfrc99-v2-T78juntosepT79mayT80junsepnovjan-v6dd-T80mar23-v6-T60novdecT77decT78aprmayT79aprT80may23.min.binpack
     leela96-filt-v2.min.binpack dfrc99-16tb7p-filt-v2.min.binpack
     test78-juntosep2022-16tb7p-filter-v6-dd.min.binpack test79-may2022-16tb7p.filter-v6-dd.min.binpack
     test80-jun2022-16tb7p.filter-v6-dd.min.binpack test80-sep2022-16tb7p.filter-v6-dd.min.binpack
     test80-nov2022-16tb7p.filter-v6-dd.min.binpack test80-jan2023-2tb7p.filter-v6-dd.min.binpack
     test80-mar2023-2tb7p.v6-sk16.min.binpack test60-novdec2021-16tb7p.min.binpack
     test77-dec2021-16tb7p.min.binpack test78-aprmay2022-16tb7p.min.binpack
     test79-apr2022-16tb7p.min.binpack test80-may2023-2tb7p.min.binpack

5. 960 epochs, end-lambda 0.7, LR 4.375e-4, gamma 0.995, skip 28
   Increased max-epoch to 960 near the end of the first 800 epochs
   5af11540bbfe dataset: official-stockfish#4635

6. 1000 epochs, end-lambda 0.7, LR 4.375e-4, gamma 0.995, skip 28
   Increased max-epoch to 1000 near the end of the first 800 epochs
   1ee1aba5ed dataset: official-stockfish#4782

L1 weights permuted with:

```
python3 serialize.py $nnue $nnue_permuted \
  --features=HalfKAv2_hm \
  --ft_optimize \
  --ft_optimize_data=/data/fishpack32.binpack \
  --ft_optimize_count=10000
```

Speed measurements from 100 bench runs at depth 13 with profile-build x86-64-avx2:

```
sf_base = 1329051 +/- 2224 (95%)
sf_test = 1163344 +/- 2992 (95%)
diff = -165706 +/- 4913 (95%)
speedup = -12.46807% +/- 0.370% (95%)
```

Training data can be found at: https://robotmoon.com/nnue-training-data/

Local elo at 25k nodes per move (vs. L1-2048 nn-1ee1aba5ed4c.nnue):
ep959 : 16.2 +/- 2.3

Failed 10+0.1 STC: https://tests.stockfishchess.org/tests/view/6501beee2cd016da89abab21
LLR: -2.92 (-2.94,2.94) <0.00,2.00> Total: 13184 W: 3285 L: 3535 D: 6364
Ptnml(0-2): 85, 1662, 3334, 1440, 71

Failed 180+1.8 VLTC: https://tests.stockfishchess.org/tests/view/6505cf9a72620bc881ea908e
LLR: -2.94 (-2.94,2.94) <0.00,2.00> Total: 64248 W: 16224 L: 16374 D: 31650
Ptnml(0-2): 26, 6788, 18640, 6650, 20

Passed 60+0.6 th 8 VLTC SMP (STC bounds): https://tests.stockfishchess.org/tests/view/65084a4618698b74c2e541dc
LLR: 2.95 (-2.94,2.94) <0.00,2.00> Total: 90630 W: 23372 L: 23033 D: 44225
Ptnml(0-2): 13, 8490, 27968, 8833, 11

Passed 60+0.6 th 8 VLTC SMP: https://tests.stockfishchess.org/tests/view/6501d45d2cd016da89abacdb
LLR: 2.95 (-2.94,2.94) <0.50,2.50> Total: 137804 W: 35764 L: 35276 D: 66764
Ptnml(0-2): 31, 13006, 42326, 13522, 17

bench 1246812
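The speed figures quoted in the commit message above compare two sets of bench runs and report 95% confidence intervals. A minimal sketch of how such a comparison can be derived from raw nodes/second samples, assuming independent runs and a normal approximation (this is not the exact measurement script used):

```python
from statistics import mean, stdev

def mean_ci95(samples):
    """Mean and half-width of a normal-approximation 95% confidence interval."""
    return mean(samples), 1.96 * stdev(samples) / len(samples) ** 0.5

def compare(base_nps, test_nps):
    base, base_err = mean_ci95(base_nps)
    test, test_err = mean_ci95(test_nps)
    diff = test - base
    diff_err = (base_err**2 + test_err**2) ** 0.5   # independent errors add in quadrature
    print(f"sf_base = {base:.0f} +/- {base_err:.0f} (95%)")
    print(f"sf_test = {test:.0f} +/- {test_err:.0f} (95%)")
    print(f"diff = {diff:.0f} +/- {diff_err:.0f} (95%)")
    print(f"speedup = {100 * diff / base:.5f}% +/- {100 * diff_err / base:.3f}% (95%)")

# compare(base_nps_samples, test_nps_samples) with one nps reading per bench run
```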
linrock added a commit to linrock/Stockfish that referenced this pull request on Sep 21, 2023
Creating this net involved:
- a 6-stage training process from scratch
- permuting L1 weights with official-stockfish/nnue-pytorch#254

A strong epoch after each training stage was chosen for the next. The 6 stages were:

1. 400 epochs, lambda 1.0, default LR and gamma
   UHOx2-wIsRight-multinet-dfrc-n5000 (135G)
     nodes5000pv2_UHO.binpack data_pv-2_diff-100_nodes-5000.binpack
     wrongIsRight_nodes5000pv2.binpack multinet_pv-2_diff-100_nodes-5000.binpack
     dfrc_n5000.binpack

2. 800 epochs, end-lambda 0.75, LR 4.375e-4, gamma 0.995, skip 12
   LeelaFarseer-T78juntoaugT79marT80dec.binpack (141G)
     T60T70wIsRightFarseerT60T74T75T76.binpack
     test78-junjulaug2022-16tb7p.no-db.min.binpack test79-mar2022-16tb7p.no-db.min.binpack
     test80-dec2022-16tb7p.no-db.min.binpack

3. 800 epochs, end-lambda 0.725, LR 4.375e-4, gamma 0.995, skip 20
   leela93-v1-dfrc99-v2-T78juntosepT80jan-v6dd-T78janfebT79aprT80aprmay.min.binpack
     leela93-filt-v1.min.binpack dfrc99-16tb7p-filt-v2.min.binpack
     test78-juntosep2022-16tb7p-filter-v6-dd.min-mar2023.min.binpack
     test80-jan2023-3of3-16tb7p-filter-v6-dd.min-mar2023.binpack
     test78-janfeb2022-16tb7p.min.binpack test79-apr2022-16tb7p.min.binpack
     test80-apr2022-16tb7p.min.binpack test80-may2022-16tb7p.min.binpack

4. 800 epochs, end-lambda 0.7, LR 4.375e-4, gamma 0.995, skip 24
   leela96-dfrc99-v2-T78juntosepT79mayT80junsepnovjan-v6dd-T80mar23-v6-T60novdecT77decT78aprmayT79aprT80may23.min.binpack
     leela96-filt-v2.min.binpack dfrc99-16tb7p-filt-v2.min.binpack
     test78-juntosep2022-16tb7p-filter-v6-dd.min.binpack test79-may2022-16tb7p.filter-v6-dd.min.binpack
     test80-jun2022-16tb7p.filter-v6-dd.min.binpack test80-sep2022-16tb7p.filter-v6-dd.min.binpack
     test80-nov2022-16tb7p.filter-v6-dd.min.binpack test80-jan2023-2tb7p.filter-v6-dd.min.binpack
     test80-mar2023-2tb7p.v6-sk16.min.binpack test60-novdec2021-16tb7p.min.binpack
     test77-dec2021-16tb7p.min.binpack test78-aprmay2022-16tb7p.min.binpack
     test79-apr2022-16tb7p.min.binpack test80-may2023-2tb7p.min.binpack

5. 960 epochs, end-lambda 0.7, LR 4.375e-4, gamma 0.995, skip 28
   Increased max-epoch to 960 near the end of the first 800 epochs
   5af11540bbfe dataset: official-stockfish#4635

6. 1000 epochs, end-lambda 0.7, LR 4.375e-4, gamma 0.995, skip 28
   Increased max-epoch to 1000 near the end of the first 800 epochs
   1ee1aba5ed dataset: official-stockfish#4782

L1 weights permuted with:

```
python3 serialize.py $nnue $nnue_permuted \
  --features=HalfKAv2_hm \
  --ft_optimize \
  --ft_optimize_data=/data/fishpack32.binpack \
  --ft_optimize_count=10000
```

Speed measurements from 100 bench runs at depth 13 with profile-build x86-64-avx2:

```
sf_base = 1329051 +/- 2224 (95%)
sf_test = 1163344 +/- 2992 (95%)
diff = -165706 +/- 4913 (95%)
speedup = -12.46807% +/- 0.370% (95%)
```

Training data can be found at: https://robotmoon.com/nnue-training-data/

Local elo at 25k nodes per move (vs. L1-2048 nn-1ee1aba5ed4c.nnue):
ep959 : 16.2 +/- 2.3

Failed 10+0.1 STC: https://tests.stockfishchess.org/tests/view/6501beee2cd016da89abab21
LLR: -2.92 (-2.94,2.94) <0.00,2.00> Total: 13184 W: 3285 L: 3535 D: 6364
Ptnml(0-2): 85, 1662, 3334, 1440, 71

Failed 180+1.8 VLTC: https://tests.stockfishchess.org/tests/view/6505cf9a72620bc881ea908e
LLR: -2.94 (-2.94,2.94) <0.00,2.00> Total: 64248 W: 16224 L: 16374 D: 31650
Ptnml(0-2): 26, 6788, 18640, 6650, 20

Passed 60+0.6 th 8 VLTC SMP (STC bounds): https://tests.stockfishchess.org/tests/view/65084a4618698b74c2e541dc
LLR: 2.95 (-2.94,2.94) <0.00,2.00> Total: 90630 W: 23372 L: 23033 D: 44225
Ptnml(0-2): 13, 8490, 27968, 8833, 11

Passed 60+0.6 th 8 VLTC SMP: https://tests.stockfishchess.org/tests/view/6501d45d2cd016da89abacdb
LLR: 2.95 (-2.94,2.94) <0.50,2.50> Total: 137804 W: 35764 L: 35276 D: 66764
Ptnml(0-2): 31, 13006, 42326, 13522, 17

bench 1246812
linrock added a commit to linrock/Stockfish that referenced this pull request on Sep 21, 2023
Creating this net involved:
- a 6-stage training process from scratch
- permuting L1 weights with official-stockfish/nnue-pytorch#254

A strong epoch after each training stage was chosen for the next. The 6 stages were:

```
1. 400 epochs, lambda 1.0, default LR and gamma
   UHOx2-wIsRight-multinet-dfrc-n5000 (135G)
     nodes5000pv2_UHO.binpack data_pv-2_diff-100_nodes-5000.binpack
     wrongIsRight_nodes5000pv2.binpack multinet_pv-2_diff-100_nodes-5000.binpack
     dfrc_n5000.binpack

2. 800 epochs, end-lambda 0.75, LR 4.375e-4, gamma 0.995, skip 12
   LeelaFarseer-T78juntoaugT79marT80dec.binpack (141G)
     T60T70wIsRightFarseerT60T74T75T76.binpack
     test78-junjulaug2022-16tb7p.no-db.min.binpack test79-mar2022-16tb7p.no-db.min.binpack
     test80-dec2022-16tb7p.no-db.min.binpack

3. 800 epochs, end-lambda 0.725, LR 4.375e-4, gamma 0.995, skip 20
   leela93-v1-dfrc99-v2-T78juntosepT80jan-v6dd-T78janfebT79aprT80aprmay.min.binpack
     leela93-filt-v1.min.binpack dfrc99-16tb7p-filt-v2.min.binpack
     test78-juntosep2022-16tb7p-filter-v6-dd.min-mar2023.min.binpack
     test80-jan2023-3of3-16tb7p-filter-v6-dd.min-mar2023.binpack
     test78-janfeb2022-16tb7p.min.binpack test79-apr2022-16tb7p.min.binpack
     test80-apr2022-16tb7p.min.binpack test80-may2022-16tb7p.min.binpack

4. 800 epochs, end-lambda 0.7, LR 4.375e-4, gamma 0.995, skip 24
   leela96-dfrc99-v2-T78juntosepT79mayT80junsepnovjan-v6dd-T80mar23-v6-T60novdecT77decT78aprmayT79aprT80may23.min.binpack
     leela96-filt-v2.min.binpack dfrc99-16tb7p-filt-v2.min.binpack
     test78-juntosep2022-16tb7p-filter-v6-dd.min.binpack test79-may2022-16tb7p.filter-v6-dd.min.binpack
     test80-jun2022-16tb7p.filter-v6-dd.min.binpack test80-sep2022-16tb7p.filter-v6-dd.min.binpack
     test80-nov2022-16tb7p.filter-v6-dd.min.binpack
     test80-jan2023-3of3-16tb7p-filter-v6-dd.min-mar2023.binpack
     test80-mar2023-2tb7p.v6-sk16.min.binpack test60-novdec2021-16tb7p.min.binpack
     test77-dec2021-16tb7p.min.binpack test78-aprmay2022-16tb7p.min.binpack
     test79-apr2022-16tb7p.min.binpack test80-may2023-2tb7p.min.binpack

5. 960 epochs, end-lambda 0.7, LR 4.375e-4, gamma 0.995, skip 28
   Increased max-epoch to 960 near the end of the first 800 epochs
   5af11540bbfe dataset: official-stockfish#4635

6. 1000 epochs, end-lambda 0.7, LR 4.375e-4, gamma 0.995, skip 28
   Increased max-epoch to 1000 near the end of the first 800 epochs
   1ee1aba5ed dataset: official-stockfish#4782
```

L1 weights permuted with:

```bash
python3 serialize.py $nnue $nnue_permuted \
  --features=HalfKAv2_hm \
  --ft_optimize \
  --ft_optimize_data=/data/fishpack32.binpack \
  --ft_optimize_count=10000
```

Speed measurements from 100 bench runs at depth 13 with profile-build x86-64-avx2:

```
sf_base = 1329051 +/- 2224 (95%)
sf_test = 1163344 +/- 2992 (95%)
diff = -165706 +/- 4913 (95%)
speedup = -12.46807% +/- 0.370% (95%)
```

Training data can be found at: https://robotmoon.com/nnue-training-data/

Local elo at 25k nodes per move (vs. L1-2048 nn-1ee1aba5ed4c.nnue):
ep959 : 16.2 +/- 2.3

Failed 10+0.1 STC: https://tests.stockfishchess.org/tests/view/6501beee2cd016da89abab21
LLR: -2.92 (-2.94,2.94) <0.00,2.00> Total: 13184 W: 3285 L: 3535 D: 6364
Ptnml(0-2): 85, 1662, 3334, 1440, 71

Failed 180+1.8 VLTC: https://tests.stockfishchess.org/tests/view/6505cf9a72620bc881ea908e
LLR: -2.94 (-2.94,2.94) <0.00,2.00> Total: 64248 W: 16224 L: 16374 D: 31650
Ptnml(0-2): 26, 6788, 18640, 6650, 20

Passed 60+0.6 th 8 VLTC SMP (STC bounds): https://tests.stockfishchess.org/tests/view/65084a4618698b74c2e541dc
LLR: 2.95 (-2.94,2.94) <0.00,2.00> Total: 90630 W: 23372 L: 23033 D: 44225
Ptnml(0-2): 13, 8490, 27968, 8833, 11

Passed 60+0.6 th 8 VLTC SMP: https://tests.stockfishchess.org/tests/view/6501d45d2cd016da89abacdb
LLR: 2.95 (-2.94,2.94) <0.50,2.50> Total: 137804 W: 35764 L: 35276 D: 66764
Ptnml(0-2): 31, 13006, 42326, 13522, 17

bench 1246812
linrock added a commit to linrock/Stockfish that referenced this pull request on Sep 21, 2023
Creating this net involved:
- a 6-stage training process from scratch
- permuting L1 weights with official-stockfish/nnue-pytorch#254

A strong epoch after each training stage was chosen for the next. The 6 stages were:

```
1. 400 epochs, lambda 1.0, default LR and gamma
   UHOx2-wIsRight-multinet-dfrc-n5000 (135G)
     nodes5000pv2_UHO.binpack data_pv-2_diff-100_nodes-5000.binpack
     wrongIsRight_nodes5000pv2.binpack multinet_pv-2_diff-100_nodes-5000.binpack
     dfrc_n5000.binpack

2. 800 epochs, end-lambda 0.75, LR 4.375e-4, gamma 0.995, skip 12
   LeelaFarseer-T78juntoaugT79marT80dec.binpack (141G)
     T60T70wIsRightFarseerT60T74T75T76.binpack
     test78-junjulaug2022-16tb7p.no-db.min.binpack test79-mar2022-16tb7p.no-db.min.binpack
     test80-dec2022-16tb7p.no-db.min.binpack

3. 800 epochs, end-lambda 0.725, LR 4.375e-4, gamma 0.995, skip 20
   leela93-v1-dfrc99-v2-T78juntosepT80jan-v6dd-T78janfebT79aprT80aprmay.min.binpack
     leela93-filt-v1.min.binpack dfrc99-16tb7p-filt-v2.min.binpack
     test78-juntosep2022-16tb7p-filter-v6-dd.min-mar2023.binpack
     test80-jan2023-3of3-16tb7p-filter-v6-dd.min-mar2023.binpack
     test78-janfeb2022-16tb7p.min.binpack test79-apr2022-16tb7p.min.binpack
     test80-apr2022-16tb7p.min.binpack test80-may2022-16tb7p.min.binpack

4. 800 epochs, end-lambda 0.7, LR 4.375e-4, gamma 0.995, skip 24
   leela96-dfrc99-v2-T78juntosepT79mayT80junsepnovjan-v6dd-T80mar23-v6-T60novdecT77decT78aprmayT79aprT80may23.min.binpack
     leela96-filt-v2.min.binpack dfrc99-16tb7p-filt-v2.min.binpack
     test78-juntosep2022-16tb7p-filter-v6-dd.min-mar2023.binpack
     test79-may2022-16tb7p.filter-v6-dd.min.binpack
     test80-jun2022-16tb7p.filter-v6-dd.min.binpack test80-sep2022-16tb7p.filter-v6-dd.min.binpack
     test80-nov2022-16tb7p.filter-v6-dd.min.binpack
     test80-jan2023-3of3-16tb7p-filter-v6-dd.min-mar2023.binpack
     test80-mar2023-2tb7p.v6-sk16.min.binpack test60-novdec2021-16tb7p.min.binpack
     test77-dec2021-16tb7p.min.binpack test78-aprmay2022-16tb7p.min.binpack
     test79-apr2022-16tb7p.min.binpack test80-may2023-2tb7p.min.binpack

5. 960 epochs, end-lambda 0.7, LR 4.375e-4, gamma 0.995, skip 28
   Increased max-epoch to 960 near the end of the first 800 epochs
   5af11540bbfe dataset: official-stockfish#4635

6. 1000 epochs, end-lambda 0.7, LR 4.375e-4, gamma 0.995, skip 28
   Increased max-epoch to 1000 near the end of the first 800 epochs
   1ee1aba5ed dataset: official-stockfish#4782
```

L1 weights permuted with:

```bash
python3 serialize.py $nnue $nnue_permuted \
  --features=HalfKAv2_hm \
  --ft_optimize \
  --ft_optimize_data=/data/fishpack32.binpack \
  --ft_optimize_count=10000
```

Speed measurements from 100 bench runs at depth 13 with profile-build x86-64-avx2:

```
sf_base = 1329051 +/- 2224 (95%)
sf_test = 1163344 +/- 2992 (95%)
diff = -165706 +/- 4913 (95%)
speedup = -12.46807% +/- 0.370% (95%)
```

Training data can be found at: https://robotmoon.com/nnue-training-data/

Local elo at 25k nodes per move (vs. L1-2048 nn-1ee1aba5ed4c.nnue):
ep959 : 16.2 +/- 2.3

Failed 10+0.1 STC: https://tests.stockfishchess.org/tests/view/6501beee2cd016da89abab21
LLR: -2.92 (-2.94,2.94) <0.00,2.00> Total: 13184 W: 3285 L: 3535 D: 6364
Ptnml(0-2): 85, 1662, 3334, 1440, 71

Failed 180+1.8 VLTC: https://tests.stockfishchess.org/tests/view/6505cf9a72620bc881ea908e
LLR: -2.94 (-2.94,2.94) <0.00,2.00> Total: 64248 W: 16224 L: 16374 D: 31650
Ptnml(0-2): 26, 6788, 18640, 6650, 20

Passed 60+0.6 th 8 VLTC SMP (STC bounds): https://tests.stockfishchess.org/tests/view/65084a4618698b74c2e541dc
LLR: 2.95 (-2.94,2.94) <0.00,2.00> Total: 90630 W: 23372 L: 23033 D: 44225
Ptnml(0-2): 13, 8490, 27968, 8833, 11

Passed 60+0.6 th 8 VLTC SMP: https://tests.stockfishchess.org/tests/view/6501d45d2cd016da89abacdb
LLR: 2.95 (-2.94,2.94) <0.50,2.50> Total: 137804 W: 35764 L: 35276 D: 66764
Ptnml(0-2): 31, 13006, 42326, 13522, 17

bench 1246812
linrock added a commit to linrock/Stockfish that referenced this pull request on Sep 21, 2023
Creating this net involved:
- a 6-stage training process from scratch
- permuting L1 weights with official-stockfish/nnue-pytorch#254

The datasets used in stages 1-5 were fully minimized. A strong epoch after each training stage was chosen for the next. The 6 stages were:

```
1. 400 epochs, lambda 1.0, default LR and gamma
   UHOx2-wIsRight-multinet-dfrc-n5000 (135G)
     nodes5000pv2_UHO.binpack data_pv-2_diff-100_nodes-5000.binpack
     wrongIsRight_nodes5000pv2.binpack multinet_pv-2_diff-100_nodes-5000.binpack
     dfrc_n5000.binpack

2. 800 epochs, end-lambda 0.75, LR 4.375e-4, gamma 0.995, skip 12
   LeelaFarseer-T78juntoaugT79marT80dec.binpack (141G)
     T60T70wIsRightFarseerT60T74T75T76.binpack
     test78-junjulaug2022-16tb7p.no-db.min.binpack test79-mar2022-16tb7p.no-db.min.binpack
     test80-dec2022-16tb7p.no-db.min.binpack

3. 800 epochs, end-lambda 0.725, LR 4.375e-4, gamma 0.995, skip 20
   leela93-v1-dfrc99-v2-T78juntosepT80jan-v6dd-T78janfebT79aprT80aprmay.min.binpack
     leela93-filt-v1.min.binpack dfrc99-16tb7p-filt-v2.min.binpack
     test78-juntosep2022-16tb7p-filter-v6-dd.min-mar2023.binpack
     test80-jan2023-3of3-16tb7p-filter-v6-dd.min-mar2023.binpack
     test78-janfeb2022-16tb7p.min.binpack test79-apr2022-16tb7p.min.binpack
     test80-apr2022-16tb7p.min.binpack test80-may2022-16tb7p.min.binpack

4. 800 epochs, end-lambda 0.7, LR 4.375e-4, gamma 0.995, skip 24
   leela96-dfrc99-v2-T78juntosepT79mayT80junsepnovjan-v6dd-T80mar23-v6-T60novdecT77decT78aprmayT79aprT80may23.min.binpack
     leela96-filt-v2.min.binpack dfrc99-16tb7p-filt-v2.min.binpack
     test78-juntosep2022-16tb7p-filter-v6-dd.min-mar2023.binpack
     test79-may2022-16tb7p.filter-v6-dd.min.binpack
     test80-jun2022-16tb7p.filter-v6-dd.min.binpack test80-sep2022-16tb7p.filter-v6-dd.min.binpack
     test80-nov2022-16tb7p.filter-v6-dd.min.binpack
     test80-jan2023-3of3-16tb7p-filter-v6-dd.min-mar2023.binpack
     test80-mar2023-2tb7p.v6-sk16.min.binpack test60-novdec2021-16tb7p.min.binpack
     test77-dec2021-16tb7p.min.binpack test78-aprmay2022-16tb7p.min.binpack
     test79-apr2022-16tb7p.min.binpack test80-may2023-2tb7p.min.binpack

5. 960 epochs, end-lambda 0.7, LR 4.375e-4, gamma 0.995, skip 28
   Increased max-epoch to 960 near the end of the first 800 epochs
   5af11540bbfe dataset: official-stockfish#4635

6. 1000 epochs, end-lambda 0.7, LR 4.375e-4, gamma 0.995, skip 28
   Increased max-epoch to 1000 near the end of the first 800 epochs
   1ee1aba5ed dataset: official-stockfish#4782
```

L1 weights permuted with:

```bash
python3 serialize.py $nnue $nnue_permuted \
  --features=HalfKAv2_hm \
  --ft_optimize \
  --ft_optimize_data=/data/fishpack32.binpack \
  --ft_optimize_count=10000
```

Speed measurements from 100 bench runs at depth 13 with profile-build x86-64-avx2:

```
sf_base = 1329051 +/- 2224 (95%)
sf_test = 1163344 +/- 2992 (95%)
diff = -165706 +/- 4913 (95%)
speedup = -12.46807% +/- 0.370% (95%)
```

Training data can be found at: https://robotmoon.com/nnue-training-data/

Local elo at 25k nodes per move (vs. L1-2048 nn-1ee1aba5ed4c.nnue):
ep959 : 16.2 +/- 2.3

Failed 10+0.1 STC: https://tests.stockfishchess.org/tests/view/6501beee2cd016da89abab21
LLR: -2.92 (-2.94,2.94) <0.00,2.00> Total: 13184 W: 3285 L: 3535 D: 6364
Ptnml(0-2): 85, 1662, 3334, 1440, 71

Failed 180+1.8 VLTC: https://tests.stockfishchess.org/tests/view/6505cf9a72620bc881ea908e
LLR: -2.94 (-2.94,2.94) <0.00,2.00> Total: 64248 W: 16224 L: 16374 D: 31650
Ptnml(0-2): 26, 6788, 18640, 6650, 20

Passed 60+0.6 th 8 VLTC SMP (STC bounds): https://tests.stockfishchess.org/tests/view/65084a4618698b74c2e541dc
LLR: 2.95 (-2.94,2.94) <0.00,2.00> Total: 90630 W: 23372 L: 23033 D: 44225
Ptnml(0-2): 13, 8490, 27968, 8833, 11

Passed 60+0.6 th 8 VLTC SMP: https://tests.stockfishchess.org/tests/view/6501d45d2cd016da89abacdb
LLR: 2.95 (-2.94,2.94) <0.50,2.50> Total: 137804 W: 35764 L: 35276 D: 66764
Ptnml(0-2): 31, 13006, 42326, 13522, 17

bench 1246812
linrock added a commit to linrock/Stockfish that referenced this pull request on Sep 21, 2023
Creating this net involved:
- a 6-stage training process from scratch. The datasets used in stages 1-5 were fully minimized.
- permuting L1 weights with official-stockfish/nnue-pytorch#254

A strong epoch after each training stage was chosen for the next. The 6 stages were:

```
1. 400 epochs, lambda 1.0, default LR and gamma
   UHOx2-wIsRight-multinet-dfrc-n5000 (135G)
     nodes5000pv2_UHO.binpack data_pv-2_diff-100_nodes-5000.binpack
     wrongIsRight_nodes5000pv2.binpack multinet_pv-2_diff-100_nodes-5000.binpack
     dfrc_n5000.binpack

2. 800 epochs, end-lambda 0.75, LR 4.375e-4, gamma 0.995, skip 12
   LeelaFarseer-T78juntoaugT79marT80dec.binpack (141G)
     T60T70wIsRightFarseerT60T74T75T76.binpack
     test78-junjulaug2022-16tb7p.no-db.min.binpack test79-mar2022-16tb7p.no-db.min.binpack
     test80-dec2022-16tb7p.no-db.min.binpack

3. 800 epochs, end-lambda 0.725, LR 4.375e-4, gamma 0.995, skip 20
   leela93-v1-dfrc99-v2-T78juntosepT80jan-v6dd-T78janfebT79aprT80aprmay.min.binpack
     leela93-filt-v1.min.binpack dfrc99-16tb7p-filt-v2.min.binpack
     test78-juntosep2022-16tb7p-filter-v6-dd.min-mar2023.binpack
     test80-jan2023-3of3-16tb7p-filter-v6-dd.min-mar2023.binpack
     test78-janfeb2022-16tb7p.min.binpack test79-apr2022-16tb7p.min.binpack
     test80-apr2022-16tb7p.min.binpack test80-may2022-16tb7p.min.binpack

4. 800 epochs, end-lambda 0.7, LR 4.375e-4, gamma 0.995, skip 24
   leela96-dfrc99-v2-T78juntosepT79mayT80junsepnovjan-v6dd-T80mar23-v6-T60novdecT77decT78aprmayT79aprT80may23.min.binpack
     leela96-filt-v2.min.binpack dfrc99-16tb7p-filt-v2.min.binpack
     test78-juntosep2022-16tb7p-filter-v6-dd.min-mar2023.binpack
     test79-may2022-16tb7p.filter-v6-dd.min.binpack
     test80-jun2022-16tb7p.filter-v6-dd.min.binpack test80-sep2022-16tb7p.filter-v6-dd.min.binpack
     test80-nov2022-16tb7p.filter-v6-dd.min.binpack
     test80-jan2023-3of3-16tb7p-filter-v6-dd.min-mar2023.binpack
     test80-mar2023-2tb7p.v6-sk16.min.binpack test60-novdec2021-16tb7p.min.binpack
     test77-dec2021-16tb7p.min.binpack test78-aprmay2022-16tb7p.min.binpack
     test79-apr2022-16tb7p.min.binpack test80-may2023-2tb7p.min.binpack

5. 960 epochs, end-lambda 0.7, LR 4.375e-4, gamma 0.995, skip 28
   Increased max-epoch to 960 near the end of the first 800 epochs
   5af11540bbfe dataset: official-stockfish#4635

6. 1000 epochs, end-lambda 0.7, LR 4.375e-4, gamma 0.995, skip 28
   Increased max-epoch to 1000 near the end of the first 800 epochs
   1ee1aba5ed dataset: official-stockfish#4782
```

L1 weights permuted with:

```bash
python3 serialize.py $nnue $nnue_permuted \
  --features=HalfKAv2_hm \
  --ft_optimize \
  --ft_optimize_data=/data/fishpack32.binpack \
  --ft_optimize_count=10000
```

Speed measurements from 100 bench runs at depth 13 with profile-build x86-64-avx2:

```
sf_base = 1329051 +/- 2224 (95%)
sf_test = 1163344 +/- 2992 (95%)
diff = -165706 +/- 4913 (95%)
speedup = -12.46807% +/- 0.370% (95%)
```

Training data can be found at: https://robotmoon.com/nnue-training-data/

Local elo at 25k nodes per move (vs. L1-2048 nn-1ee1aba5ed4c.nnue):
ep959 : 16.2 +/- 2.3

Failed 10+0.1 STC: https://tests.stockfishchess.org/tests/view/6501beee2cd016da89abab21
LLR: -2.92 (-2.94,2.94) <0.00,2.00> Total: 13184 W: 3285 L: 3535 D: 6364
Ptnml(0-2): 85, 1662, 3334, 1440, 71

Failed 180+1.8 VLTC: https://tests.stockfishchess.org/tests/view/6505cf9a72620bc881ea908e
LLR: -2.94 (-2.94,2.94) <0.00,2.00> Total: 64248 W: 16224 L: 16374 D: 31650
Ptnml(0-2): 26, 6788, 18640, 6650, 20

Passed 60+0.6 th 8 VLTC SMP (STC bounds): https://tests.stockfishchess.org/tests/view/65084a4618698b74c2e541dc
LLR: 2.95 (-2.94,2.94) <0.00,2.00> Total: 90630 W: 23372 L: 23033 D: 44225
Ptnml(0-2): 13, 8490, 27968, 8833, 11

Passed 60+0.6 th 8 VLTC SMP: https://tests.stockfishchess.org/tests/view/6501d45d2cd016da89abacdb
LLR: 2.95 (-2.94,2.94) <0.50,2.50> Total: 137804 W: 35764 L: 35276 D: 66764
Ptnml(0-2): 31, 13006, 42326, 13522, 17

bench 1246812
vondele pushed a commit to vondele/Stockfish that referenced this pull request on Sep 22, 2023
Creating this net involved:
- a 6-stage training process from scratch. The datasets used in stages 1-5 were fully minimized.
- permuting L1 weights with official-stockfish/nnue-pytorch#254

A strong epoch after each training stage was chosen for the next. The 6 stages were:

```
1. 400 epochs, lambda 1.0, default LR and gamma
   UHOx2-wIsRight-multinet-dfrc-n5000 (135G)
     nodes5000pv2_UHO.binpack data_pv-2_diff-100_nodes-5000.binpack
     wrongIsRight_nodes5000pv2.binpack multinet_pv-2_diff-100_nodes-5000.binpack
     dfrc_n5000.binpack

2. 800 epochs, end-lambda 0.75, LR 4.375e-4, gamma 0.995, skip 12
   LeelaFarseer-T78juntoaugT79marT80dec.binpack (141G)
     T60T70wIsRightFarseerT60T74T75T76.binpack
     test78-junjulaug2022-16tb7p.no-db.min.binpack test79-mar2022-16tb7p.no-db.min.binpack
     test80-dec2022-16tb7p.no-db.min.binpack

3. 800 epochs, end-lambda 0.725, LR 4.375e-4, gamma 0.995, skip 20
   leela93-v1-dfrc99-v2-T78juntosepT80jan-v6dd-T78janfebT79aprT80aprmay.min.binpack
     leela93-filt-v1.min.binpack dfrc99-16tb7p-filt-v2.min.binpack
     test78-juntosep2022-16tb7p-filter-v6-dd.min-mar2023.binpack
     test80-jan2023-3of3-16tb7p-filter-v6-dd.min-mar2023.binpack
     test78-janfeb2022-16tb7p.min.binpack test79-apr2022-16tb7p.min.binpack
     test80-apr2022-16tb7p.min.binpack test80-may2022-16tb7p.min.binpack

4. 800 epochs, end-lambda 0.7, LR 4.375e-4, gamma 0.995, skip 24
   leela96-dfrc99-v2-T78juntosepT79mayT80junsepnovjan-v6dd-T80mar23-v6-T60novdecT77decT78aprmayT79aprT80may23.min.binpack
     leela96-filt-v2.min.binpack dfrc99-16tb7p-filt-v2.min.binpack
     test78-juntosep2022-16tb7p-filter-v6-dd.min-mar2023.binpack
     test79-may2022-16tb7p.filter-v6-dd.min.binpack
     test80-jun2022-16tb7p.filter-v6-dd.min.binpack test80-sep2022-16tb7p.filter-v6-dd.min.binpack
     test80-nov2022-16tb7p.filter-v6-dd.min.binpack
     test80-jan2023-3of3-16tb7p-filter-v6-dd.min-mar2023.binpack
     test80-mar2023-2tb7p.v6-sk16.min.binpack test60-novdec2021-16tb7p.min.binpack
     test77-dec2021-16tb7p.min.binpack test78-aprmay2022-16tb7p.min.binpack
     test79-apr2022-16tb7p.min.binpack test80-may2023-2tb7p.min.binpack

5. 960 epochs, end-lambda 0.7, LR 4.375e-4, gamma 0.995, skip 28
   Increased max-epoch to 960 near the end of the first 800 epochs
   5af11540bbfe dataset: official-stockfish#4635

6. 1000 epochs, end-lambda 0.7, LR 4.375e-4, gamma 0.995, skip 28
   Increased max-epoch to 1000 near the end of the first 800 epochs
   1ee1aba5ed dataset: official-stockfish#4782
```

L1 weights permuted with:

```bash
python3 serialize.py $nnue $nnue_permuted \
  --features=HalfKAv2_hm \
  --ft_optimize \
  --ft_optimize_data=/data/fishpack32.binpack \
  --ft_optimize_count=10000
```

Speed measurements from 100 bench runs at depth 13 with profile-build x86-64-avx2:

```
sf_base = 1329051 +/- 2224 (95%)
sf_test = 1163344 +/- 2992 (95%)
diff = -165706 +/- 4913 (95%)
speedup = -12.46807% +/- 0.370% (95%)
```

Training data can be found at: https://robotmoon.com/nnue-training-data/

Local elo at 25k nodes per move (vs. L1-2048 nn-1ee1aba5ed4c.nnue):
ep959 : 16.2 +/- 2.3

Failed 10+0.1 STC: https://tests.stockfishchess.org/tests/view/6501beee2cd016da89abab21
LLR: -2.92 (-2.94,2.94) <0.00,2.00> Total: 13184 W: 3285 L: 3535 D: 6364
Ptnml(0-2): 85, 1662, 3334, 1440, 71

Failed 180+1.8 VLTC: https://tests.stockfishchess.org/tests/view/6505cf9a72620bc881ea908e
LLR: -2.94 (-2.94,2.94) <0.00,2.00> Total: 64248 W: 16224 L: 16374 D: 31650
Ptnml(0-2): 26, 6788, 18640, 6650, 20

Passed 60+0.6 th 8 VLTC SMP (STC bounds): https://tests.stockfishchess.org/tests/view/65084a4618698b74c2e541dc
LLR: 2.95 (-2.94,2.94) <0.00,2.00> Total: 90630 W: 23372 L: 23033 D: 44225
Ptnml(0-2): 13, 8490, 27968, 8833, 11

Passed 60+0.6 th 8 VLTC SMP: https://tests.stockfishchess.org/tests/view/6501d45d2cd016da89abacdb
LLR: 2.95 (-2.94,2.94) <0.50,2.50> Total: 137804 W: 35764 L: 35276 D: 66764
Ptnml(0-2): 31, 13006, 42326, 13522, 17

closes official-stockfish#4795

bench 1246812
linrock added a commit to linrock/Stockfish that referenced this pull request on Dec 28, 2023
Created by retraining the master big net `nn-0000000000a0.nnue` on the same dataset with the ranger21 optimizer and more WDL skipping at training time.

More WDL skipping is meant to increase lambda accuracy and train on fewer misevaluated positions where position scores are unlikely to correlate with game outcomes.

Inspired by:
- repeated reports in discord #events-discuss about SF misplaying due to wrong endgame evals, possibly due to Leela's endgame weaknesses reflected in training data
- an attempt to reduce the skewed dataset piece count distribution, where there are many more positions with fewer than 16 pieces, since the target piece count distribution in the trainer is symmetric around 16

The faster convergence seen with ranger21 is meant to:
- prune experiment ideas more quickly, since fewer epochs are needed to reach elo maxima
- research potential trainings faster by shortening each run

```yaml
experiment-name: 2560-S7-Re-514G-ranger21-more-wdl-skip
training-dataset: /data/S6-514G.binpack
early-fen-skipping: 28
start-from-engine-test-net: True
nnue-pytorch-branch: linrock/nnue-pytorch/r21-more-wdl-skip
num-epochs: 1200
lr: 4.375e-4
gamma: 0.995
start-lambda: 1.0
end-lambda: 0.7
```

Experiment yaml configs converted to easy_train.sh commands with:
https://github.com/linrock/nnue-tools/blob/4339954/yaml_easy_train.py

Implementations based on Sopel's NNUE training & experimentation log:
https://docs.google.com/document/d/1gTlrr02qSNKiXNZ_SuO4-RjK4MXBiFlLE6jvNqqMkAY
- Experiment 336 - ranger21 https://github.com/Sopel97/nnue-pytorch/tree/experiment_336
- Experiment 351 - more WDL skipping

The version of the ranger21 optimizer used is:
https://github.com/lessw2020/Ranger21/blob/b507df6/ranger21/ranger21.py

The dataset is the exact same as in: official-stockfish#4782

Local elo at 25k nodes per move:
nn-epoch619.nnue : 6.2 +/- 4.2

Passed STC: https://tests.stockfishchess.org/tests/view/658a029779aa8af82b94fbe6
LLR: 2.93 (-2.94,2.94) <0.00,2.00> Total: 46528 W: 11985 L: 11650 D: 22893
Ptnml(0-2): 154, 5489, 11688, 5734, 199

Passed LTC: https://tests.stockfishchess.org/tests/view/658a448979aa8af82b95010f
LLR: 2.94 (-2.94,2.94) <0.50,2.50> Total: 265326 W: 66378 L: 65574 D: 133374
Ptnml(0-2): 153, 30175, 71254, 30877, 204

This was additionally tested with the latest DualNNUE and passed SPRTs:

Passed STC vs. official-stockfish#4919: https://tests.stockfishchess.org/tests/view/658bcd5c79aa8af82b951846
LLR: 2.93 (-2.94,2.94) <0.00,2.00> Total: 296128 W: 76273 L: 75554 D: 144301
Ptnml(0-2): 1223, 35768, 73617, 35979, 1477

Passed LTC vs. official-stockfish#4919: https://tests.stockfishchess.org/tests/view/658c988d79aa8af82b95240f
LLR: 2.95 (-2.94,2.94) <0.50,2.50> Total: 75618 W: 19085 L: 18680 D: 37853
Ptnml(0-2): 45, 8420, 20497, 8779, 68

bench 1049162
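The yaml block above is turned into an easy_train.py invocation by the linked yaml_easy_train.py script. Below is only a rough sketch of that idea, using the flag names visible in this PR's easy_train.py command; the rename of num-epochs to max_epoch and the pass-through handling of the other keys are assumptions, not the actual script.

```python
import shlex
import yaml  # pip install pyyaml

# Assumed yaml-key -> easy_train.py flag renames; all other keys pass through unchanged.
RENAMES = {"num-epochs": "max_epoch"}

def yaml_to_easy_train(path: str) -> str:
    """Flatten an experiment yaml config into a single easy_train.py command line."""
    with open(path) as f:
        config = yaml.safe_load(f)
    parts = ["python3", "easy_train.py"]
    for key, value in config.items():
        parts += [f"--{RENAMES.get(key, key)}", str(value)]
    return " ".join(shlex.quote(p) for p in parts)

# e.g. print(yaml_to_easy_train("2560-S7-Re-514G-ranger21-more-wdl-skip.yaml"))
```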
Disservin pushed a commit that referenced this pull request on Dec 30, 2023
Created by retraining the master big net `nn-0000000000a0.nnue` on the same dataset with the ranger21 optimizer and more WDL skipping at training time.

More WDL skipping is meant to increase lambda accuracy and train on fewer misevaluated positions where position scores are unlikely to correlate with game outcomes.

Inspired by:
- repeated reports in discord #events-discuss about SF misplaying due to wrong endgame evals, possibly due to Leela's endgame weaknesses reflected in training data
- an attempt to reduce the skewed dataset piece count distribution, where there are many more positions with fewer than 16 pieces, since the target piece count distribution in the trainer is symmetric around 16

The faster convergence seen with ranger21 is meant to:
- prune experiment ideas more quickly, since fewer epochs are needed to reach elo maxima
- research potential trainings faster by shortening each run

```yaml
experiment-name: 2560-S7-Re-514G-ranger21-more-wdl-skip
training-dataset: /data/S6-514G.binpack
early-fen-skipping: 28
start-from-engine-test-net: True
nnue-pytorch-branch: linrock/nnue-pytorch/r21-more-wdl-skip
num-epochs: 1200
lr: 4.375e-4
gamma: 0.995
start-lambda: 1.0
end-lambda: 0.7
```

Experiment yaml configs converted to easy_train.sh commands with:
https://github.com/linrock/nnue-tools/blob/4339954/yaml_easy_train.py

Implementations based on Sopel's NNUE training & experimentation log:
https://docs.google.com/document/d/1gTlrr02qSNKiXNZ_SuO4-RjK4MXBiFlLE6jvNqqMkAY
- Experiment 336 - ranger21 https://github.com/Sopel97/nnue-pytorch/tree/experiment_336
- Experiment 351 - more WDL skipping

The version of the ranger21 optimizer used is:
https://github.com/lessw2020/Ranger21/blob/b507df6/ranger21/ranger21.py

The dataset is the exact same as in: #4782

Local elo at 25k nodes per move:
nn-epoch619.nnue : 6.2 +/- 4.2

Passed STC: https://tests.stockfishchess.org/tests/view/658a029779aa8af82b94fbe6
LLR: 2.93 (-2.94,2.94) <0.00,2.00> Total: 46528 W: 11985 L: 11650 D: 22893
Ptnml(0-2): 154, 5489, 11688, 5734, 199

Passed LTC: https://tests.stockfishchess.org/tests/view/658a448979aa8af82b95010f
LLR: 2.94 (-2.94,2.94) <0.50,2.50> Total: 265326 W: 66378 L: 65574 D: 133374
Ptnml(0-2): 153, 30175, 71254, 30877, 204

This was additionally tested with the latest DualNNUE and passed SPRTs:

Passed STC vs. #4919: https://tests.stockfishchess.org/tests/view/658bcd5c79aa8af82b951846
LLR: 2.93 (-2.94,2.94) <0.00,2.00> Total: 296128 W: 76273 L: 75554 D: 144301
Ptnml(0-2): 1223, 35768, 73617, 35979, 1477

Passed LTC vs. #4919: https://tests.stockfishchess.org/tests/view/658c988d79aa8af82b95240f
LLR: 2.95 (-2.94,2.94) <0.50,2.50> Total: 75618 W: 19085 L: 18680 D: 37853
Ptnml(0-2): 45, 8420, 20497, 8779, 68

closes #4942

Bench: 1304666
linrock added a commit to linrock/Stockfish that referenced this pull request on Jan 8, 2024
Created by retraining the previous main net nn-b1e55edbea57.nnue with:
- some of the same options as before: ranger21 optimizer, more WDL skipping
- adding T80 aug filter-v6, sep, and oct 2023 data to the previous best dataset
- increasing training loss for positions where predicted win rates were higher than estimated match results from training data position scores

```yaml
experiment-name: 2560--S8-r21-more-wdl-skip-10p-more-loss-high-q-sk28
training-dataset: # official-stockfish#4782
  - /data/S5-1ee1aba5ed.binpack
  - /data/test80-aug2023-2tb7p.v6.min.binpack
  - /data/test80-sep2023-2tb7p.binpack
  - /data/test80-oct2023-2tb7p.binpack
early-fen-skipping: 28
start-from-engine-test-net: True
nnue-pytorch-branch: linrock/nnue-pytorch/r21-more-wdl-skip-10p-more-loss-high-q
num-epochs: 1000
lr: 4.375e-4
gamma: 0.995
start-lambda: 1.0
end-lambda: 0.7
```

Training data can be found at: https://robotmoon.com/nnue-training-data/

Training loss was increased by 10% for positions where predicted win rates were higher than suggested by the win rate model based on the training data, by multiplying with: ((qf > pt) * 0.1 + 1)

This was a variant of experiments found to be promising from Sopel's NNUE training & experimentation log:
https://docs.google.com/document/d/1gTlrr02qSNKiXNZ_SuO4-RjK4MXBiFlLE6jvNqqMkAY
Experiment 302 - increase loss when prediction too high, vondele's idea
Experiment 309 - increase loss when prediction too high, normalize in a batch

Passed STC: https://tests.stockfishchess.org/tests/view/6597a21c79aa8af82b95fd5c
LLR: 2.93 (-2.94,2.94) <0.00,2.00> Total: 148320 W: 37960 L: 37475 D: 72885
Ptnml(0-2): 542, 17565, 37383, 18206, 464

Passed LTC: https://tests.stockfishchess.org/tests/view/659834a679aa8af82b960845
LLR: 2.94 (-2.94,2.94) <0.50,2.50> Total: 55188 W: 13955 L: 13592 D: 27641
Ptnml(0-2): 34, 6162, 14834, 6535, 29

Bench: 1219824
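A minimal sketch of the loss multiplier described above, written with the tensor names used in the commit message (qf for the predicted win rate, pt for the target derived from the data). It illustrates the formula ((qf > pt) * 0.1 + 1), not the actual code in the linrock/nnue-pytorch branch.

```python
import torch

def upweight_overconfident(loss: torch.Tensor, qf: torch.Tensor, pt: torch.Tensor,
                           extra: float = 0.1) -> torch.Tensor:
    """Scale per-position loss by (1 + extra) wherever the predicted win rate
    exceeds the target, i.e. multiply by ((qf > pt) * extra + 1)."""
    return loss * ((qf > pt).to(loss.dtype) * extra + 1.0)

# loss = upweight_overconfident(loss, qf, pt)  # applied before reducing to the batch mean
```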
linrock added a commit to linrock/Stockfish that referenced this pull request on Jan 8, 2024
Created by retraining the previous main net nn-b1e55edbea57.nnue with:
- some of the same options as before: ranger21 optimizer, more WDL skipping
- adding T80 aug filter-v6, sep, and oct 2023 data to the previous best dataset
- increasing training loss for positions where predicted win rates were higher than estimated match results from training data position scores

```yaml
experiment-name: 2560--S8-r21-more-wdl-skip-10p-more-loss-high-q-sk28
training-dataset: # official-stockfish#4782
  - /data/S5-1ee1aba5ed.binpack
  - /data/test80-aug2023-2tb7p.v6.min.binpack
  - /data/test80-sep2023-2tb7p.binpack
  - /data/test80-oct2023-2tb7p.binpack
early-fen-skipping: 28
start-from-engine-test-net: True
nnue-pytorch-branch: linrock/nnue-pytorch/r21-more-wdl-skip-10p-more-loss-high-q
num-epochs: 1000
lr: 4.375e-4
gamma: 0.995
start-lambda: 1.0
end-lambda: 0.7
```

Training data can be found at: https://robotmoon.com/nnue-training-data/

Training loss was increased by 10% for positions where predicted win rates were higher than suggested by the win rate model based on the training data, by multiplying with: ((qf > pt) * 0.1 + 1).

This was a variant of experiments from Sopel's NNUE training & experimentation log:
https://docs.google.com/document/d/1gTlrr02qSNKiXNZ_SuO4-RjK4MXBiFlLE6jvNqqMkAY
Experiment 302 - increase loss when prediction too high, vondele's idea
Experiment 309 - increase loss when prediction too high, normalize in a batch

Passed STC: https://tests.stockfishchess.org/tests/view/6597a21c79aa8af82b95fd5c
LLR: 2.93 (-2.94,2.94) <0.00,2.00> Total: 148320 W: 37960 L: 37475 D: 72885
Ptnml(0-2): 542, 17565, 37383, 18206, 464

Passed LTC: https://tests.stockfishchess.org/tests/view/659834a679aa8af82b960845
LLR: 2.94 (-2.94,2.94) <0.50,2.50> Total: 55188 W: 13955 L: 13592 D: 27641
Ptnml(0-2): 34, 6162, 14834, 6535, 29

Bench: 1219824
linrock added a commit to linrock/Stockfish that referenced this pull request on Jan 8, 2024
Created by retraining the previous main net nn-b1e55edbea57.nnue with:
- some of the same options as before: ranger21 optimizer, more WDL skipping
- adding T80 aug filter-v6, sep, and oct 2023 data to the previous best dataset
- increasing training loss for positions where predicted win rates were higher than estimated match results from training data position scores

```yaml
experiment-name: 2560--S8-r21-more-wdl-skip-10p-more-loss-high-q-sk28
training-dataset: # official-stockfish#4782
  - /data/S6-1ee1aba5ed.binpack
  - /data/test80-aug2023-2tb7p.v6.min.binpack
  - /data/test80-sep2023-2tb7p.binpack
  - /data/test80-oct2023-2tb7p.binpack
early-fen-skipping: 28
start-from-engine-test-net: True
nnue-pytorch-branch: linrock/nnue-pytorch/r21-more-wdl-skip-10p-more-loss-high-q
num-epochs: 1000
lr: 4.375e-4
gamma: 0.995
start-lambda: 1.0
end-lambda: 0.7
```

Training data can be found at: https://robotmoon.com/nnue-training-data/

Training loss was increased by 10% for positions where predicted win rates were higher than suggested by the win rate model based on the training data, by multiplying with: ((qf > pt) * 0.1 + 1).

This was a variant of experiments from Sopel's NNUE training & experimentation log:
https://docs.google.com/document/d/1gTlrr02qSNKiXNZ_SuO4-RjK4MXBiFlLE6jvNqqMkAY
Experiment 302 - increase loss when prediction too high, vondele's idea
Experiment 309 - increase loss when prediction too high, normalize in a batch

Passed STC: https://tests.stockfishchess.org/tests/view/6597a21c79aa8af82b95fd5c
LLR: 2.93 (-2.94,2.94) <0.00,2.00> Total: 148320 W: 37960 L: 37475 D: 72885
Ptnml(0-2): 542, 17565, 37383, 18206, 464

Passed LTC: https://tests.stockfishchess.org/tests/view/659834a679aa8af82b960845
LLR: 2.94 (-2.94,2.94) <0.50,2.50> Total: 55188 W: 13955 L: 13592 D: 27641
Ptnml(0-2): 34, 6162, 14834, 6535, 29

Bench: 1219824
Disservin pushed a commit that referenced this pull request on Jan 8, 2024
Created by retraining the previous main net nn-b1e55edbea57.nnue with:
- some of the same options as before: ranger21 optimizer, more WDL skipping
- adding T80 aug filter-v6, sep, and oct 2023 data to the previous best dataset
- increasing training loss for positions where predicted win rates were higher than estimated match results from training data position scores

```yaml
experiment-name: 2560--S8-r21-more-wdl-skip-10p-more-loss-high-q-sk28
training-dataset: # #4782
  - /data/S6-1ee1aba5ed.binpack
  - /data/test80-aug2023-2tb7p.v6.min.binpack
  - /data/test80-sep2023-2tb7p.binpack
  - /data/test80-oct2023-2tb7p.binpack
early-fen-skipping: 28
start-from-engine-test-net: True
nnue-pytorch-branch: linrock/nnue-pytorch/r21-more-wdl-skip-10p-more-loss-high-q
num-epochs: 1000
lr: 4.375e-4
gamma: 0.995
start-lambda: 1.0
end-lambda: 0.7
```

Training data can be found at: https://robotmoon.com/nnue-training-data/

Training loss was increased by 10% for positions where predicted win rates were higher than suggested by the win rate model based on the training data, by multiplying with: ((qf > pt) * 0.1 + 1).

This was a variant of experiments from Sopel's NNUE training & experimentation log:
https://docs.google.com/document/d/1gTlrr02qSNKiXNZ_SuO4-RjK4MXBiFlLE6jvNqqMkAY
Experiment 302 - increase loss when prediction too high, vondele's idea
Experiment 309 - increase loss when prediction too high, normalize in a batch

Passed STC: https://tests.stockfishchess.org/tests/view/6597a21c79aa8af82b95fd5c
LLR: 2.93 (-2.94,2.94) <0.00,2.00> Total: 148320 W: 37960 L: 37475 D: 72885
Ptnml(0-2): 542, 17565, 37383, 18206, 464

Passed LTC: https://tests.stockfishchess.org/tests/view/659834a679aa8af82b960845
LLR: 2.94 (-2.94,2.94) <0.50,2.50> Total: 55188 W: 13955 L: 13592 D: 27641
Ptnml(0-2): 34, 6162, 14834, 6535, 29

closes #4972

Bench: 1219824
Disservin pushed a commit to Disservin/Stockfish that referenced this pull request on Jan 8, 2024

rn5f107s2 pushed a commit to rn5f107s2/Stockfish that referenced this pull request on Jan 14, 2024

rn5f107s2 pushed a commit to rn5f107s2/Stockfish that referenced this pull request on Jan 14, 2024

windfishballad pushed a commit to windfishballad/Stockfish that referenced this pull request on Jan 23, 2024

windfishballad pushed a commit to windfishballad/Stockfish that referenced this pull request on Jan 23, 2024
linrock added a commit to linrock/Stockfish that referenced this pull request on Feb 16, 2024
Disservin pushed a commit that referenced this pull request on Feb 17, 2024
Created by retraining the previous main net `nn-baff1edbea57.nnue` with:
- some of the same options as before: ranger21, more WDL skipping
- the addition of T80 nov+dec 2023 data
- increasing loss by 15% when prediction is too high, up from 10%
- use of torch.compile to speed up training by over 25%

```yaml
experiment-name: 2560--S9-514G-T80-augtodec2023-more-wdl-skip-15p-more-loss-high-q-sk28
training-dataset:
  # #4782
  - /data/S6-514G-1ee1aba5ed.binpack
  - /data/test80-aug2023-2tb7p.v6.min.binpack
  - /data/test80-sep2023-2tb7p.binpack
  - /data/test80-oct2023-2tb7p.binpack
  - /data/test80-nov2023-2tb7p.binpack
  - /data/test80-dec2023-2tb7p.binpack
early-fen-skipping: 28
start-from-engine-test-net: True
nnue-pytorch-branch: linrock/nnue-pytorch/r21-more-wdl-skip-15p-more-loss-high-q-torch-compile
num-epochs: 1000
lr: 4.375e-4
gamma: 0.995
start-lambda: 1.0
end-lambda: 0.7
```

Epoch 819 trained with the above config led to this PR.

Use of torch.compile decorators in nnue-pytorch model.py was found to speed up training by at least 25% on Ampere gpus when using recent pytorch compiled with cuda 12:
https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch

See recent main net PRs for more info on:
- ranger21 and more WDL skipping: #4942
- increasing loss when Q is too high: #4972

Training data can be found at:
https://robotmoon.com/nnue-training-data/

Passed STC:
https://tests.stockfishchess.org/tests/view/65cd76151d8e83c78bfd2f52
LLR: 2.98 (-2.94,2.94) <0.00,2.00>
Total: 78336 W: 20504 L: 20115 D: 37717
Ptnml(0-2): 317, 9225, 19721, 9562, 343

Passed LTC:
https://tests.stockfishchess.org/tests/view/65ce5be61d8e83c78bfd43e9
LLR: 2.95 (-2.94,2.94) <0.50,2.50>
Total: 41016 W: 10492 L: 10159 D: 20365
Ptnml(0-2): 22, 4533, 11071, 4854, 28

closes #5056

Bench: 1351997
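For readers unfamiliar with torch.compile: it is a PyTorch 2.x API that can be applied as a decorator so a hot function gets JIT-compiled on first call. Below is a minimal sketch; the tiny model and loss are placeholder assumptions, and the real decorator placement lives in the linrock/nnue-pytorch branch named in the yaml above.

```python
# Sketch only: torch.compile used as a decorator (PyTorch 2.x).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(768, 256), nn.ReLU(), nn.Linear(256, 1))

# Decorating a hot function compiles it with TorchDynamo/Inductor the first time
# it runs; on Ampere GPUs with a recent PyTorch built against CUDA 12, changes of
# this kind are where the reported >25% training speedup comes from.
@torch.compile
def training_step(batch: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    pred = model(batch)
    return torch.nn.functional.mse_loss(pred, target)

if __name__ == "__main__":
    x, y = torch.randn(64, 768), torch.randn(64, 1)
    print(training_step(x, y).item())
```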
xu-shawn pushed a commit to xu-shawn/Stockfish that referenced this pull request on Feb 17, 2024
xu-shawn pushed a commit to xu-shawn/Stockfish that referenced this pull request on Feb 19, 2024
xu-shawn pushed a commit to xu-shawn/Stockfish that referenced this pull request on Feb 19, 2024
xu-shawn pushed a commit to xu-shawn/Stockfish that referenced this pull request on Feb 19, 2024
TierynnB pushed a commit to TierynnB/Stockfish that referenced this pull request on Feb 22, 2024
TierynnB added a commit to TierynnB/Stockfish that referenced this pull request on Feb 23, 2024
commit 524083a (Merge: 8a0206f 4a5ba40)
Author: Tierynn Byrnes <t.byrnes47@gmail.com>
Date: Fri Feb 23 08:42:30 2024 +1000
Merge branch 'TM_Change_2' of https://github.com/TierynnB/Stockfish into TM_Change_2

commit 8a0206f
Author: Tierynn Byrnes <t.byrnes47@gmail.com>
Date: Mon Feb 19 07:13:26 2024 +1000
Use current time instead of '1' for the timeLeft formula. Make timeLeft a double; TimePoint seemed unnecessary since it was always casting back to double anyway. Fixed comments.

Squashed commits:

commit 4a5ba40 (Merge: ce952bf 676a1d7)
Author: Tierynn Byrnes <t.byrnes47@gmail.com>
Date: Fri Feb 23 08:01:21 2024 +1000
Merge branch 'TM_Change_2' of https://github.com/TierynnB/Stockfish into TM_Change_2

commit ce952bf
Author: cj5716 <125858804+cj5716@users.noreply.github.com>
Date: Tue Feb 13 17:46:37 2024 +0800
Simplify PV node reduction: reduce less on PV nodes even with an upperbound TT entry.
Passed STC: https://tests.stockfishchess.org/tests/view/65cb3a861d8e83c78bfd0497
LLR: 2.96 (-2.94,2.94) <-1.75,0.25>
Total: 118752 W: 30441 L: 30307 D: 58004
Ptnml(0-2): 476, 14179, 29921, 14335, 465
Passed LTC: https://tests.stockfishchess.org/tests/view/65cd3b951d8e83c78bfd2b0d
LLR: 2.95 (-2.94,2.94) <-1.75,0.25>
Total: 155058 W: 38549 L: 38464 D: 78045
Ptnml(0-2): 85, 17521, 42219, 17632, 72
closes official-stockfish#5057
Bench: 1303971

commit 4acf810
Author: Linmiao Xu <linmiao.xu@gmail.com>
Date: Tue Feb 6 11:21:15 2024 -0500
Update default main net to nn-b1a57edbea57.nnue
closes official-stockfish#5056
Bench: 1351997

commit 40c6cdf
Author: cj5716 <125858804+cj5716@users.noreply.github.com>
Date: Tue Feb 13 17:50:16 2024 +0800
Simplify TT PV reduction. This also removes some incorrect fail-high logic.
Passed STC: https://tests.stockfishchess.org/tests/view/65cb3b641d8e83c78bfd04a9
LLR: 2.94 (-2.94,2.94) <-1.75,0.25>
Total: 87968 W: 22634 L: 22468 D: 42866
Ptnml(0-2): 315, 10436, 22323, 10588, 322
Passed LTC: https://tests.stockfishchess.org/tests/view/65cccee21d8e83c78bfd222c
LLR: 2.95 (-2.94,2.94) <-1.75,0.25>
Total: 70794 W: 17846 L: 17672 D: 35276
Ptnml(0-2): 44, 7980, 19189, 8126, 58
closes official-stockfish#5055
Bench: 1474424

commit 9299d01
Author: Gahtan Nahdi <155860115+gahtan-syarif@users.noreply.github.com>
Date: Sat Feb 10 03:51:05 2024 +0700
Remove penalty for quiet ttMove that fails low.
Passed STC non-reg: https://tests.stockfishchess.org/tests/view/65c691a7c865510db0286e6e
LLR: 2.95 (-2.94,2.94) <-1.75,0.25>
Total: 234336 W: 60258 L: 60255 D: 113823
Ptnml(0-2): 966, 28141, 58918, 28210, 933
Passed LTC non-reg: https://tests.stockfishchess.org/tests/view/65c8d0d31d8e83c78bfcd4a6
LLR: 2.95 (-2.94,2.94) <-1.75,0.25>
Total: 235206 W: 59134 L: 59132 D: 116940
Ptnml(0-2): 135, 26908, 63517, 26906, 137
official-stockfish#5054
Bench: 1287996

commit 676a1d7 (Merge: 7d0cd7b 3c3f88b)
Author: Tierynn Byrnes <t.byrnes47@gmail.com>
Date: Fri Feb 23 07:58:38 2024 +1000
Merge branch 'TM_Change_2' of https://github.com/TierynnB/Stockfish into TM_Change_2

commit 7d0cd7b
Author: Tierynn Byrnes <t.byrnes47@gmail.com>
Date: Mon Feb 19 07:13:26 2024 +1000
(parent 8b67b7e; author Tierynn Byrnes <t.byrnes47@gmail.com> 1708290806 +1000; committer Tierynn Byrnes <t.byrnes47@gmail.com> 1708638981 +1000)
Use current time instead of '1' for the timeLeft formula. Make timeLeft a double; TimePoint seemed unnecessary since it was always casting back to double anyway.

commit 3c3f88b (Merge: 76c50a0 61e8083)
Author: Tierynn Byrnes <t.byrnes47@gmail.com>
Date: Fri Feb 23 07:54:46 2024 +1000
Merge branch 'TM_Change_2' of https://github.com/TierynnB/Stockfish into TM_Change_2

commit 76c50a0
Author: Tierynn Byrnes <t.byrnes47@gmail.com>
Date: Mon Feb 19 07:13:26 2024 +1000
Use current time instead of '1' for the timeLeft formula. Make timeLeft a double; TimePoint seemed unnecessary since it was always casting back to double anyway. Fixed comments.

commit 61e8083 (Merge: 5cf3f49 8afec41)
Author: Tierynn Byrnes <t.byrnes47@gmail.com>
Date: Thu Feb 22 19:44:21 2024 +1000
Merge branch 'TM_Change_2' of https://github.com/TierynnB/Stockfish into TM_Change_2

commit 5cf3f49
Author: Tierynn Byrnes <t.byrnes47@gmail.com>
Date: Mon Feb 19 07:13:26 2024 +1000
Use current time instead of '1' for the timeLeft formula. Make timeLeft a double; TimePoint seemed unnecessary since it was always casting back to double anyway.

commit 8afec41
Author: Tierynn Byrnes <t.byrnes47@gmail.com>
Date: Mon Feb 19 07:50:30 2024 +1000
Fixed comments.

commit de4a3c4
Author: Tierynn Byrnes <t.byrnes47@gmail.com>
Date: Mon Feb 19 07:32:09 2024 +1000
Make timeLeft a double; TimePoint seemed unnecessary since it was always casting back to double anyway.

commit e1f6b87 (Merge: 8b67b7e fc41f64)
Author: Lemmy <10430540+TierynnB@users.noreply.github.com>
Date: Mon Feb 19 07:14:27 2024 +1000
Merge branch 'official-stockfish:master' into TM_Change_2

commit 8b67b7e
Author: Tierynn Byrnes <t.byrnes47@gmail.com>
Date: Mon Feb 19 07:13:26 2024 +1000
Use current time instead of '1' for the timeLeft formula.
linrock added a commit to linrock/Stockfish that referenced this pull request on Mar 5, 2024
linrock added a commit to linrock/Stockfish that referenced this pull request on Mar 5, 2024
Disservin pushed a commit that referenced this pull request on Mar 7, 2024
Created by retraining the previous main net `nn-b1a57edbea57.nnue` with:
- some of the same options as before:
  - ranger21, more WDL skipping, 15% more loss when Q is too high
- removal of the huge 514G pre-interleaved binpack
- removal of SF-generated dfrc data (dfrc99-16tb7p-filt-v2.min.binpack)
- interleaving many binpacks at training time
- training with some bestmove capture positions where SEE < 0
- increased usage of torch.compile to speed up training by up to 40%

```yaml
experiment-name: 2560--S10-dfrc0-to-dec2023-skip-more-wdl-15p-more-loss-high-q-see-ge0-sk28
nnue-pytorch-branch: linrock/nnue-pytorch/r21-more-wdl-skip-15p-more-loss-high-q-skip-see-ge0-torch-compile-more
start-from-engine-test-net: True
early-fen-skipping: 28
training-dataset:
  # similar, not the exact same as:
  # #4635
  - /data/S5-5af/leela96.v2.min.binpack
  - /data/S5-5af/test60-2021-11-12-novdec-12tb7p.v6-dd.min.binpack
  - /data/S5-5af/test77-2021-12-dec-16tb7p.v6-dd.min.binpack
  - /data/S5-5af/test78-2022-01-to-05-jantomay-16tb7p.v6-dd.min.binpack
  - /data/S5-5af/test78-2022-06-to-09-juntosep-16tb7p.v6-dd.min.binpack
  - /data/S5-5af/test79-2022-04-apr-16tb7p.v6-dd.min.binpack
  - /data/S5-5af/test79-2022-05-may-16tb7p.v6-dd.min.binpack
  - /data/S5-5af/test80-2022-06-jun-16tb7p.v6-dd.min.unmin.binpack
  - /data/S5-5af/test80-2022-07-jul-16tb7p.v6-dd.min.binpack
  - /data/S5-5af/test80-2022-08-aug-16tb7p.v6-dd.min.binpack
  - /data/S5-5af/test80-2022-09-sep-16tb7p.v6-dd.min.unmin.binpack
  - /data/S5-5af/test80-2022-10-oct-16tb7p.v6-dd.min.binpack
  - /data/S5-5af/test80-2022-11-nov-16tb7p.v6-dd.min.binpack
  - /data/S5-5af/test80-2023-01-jan-16tb7p.v6-sk20.min.binpack
  - /data/S5-5af/test80-2023-02-feb-16tb7p.v6-dd.min.binpack
  - /data/S5-5af/test80-2023-03-mar-2tb7p.min.unmin.binpack
  - /data/S5-5af/test80-2023-04-apr-2tb7p.binpack
  - /data/S5-5af/test80-2023-05-may-2tb7p.min.dd.binpack
  # #4782
  - /data/S6-1ee1aba5ed/test80-2023-06-jun-2tb7p.binpack
  - /data/S6-1ee1aba5ed/test80-2023-07-jul-2tb7p.min.binpack
  # #4972
  - /data/S8-baff1edbea57/test80-2023-08-aug-2tb7p.v6.min.binpack
  - /data/S8-baff1edbea57/test80-2023-09-sep-2tb7p.binpack
  - /data/S8-baff1edbea57/test80-2023-10-oct-2tb7p.binpack
  # #5056
  - /data/S9-b1a57edbea57/test80-2023-11-nov-2tb7p.binpack
  - /data/S9-b1a57edbea57/test80-2023-12-dec-2tb7p.binpack
num-epochs: 800
lr: 4.375e-4
gamma: 0.995
start-lambda: 1.0
end-lambda: 0.7
```

This particular net was reached at epoch 759.

Use of more torch.compile decorators in nnue-pytorch model.py than in the previous main net training run sped up training by up to 40% on Tesla gpus when using recent pytorch compiled with cuda 12:
https://github.com/linrock/nnue-tools/blob/7fb9831/Dockerfile

Skipping positions with bestmove captures where static exchange evaluation is >= 0 is based on the implementation from Sopel's NNUE training & experimentation log:
https://docs.google.com/document/d/1gTlrr02qSNKiXNZ_SuO4-RjK4MXBiFlLE6jvNqqMkAY
- Experiment 293 - only skip captures with see>=0

Positions with bestmove captures where score == 0 are always skipped for compatibility with minimized binpacks, since the original minimizer sets scores to 0 for slight improvements in compression.

The trainer branch used was:
https://github.com/linrock/nnue-pytorch/tree/r21-more-wdl-skip-15p-more-loss-high-q-skip-see-ge0-torch-compile-more

Binpacks were renamed to be sorted chronologically by default when sorted by name. The binpack data are otherwise the same as binpacks with similar names in the prior naming convention.

Training data can be found at:
https://robotmoon.com/nnue-training-data/

Passed STC:
https://tests.stockfishchess.org/tests/view/65e3ddd1f2ef6c733362ae5c
LLR: 2.92 (-2.94,2.94) <0.00,2.00>
Total: 149792 W: 39153 L: 38661 D: 71978
Ptnml(0-2): 675, 17586, 37905, 18032, 698

Passed LTC:
https://tests.stockfishchess.org/tests/view/65e4d91c416ecd92c162a69b
LLR: 2.94 (-2.94,2.94) <0.50,2.50>
Total: 64416 W: 16517 L: 16135 D: 31764
Ptnml(0-2): 38, 7218, 17313, 7602, 37

closes #5090

Bench: 1373183
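Below is a minimal sketch of the two skipping rules described above (always skip bestmove captures with score == 0; otherwise skip bestmove captures with SEE >= 0), written as a standalone filter. The Position fields are assumed names for illustration; the actual logic lives in the trainer branch linked above.

```python
# Sketch only: data-loader filter implementing the skipping rules described above.
from dataclasses import dataclass

@dataclass
class Position:
    score: int                 # search score stored in the binpack
    bestmove_is_capture: bool  # whether the recorded best move is a capture
    bestmove_see: int          # static exchange evaluation of that move

def skip_position(pos: Position) -> bool:
    if pos.bestmove_is_capture:
        # Always skip captures whose score was zeroed by the original minimizer,
        # so minimized and unminimized binpacks are treated consistently.
        if pos.score == 0:
            return True
        # Skip equal/winning captures (SEE >= 0); keep losing captures (SEE < 0),
        # which this training run now trains on.
        if pos.bestmove_see >= 0:
            return True
    return False

# Example: a losing capture (SEE < 0) with a nonzero score is kept for training.
print(skip_position(Position(score=35, bestmove_is_capture=True, bestmove_see=-150)))  # False
```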
linrock added a commit to linrock/Stockfish that referenced this pull request on May 17, 2024
linrock added a commit to linrock/Stockfish that referenced this pull request on May 17, 2024
linrock added a commit to linrock/Stockfish that referenced this pull request on May 17, 2024
vondele pushed a commit to vondele/Stockfish that referenced this pull request on May 18, 2024
Created by first retraining the spsa-tuned main net `nn-ae6a388e4a1a.nnue` with:
- using v6-dd data without bestmove captures removed
- addition of T80 mar2024 data
- increasing loss by 20% when Q is too high
- torch.compile changes for marginal training speed gains

And then SPSA tuning weights of epoch 899 following methods described in:
official-stockfish#5149

This net was reached at 92k out of 120k steps in this 70+0.7 th 7 SPSA tuning run:
https://tests.stockfishchess.org/tests/view/66413b7df9f4e8fc783c9bbb

Thanks to @Viren6 for suggesting usage of:
- c value 4 for the weights
- c value 128 for the biases

Scripts for automating applying fishtest spsa params to exporting tuned .nnue are in:
https://github.com/linrock/nnue-tools/tree/master/spsa

Before spsa tuning, epoch 899 was nn-f85738aefa84.nnue
https://tests.stockfishchess.org/tests/view/663e5c893a2f9702074bc167

After initially training with max-epoch 800, training was resumed with max-epoch 1000.

```
experiment-name: 3072--S11--more-data-v6-dd-t80-mar2024--see-ge0-20p-more-loss-high-q-sk28-l8
nnue-pytorch-branch: linrock/nnue-pytorch/3072-r21-skip-more-wdl-see-ge0-20p-more-loss-high-q-torch-compile-more
start-from-engine-test-net: False
start-from-model: /data/config/apr2024-3072/nn-ae6a388e4a1a.nnue
early-fen-skipping: 28
training-dataset:
  /data/S11-mar2024/:
    - leela96.v2.min.binpack
    - test60-2021-11-12-novdec-12tb7p.v6-dd.min.binpack
    - test78-2022-01-to-05-jantomay-16tb7p.v6-dd.min.binpack
    - test80-2022-06-jun-16tb7p.v6-dd.min.binpack
    - test80-2022-08-aug-16tb7p.v6-dd.min.binpack
    - test80-2022-09-sep-16tb7p.v6-dd.min.binpack
    - test80-2023-01-jan-16tb7p.v6-sk20.min.binpack
    - test80-2023-02-feb-16tb7p.v6-sk20.min.binpack
    - test80-2023-03-mar-2tb7p.v6-sk16.min.binpack
    - test80-2023-04-apr-2tb7p.v6-sk16.min.binpack
    - test80-2023-05-may-2tb7p.v6.min.binpack
    # official-stockfish#4782
    - test80-2023-06-jun-2tb7p.binpack
    - test80-2023-07-jul-2tb7p.binpack
    # official-stockfish#4972
    - test80-2023-08-aug-2tb7p.v6.min.binpack
    - test80-2023-09-sep-2tb7p.binpack
    - test80-2023-10-oct-2tb7p.binpack
    # S9 new data: official-stockfish#5056
    - test80-2023-11-nov-2tb7p.binpack
    - test80-2023-12-dec-2tb7p.binpack
    # S10 new data: official-stockfish#5149
    - test80-2024-01-jan-2tb7p.binpack
    - test80-2024-02-feb-2tb7p.binpack
    # S11 new data
    - test80-2024-03-mar-2tb7p.binpack
  /data/filt-v6-dd/:
    - test77-dec2021-16tb7p-filter-v6-dd.binpack
    - test78-juntosep2022-16tb7p-filter-v6-dd.binpack
    - test79-apr2022-16tb7p-filter-v6-dd.binpack
    - test79-may2022-16tb7p-filter-v6-dd.binpack
    - test80-jul2022-16tb7p-filter-v6-dd.binpack
    - test80-oct2022-16tb7p-filter-v6-dd.binpack
    - test80-nov2022-16tb7p-filter-v6-dd.binpack
num-epochs: 1000
lr: 4.375e-4
gamma: 0.995
start-lambda: 0.8
end-lambda: 0.7
```

Training data can be found at:
https://robotmoon.com/nnue-training-data/

Local elo at 25k nodes per move:
nn-epoch899.nnue : 4.6 +/- 1.4

Passed STC:
https://tests.stockfishchess.org/tests/view/6645454893ce6da3e93b31ae
LLR: 2.95 (-2.94,2.94) <0.00,2.00>
Total: 95232 W: 24598 L: 24194 D: 46440
Ptnml(0-2): 294, 11215, 24180, 11647, 280

Passed LTC:
https://tests.stockfishchess.org/tests/view/6645522d93ce6da3e93b31df
LLR: 2.95 (-2.94,2.94) <0.50,2.50>
Total: 320544 W: 81432 L: 80524 D: 158588
Ptnml(0-2): 164, 35659, 87696, 36611, 142

closes official-stockfish#5254

bench 1995552
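For context on the SPSA step itself, here is a generic sketch showing how per-parameter c values (such as the 4 for weights and 128 for biases suggested above) set the size of each perturbation. This illustrates plain SPSA under assumed gain schedules and a toy loss; it is not fishtest's tuner or the linked spsa scripts.

```python
# Sketch only: one SPSA iteration with per-parameter perturbation sizes.
import numpy as np

def spsa_step(theta, c, a, k, loss, rng):
    """One SPSA iteration; theta, c and a are arrays of the same shape."""
    delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Rademacher perturbation signs
    ck = c / (k + 1) ** 0.101                          # per-parameter perturbation size
    ak = a / (k + 1) ** 0.602                          # per-parameter step size
    # Gradient estimate from only two loss evaluations, however many parameters there are.
    g_hat = (loss(theta + ck * delta) - loss(theta - ck * delta)) / (2.0 * ck * delta)
    return theta - ak * g_hat

rng = np.random.default_rng(0)
theta = np.zeros(8)                     # current parameter values (toy example)
c = np.array([4.0] * 6 + [128.0] * 2)   # e.g. six "weights" and two "biases"
a = np.full_like(theta, 0.01)           # toy step sizes; an assumption
toy_loss = lambda t: float(np.sum((t - 1.0) ** 2))
theta = spsa_step(theta, c, a, k=0, loss=toy_loss, rng=rng)
print(theta)
```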