Based on vondele's deletepsqt branch:
https://github.com/vondele/Stockfish/commit/369f5b051
This huge simplification uses a weighted material difference instead of
the positional piece square tables (psqt) in the semi-classical complexity
calculation. The weights were tuned with SPSA at 45+0.45 using:
int pawnMult = 100;
int knightMult = 325;
int bishopMult = 350;
int rookMult = 500;
int queenMult = 900;
TUNE(SetRange(0, 200), pawnMult);
TUNE(SetRange(0, 650), knightMult);
TUNE(SetRange(0, 700), bishopMult);
TUNE(SetRange(200, 800), rookMult);
TUNE(SetRange(600, 1200), queenMult);
The values obtained via this tuning session were for a model where
the psqt replacement formula was always from the point of view of White,
even if the side to move was Black. We re-used the same values for an
implementation with a psqt replacement from the point of view of the side
to move, testing the result both on our standard book, whose positions have
a strong White bias, and on an alternate book with positions that have a
strong Black bias.
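As an illustration, the replacement term boils down to a weighted material count from the side to move's point of view. The sketch below is not the exact patch code; it assumes Stockfish's Position::count<>() accessor and simply reuses the starting multipliers listed above:
```
// Minimal sketch, not the exact patch: weighted material difference from the
// point of view of the side to move, replacing the endgame PSQT term in the
// complexity calculation. Multipliers are the SPSA starting values above.
Value material_diff(const Position& pos) {
    Color us = pos.side_to_move(), them = ~us;
    return Value(  100 * (pos.count<PAWN>(us)   - pos.count<PAWN>(them))
                 + 325 * (pos.count<KNIGHT>(us) - pos.count<KNIGHT>(them))
                 + 350 * (pos.count<BISHOP>(us) - pos.count<BISHOP>(them))
                 + 500 * (pos.count<ROOK>(us)   - pos.count<ROOK>(them))
                 + 900 * (pos.count<QUEEN>(us)  - pos.count<QUEEN>(them)));
}
```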
We note that with this patch the last use of the venerable "Score" type
disappears from the Stockfish codebase (the Score type was used in the
classical evaluation to get a tapered eval, interpolating values smoothly
from the early midgame stage to the endgame stage). We leave it to another
commit to clean up all occurrences of Score in the code and the comments.
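For reference, the tapered eval that Score supported packs a midgame and an endgame value and interpolates between them according to a material-based game phase, roughly as in this sketch (the phase scale is illustrative, not Stockfish's exact constant):
```
// Sketch of tapered evaluation: blend a midgame and an endgame value by phase.
// phase == PHASE_MAX means a full-material midgame position, 0 a bare endgame.
constexpr int PHASE_MAX = 128;  // illustrative scale

int tapered_eval(int mg, int eg, int phase) {
    return (mg * phase + eg * (PHASE_MAX - phase)) / PHASE_MAX;
}
```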
-------
Passed non-regression LTC:
LLR: 2.94 (-2.94,2.94) <-1.75,0.25>
Total: 142542 W: 36264 L: 36168 D: 70110
Ptnml(0-2): 76, 15578, 39856, 15696, 65
https://tests.stockfishchess.org/tests/view/64c8cb495b17f7c21c0cf9f8
Passed non-regression LTC (with a book with Black bias):
https://tests.stockfishchess.org/tests/view/64c8f9295b17f7c21c0cfdaf
LLR: 2.94 (-2.94,2.94) <-1.75,0.25>
Total: 494814 W: 125565 L: 125827 D: 243422
Ptnml(0-2): 244, 53926, 139346, 53630, 261
------
closes https://github.com/official-stockfish/Stockfish/pull/4713
Bench: 1655985
Since the introduction of NNUE (first released with Stockfish 12), we
have maintained the classical evaluation as part of SF in frozen form.
The idea that this code could lead to further inputs to the NN or
search did not materialize. Now, after five releases, this PR removes
the classical evaluation from SF. Even though this evaluation is
probably the best of its class, it has become unimportant for the
engine's strength, and there is little need to maintain this
code (roughly 25% of SF) going forward, or to expend resources on
trying to improve its integration in the NNUE eval.
Indeed, it still had a very limited use in the current SF, namely
for the evaluation of positions that are nearly decided based on
material difference, where the speed of the classical evaluation
outweighs its inaccuracies. The impact on strength is small,
roughly 2 Elo, and probably decreasing in importance as the TC grows.
Potentially, removal of this code could lead to the development of
techniques for faster, but less accurate, NN evaluation of certain positions.
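For context, the remaining use amounted to a dispatch of roughly the following shape; all names and the threshold below are illustrative, not the removed code:
```
// Illustrative sketch of the kind of dispatch this PR removes: fall back to the
// fast (frozen) classical evaluation only when the material balance is already
// so large that the position is nearly decided, otherwise evaluate with NNUE.
Value hybrid_evaluate(const Position& pos) {
    Value balance = material_balance(pos);               // hypothetical helper
    bool nearlyDecided = std::abs(int(balance)) > 2300;  // illustrative threshold
    return nearlyDecided ? classical_evaluate(pos)       // hypothetical wrapper
                         : nnue_evaluate(pos);           // hypothetical wrapper
}
```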
STC
https://tests.stockfishchess.org/tests/view/64a320173ee09aa549c52157
Elo: -2.35 ± 1.1 (95%) LOS: 0.0%
Total: 100000 W: 24916 L: 25592 D: 49492
Ptnml(0-2): 287, 12123, 25841, 11477, 272
nElo: -4.62 ± 2.2 (95%) PairsRatio: 0.95
LTC
https://tests.stockfishchess.org/tests/view/64a320293ee09aa549c5215b
Elo: -1.74 ± 1.0 (95%) LOS: 0.0%
Total: 100000 W: 25010 L: 25512 D: 49478
Ptnml(0-2): 44, 11069, 28270, 10579, 38
nElo: -3.72 ± 2.2 (95%) PairsRatio: 0.96
VLTC SMP
https://tests.stockfishchess.org/tests/view/64a3207c3ee09aa549c52168
Elo: -1.70 ± 0.9 (95%) LOS: 0.0%
Total: 100000 W: 25673 L: 26162 D: 48165
Ptnml(0-2): 8, 9455, 31569, 8954, 14
nElo: -3.95 ± 2.2 (95%) PairsRatio: 0.95
closes https://github.com/official-stockfish/Stockfish/pull/4674
Bench: 1444646
This change removes one of the constants in the calculation of optimism. It also changes the two constants used with the scale value so that they are independent, instead of applying a constant to the scale and then adjusting it again when the scale is applied to the optimism. This might make the tuning of these constants cleaner and more reliable in the future.
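A rough sketch of the independence point only, under assumed names (blend_eval, A, B) and placeholder values; see the diff for the actual constants and formula:
```
// Sketch only: the NNUE term and the optimism term each get their own constant
// next to `scale`, so the two can be tuned independently instead of one being
// expressed as an adjustment of the other. A and B are placeholders.
Value blend_eval(Value nnue, Value optimism, int scale) {
    constexpr int A = 900;   // placeholder constant for the nnue term
    constexpr int B = 150;   // placeholder constant for the optimism term
    return Value((int(nnue) * (scale + A) + int(optimism) * (scale + B)) / 1024);
}
```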
STC 10+0.1 (accidentally run as an Elo gainer):
LLR: 2.93 (-2.94,2.94) <0.00,2.00>
Total: 154080 W: 41119 L: 40651 D: 72310
Ptnml(0-2): 375, 16840, 42190, 17212, 423
https://tests.stockfishchess.org/tests/live_elo/64653eabf3b1a4e86c317f77
LTC 60+0.6:
LLR: 2.95 (-2.94,2.94) <-1.75,0.25>
Total: 217434 W: 58382 L: 58363 D: 100689
Ptnml(0-2): 66, 21075, 66419, 21088, 69
https://tests.stockfishchess.org/tests/live_elo/6465d077f3b1a4e86c318d6c
closes https://github.com/official-stockfish/Stockfish/pull/4576
bench: 3190961
Params found with the nevergrad TBPSA optimizer via nevergrad4sf modified to:
* use SPRT LLR with fishtest STC elo gainer bounds [0, 2] as the objective function
* increase the game batch size after each new optimal point is found
The params were the optimal point after TBPSA iteration 7 and 160 nevergrad evaluations with:
* initial batch size of 96 games per evaluation
* batch size increase of 64 games after each iteration
* a budget of 512 evaluations
* TC: fixed 1.5 million nodes per move, no time limit
nevergrad4sf enables optimizing Stockfish params with TBPSA:
https://github.com/vondele/nevergrad4sf
Using pentanomial game results with smaller game batch sizes was inspired by:
Use of SPRT LLR calculated from pentanomial game results as the objective function was an experiment in maximizing the information extracted from each game batch, to reduce the computational cost for TBPSA to converge on good parameters.
For the exact code used to find the params:
https://github.com/linrock/tuning-fork
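For illustration, a rough C++ sketch of such an objective: the simplified normal-approximation GSPRT LLR computed from pentanomial pair counts and the [elo0, elo1] bounds. The actual tooling is Python and fishtest's exact LLR computation is more elaborate; this only shows the shape of the quantity being maximized:
```
#include <array>
#include <cmath>

// Simplified GSPRT log-likelihood ratio from pentanomial results.
// ptnml[k] counts game pairs scoring k/2 points out of 2 (k = 0..4).
// elo0/elo1 are the SPRT bounds in logistic Elo. Normal approximation only.
double sprt_llr(const std::array<long, 5>& ptnml, double elo0, double elo1) {
    auto elo_to_score = [](double elo) { return 1.0 / (1.0 + std::pow(10.0, -elo / 400.0)); };
    double s0 = elo_to_score(elo0), s1 = elo_to_score(elo1);

    long N = 0;
    double mean = 0.0;
    for (int k = 0; k < 5; ++k) { N += ptnml[k]; mean += ptnml[k] * (k / 4.0); }
    if (N == 0) return 0.0;
    mean /= N;

    double var = 0.0;  // per-pair variance of the normalized score
    for (int k = 0; k < 5; ++k) var += ptnml[k] * (k / 4.0 - mean) * (k / 4.0 - mean);
    var /= N;
    if (var <= 0.0) return 0.0;

    // LLR of H1 (mean score s1) vs H0 (mean score s0) under a normal model
    return N * (s1 - s0) * (2.0 * mean - s0 - s1) / (2.0 * var);
}
```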
Passed STC:
https://tests.stockfishchess.org/tests/view/63f4ef5ee74a12625bcd114a
LLR: 2.94 (-2.94,2.94) <0.00,2.00>
Total: 66552 W: 17736 L: 17390 D: 31426
Ptnml(0-2): 164, 7229, 18166, 7531, 186
Passed LTC:
https://tests.stockfishchess.org/tests/view/63f56028e74a12625bcd2550
LLR: 2.94 (-2.94,2.94) <0.50,2.50>
Total: 71264 W: 19150 L: 18787 D: 33327
Ptnml(0-2): 23, 6728, 21771, 7083, 27
closes https://github.com/official-stockfish/Stockfish/pull/4401
bench 3687580
If a global function has no previous declaration, either the declaration
is missing in the corresponding header file or the function should be
declared static. Static functions are local to the translation unit,
which allows the compiler to apply some optimizations earlier (when
compiling the translation unit rather than during link-time
optimization).
The commit enables the warning for gcc, clang, and mingw. It also fixes
the reported warnings by declaring the functions static or by adding a
header file (benchmark.h).
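An illustrative example of the warning and the two fixes described above (function and header names are made up):
```
// With -Wmissing-declarations, defining a global function that has no previous
// declaration produces a warning.

// Fix 1: the function is used only in this translation unit -> declare it static.
// Internal linkage also lets the compiler optimize it while compiling this file
// rather than during link-time optimization.
static int square(int x) { return x * x; }

// Fix 2: the function is part of the interface -> declare it in the corresponding
// header and include that header here; the declaration below stands in for what
// that header (a hypothetical util.h) would provide.
int cube(int x);                       // normally comes from #include "util.h"
int cube(int x) { return x * x * x; }
```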
closes https://github.com/official-stockfish/Stockfish/pull/4325
No functional change
Joint work by Ofek Shochat and Stéphane Nicolet.
passed STC:
LLR: 2.95 (-2.94,2.94) <0.00,2.00>
Total: 93288 W: 24996 L: 24601 D: 43691
Ptnml(0-2): 371, 10263, 24989, 10642, 379
https://tests.stockfishchess.org/tests/view/63448f4f4bc7650f07541987
passed LTC:
LLR: 2.94 (-2.94,2.94) <0.50,2.50>
Total: 84168 W: 22771 L: 22377 D: 39020
Ptnml(0-2): 47, 8181, 25234, 8575, 47
https://tests.stockfishchess.org/tests/view/6345186d4bc7650f07542fbd
================
It seems there are two effects with this patch:
effect A :
If Stockfish is winning at root, we have optimism > 0 for all leaves in
the search tree where Stockfish is to move. There, if (psq - nnue) > 0
(i.e. if the advantage is more materialistic than positional), then the
product D = optimism * (psq - nnue) will be positive, nnueComplexity will
increase, and the eval will increase from SF's point of view.
So the effect A is that if Stockfish is winning at root, she will slightly
favor in the search tree (in other words, search more) the positions where
she can convert her advantage via materialistic means.
effect B :
If Stockfish is losing at root, we have optimism > 0 for all leaves in
the search tree where the opponent is to move. There, if (psq - nnue) < 0
(i.e. if the opponent's advantage is more positional than materialistic), then
the product D = optimism * (psq - nnue) will be negative, nnueComplexity will
decrease, and the eval will decrease from the opponent's point of view.
So the effect B is that Stockfish will slightly favor in the search tree
(search more) the branches where she can defend by slowly reducing the
opponent's positional advantage.
=================
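A sketch of the term under discussion, assuming Stockfish's Value type; the weights are placeholders and the surrounding code is simplified, so see the patch for the exact blend:
```
// Sketch only: the product D = optimism * (psq - nnue) is folded into
// nnueComplexity, raising it when D > 0 and lowering it when D < 0,
// which in turn nudges the blended evaluation as described above.
int complexity_with_optimism(Value psq, Value nnue, Value optimism, int nnueComplexity) {
    int diff = int(psq - nnue);
    int d    = int(optimism) * diff;           // the product D discussed above
    return (420 * nnueComplexity               // placeholder weight
          + 430 * (diff < 0 ? -diff : diff)    // placeholder weight
          + d) / 1024;
}
```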
closes https://github.com/official-stockfish/Stockfish/pull/4195
bench: 4673898
First things first...
This PR is being made from court. Today, Tord and Stéphane, with broad support
of the developer community, are defending their complaint, filed in Munich, against ChessBase.
With their products Houdini 6 and Fat Fritz 2, both Stockfish derivatives,
ChessBase repeatedly violated the Stockfish GPLv3 license. Tord and Stéphane have terminated
their license with ChessBase permanently. Today we have the opportunity to present
our evidence to the judge and enforce that termination. To read up, have a look at our blog posts
https://stockfishchess.org/blog/2022/public-court-hearing-soon/ and
https://stockfishchess.org/blog/2021/our-lawsuit-against-chessbase/
This PR introduces a net trained with an enhanced data set and a modified loss function in the trainer.
A slight adjustment for the scaling was needed to get a pass on standard chess.
passed STC:
https://tests.stockfishchess.org/tests/view/62c0527a49b62510394bd610
LLR: 2.94 (-2.94,2.94) <0.00,2.50>
Total: 135008 W: 36614 L: 36152 D: 62242
Ptnml(0-2): 640, 15184, 35407, 15620, 653
passed LTC:
https://tests.stockfishchess.org/tests/view/62c17e459e7d9997a12d458e
LLR: 2.94 (-2.94,2.94) <0.50,3.00>
Total: 28864 W: 8007 L: 7749 D: 13108
Ptnml(0-2): 47, 2810, 8466, 3056, 53
Local testing at a fixed 25k nodes resulted in:
Test run1026/easy_train_data/experiments/experiment_2/training/run_0/nn-epoch799.nnue
localElo: 4.2 +- 1.6
The real strength of the net is in FRC and DFRC chess where it gains significantly.
Tested at STC with slightly different scaling:
FRC:
https://tests.stockfishchess.org/tests/view/62c13a4002ba5d0a774d20d4
Elo: 29.78 +-3.4 (95%) LOS: 100.0%
Total: 10000 W: 2007 L: 1152 D: 6841
Ptnml(0-2): 31, 686, 2804, 1355, 124
nElo: 59.24 +-6.9 (95%) PairsRatio: 2.06
DFRC:
https://tests.stockfishchess.org/tests/view/62c13a5702ba5d0a774d20d9
Elo: 55.25 +-3.9 (95%) LOS: 100.0%
Total: 10000 W: 2984 L: 1407 D: 5609
Ptnml(0-2): 51, 636, 2266, 1779, 268
nElo: 96.95 +-7.2 (95%) PairsRatio: 2.98
Tested at LTC with identical scaling:
FRC:
https://tests.stockfishchess.org/tests/view/62c26a3c9e7d9997a12d6caf
Elo: 16.20 +-2.5 (95%) LOS: 100.0%
Total: 10000 W: 1192 L: 726 D: 8082
Ptnml(0-2): 10, 403, 3727, 831, 29
nElo: 44.12 +-6.7 (95%) PairsRatio: 2.08
DFRC:
https://tests.stockfishchess.org/tests/view/62c26a539e7d9997a12d6cb2
Elo: 40.94 +-3.0 (95%) LOS: 100.0%
Total: 10000 W: 2215 L: 1042 D: 6743
Ptnml(0-2): 10, 410, 3053, 1451, 76
nElo: 92.77 +-6.9 (95%) PairsRatio: 3.64
This is due to mixing in a significant fraction of DFRC training data in the final training round. The net is
trained using the easy_train.py script in the following way:
```
python easy_train.py \
--training-dataset=../Leela-dfrc_n5000.binpack \
--experiment-name=2 \
--nnue-pytorch-branch=vondele/nnue-pytorch/lossScan4 \
--additional-training-arg=--param-index=2 \
--start-lambda=1.0 \
--end-lambda=0.75 \
--gamma=0.995 \
--lr=4.375e-4 \
--start-from-engine-test-net True \
--tui=False \
--seed=$RANDOM \
--max_epoch=800 \
--auto-exit-timeout-on-training-finished=900 \
--network-testing-threads 8 \
--num-workers 12
```
where the data set used (Leela-dfrc_n5000.binpack) is a combination of our previous best data set (a mix of Leela and some SF data) and DFRC data, interleaved together.
The data is available in https://drive.google.com/drive/folders/1S9-ZiQa_3ApmjBtl2e8SyHxj4zG4V8gG?usp=sharing
Leela mix: https://drive.google.com/file/d/1JUkMhHSfgIYCjfDNKZUMYZt6L5I7Ra6G/view?usp=sharing
DFRC: https://drive.google.com/file/d/17vDaff9LAsVo_1OfsgWAIYqJtqR8aHlm/view?usp=sharing
The training branch used is
https://github.com/vondele/nnue-pytorch/commits/lossScan4
A PR to the main trainer repo will be made later. This branch contains a revised loss function, now computing the loss from the score based on the win rate model, which is a more accurate representation than what we had before. Scaling constants are tweaked there as well.
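For reference, a win rate model of this kind maps an engine score to an expected win probability with a logistic curve, roughly as below; the parameters a and b are placeholders here, whereas in practice they are fitted to game data (e.g. as functions of material or ply):
```
#include <cmath>

// Sketch of a win rate model: probability of winning as a logistic function
// of the score. The curve is 50% at score == a, with steepness controlled by b.
// Fixed placeholder parameters, purely illustrative.
double win_rate(double score, double a = 240.0, double b = 90.0) {
    return 1.0 / (1.0 + std::exp((a - score) / b));
}
```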
closes https://github.com/official-stockfish/Stockfish/pull/4100
Bench: 5186781