Use less time on recaptures

Credit for the idea goes to peregrine on Discord.
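
The idea in code form (a minimal standalone sketch, not the actual patch: Limits and RootMove here are simplified stand-ins for the real Stockfish types; the 0.955/1.005 factors are the ones introduced by this change):

    // Sketch only: simplified stand-ins for Stockfish's types.
    using Square = int;

    struct Limits {
        Square capSq;  // destination square of the capture just played
    };

    struct RootMove {
        Square toSq;   // destination square of the current best root move
    };

    // Scale the optimum thinking time: spend less when the best move is a
    // recapture on the square of the previous capture (a forcing situation,
    // so the best move tends to be found quickly), slightly more otherwise.
    double scaledOptimum(double optimumTime, const Limits& limits, const RootMove& best) {
        double recapture = (limits.capSq == best.toSq) ? 0.955 : 1.005;
        return optimumTime * recapture;
    }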

Passed STC 10+0.1:
https://tests.stockfishchess.org/tests/view/662652623fe04ce4cefc48cf
LLR: 2.95 (-2.94,2.94) <0.00,2.00>
Total: 75712 W: 19793 L: 19423 D: 36496
Ptnml(0-2): 258, 8487, 20023, 8803, 285

Passed LTC 60+0.6:
https://tests.stockfishchess.org/tests/view/6627495e3fe04ce4cefc59b6
LLR: 2.94 (-2.94,2.94) <0.50,2.50>
Total: 49788 W: 12743 L: 12404 D: 24641
Ptnml(0-2): 29, 5141, 14215, 5480, 29

The code was updated slightly and tested for non-regression against the
original code at STC:

https://tests.stockfishchess.org/tests/view/662d84f56115ff6764c7e438
LLR: 2.94 (-2.94,2.94) <-1.75,0.25>
Total: 41952 W: 10912 L: 10698 D: 20342
Ptnml(0-2): 133, 4825, 10835, 5061, 122

closes https://github.com/official-stockfish/Stockfish/pull/5189

Bench: 1836777
commit 886ed90ec3 (parent 49ef4c935a)
Author:    xoto10
Date:      2024-04-28 16:27:40 +01:00
Committer: Disservin

4 changed files with 23 additions and 11 deletions


@@ -54,8 +54,8 @@ using namespace Search;
 
 namespace {
 
-static constexpr double EvalLevel[10] = {1.043, 1.017, 0.952, 1.009, 0.971,
-                                         1.002, 0.992, 0.947, 1.046, 1.001};
+static constexpr double EvalLevel[10] = {0.981, 0.956, 0.895, 0.949, 0.913,
+                                         0.942, 0.933, 0.890, 0.984, 0.941};
 
 // Futility margin
 Value futility_margin(Depth d, bool noTtCutNode, bool improving, bool oppWorsening) {
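
A side observation from the numbers above (not stated in the commit message): each new EvalLevel entry appears to be the old one scaled by roughly 0.94, which offsets the new recapture multiplier on average so overall time usage stays about the same. A quick standalone check:

    #include <cstdio>

    int main() {
        const double oldLevel[10] = {1.043, 1.017, 0.952, 1.009, 0.971,
                                     1.002, 0.992, 0.947, 1.046, 1.001};
        const double newLevel[10] = {0.981, 0.956, 0.895, 0.949, 0.913,
                                     0.942, 0.933, 0.890, 0.984, 0.941};
        // Every ratio printed is close to 0.94.
        for (int i = 0; i < 10; ++i)
            std::printf("ratio[%d] = %.3f\n", i, newLevel[i] / oldLevel[i]);
        return 0;
    }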
@@ -446,9 +446,10 @@ void Search::Worker::iterative_deepening() {
         double reduction = (1.48 + mainThread->previousTimeReduction) / (2.17 * timeReduction);
         double bestMoveInstability = 1 + 1.88 * totBestMoveChanges / threads.size();
         int el = std::clamp((bestValue + 750) / 150, 0, 9);
+        double recapture = limits.capSq == rootMoves[0].pv[0].to_sq() ? 0.955 : 1.005;
 
         double totalTime = mainThread->tm.optimum() * fallingEval * reduction
-                         * bestMoveInstability * EvalLevel[el];
+                         * bestMoveInstability * EvalLevel[el] * recapture;
 
         // Cap used time in case of a single legal move for a better viewer experience
         if (rootMoves.size() == 1)
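
For reference, how the EvalLevel bucket index used above behaves (a standalone sketch; bestValue is Stockfish's internal score): integer division maps roughly [-750, +600] onto buckets 0-9, clamped at both ends.

    #include <algorithm>
    #include <cassert>

    int main() {
        auto bucket = [](int bestValue) {
            return std::clamp((bestValue + 750) / 150, 0, 9);
        };
        assert(bucket(-1000) == 0);  // hopeless scores clamp to bucket 0
        assert(bucket(100) == 5);    // (100 + 750) / 150 == 5 -> EvalLevel[5]
        assert(bucket(2000) == 9);   // winning scores clamp to bucket 9
        return 0;
    }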