Don't allow LMP on PvNodes

I mentioned this a while back on Discord, but nothing seems to have come of it. Anyway, to the best of my knowledge, most current training data generation is done at relatively low fixed depths. With this in mind, disallowing LMP in PV nodes should give a fairly significant increase in strength and in the reliability of the PV.
Joseph Ellis
2020-08-11 13:35:47 -05:00
committed by nodchip
parent e12a0cd9eb
commit 44a54b63f1

@@ -1012,7 +1012,7 @@ moves_loop: // When in check, search starts from here
newDepth = depth - 1;
// Step 13. Pruning at shallow depth (~200 Elo)
-if ( !rootNode
+if ( !PvNode
&& pos.non_pawn_material(us)
&& bestValue > VALUE_TB_LOSS_IN_MAX_PLY)
{
@@ -2070,10 +2070,10 @@ namespace Learner
// Since this is a new search, advance this thread's transposition-table generation.
//TT.new_search(th->thread_id());
// ↑ Calling new_search here may be a loss, because the previous search results can no longer be reused.
// Do not do it here; instead, the caller should call TT.new_search(th->thread_id()) for each game ...
// → Because we want to avoid repeatedly reaching the same final position, the transposition table is shared by all threads when generating teacher data.
//#endif
}
}
@@ -2263,7 +2263,7 @@ namespace Learner
}
// Pass PV_is(ok) to filter out this PV; there may be a NULL_MOVE in the middle.
// → The PV should not contain NULL_MOVE, because it is a PV.
// MOVE_WIN is never inserted into the PV. (For now.)
for (Move move : rootMoves[0].pv)
{