Cache line aligned TT

Make TT clusters (16*4 = 64 bytes) fit into a single cache line.
This avoids the need for the double prefetch.
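For illustration only, a minimal C++ sketch of the idea, assuming a 16-byte entry and a 4-entry cluster; the type names, field widths and the use of C++11 alignas/static_assert are assumptions for the sketch, not the actual Stockfish layout:

#include <cstdint>

// Hypothetical 16-byte transposition table entry; the field widths
// here are illustrative, not the exact Stockfish layout.
struct TTEntry {
  uint32_t key32;      // upper bits of the position key
  uint16_t move16;     // best move
  int16_t  value16;    // search value
  int16_t  eval16;     // static evaluation
  uint8_t  genBound8;  // generation and bound type
  int8_t   depth8;     // search depth
  uint32_t padding;    // pad the entry to 16 bytes
};

// Four entries per cluster: 4 * 16 = 64 bytes, exactly one cache line,
// so a single prefetch of the cluster address covers every entry.
struct alignas(64) TTCluster {
  TTEntry entry[4];
};

static_assert(sizeof(TTEntry)   == 16, "TTEntry must stay 16 bytes");
static_assert(sizeof(TTCluster) == 64, "cluster must fill one cache line");

With the table itself allocated on a 64-byte boundary, a cluster never straddles two cache lines, so the second prefetch becomes unnecessary.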

Original patches by Lucas and Jean-Francois, who also tested them
on his AMD FX:

BIG HASHTABLE

./stockfish bench 1024 1 18 > /dev/null

Before:
1437642 nps
1426519 nps
1438493 nps

After:
1474482 nps
1476375 nps
1475877 nps

SMALL HASHTABLE

./stockfish bench 128 1 18 > /dev/null

Before:
1435207 nps
1435586 nps
1433741 nps

After:
1479143 nps
1471042 nps
1472286 nps

No functional change.
Marco Costalba
2013-04-26 18:45:54 +02:00
parent e508494a99
commit 083fe58124
4 changed files with 11 additions and 11 deletions

@@ -237,10 +237,8 @@ void prefetch(char* addr) {
 # if defined(__INTEL_COMPILER) || defined(_MSC_VER)
   _mm_prefetch(addr, _MM_HINT_T0);
-  _mm_prefetch(addr+64, _MM_HINT_T0); // 64 bytes ahead
 # else
   __builtin_prefetch(addr);
-  __builtin_prefetch(addr+64);
 # endif
 }