Add missing docs.

Tomasz Sobczyk
2020-11-22 18:08:14 +01:00
committed by nodchip
parent 9030020a85
commit 45e3335ee8
2 changed files with 6 additions and 0 deletions


@@ -62,4 +62,6 @@ Currently the following options are available:
`sfen_format` - format of the training data to use. Either `bin` or `binpack`. Default: `binpack`.
`ensure_quiet` - this is a flag option. When specified, the positions will be taken from the qsearch leaf.
`seed` - seed for the PRNG. Can be either a number or a string. If it's a string, its hash will be used. If not specified, the current time will be used. (A sketch of this seed resolution follows below.)
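
For illustration, here is a minimal C++ sketch of the seed resolution described above. This is not the actual implementation; the function name `resolve_seed` and the use of `std::hash` are assumptions, only the behaviour (a number is used as-is, any other string is hashed, a missing value falls back to the current time) comes from the description above.

```cpp
#include <chrono>
#include <cstddef>
#include <cstdint>
#include <functional>
#include <optional>
#include <string>

// Illustrative only: resolve the `seed` option as documented above.
// A plain number is used directly, any other string is hashed, and a
// missing value falls back to the current time.
std::uint64_t resolve_seed(const std::optional<std::string>& seed_option)
{
    if (!seed_option)
        return static_cast<std::uint64_t>(
            std::chrono::system_clock::now().time_since_epoch().count());

    const std::string& s = *seed_option;
    try
    {
        std::size_t consumed = 0;
        const std::uint64_t value = std::stoull(s, &consumed);
        if (consumed == s.size())
            return value;                   // the whole string was a number
    }
    catch (const std::exception&)
    {
        // not a number; fall through to hashing
    }
    return std::hash<std::string>{}(s);     // string seed: use its hash
}
```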


@@ -64,6 +64,10 @@ Currently the following options are available:
`newbob_decay` - learning rate will be multiplied by this factor every time a net is rejected (in other words, it controls LR drops). A value of 1.0 means no LR drops. Default: 0.5.
`assume_quiet` - this is a flag option. When specified, learn will not perform a qsearch to reach a quiet position.
`smart_fen_skipping` - this is a flag option. When specified, positions that are not good candidates for training are skipped. This includes positions where the best move is a capture or promotion, and positions where the king is in check.
`newbob_num_trials` - determines after how many consecutive rejected nets the training process is terminated. Default: 4.
`auto_lr_drop` - every time this many positions are processed, the learning rate is multiplied by `newbob_decay`. In other words, this value specifies how many positions a single learning rate stage lasts. If 0, it has no effect. Default: 0. (A sketch of how these options interact follows below.)
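
To make the interplay of `newbob_decay`, `newbob_num_trials`, and `auto_lr_drop` concrete, below is a rough C++ sketch of the schedule these options describe. It is not the learner's actual code; the struct and member names are invented for illustration and only the behaviour documented above is modelled.

```cpp
#include <cstdint>

// Illustrative sketch of the LR schedule described by the options above.
// Names and structure are invented; only the documented behaviour is modelled.
struct LrSchedule
{
    double        lr;                    // current learning rate
    double        newbob_decay;          // LR multiplier on each drop
    int           newbob_num_trials;     // stop after this many consecutive rejections
    std::uint64_t auto_lr_drop;          // positions per LR stage, 0 = disabled

    std::uint64_t positions_in_stage = 0;
    int           consecutive_rejections = 0;

    // Called after a batch of positions has been processed.
    void on_positions_processed(std::uint64_t count)
    {
        if (auto_lr_drop == 0)
            return;                      // auto_lr_drop 0: no effect

        positions_in_stage += count;
        while (positions_in_stage >= auto_lr_drop)
        {
            positions_in_stage -= auto_lr_drop;
            lr *= newbob_decay;          // one learning rate stage has elapsed
        }
    }

    // Called after a new net has been evaluated.
    // Returns false when training should be terminated.
    bool on_net_evaluated(bool accepted)
    {
        if (accepted)
        {
            consecutive_rejections = 0;
            return true;
        }
        lr *= newbob_decay;              // rejected net: drop the LR
        return ++consecutive_rejections < newbob_num_trials;
    }
};
```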