



Alibaba. (Updated!) Jumps two orthogonally or diagonally.
H. G. Muller wrote on Wed, Sep 19, 2012 08:46 PM UTC:
> Odd, I seem to remember reading about your Joker engine testing material values for FIDE and getting a Rook value that was unexpectedly low.

By now I am convinced that this was just a discrepancy between the value of a Rook in a closed position and in a position with many open files. It is well-known Chess lore that it is quite important to get your Rooks on open files, so this amounts to a positional bonus that automatically gets added to the Rook's value as the board empties and virtually all files become open. The (opening) value I found was only a quarter-Pawn low, and awarding a bonus of that magnitude for being on an open file does not seem excessive at all.
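As an illustration of how such a positional bonus closes the gap (a minimal sketch, not code from any actual engine; the centipawn numbers are assumptions chosen to match the quarter-Pawn figure above):

```python
# Assumed opening value for the Rook, ~0.25 Pawn below the classical 500 cp.
ROOK_BASE = 475

def rook_value(on_open_file: bool, open_file_bonus: int = 25) -> int:
    """Effective Rook value: the positional open-file bonus is added
    on top of the base value, recovering the classical 500 cp once
    the Rook reaches an open file."""
    return ROOK_BASE + (open_file_bonus if on_open_file else 0)
```

In a late middle-game where virtually every file is open, the bonus applies almost permanently, so the effective Rook value rises by the full quarter-Pawn.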

So I think this is not so much a problem with the empirical method as with the concept of piece values itself. Relative values of pieces are not constants of nature, but change as the game progresses from opening to end-game. The additive model for material value, where you add the values of the individual pieces to get the value of the army, is also far from perfect. The Bishop-pair bonus is a clear example of a cooperative effect, as is the dependence of the B-N difference on the number of Pawns in the Kaufman model. How large these cooperative effects can grow is most convincingly shown by the fact that 7 Knights totally crush 3 Queens in the presence of Pawns, while the Kaufman values suggest that the Knight side is 'two minors down'.
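The additive model with the two cooperative corrections mentioned above can be sketched as follows (an illustrative sketch only; the centipawn values and the size of the per-Pawn Knight adjustment are assumptions, not Kaufman's published numbers):

```python
# Base piece values in centipawns (assumed, roughly classical).
BASE = {'P': 100, 'N': 325, 'B': 325, 'R': 500, 'Q': 975}
BISHOP_PAIR = 50        # cooperative bonus: belongs to the pair, not to either Bishop
KNIGHT_PER_PAWN = 6     # Kaufman-style: Knights gain value with more Pawns on the board

def army_value(pawns: int, knights: int, bishops: int,
               rooks: int, queens: int) -> int:
    """Additive material value plus two cooperative corrections."""
    total = (pawns * BASE['P'] + knights * BASE['N'] + bishops * BASE['B']
             + rooks * BASE['R'] + queens * BASE['Q'])
    if bishops >= 2:                     # Bishop-pair bonus
        total += BISHOP_PAIR
    # Knight adjustment relative to an (assumed) 5-Pawn baseline.
    total += knights * KNIGHT_PER_PAWN * (pawns - 5)
    return total
```

The 7-Knights-vs-3-Queens result shows that corrections of this linear kind are still far too small: the cooperation among many short-range leapers is not captured by any per-piece or pairwise term.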

>I also seem to recall the same engine testing the Bishop-Knight compound and finding its value unexpectedly high compared to the Queen and Rook-Knight (closer than the values of their component pieces). And I do not recall anyone offering a predictive theory capable of explaining that.

That the empirical method finds unexpected things by no means implies there is a problem with that method. It is much more likely there was a problem with the expectations. Not to mention the fact that 'predictive theories' are usually based on little else than the most simplistic analogies (like R > B, so RN must be > BN, never mind that B is color-bound and has no mating potential). I have intensely watched many dozens of long-TC 10x8 games between strong engines, and I have no doubt at all that the empirical determination RN - BN ~ 0.25P is correct. In all cases where an imbalance of BN vs separate R + N or R + B developed, the owner of the two lighter pieces was utterly and mercilessly slaughtered by the BN. (In the presence of several Pawns, of course; BN vs R is a draw in itself, so BN + P vs R + B is already very drawish.) That piece is just so powerful...

> Betza also performed computer tests and human playtests on the value of the Commoner (nonroyal King), and was convinced that the computer value was wrong.

Well, I don't know what computer testing Betza did, knowing he rejected the use of commercial software like Zillions. I admit that my tests leave room for under-estimation of the Commoner value, as the engine with which they were done did not properly pay attention to mating potential. So it would fail to recognize the possibility to draw by sacrificing a piece for the opponent's last Pawn when the opponent's remaining piece lacked mating potential. This might lead to unjust wins by the opponent, if it was too late to force such a trade by the time promotion came within the horizon.

> Have your formulas for short-range leaper values been verified by anything other than your own chess engine? 

No, they have not.

> Incidentally, did you ever finish that new chess engine you were working on that you said you wanted to complete before running more complicated tests? Spartacus, I think.

Unfortunately there has been little progress there, too. I am just too busy with other Chess(variant)-related projects, like setting up an internet server, adapting WinBoard to play large Shogi variants, and writing an engine for those. But I really should put some more effort into it, because it is already at a level where it heavily outplays Fairy-Max in the variants that it plays. It is not as generally configurable for all kinds of unorthodox piece types as Fairy-Max is, though. But it certainly should be suitable for a precision determination of the value of the Commoner, as it does take account of lack of mating potential in its evaluation function (multiplying the naive evaluation towards zero in 'drawish end-games', where the opponent can afford to sacrifice his lightest pieces for your remaining Pawns).
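The drawishness scaling described in that last parenthesis can be sketched like this (a simplified illustration, not Spartacus's actual code; the scaling factor and the crude mating-potential test, which ignores e.g. the Bishop pair and B+N mates, are assumptions):

```python
# Pieces assumed able to force mate together with a bare King.
MATING_PIECES = {'Q', 'R'}

def scaled_eval(naive_eval: int, strong_pieces: set, strong_pawns: int) -> int:
    """Pull the naive evaluation toward zero when the side that is
    ahead has no Pawns left and none of its pieces has mating
    potential, so the position is in fact a likely draw."""
    if strong_pawns == 0 and not (strong_pieces & MATING_PIECES):
        return naive_eval // 8   # drawish: multiply toward zero
    return naive_eval
```

With such a term the engine no longer over-values, say, a lone extra Knight once the last Pawns can be traded away, which is exactly the failure mode that could distort the Commoner measurement.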