
Single Comment

Sac Chess. Game with 60 pieces. (10x10, Cells: 100)
💡📝Kevin Pacey wrote on Fri, Dec 18, 2015 09:51 AM UTC:
H.G. posted some time ago:

"Some of your piece values are off, especially Archbishop, which is about C - 0.25 P = Q - 0.75 P (so 9.25 on your scale). The Amazon seems to be worth only Q+N, so 13 on your scale.
..."


I'm still wondering about your assertion (shared by some other people) that an Amazon is worth only Q + N. Perhaps it is because I am a fairy chess newbie, but I'm not clear how your subsequent remarks about synergy are fully in line with saying that an Amazon simply equals Q + N. Furthermore, assuming that value is what your method yielded, the result is still a red flag for me at present, regarding how infallible the method or its playtesting conditions might be.

As I alluded to earlier, the supreme double-attacking powers of an Amazon in particular make me value it at more than just Q + N. By contrast, I am now more willing to accept that the tentative value I gave for an Archbishop was too low (though the Archbishop's value in the context of Sac Chess, rather than in an 8x8 variant approximating standard chess, perhaps still ought to be measured from scratch by playtesting Sac Chess games, if anyone is willing). There is another red flag for me concerning the method: comparing a bishop to a knight by such measurement results in asserting they are of equal value. More about that further below.


H.G. posted more recently:

"The values were indeed measured by play-testing through self-play of computer programs. To measure the value of, say, an Archbishop, I set up opening positions where one side has the Archbishop instead of a combination of other material expected to be similar in value (like Q, R+B, R+N+P, 2B+N, R+R). For any particular material imbalance the back-rank pieces are shuffled to promote game diversity. I then play several hundred games for each imbalance, to record the score. This is rarely exactly 50%, and then I handicap the winning side by deleting one of its Pawns, and run the test again. This calibrates which fraction of a Pawn the excess score corresponds to. E.g. Q vs A might end in a 62% victory for the Q, and if Q vs A+P then ends in a 54% victory for the A+P, I know the P apparently was worth 16%, so that the 62% Q vs A advantage corresponds to 0.75 Pawn.

I tried this with two different computer programs, the virtually knowledgeless Fairy-Max, and the 400 Elo stronger Joker80. The results are in general the same (after conversion to Pawn units), and also independent of the time control. (I tried from 40 moves/min to 40 moves/10min.) Typically they also are quite consistent: if two material combinations X and Y exactly balance each other (i.e. score 50%), then a combination Z usually scores the same against X and Y.

The results furthermore reproduce the common lore about the value of orthodox Chess pieces. E.g. if I delete one side's Knights, and the other side's Bishops, the side that still has the Bishop pair wins (say) by 68%, and after receiving additional Pawn odds, loses by 68%. Showing that the B-pair is worth half a Pawn. Deleting only one N and one B gives a balanced 50% score, showing that lone Bishop and Knight are on the average equivalent. This is exactly what Larry Kaufman has found by statistical analysis of millions of GM games.
..."


First off, I think I see how the Archbishop's value was measured as 0.75 Pawns less than a Queen, based on the percentages given. Whether just hundreds of games is a statistically satisfactory playtest sample size, I am not sure (note that in chess White is thought to have a standard statistical edge, scoring about 54% or 55% against Black, so I assume the Archbishop was with White for half of the playtest games). It could also matter how highly rated the computer programs were. By way of illustration, in chess it takes a good degree of skill for human players to defend against a queen using, say, R + B + P, even in positions where that material is objectively worth at least the queen.
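To check my reading of the calibration, here is a small sketch of the arithmetic, using the example numbers from the quote, plus a rough estimate of the statistical noise in a few hundred games. The function name and the 400-game sample size are my own assumptions for illustration, not figures from H.G.'s post:

```python
import math

def pawn_units(excess_points, points_per_pawn):
    """Convert a score excess (percentage points above 50%) into pawn units."""
    return excess_points / points_per_pawn

q_vs_a = 62.0          # Queen side's score against the lone Archbishop
q_vs_a_plus_p = 46.0   # Queen side's score after the A side gets a pawn (100 - 54)

points_per_pawn = q_vs_a - q_vs_a_plus_p           # one pawn shifted the score by 16 points
q_advantage = pawn_units(q_vs_a - 50.0, points_per_pawn)
print(q_advantage)     # 0.75 -> the Queen is about 3/4 of a pawn stronger

# Rough binomial standard error on a 62% score over an assumed 400 games,
# in percentage points (draws would make the true error somewhat smaller):
n_games = 400
se_points = math.sqrt(0.62 * 0.38 / n_games) * 100
print(round(se_points, 1))   # about 2.4 points, i.e. roughly 0.15 pawns here
```

On this back-of-the-envelope view, a few hundred games pins a single imbalance down to within a couple of percentage points, i.e. a noticeable fraction of a pawn.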

Larry Kaufman is an International Master at chess, which puts him below Grandmaster (and certainly below world champion) level. Such players, past and present, have believed that although a bishop is close to a knight in relative value, in general, unless there is a special reason to prefer a knight, situations favouring a bishop tend to arise more often - whether in actual game play or in the many calculated variations that could have arisen from it (and these, alas, do not appear in the playtesting process). So, if a grandmaster willingly gives up a bishop for a knight, he has reasons to do so based on other factors in the position.

In any event, I have not heard of any reasonably strong human chess players changing their over-the-board strategies for trading bishops for knights based on Kaufman's result from his method. As for the millions of games Kaufman looked at, I am not sure all the chess Grandmasters in history have played close to a million games yet, especially against just each other. I have chess game databases with over a million games in them, but they include vast numbers of games played by players who were below Grandmaster level at the time.