Comments/Ratings for a Single Item
Derek, it is no longer that easy, because in SMIRF piece values are now implemented only in their static part. Their mobility part will be covered by the detail evaluation. The '-X' versions of SMIRF mixed those components; the '-0' version is completely without mobility fractions. This is a minor detail of my new approach. Nevertheless, if you separate those components, compiles are possible.
I understand. I wondered what the 'X' and 'O' designations for recent SMIRF versions meant. Do you still possess an older version of SMIRF (of satisfactory quality to you) that uses your current CRC material values? Since there is approximately a 2-1/2 pawn difference between our models' material values for the archbishop, I predict that my playtesting results would probably be worthwhile and decisive.
Joe Joyce and J.J. are referring to the Minister (Knight + Dabbabah + Wazir) and the Priestess (Knight + Alfil + Ferz). Ralph Betza's Chess with Different Armies has the FAD (Ferz + Alfil + Dabbabah). That took a minute to recall and find. I am quite sure (N+D+W) and (N+A+F) are not new and appeared under different names some time ago, and it would be less misleading to use the earlier names. They did not originate with the uncreative A.B.Shatranj or other such recent games. When previous uses are found, I will post them, as we have done with some other ''re-inventions.'' These pieces are unappealing, all three, because their triple-compounding gives them an unnaturally foreshortened Rook or Bishop dimension. There is no compelling logic to them; they are pulled out of a hat from hundreds of possibilities. Why not use pieces going one, two, and three squares either Rook- or Bishop-wise? No reason. No CV set-up is improved by limiting radial moves to two or three steps. That is why Bishop and Rook themselves will always stand as perfection. Piece values are, however, inherently an interesting intellectual activity and topic. Not, in perspective, because of the utility of these particular mediocre choices, ''Minister,'' ''Priestess,'' FAD. (Another comment may take up the Amazon and the others as to their deficiencies.) Rather, because facility at computing values can then be applied to better piece-movement concepts, such as the Rococo units, these are worthwhile threads on Piece Values.
Well, Derek, I will use my own values for 8x8, since you have no new ones for Q, A, C ... I still have not published my current values (because they are normally not used inside SMIRF, and only the mobility parts have been modified). I will use these in the requested compiles:

N, B, R, A, C, Q for 8x8: 3.0000, 3.4119, 5.1515, 6.7824, 8.7032, 9.0001
N, B, R, A, C, Q for 10x8: 3.0556, 3.6305, 5.5709, 7.0176, 9.1204, 9.6005
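As a quick sanity check, the two value sets above can be compared programmatically; the sketch below (plain Python, values copied from this post) shows how much each piece gains when moving from the 8x8 board to the 10x8 board:

```python
# Reinhard's quoted SMIRF piece values (copied from the post above)
values_8x8  = {"N": 3.0000, "B": 3.4119, "R": 5.1515,
               "A": 6.7824, "C": 8.7032, "Q": 9.0001}
values_10x8 = {"N": 3.0556, "B": 3.6305, "R": 5.5709,
               "A": 7.0176, "C": 9.1204, "Q": 9.6005}

for piece in "NBRACQ":
    gain = values_10x8[piece] / values_8x8[piece] - 1.0
    print(f"{piece}: +{gain:.1%} on the wider board")
```

The sliders (B, R, Q) gain the most from the extra files, which is consistent with mobility-based derivations of these values.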
Your revised material values for SMIRF look fine to me. I have written them down for safekeeping. Which version will you be compiling? Of course, I do not plan to playtest anyone's material values for pieces on the 8x8 board, only material values for CRC pieces on the 10x8 board.
Derek, you will receive versions compiled using complete piece values.
Different armies in action: 4*Archbishop vs. 8*Knight

The following game can be reviewed using the SMIRF donationware release from: http://www.chessbox.de/Compu/schachsmirf_e.html (but first replace the single quotes by double quotes before pasting)

[Event 'SmirfGUI: Different Armies Games']
[Site 'MAC-PC-RS']
[Date '2008.05.02']
[Time '18:30:40']
[Round '60 min + 30 sec']
[White '1st Smirf MS-174c-0']
[Black '2nd Smirf MS-174c-0']
[Result '0-1']
[Annotator 'RS']
[SetUp '1']
[FEN 'nnnn1knnnn/pppppppppp/10/10/10/10/PPPPPPPPPP/A1A2K1A1A w - - 0 1']

1. Aji3 Nd6 {(11.02) -1.791} 2. Aab3 Ne6 {(12.01=) -1.533} 3. c4 c5 {(12.01=) -0.992} 4. d4 cxd4 {(13.00) -0.684} 5. c5 Ne4 {(12.01) -0.535} 6. Ac2 d5 {(11.39) +0.189} 7. f3 N4xc5 {(11.01=) +0.465} 8. Ag3 Nac7 {(11.01) +0.900} 9. b4 Ncd7 {(11.01=) +1.475} 10. f4 g6 {(10.31) +1.750} 11. Ai5+ Ngh6 {(11.03+) +1.920} 12. g4 j6 {(12.01=) +2.225} 13. Aie1 Nig7 {(11.01=) +2.363} 14. Ac1d3 f6 {(10.20) +2.506} 15. a4 N8f7 {(11.01=) +2.707} 16. Kg1 a5 {(11.01) +2.803} 17. bxa5 Nc6 {(11.15) +2.910} 18. Ab3 Nji6 {(11.01) +2.570} 19. j4 f5 {(12.03=) +3.010} 20. gxf5 Ngxf5 {(11.01) +3.342} 21. a6 bxa6 {(11.01=) +3.998} 22. a5 Ne3 {(11.15) +4.156} 23. Aa4 Nb5 {(11.01=) +4.504} 24. Ab3 Nig7 {(11.03=) +5.244} 25. Aih4 Nf6 {(11.02) +5.324} 26. Aef2 Nfh5 {(10.19) +6.395} 27. Ah3 Nhxf4 {(11.01) +6.172} 28. Adxf4 Nxf4 {(14.01) +5.979} 29. Axf4 g5 {(12.14) +6.086} 30. Ahxg5 Nxg5 {(14.01=) +6.018} 31. Axg5 Kg8 {(14.11) +5.176} 32. Axe3 dxe3 {(16.01=) +5.117} 33. Axd5+ Kh8 {(16.01=) +5.117} 34. Axc6 Nhf5 {(14.18) +5.127} 35. Ab4 Nc7 {(15.00) +4.803} 36. Ad3 Ki8 {(15.00) +4.838} 37. Kh1 Nd6 {(14.01) +4.891} 38. j5 Ngf5 {(14.01=) +5.189} 39. Ac5 Ndb5 {(14.01) +5.248} 40. Ad3 Nbd4 {(14.01) +5.365} 41. Ae4 Ncb5 {(16.02) +5.631} 42. Ki1 e6 {(15.23) +5.932} 43. Ad3 h6 {(15.01) +5.250} 44. Ac4 h5 {(15.01=) +5.467} 45. i3 Kj7 {(15.12) +5.637} 46. Ad3 Nc3 {(15.09) +5.715} 47. Axa6 Ndxe2 {(15.00) +5.678} 48. Ad3 Ned4 {(14.01=) +6.117} 49.
a6 Ncb5 {(14.01=) +6.602} 50. Kj1 e2 {(15.01=) +8.080} 51. Ae1 e5 {(15.01=) +11.59} 52. i4 e4 {(15.01=) +12.16} 53. ixh5 Nf3 {(14.02) +12.56} 54. Af2 e3 {(15.22) +14.61} 55. Ad3 e1=Q+ {(16.02) +16.00} 56. Axe1 Nxe1 {(17.01=) +23.09} 57. h6 ixh6 {(15.02=) +M~010} 58. h4 Nxh4 {(12.01=) +M~008} 59. a7 Nxa7 {(10.01=) +M~008} 60. Ki1 Neg2+ {(08.01=) +M~007} 61. Kh2 e2 {(06.01=) +M~006} 62. Kh3 e1=Q {(04.01=) +M~005} 63. Kg4 Qe4+ {(02.01=) +M~004} 64. Kh3 Qf3+ {(02.01=) +M~003} 65. Kh2 Qi3+ {(02.01=) +M~002} 66. Kh1 Qi1# {(02.00?) +M~001} 0-1

You will see that the handicap of being a big piece without any exchangeable counterpart dominates the character of the battle.
Ha, finally my registration could be processed manually, as all automatic
procedures consistently failed. So this thread is now also open to me for
posting.
Let me start with some remarks on the ongoing discussion.
* I tried Reinhard's 4A vs 8N setup. In a 100-game match of 40/1' games
with Joker80, the Knights are crushed by the Archbishops 80-20. So
although in principle I agree with Reinhard that such extreme tests, with
setups that make the environment for the pieces very alien compared to
normal Chess, can be unreliable, I certainly would not take for
granted that his claim that 8 Knights beat 4 Archbishops is true.
Possible reasons for the discrepancy could be:
1) Reinhard did not base his conclusion on enough games. In my experience
using anything less than 100 games is equivalent to making the decision by
throwing dice. It often happens that after 30 games the side that is
leading by 60% will eventually lose by 45%.
2) Smirf does not handle the Archbishop well, because it is programmed to
underestimate its value, and is prepared to trade it too easily for two
Knights to avoid or postpone a Pawn loss, while Joker80 just gives up the
Pawn and saves its Archbishops until it can get 3 Knights for one.
3) The short time control restricts search depth, which may prevent
Joker80 from recognizing some higher, unnatural strategy (which has no
parallel in normal Chess) where all Knights can be kept defending
each other multiple times, because they all have identical moves; so it
judges the pieces more on the tactical merits that would be relevant for
normal Chess.
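The point about game counts in reason 1) can be quantified: treating each game as an independent trial, the standard error of a match score shrinks only with the square root of the number of games. A minimal sketch, with no engine involved:

```python
import math

def match_score_stderr(n_games: int, p: float = 0.5) -> float:
    """Standard error of the observed score fraction of an n-game match,
    treating each game as an independent Bernoulli trial with win prob p."""
    return math.sqrt(p * (1.0 - p) / n_games)

# After 30 games a true 50% match still wobbles by about 9% either way,
# which is how a 60% lead can melt into a 45% final score.
print(f"30 games : +/-{match_score_stderr(30):.1%}")   # +/-9.1%
print(f"100 games: +/-{match_score_stderr(100):.1%}")  # +/-5.0%
```

With only 30 games, score swings of a full standard deviation cover the whole range between "clearly winning" and "losing", which is why short matches behave like dice throws.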
* The arguments Reinhard gives against more realistic 'asymmetrical
playtesting':
| Let me point to a repeatedly written detail: if a piece will be
| captured, then not only its average piece exchange value is taken
| from the material balance, but also its positional influence from
| the final detail evaluation. Thus it is impossible to create
| 'balanced' different armies by simply manipulating their pure material
| balance to become nearly equal - their positional influences probably
| would not be balanced as need be.
seem invalid. For one, all of us are good enough Chess players that we can
recognize for ourselves in the initial setup we use for playtesting if the
Archbishop or Knight or whatever piece is part of the imbalance is an
exceptionally strong or poor one, or just an average one. So we don't put
a white Knight on e5 defended by Pf4, while the black d- and f-pawn already
passed it, and we don't put it on a1 with white pawns on b3, c2 and black
pawns on b4, c3. In particular, I always test from opening positions,
where none of the pieces is on a particularly good square, but they can be
easily developed, as the opponent does not interdict access to any of the
good squares either. So after a few opening moves, the pieces get to
places that, almost by definition, are the average where you can get
them.
Secondly, when setting up the position, we get the evaluation of the
engine for that position telling us if the engine does consider one of the
sides highly favored positionally (by taking the difference between the
engine evaluation and the known material difference for the piece values
we know the engine is using). Although I would trust this less than my own
judgement, it can be used as additional confirmation.
Like Derek says, averaging over many positions (like I always do: all my
matches are played starting from 432 different CRC opening positions) will
tend to have every piece, on average, in an average position. If a
certain piece, like A, would always have a +200cP 'positional'
contribution, (e.g. calculated as its contribution to mobility) no matter
where you put it, then that contribution is not positional at all, but a
hidden part of the piece value. Positional contributions should average to
zero, when averaged over all plausible positions. Furthermore, in Chess
positional contributions are usually small compared to material ones, if
they do not have to do with King safety or advanced passers. And none of
the latter play a role in the opening positions I use.
* Symmetrical playtesting between engines with different piece-value sets
is known to be a notoriously unreliable method. Dozens of people have
reported trying it, often with quite advanced algorithms to step through
search space (e.g. genetic algorithms, or annealing). The result was
always the same: in the end (sometimes after months of testing) they
obtained piece values that, when pitted against the original hand-tuned
values, would consistently lose.
The reason is most likely that the method works in principle, but requires
too many games in practice. Derek mentioned before, that if two engines
value certain piece combinations differently, they often exchange them for
each other, creating a material imbalance, which then affects their winning
chances. Well, 'often' is not the same as 'always'. For very large
errors, like putting A < R, such bad trades would occur in nearly every
game; but a mild undervaluation of A can only lead to much more
complicated bad trades, as you have to get at least two pieces for the A.
The probability that this occurs is far smaller, and only 10-20% of the
games will see such a trade.
Now the problem is that the games in which the bad trades do NOT happen
will not be affected by the wrong piece value. So this subset of games
will have a 50-50 outcome, pushing the outcome of the total score average
towards 50%. If A vs R+N gives you a 60% winning chance (so a 10% excess),
and this is the only bad trade that happens (because you set A slightly
under 8), and it happens in only 20% of the games, the total effect you
would see (and on which you would have to conclude the A value is
suboptimal) would be 52%. But the 80% of games that did not contribute to
learning anything about the A value, because in the end A was traded for
A, will still contribute to the statistical noise! To recognize a 2%
excess score instead of a 10% excess score, you need a 5 times lower
statistical error. But statistical errors only decrease as the SQUARE
ROOT of the number of games. So to get the error down a factor 5, you
need 25 times as many games. You could not conclude anything before you
had 2500 games!
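The arithmetic of this dilution argument can be spelled out directly; a sketch using the numbers from the text (10% excess score when the bad trade happens, trade in 20% of games, 100 games as the baseline needed to resolve a full 10% excess):

```python
excess_when_traded = 0.10    # A vs R+N gives 60%, i.e. a 10% excess
trade_fraction     = 0.20    # the bad trade happens in 20% of games
games_for_full_signal = 100  # games needed to resolve a 10% excess

observed_excess = excess_when_traded * trade_fraction   # diluted to 2%
dilution = excess_when_traded / observed_excess         # signal 5x smaller
games_needed = games_for_full_signal * dilution ** 2    # noise ~ 1/sqrt(n)

print(f"observed excess: {observed_excess:.0%}")   # 2%
print(f"games needed:    {games_needed:.0f}")      # 2500
```

Because statistical error falls only with the square root of the game count, a 5x smaller signal costs a 25x larger match, exactly the 2500-game figure above.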
Symmetrical playtesting MIGHT work if you first discard all the games that
traded A for A (to eliminate the noise they produce, and they can't say
anything about the correctness of the A value), and make sure you have
about 100 games left. Otherwise, the result will be garbage.
Well, H.G.M., if you believe in your value model, and your engine uses it, then this engine will avoid trades that I regard as valid. If you trust your model, you could easily add a black Knight and remove some white Pawns and still have a value-sum 'advantage' for the white Archbishops' team. So why do you not test these arrays?
The arrays as I have tested with SMIRF have had an advantage for White of 3.1296 in my model.
In your model (normalized to a Pawn = 1) the advantage has been about 12.944 (more than a Queen's value).
P.S.: Why not have some test games between SMIRF using Black having 9 Knights against your program having 4 Archbishops, each having 10 Pawns? In your value model it should be nearly impossible for Black to gain any victory at all.
P.P.S.: The proposed game is not suitable for Blitz, because it is decided by deep positional effects. That is why I used 60 min/game + 30 sec/move for the time frame, which is important.
Sorry, my original long post got lost. As this is not a position where you can expect piece values to work, and my computers are actually engaged in useful work, why don't YOU set it up?
Well, Harm, you know that I failed at using 10x8 WinBoard GUIs, so I stopped trying.
It seems to me that that is a bad strategy. If you fail, you should keep trying until you succeed. Only when you succeed can you stop trying...
You will find a (hopefully) up-to-date table of several piece value sets at: http://www.10x8.net/Compu/schachveri1_e.html
I have adequate confidence in my latest material values to ask you to publish them on your web page (instead of my previous material values):

CRC material values of pieces
http://www.symmetryperfect.com/shots/values-capa.pdf

They are, in principle, similar to Muller's set for every piece, except that they run on a comparatively compressed scale. Even though I have not yet playtested them, I consider my tentative confidence rational (although admittedly premature and risky) because I trust Muller's methods of playtesting his own material values, and I think my latest revisions to my model are conceptually valid.
Derek, I have changed your values again within my piece value table.
I hope you will report on some 9*N vs. 4*A games using your special SMIRF
engine modified to use your values. I remain convinced that the effect of
reduced value for unbalanced big pieces exists.
P.S.: As a hint, check out my marginally refined approach at:
http://www.10x8.net/Compu/schachansatz1_e.html
To summarize the state of affairs, we now seem to have sets of piece values for Capablanca Chess by: Hans Aberg (1), Larry Kaufman (1), Reinhard Scharnagl (2), H.G. Muller (3), Derek Nalls (4).

1) Educated guessing based on known 8x8 piece values and assumptions on synergy values of compound pieces
2) Based on board-averaged piece mobilities
3) Obtained as a best fit to computer-computer games with material imbalance
4) Based on mobilities and more complex arguments, fitted to experimental results ('playtesting')

I think we can safely dismiss method (1) as unreliable, as the (clearly stated) assumptions on which it is based were never tested in any way, and appear to be invalid. Methods (3) and (4) are now basically in agreement. Method (2) produces substantially different results for the Archbishop.

One problem I see with method (2) is that plain averaging over the board does not seem to be the relevant thing to do, and is even inconsistent in places: if we apply it to a piece that has no moves when standing in a corner, the corner squares suppress its average mobility. If, on the other hand, the same piece were not allowed to move into the corner at all, the average would be taken over the part of the board it can access (like for the Bishop), and would come out higher than for the piece that can enter the corner but not leave it (provided there weren't too many moves stepping into the corner), even though the latter is clearly upward compatible and thus must be worth more. The moral is that a piece with very low mobility on certain squares does not lose as much value from that as the averaging suggests, because in practice you will avoid putting the piece there. The SMIRF theory does not take that into account at all.

Focussing on mobility only also makes you overlook disastrous handicaps a certain combination of moves can have.
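The plain board-averaging that method (2) relies on, and the corner suppression criticized here, are easy to reproduce. A minimal sketch for the Knight on an empty 8x8 board (pure move counting, nothing SMIRF-specific):

```python
# The 8 Knight jump offsets
KNIGHT_JUMPS = [(1, 2), (2, 1), (2, -1), (1, -2),
                (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def knight_mobility(x, y, size=8):
    """Number of legal Knight moves from (x, y) on an empty size x size board."""
    return sum(0 <= x + dx < size and 0 <= y + dy < size
               for dx, dy in KNIGHT_JUMPS)

squares = [(x, y) for x in range(8) for y in range(8)]
average = sum(knight_mobility(x, y) for x, y in squares) / len(squares)

print(f"corner mobility : {knight_mobility(0, 0)}")  # 2
print(f"center mobility : {knight_mobility(3, 3)}")  # 8
print(f"board average   : {average}")                # 5.25
```

The corner squares (mobility 2) drag the plain average (5.25) well below the central value (8), which is exactly the over-penalization the paragraph above argues against: in practice the Knight simply avoids the corner.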
A piece that has two forward diagonal moves and one forward orthogonal move (fFfW in Betza notation) has exactly the same mobility as one with the forward diagonal moves and a backward orthogonal move (fFbW). But the former is restricted to a small (and ever shrinking) part of the board, while the latter can reach every point from every other point. My guess is that the latter piece would be worth much more than the former, although in general forward moves are worth more than backward moves. (So fWbF should be worth less than fFbW.) But I have not tested any of this yet.

I am not sure how much of the agreement between (3) and (4) can be ascribed to the playtesting, and how much to the theoretical arguments: the playtesting methods and results are not extensively published and not open to verification, and it is not clear how well the theoretical arguments are able to PREdict piece values rather than POSTdict them. IMO it is not possible to make an all-encompassing theory with just 4 or 6 empirical piece values as input, as any elaborate theory will have many more than 6 adjustable parameters. So I think it is crucial to get accurate piece values for more different pieces.

One keystone piece could be the Lion. It can make all leaps to targets in a 5x5 square centered on it (and is thus a compound of Ferz, Wazir, Alfil, Dabbabah and Knight). This piece seems to be 1.25 Pawn stronger than a Queen (1075 on my scale). This reveals a very interesting approximate law for piece values of short-range leapers with N moves: value = (30 + 5/8 * N) * N. For N=8 this produces 280, and indeed the pieces I tested fall in the range 265 (Commoner) to 300 (Knight), with FA (Modern Elephant), WD (Modern Dabbabah) and FD in between. For N=16 we get 640, and I found WDN (Minister) = 625, and FAN (High Priestess) and FAWD (Sliding General) 650. And for the Lion, with N=24, the formula predicts 1080.
My interpretation is that adding moves to a piece does not only add the value of the moves themselves (as described by the second factor, N), but also increases the value of all pre-existing moves, by allowing the piece to better manoeuvre into place for aiming them at the enemy. I would therefore expect that it is mainly the captures that contribute to the second factor, while the non-captures contribute to the first. The first refinement I want to make is to disable all Lion moves one at a time, as captures or as non-captures, to see how much each move contributes to the total strength. The simple counting (as expressed by the appearance of N in the formula) can then be replaced by a weighted counting, the weights expressing the relative importance of the moves. (So that forward captures might be given a much bigger weight than forward non-captures, or backward captures along a similar jump.) This will require a lot of high-precision testing, though.
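The approximate leaper law quoted above is easy to check against the reported reference points; a minimal sketch (values in centipawns, as in the post):

```python
def leaper_value(n_moves: int) -> float:
    """H.G. Muller's approximate law for short-range leapers with N moves,
    in centipawns: value = (30 + 5/8 * N) * N."""
    return (30 + 5 / 8 * n_moves) * n_moves

# Reported reference points from the post:
#   N=8  -> 280  (bracketed by Commoner ~265 and Knight ~300)
#   N=16 -> 640  (Minister measured 625, High Priestess 650)
#   N=24 -> 1080 (Lion, measured ~1075)
for n in (8, 16, 24):
    print(n, leaper_value(n))
```

The superlinear N term is what encodes the claim that extra moves also raise the worth of the pre-existing ones.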
Oh Yes, I forgot about: [name removed] (5) 5) Based on safe checking I am not sure that safe checking is of any relevance. Most games are not won by checkmating the opponent King in an equal-material position, but by annihilating the opponent's forces. So mainly by threatening Pawns and other Pieces, not Kings. A problem is that safe checking seems to predict zero value for pieces like Ferz, Wazir and Commoner, while the latter is not that much weaker than the Knight. (And, averaged over all game stages, might even be stronger than a Knight.) This directly seems to falsify the method. [The above has been edited to remove a name and/or site reference. It is the policy of cv.org to avoid mention of that particular name and site to remove any threat of lawsuits. Sorry to have to do that, but we must protect ourselves. -D. Howe]
H.G.M wrote: ... Focussing on mobility only also makes you overlook disastrous handicaps a certain combination of moves can have. A piece that has two forward diagonal moves and one forward orthogonal (fFfW in Betza notation) has exactly the same mobility as that with forward diagonal and backward orthogonal moves (fFbW). But the former is restricted to a small (and ever smaller) part of the board, while the latter can reach every point from every other point. My guess is that the latter piece would be worth much more than the former, although in general forward moves are worth more than backward moves. (So fWbF should be worth less than fFbW.) But I have
not tested any of this yet.
Before I try to think this argument over, remember: all (non-Pawn) pieces of the CRC piece set have non-oriented moves. Thus this argument cannot change anything in the value discussion of the CRC piece set, especially concerning the value of an Archbishop.
Reinhard, why do you attach such importance to the 4A vs 9N position? I think that example is totally meaningless. If it proves anything, it is that you cannot get the value of 9 Knights by taking 9 times the Knight value. It proves _nothing_ about the Archbishop value. Chancellor and Queen would encounter exactly the same problems facing an army of 9 Knights.

The problem is that there is a positional bonus for identical pieces defending each other. This is well known (e.g. connected Rooks). Such pair interactions grow as the square of the number of pieces, and thus start to dominate the total evaluation if the number of identical pieces gets extremely high (as it never will in real games). Pieces like A, C and Q (or in particular the highest-valued pieces on the board) will not get such bonuses, as the bonus is associated with the safety of mutually defending each other, and with tactical security in case the piece is traded, because the recapture then replaces it by an identical piece, preserving all the defensive moves it had. In the absence of equal or higher pieces, defending pieces is a useless exercise, as recapture will not offer compensation: if you are attacked, you will have to withdraw. So the mutual-defence bonus also depends on the piece makeup of the opponent, and is zero for Archbishops when the opponent only has Knights, and very high for Knights when the opponent has only Archbishops.

If you want to playtest material imbalances, the positional value of the position has to be as equal as possible. The 4A vs 9N position violates that requirement to an extreme extent. It thus cannot tell us anything about piece values, just like deleting the white Queen and all 8 black Pawns cannot tell us anything about the value of Q vs P.
H.G.M. wrote: ... It thus cannot tell us anything about
piece values. Just like deleting the white Queen and all 8 black Pawns
cannot tell us anything about the value of Q vs P.
I fully agree with that, because my A vs. N example was never intended to calculate piece values. Instead it should shed light on some obscure details. The strange effect is not caused by the ability of the Knights to cover each other; that also holds for the Archbishops. It is caused by the absence of exchangeable counterparts of equal (or bigger) value for the A.

My example should demonstrate the existence of new effects in games of different armies. And that implies that one should be careful when trying to calculate or verify piece values through series of matches between different armies. Effects such as the one demonstrated in my N vs. A example should be discussed, eliminated, or, if unavoidable, integrated into the formula. I have suggested reducing the values of such unbalanced big pieces somehow (I am not yet sure how exactly) in the equations you are using to derive piece values. Without such purification attempts, misinterpretations are unavoidable.
Well, Reinhard, there could be many explanations for the 'surprising' strength of an all-Knight army, and we could speculate forever about it. But it would only mean anything if we could actually find ways to test it. I think the mutual defence is a real effect, and I expect an army of all different 8-target leapers to do significantly worse than an army of all Knights, even though all 8-target leapers are almost equally strong. But it would have to be tested. Defending each other is useless for Archbishops (in the absence of an opponent Q, C or A), as defending an Archbishop in the face of Knight attacks is of zero use. So the fact that they can do it is not worth anything. Nevertheless, the Archbishops do not do as badly as you want to make us believe, and I think they would still have a fighting chance against 9 Knights. So perhaps I will run this test (on the Battle-of-the-Goths port, so that everyone can watch) if I have nothing better to do. But currently I have more important and urgent things to do on my Chess PC. I have a great idea for a search enhancement in Joker, and would like to implement and test it before ICT8.
re: Muller's assessment of 5 methods of deriving material values for CRC pieces

'I am not sure how much of the agreement between (3) and (4) can be ascribed to the playtesting, and how much to the theoretical arguments ...'

As much playtesting as possible. Unfortunately, that amount is deficient by my standards (and yours). I have tried to compensate for marginal quantity with high quality via long time controls. You use a converse approach with the opposite emphasis. Given enough years (working with only one server), this quantity of well-played games may eventually become adequate.

'... and it is not clear how well the theoretical arguments are able to PREdict piece values rather than POSTdict them.'

You have pinpointed my greatest disappointment and frustration thus far with my ongoing work. To date, my theoretical model has not made any impressive predictions verified by playtesting. To the contrary, it has been revised, expanded and complicated many times upon discovery that it was grossly in error or out of conformity with reality. Although the foundations of the theoretical model are built upon arithmetic and geometry to the greatest extent possible, with verifiable phenomena important to the material values of pieces used logically for refinements, mathematical modelling can be misused to postulate and describe in detail almost any imaginable non-existent phenomenon. For example, the Ptolemaic model of the solar system.
H.G.M. wrote: ... Defending each other is useless for Archbishops (in the absence of an opponent Q, C or A), as defending an Archbishop in the face of Knight attacks is of zero use. So the fact that they can do it is not worth anything. ...
Now you have got it. The main reason is the lack of counterparts of equal (or bigger) value; that is what makes any effective covering impossible. And this is the point demonstrated by an (I confess, very extremely designed) game between different armies.

P.S.: any covering of A by P is equally useless then ...
Well, I got that from the beginning. But the problem is not that the A cannot be defended; it is strong and mobile enough to take care of itself. The problem is that the Knights cannot be threatened (by A), because they all defend each other, and can do so multiple times. So you can build a cluster of Knights that is totally unassailable. That would be much more difficult for a collection of all different pieces, which is likely to always have some weak spots, which the extremely agile Archbishops then seek out and attack with deadly precision.

But I don't see this as a fundamental problem of pitting different armies against each other. After an unequal trade, any Chess game becomes a game between different armies. But to define piece values that can be helpful in winning games, it is only important to test positions that could occur in games, or at least are not fundamentally different in character from what you might encounter in games. And the 4A vs 9N position definitely does not qualify as such.

I think this is valid criticism of what Derek has done (testing super-pieces only against each other, without any lighter pieces being present), but it has no bearing on what I have done. I never went further than playing each side with two copies of the same super-piece, replacing another super-piece (which was then absent from that army). This is slightly unnatural, but I don't expect it to lead to qualitatively different games, as the super-pieces are similar in value and mobility. Unlike super-pieces already share some moves, so like and unlike super-pieces can cooperate in very similar ways (e.g. forming batteries). It did not essentially change the distribution of piece values, as all lower pieces were present in normal numbers. I understand that Derek likes to magnify the effect by playing several copies of the piece under test, but perhaps using 8 or 9 is overdoing it.
To test a difference in piece value as large as 200cP, 3 copies should be more than enough. This can still be done with a reasonably realistic mix of pieces, e.g. by replacing Q and C on one side by A, and Q and A on the other side by C, so that you play 3C vs 3A, and then give additional Knight odds to the Chancellors. This would predict about +3 for the Chancellors with the SMIRF piece values, and -2.25 according to my values. Both imbalances are large enough to cause 80-90% win percentages, so just a few games should make it obvious which value is very wrong.
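The two predicted imbalances can be reproduced from the value sets quoted in this thread. Reinhard's 10x8 SMIRF values are taken from his earlier post; the 'Muller' numbers below are hypothetical centipawn values (P = 100, N = 300, A = 875, C = 900), chosen here only to be consistent with the -2.25 figure in the text, not a published table:

```python
# Reinhard's 10x8 SMIRF values (quoted earlier in this thread), pawn = 1
smirf = {"P": 1.0, "N": 3.0556, "A": 7.0176, "C": 9.1204}

# ASSUMED centipawn values standing in for H.G. Muller's set (hypothetical;
# chosen so that the Archbishop sits only 25 cP below the Chancellor)
muller = {"P": 100, "N": 300, "A": 875, "C": 900}

def chancellor_edge(values, pawn_unit):
    """Material edge of 3C vs 3A plus an extra Knight, in pawn units."""
    return (3 * (values["C"] - values["A"]) - values["N"]) / pawn_unit

print(f"SMIRF : {chancellor_edge(smirf, 1):+.2f}")     # +3.25
print(f"Muller: {chancellor_edge(muller, 100):+.2f}")  # -2.25
```

The sign of the edge flips between the two models, which is why this particular match-up discriminates so sharply between them.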