Comments by DerekNalls
'If the result would be different from playing at a more 'normal' TC, like one or two hours per game, it would only mean that any conclusions you draw on them would be irrelevant for playing Chess at normal TC.'

Conclusions drawn from playing at normal time controls are irrelevant compared to those drawn at extremely long time controls. It is desirable to see what secrets can be discovered from a rarely viewed vantage: extremely well-played games. Are you not at all interested in analyzing, move by move, games played better than almost any pair of human players is capable of? You do not seem to understand that I, too, am discontent with the probability attached to a small number of wins or losses in a row. Requiring consecutive wins is a compensation that reduces, to the greatest extent attainable, the chance that the games were randomly played and, consequently, that the winner or loser was randomly determined.

'... playing 2 games will be like flipping a coin.'

Correction: playing 1 game will be like flipping a coin ... once. Playing 2 games will be like flipping a coin ... twice. The chance of getting the same flip (heads or tails) twice in a row is 1/4. Not impressive, but a decent beginning. Add a couple, a few or several consecutive identical flips and it departs from 'luck' by a huge margin.

'The result, whatever it is, will not prove anything, as it would be different if you would repeat the test. Experiments that do not give a fixed outcome will tell you nothing, unless you conduct enough of them to get a good impression of the probability for each outcome to occur.'

I have wondered why the performance of computer chess programs is unpredictable and varied even under identical controls. Despite their extraordinary complexity, I think of computer hardware, operating systems and applications (such as Joker80) as deterministic. The details of the differences in outcomes do not concern me. In fact, to the extent that your remarks are true, they will support my case if my playtesting is successful, since the unlikelihood of achieving the same outcome (i.e., wins or losses for one player) would then be extreme. I am pleased to report that I estimate it will be possible, over time, to generate enough experiments using Joker80 to have meaning for a high-quality, low-quantity advocate (such as myself) and even a moderate-quality, moderate-quantity advocate (such as Scharnagl). As for a low-quality, high-quantity advocate (such as you), you will always be disappointed, as you are impossible to please.
'Actually the chance for twice the same flip in a row is 1/2.'

Really? You obviously need a lesson in probability. Let us start with elementary stuff.

Mathematical Ideas, fifth edition, Miller & Heeren, 1986. It is an old college textbook from a class I took in the mid-90's. [Yes, I passed the class.] It says interesting things such as:

'The relative frequency with which an outcome happens represents its probability.'

'In probability, each repetition of an experiment is a trial. The possible results of each trial are outcomes.'

An example of a probability experiment is 'tossing a coin'. Each 'toss' (a trial of the experiment) has only two equally possible outcomes, 'heads' or 'tails' ... assuming the condition that the coin is fair (i.e., not loaded).

probability = p
heads = h
tails = t
number of tosses = x
addition = +
involution = ^ [a single-line substitute for superscript representation of an exponent to the upper right of a base]

probability of heads = p(h)
probability of tails = p(t)
p(h) and p(t) are bases; x is an exponent.
p(h) = 0.5
p(t) = 0.5

What follows are examples of the chances of getting the same result upon EVERY consecutive toss.

1 time (x = 1):
p(h) ^ x = 0.5 ^ 1 = 0.5
p(t) ^ x = 0.5 ^ 1 = 0.5
Note: In this case only, p(h) + p(t) = 1.0.

2 times (x = 2):
p(h) ^ x = 0.5 ^ 2 = 0.25
p(t) ^ x = 0.5 ^ 2 = 0.25

3 times (x = 3):
p(h) ^ x = 0.5 ^ 3 = 0.125
p(t) ^ x = 0.5 ^ 3 = 0.125

Etc.

As the inverse of successive powers of base 2, the chance for consecutive tosses to yield the same result rapidly becomes extremely small. When this occurs, there are only two possibilities: 'random good-bad luck', or an unfair advantage-disadvantage exists (i.e., 'the coin is loaded'). The probabilities of these two possibilities always sum to 1.

random luck (good or bad) = l
unfair (advantage or disadvantage) = u
luck (heads) = l(h)
luck (tails) = l(t)
unfair (heads) = u(h)
unfair (tails) = u(t)

p(h) ^ x = l(h)
p(t) ^ x = l(t)
l(h) + u(h) = 1
l(t) + u(t) = 1

Therefore, as the chance of 'random good-bad luck' becomes extremely low in the example, the chance that an advantage-disadvantage exists for 'one side of the coin' or (if you follow the analogy) 'one side of the gameboard' or 'one player' or 'one set of piece values' becomes likewise extremely high. Only if it can be proven that an advantage-disadvantage does not exist for one player can it be accepted that the extremely unlikely event occurred by 'random good-bad luck'. It is essential to understand that random good luck or random bad luck cannot be consistently relied upon. From this fact alone, firm conclusions can be responsibly drawn with a strong probability of correctness.

1 time (x = 1): p(h) ^ x = 0.5, u(h) = 0.5; p(t) ^ x = 0.5, u(t) = 0.5
2 times (x = 2): p(h) ^ x = 0.25, u(h) = 0.75; p(t) ^ x = 0.25, u(t) = 0.75
3 times (x = 3): p(h) ^ x = 0.125, u(h) = 0.875; p(t) ^ x = 0.125, u(t) = 0.875
Etc.
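For anyone who distrusts the algebra, the experiment is easy to conduct in bulk. Below is a minimal simulation (in Python; the function name and trial count are merely illustrative) that estimates both the chance of x heads in a row and the chance of x identical results in a row (either side), next to the exact values 0.5 ^ x and 2 * (0.5 ^ x):

import random

def run_trials(x, trials=100000):
    # Toss a fair coin x times per trial; count how often every toss is
    # heads, and how often every toss matches (all heads OR all tails).
    all_heads = 0
    all_same = 0
    for _ in range(trials):
        tosses = [random.random() < 0.5 for _ in range(x)]
        all_heads += all(tosses)
        all_same += len(set(tosses)) == 1
    return all_heads / trials, all_same / trials

for x in (1, 2, 3):
    heads, same = run_trials(x)
    print(f"x={x}: all heads ~{heads:.3f} (exact {0.5 ** x}), "
          f"all same ~{same:.3f} (exact {2 * 0.5 ** x})")

Note that the 'all same' column is exactly twice the 'all heads' column; keeping those two questions distinct matters, as will become apparent below.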
'... in Joker the source of indeterminism is much less subtle: it is programmed explicitly.'

This renders Joker80 totally unsuitable for my playtesting purposes. [I am just relieved that you told me this bizarre fact now, before I invested large amounts of computer time and effort.] It is critically important that any AI program attempt (to its greatest capability) to pinpoint the single, very best possible move in the time allowed upon every move in the game, even if this means that it would often repeat an identical move from an identical position. Do you not realize that forcing Joker80 to do otherwise must reduce its playing strength significantly from its maximum potential?
Well, when you said ... 'Actually the chance for twice the same flip in a row is 1/2.' ... that was vague and misleading. I thought you meant 'heads' twice OR 'tails' twice equals a chance of 1/2, instead of the sum of 'heads' twice AND 'tails' twice equals a chance of 1/2. Since English is a second language to you, of course I will overlook this minor miscommunication and even apologize for implicitly accusing you of incompetence. However, you should expect to draw critical reactions from others when you have previously, falsely and explicitly accused them of incompetence in a subject matter.
The reason you have never been able to find any correlation between winning probabilities for one army and time controls [contrary to the experiences of people using other AI programs] in asymmetrical playtests using Joker80 is that you have destructively randomized the algorithm within your program to such an extent that it fails to measurably improve the quality of its moves as a function of time or plies completed. A program with serious problems of this nature may do well in speed chess, but at truly long time controls, against quality programs that improve as they should with time or plies per move, it cannot consistently win.

I have two useful, important pieces of news for you:

1. All of the statistical data you have generated using Joker80 (approx. 20,000+ games) is corrupt. It must all be thrown out and regenerated from scratch after you repair Joker80.

2. All of your material values for CRC pieces are unreliable, since they are based upon and derived from #1 (corrupt statistical data).

I hope you can handle constructive advice.
I am slightly relieved and surprised that Joker80 measurably improves the quality of its moves as a function of time or plies completed over a range of speed chess tournaments. Nonetheless, completing games of CRC (where a long, close, well-played game can require more than 80 moves per player) in 0:24 to 36 minutes does NOT qualify as long or even moderate time controls. In the case of your longest, 36-minute games, with an example total of 160 moves, that allows just 13.5 seconds per move per player. In fact, that is an extremely short time by any serious standard. I consider 10 minutes per move a moderate time that produces results of marginal, unreliable quality and 60-90 minutes per move a long time that produces results of acceptable, reliable quality. Ask Reinhard Scharnagl or ET about the longest time per move they have used testing openings with their programs playing 'Unmentionable Chess': 24 hours per move!

It is noteworthy that you are now resorting to playing dirty by using the 'exclusivist argument': essentially, 'since I am not a computer chess programmer, I cannot possibly know what I am talking about when I dare criticize an important working of your Joker80 program'. What you fail to take into account is that I am a playtester with more experience than you at truly long time controls. If you will not listen to what I am trying to tell you, then why will you not listen to Scharnagl? After all, he is also a computer chess programmer with a lot of knowledge in important subject matters (such as mathematics).

You really should not be laughing. This is a serious problem. Your sarcastic reaction does nothing to reassure my trust or confidence that you will competently investigate it, confirm it and fix it. Now, please do not misconstrue my remarks. My intent is not to overstate the problem. I realize Joker80 in its present form is not a totally random 'woodpusher'. It would not be able to win any short time control tournaments if that were the case. In fact, I believe you when you state that you have not experienced any problems with it, but I think this is strictly because you have not done any truly long time control playtesting with it.

You must decide upon and define the best primary function for your Joker80 program:

1. To pinpoint the single, very best move available from any position. [Ideally, repeats could produce an identical move.]

OR

2. To produce a different move from any position upon most repeats. [At best, by randomly choosing amongst a short list of the best available moves.]

These two objectives are mutually exclusive. It is impossible and self-contradictory for a program to accomplish both. Virtually every AI game developer in the world except you chooses #1 as preferable to #2, by a long shot, in terms of the move quality produced on average. If you do not even commit your AI program to TRYING to find the single best move available because you think variety is just a whole lot more interesting and fun, then it will be soft competition at truly long time controls when facing other quality AI programs that are pinpointing the single best move available and playing it against you.
'Joker80's strength increases with time as expected, in the range from 0.4 sec to 36 sec per move, in a regular and theoretically expected way.'

'The effect you mention is observed NOT to occur and thus cannot explain anything that was observed to occur.'

Admittedly, I have no proof ... yet. Of course, this is due to Joker80 never having been playtested at truly long time controls (from my point of view).

'Now if you want to conjecture that this will all miraculously become very different at longer TC, you are welcome to test it and show us convincing results. I am not going to waste my computer time on such a wild and expensive goose chase.'

I respect your bravery in issuing the challenge. Although I would surely find the results of a randomized Joker80 vs. non-randomized Joker80 tournament at 60 minutes per move (on average) interesting, I am likewise unwilling to invest the few (3-4) months of my computer time that I estimate it would require to playtest 16 games under acceptable, reliable conditions (the arithmetic is sketched at the end of this comment). My refusal is due to it not being extremely important or worthwhile to me just to keep the chess variant community from losing one potentially great talent to numerology (or some such). Besides, I have nothing to gain and nothing new to learn by conducting this long, difficult experiment. Only you stand to benefit tangibly from its results.

I just cannot understand how any rational, intelligent man could believe that introducing chaos (i.e., randomness) is beneficial (instead of detrimental) to achieving a goal defined in terms of filtering out disorder to pinpoint order. When you reduce the power of your algorithm in any way to filter out inferior moves, you thereby reduce the average quality of the moves chosen and, consequently, the playing strength of your program, especially at long time controls. In other words, you are counteracting a portion of everything desirable that you achieve thru advanced pruning techniques used elsewhere within your program. Since you argue that randomization is no problem at all and I argue that randomization is a moderate-major problem, everything we say to one another is becoming purely argumentative. Only tests (which neither one of us intends to perform) can prove who is correct and settle the issue.

'As I explained, it is very easy to switch this feature off. But you should be prepared for significant loss of strength if you do that.'

To the contrary, you should be prepared for a significant gain of strength if you do that. Notably, you do not dare. In any event, the addition of the completely unnecessary module of code used to create the randomization effect you desire within Joker80 irrefutably makes your program larger, more complicated and slower. Can that be a good thing?
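For the record, the 'few (3-4) months' estimate is simple arithmetic. A minimal sketch (Python), assuming the same example figure of 80 moves per player per game that I used earlier:

minutes_per_move = 60      # average, per the proposed test conditions
moves_per_game = 2 * 80    # ~80 moves per player in a long, close CRC game
games = 16

total_hours = minutes_per_move * moves_per_game * games / 60
print(round(total_hours), "hours =", round(total_hours / 24), "days =",
      round(total_hours / (24 * 30), 1), "months")
# prints: 2560 hours = 107 days = 3.6 months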
'It would be very educational then to get yourself acquainted with the current state of the art of Go programming ...'

Go is a territorial game that is not related to Chess or its variants. The only thing Go has in common with Chess is that it is played upon a board using pieces. You did not directly address my comment.
Rest assured, I intend to drop this futile topic of conversation soon and leave you alone. The following is my impression of how the limited randomization of move selection that you have described as being at work within Joker80 must be harmful to the quality of moves made (on average) at long time controls. Since you have experience and knowledge as the developer of Joker80, I will defer to you the prerogative to correct errors in my inferred, general understanding of its workings.

short time control (1x)

At an example time control of 10 seconds per move (average), Joker80 cuts thru 8 plies before it runs out of time and must produce a move. At the moment the time expires, it has selected 12 high-scoring moves as candidates out of a much larger number of legal moves available. Generally, all of them score closely together, with a few of them even tied for the same score. So, when Joker80 randomly chooses one move out of this select list, it has probably not chosen a move (on average) that is beneath the quality of the best move it could have found (within those severe time constraints) by anything more than a minor amount. In other words, the damage to playing strength via randomization of move selection is minimized under minimal time controls.

long time control (360x)

At an example time control of 60 minutes per move (average), Joker80 cuts thru 14 plies (due to its sophisticated advanced pruning techniques) before it runs out of time and must produce a move. At the moment the time expires, it has selected only 4 high-scoring moves as candidates out of a much larger number of legal moves available. Generally, all of them score far apart, with a probable best move scored significantly higher than the probable second-best move. So, when Joker80 randomly chooses one move out of this select list, the chances are 3/4 that it has ignored its probable best move. Furthermore, it may not have chosen the probable second-best move, either. It could just as likely have chosen the probable third- or fourth-best move instead. Ultimately, it has probably chosen a move (on average) that is beneath the quality of the best move it may have successfully found by a moderate-major amount. In other words, the damage to playing strength via randomization of move selection is maximized under maximal time controls.

The moral of the story is that randomization of move selection reduces the growth in playing strength that normally occurs with time and plies completed. A toy calculation illustrating the point follows.
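As promised, a toy calculation (in Python). I must stress that the candidate counts and scores below are invented for illustration and are NOT taken from Joker80, whose actual selection rule only you know. The point is only that, when a move is chosen uniformly at random from a shortlist, the expected loss relative to the best candidate is the best score minus the average score of the shortlist, and that this loss grows as the shortlist scores spread apart:

def expected_loss(scores):
    # Expected shortfall vs. the best candidate when one shortlisted
    # move is chosen uniformly at random: best minus mean.
    return max(scores) - sum(scores) / len(scores)

# Short time control: 12 candidates with tightly clustered scores (centipawns).
short_tc = [102, 101, 101, 100, 100, 100, 99, 99, 98, 98, 97, 97]
# Long time control: 4 candidates with widely spread scores, best well separated.
long_tc = [140, 105, 95, 80]

print("short TC expected loss:", round(expected_loss(short_tc), 2))  # ~2.67
print("long TC expected loss:", round(expected_loss(long_tc), 2))    # 35.0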
I have read that most computer chess programmers use the brute-force method initially, when the plies can be cut thru quickly, and then switch to advanced pruning techniques to focus the search from then on. This led to my misinterpretation that Joker80 would have more moves under consideration as the best at short time controls than at long time controls. Some moves that score highly positive after only a few-several plies will score lowly positive, neutral or negative after more plies. Thus, I do not see how the number of moves under consideration as the best could avoid being reduced at least slightly as plies are completed. As a practical concern, there is rarely any benefit in accepting the CPU load associated with, for example, carrying a low-scoring positive move returned at 13-ply completion onward thru 14-ply completion when other high-scoring positive moves exist in sufficient number.
Upon reflection, I have no conceivable reason to be distrustful of using Joker80 IF I shut off its limited randomization of move selection, which Winboard F activates by default. Could you please give me example lines within the 'winboard.ini' file that would successfully do so? I need to make sure every character is correct.
Muller: Thank you for the helpful response. Frankly, I considered my own question so obvious as to be borderline-stupid, but I just wanted to be certain. The following entries within the 'winboard.ini' file should enable me to playtest (limited) randomized and non-randomized versions of Joker80 against one another. Note that the second init string keeps the default 'random' keyword, so that only the first instance has randomization switched off. Does it look alright? If/when I run out of more pressing playtesting missions, I may undertake this one after all.

/firstChessProgramNames={"Joker80 22"}
/firstInitString="new\n"
/secondChessProgramNames={"Joker80 22"}
/secondInitString="new\nrandom\n"

I have also sketched, at the end of this comment, what I believe is the equivalent single command line.

Unfortunately, I no longer plan to playtest sets of CRC piece values by Muller, Scharnagl and Nalls against one another. I think having the pawn set to 85 and the queen set to 950 (as required by Joker80) for all three sets of material values would have the unintentional side effect of equalizing their scales (which are normally different). This means that the Muller set would, in fact, be tested against something other than a true, accurate representation of the Scharnagl and Nalls sets.

I am currently in the midst of conducting several 'minimized asymmetrical playtests' using SMIRF at moderate time controls. I want to tentatively determine who is correct in disagreements between our models involving 2:1 or 1:2 exchanges (with supreme pieces). I have to avoid its checkmate bug, though. This requires me to take back one move whenever the program declares checkmate and 'call the game' if a sizeable material and/or positional advantage indisputably exists for one player. Fortunately, this is almost always the case. I will give a report in a few-several weeks.
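If I understand the WinBoard documentation correctly, the same test could also be launched from a single command line, as sketched below. The /fcp and /scp engine options do exist; the /mg (match games) shorthand, the exact quoting and the omission of any /variant or position-file option are my assumptions, so please correct anything that is wrong:

winboard /fcp="Joker80 22" /firstInitString="new\n" /scp="Joker80 22" /secondInitString="new\nrandom\n" /mg 16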
'Of course you could also use Joker80 or TJchess10x8, which do not suffer from such problems.' ____________________ While you were on vacation, I started a series of 'minimized asymmetrical playtests' using SMIRF. So, I will complete them using SMIRF. Joker80, running under Winboard F, has never acted buggy in computer vs. computer games. However, TJChess cannot handle my favorite CRC opening setup, Embassy Chess, without issuing false 'illegal move' warnings and stopping the game.
Hecker: It was fairly easy for me to replicate the bug I experienced. In fact, I have never once successfully played a computer vs. computer game to completion using TJChess10x8. So, you should be able to replicate the bug using the information I have provided. I hope you can fix it as well.

Bug Report: TJChess10x8
http://www.symmetryperfect.com/report
Using the mirror of Embassy Chess as a *.fen, TJChess10x8 runs fine now under Winboard F. Thanks!
Inconclusive Report

One type of 1:2 or 2:1 exchange I have been playtesting using SMIRF (versions MS-174b-O and MS-174c-O) involves a player missing 1 archbishop OR 1 chancellor versus a player missing 1 rook and 1 bishop. Generally, the results favored the Muller model, in which any 1 supreme piece in CRC (archbishop, chancellor, queen) has a material value significantly higher than any other 2 pieces (except 2 rooks).

Embassy Chess

(player without 1 archbishop) vs. (player without 1 rook + 1 bishop)
10 minutes per move
(player without 1 rook + 1 bishop) wins 2 games (playing white & black)
75% (3/4) probability of correctness

(player without 1 chancellor) vs. (player without 1 rook + 1 bishop)
15 minutes per move
(player without 1 rook + 1 bishop) wins 2 games (playing white & black)
75% (3/4) probability of correctness

Unfortunately, since I used standard versions of SMIRF loaded with Scharnagl CRC material values, the results became tainted due to a game between the (player without 1 chancellor) and the (player without 1 rook + 1 bishop) at 10 minutes per move. The player with the potentially game-winning 3:2 advantage in supreme pieces unnecessarily permitted the exchange of its 1 archbishop for 2 minor power pieces (i.e., 1 bishop + 1 knight). Eventually, a 3-fold repetition draw occurred.

Scharnagl: Please raise the material value of your archbishop within your CRC model. My experience has convinced me that it is obviously 1-2 pawns too low. Otherwise, I will be forced to abandon the use of SMIRF in favor of a program (such as Joker80) with more reliable CRC piece values when I return to this unresolved playtesting issue.
Muller & Scharnagl: Please note that I have revised my model again in consideration of recent playtesting results. This affects the material values of 'supreme pieces' in both FRC and CRC.

CRC material values of pieces
http://www.symmetryperfect.com/shots/values-capa.pdf

pawn 10.00
knight 30.77
bishop 37.56
rook 59.43
archbishop 98.22
chancellor 101.48
queen 115.18

FRC material values of pieces
http://www.symmetryperfect.com/shots/values-chess.pdf

pawn 10.00
knight 30.00
bishop 32.42
rook 50.88
queen 98.92

For details, please see:

universal calculation of piece values
revision: July 1, 2008
http://www.symmetryperfect.com/shots/calc.pdf
65 pages

Consequently, my current CRC model is more similar to the Muller model than any other, and my current FRC model is more similar to the Kaufman model than any other. Unfortunately, a 65-page explanation, even if it is 'elaborate sense', is not conducive to the 'short, convincing argument' you seek.
Conclusive Report (but without any evidence)

I began this round of playtesting using SMIRF MS-174b-O, which contained a bad checkmate bug. Since I regard it as inconsistent to:

1. present saved games unaltered whenever the checkmate bug did not present itself, YET
2. present saved games altered whenever the checkmate bug did present itself,

... I chose to present no saved games at all for the sake of consistency. In fact, I did not save any games at all generated via SMIRF playtests. This puts me in the strange position of playtesting mainly for my own interest, since I do not have the right to demand that anyone else take my word for the playtesting results I am reporting. [The latest version of SMIRF recently given to me by Reinhard Scharnagl, MS-174c-O, has never shown me a checkmate bug. Hopefully, it never will.]

Since I have been convinced thru playtesting recommended by Muller that the archbishop has a material value nearly as great as the chancellor in CRC, it occurred to me that it would be desirable to confirm the order of material values for the 'supreme pieces' (i.e., queen, chancellor, archbishop) used in all reputable CRC models. Accordingly, 3 asymmetrical playtests were devised. These are 1:1 exchanges involving a player missing 1 given supreme piece versus a player missing 1 different supreme piece. Generally, the results were as expected.

Embassy Chess

(player without 1 archbishop) vs. (player without 1 chancellor)
10 minutes per move
(player without 1 archbishop) wins 2 games (playing white & black)
75% (3/4) probability of correctness

(player without 1 chancellor) vs. (player without 1 queen)
10 minutes per move
(player without 1 chancellor) wins 2 games (playing white & black)
75% (3/4) probability of correctness

(player without 1 archbishop) vs. (player without 1 queen)
10 minutes per move
(player without 1 archbishop) wins 2 games (playing white & black)
75% (3/4) probability of correctness

order of material values of CRC pieces (from highest to lowest):
1. queen
2. chancellor
3. archbishop

By transitive logic, the third playtest could have been considered totally unnecessary. Nonetheless, I conducted it as a double-check on the consistency of the results from the first and second playtests. Although a 75% (3/4) probability per test could be improved upon greatly with a couple-few more games (the arithmetic is spelled out at the end of this comment), I am already satisfied that the results are correct and that something unexpected is not the reality. So, I will not be playtesting this issue further. There are more interesting and pressing mysteries awaiting tests.
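To spell out the arithmetic behind the recurring '75% (3/4) probability of correctness' figure, here is a minimal sketch (Python). Each game is treated as a fair coin flip under the assumption of equal strength, so n consecutive wins by the same side occur by pure luck with a chance of 0.5 ^ n:

# 'Probability of correctness' as used above: 1 - 0.5**n for n straight wins.
for n in range(2, 7):
    print(n, "straight wins ->", 1 - 0.5 ** n)
# 2 -> 0.75, 3 -> 0.875, 4 -> 0.9375, 5 -> 0.96875, 6 -> 0.984375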
This is a new feature request. I have 2 versions of SMIRF (version 0 [standard] and version 2) that I would like to playtest against one another using SMIRF-o-glot and Winboard F. Currently, this is impossible:

1. SMIRF-o-glot only executes the standard name of the SMIRF program.
2. SMIRF-o-glot requires the SMIRF program to be in the same directory to work.
3. Winboard F requires SMIRF-o-glot to be in the same directory to work.
4. Two separate installations of Winboard F (having two different versions of SMIRF) cannot communicate to work in coordination.

Manually playtesting 2 versions of SMIRF against one another (without using Winboard F and SMIRF-o-glot) would probably take 2-3 times as long. So, any solution that is not too labor-intensive for you, the programmer, would be greatly appreciated.
'I think that you could even put a single version of Smirfoglot in the WinBoard directory, as long as you tell it with the /fd argument where to look for the engine DLL if the Smirf directory is a subdirectory of the WinBoard directory.' Yes, the shortened argument works fine. Consider it tested. Thank you for the tech support.
Hecker: I am keenly interested to know what material values this strong program uses for CRC pieces.
World Chessboxing Organization http://site.wcbo.org/content/index_en.html
This must be changed to 'Unmentionable Chess Live'!
I appreciate the 3 versions of SMIRF loaded with different CRC material values that you sent me for testing purposes. I realize compiling them was not a productive use of your time toward developing Octopus or creating future versions of SMIRF. So, I sincerely hate to complain.

Internal Playtesting: Scharnagl
http://www.symmetryperfect.com/pass
Push the 'download now' button.

I played one game of Embassy Chess (mirror) at 40 minutes per move. The white player was version 0 (standard) and the black player was version 2 (highest archbishop value). The black player won. However, the victory was not attributable to the white player valuing its archbishop too low in an exchange. Instead, it was attributable to the white player valuing its queen too low in an exchange. White traded its 1 queen for 1 knight + 1 rook belonging to black. This gave black a 3:2 advantage in supreme pieces which, over the course of the game, was reduced to a 1:0 advantage in supreme pieces, which gave black the ability to out-position white in the endgame, gain material and win. The game was not even close or long, ending in 53 moves. I have seen this happen many times before. Of course, with version 0 and version 2 having identical material values for the queen, rook and knight, the game could just as likely have been 'thrown away' to the other player. That is the reason I cannot continue playtesting with what you provided to me.

Under the Nalls model (for example), there are 3 supreme piece enhancements: the non-color-bound enhancement, the non-color-changing enhancement and the compound enhancement. In CRC, they total a 43.75% bonus for the archbishop above the material value of its components (the bishop and the knight), a 12.50% bonus for the chancellor above the material value of its components (the rook and the knight) and an 18.75% bonus for the queen above the material value of its components (the rook and the bishop). [These percentages reproduce my posted CRC values exactly; see the small check at the end of this comment.] The entire purpose of the supreme piece enhancements is to provide a measurably appropriate deterrent against trading any supreme piece too lightly to your opponent and thereby ending up with a potentially game-losing disadvantage in the ratio of supreme pieces. The Muller model is similar in this respect.

If I had to choose only ONE foundation, experimental or theoretical, for my model, then I would choose experimental without apprehension. Of course, I am allowed to use both. So I do, because I remain hopeful that eventually, thru relentless effort, my theory will attain a worthwhile condition (that has previously eluded it) whereby the theoretical and experimental foundations become mutually reinforcing. I would characterize my position as regarding both the experimental and theoretical foundations as important (although I definitely consider the experimental foundation primary). I would characterize Muller's position as being that the experimental foundation is everything that matters and the theoretical foundation is just an unneeded crude, inaccurate approximation to experimental numbers, decorated with arbitrary words and concepts. Maybe so? I would characterize Scharnagl's position as being that the theoretical foundation is supremely important, as it must dictate and predict the optimum experimental numbers. [I agree that a great theory should be expected to do so.] Furthermore, the theory must be elegantly simple and intuitively accessible. [I consider this expectation unrealistic and impossible. Generally, the optimum material values for chess variants are too complex in their estimation-calculation to be reducible to simple formulae without sacrificing accuracy to an unacceptable extent.]

Scharnagl: Please reconsider revising your CRC model even if doing so unavoidably renders your theory somewhat more complicated in its concepts and formulae. The playing strength of SMIRF (standard version) can probably be improved significantly by taking such steps.
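As promised above, here is the small consistency check (a minimal sketch in Python; the printed values match my posted figures up to rounding in the last digit):

# Nalls model: supreme piece value = (sum of components) * (1 + bonus).
# Component values from the July 1, 2008 CRC table (pawn = 10.00).
knight, bishop, rook = 30.77, 37.56, 59.43

archbishop = (bishop + knight) * 1.4375   # 43.75% bonus -> ~98.22
chancellor = (rook + knight) * 1.1250     # 12.50% bonus -> ~101.48
queen = (rook + bishop) * 1.1875          # 18.75% bonus -> ~115.18

print(f"archbishop {archbishop:.2f}")
print(f"chancellor {chancellor:.2f}")
print(f"queen {queen:.2f}")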