Greg Strong wrote on Tue, Feb 27, 2007 01:55 AM UTC:
Michael,
I, for one, am happy to see you back. I know that communication in such
forums can be frustrating ... In the late 80's I thought BBS's were
pretty cool, and then they went downhill when more of the general public
jumped on board. Then, in the early 90's, the Internet, and UseNet in
particular, was very cool and filled with good information. Then the
Internet went public in 1995 and the signal-to-noise ratio on UseNet
became unbearable. And, yes, this site has gone downhill as well, and I
feel partly responsible. The whole user-submission of games was
originally my idea, and I'm sorry that I ever suggested it. But I think
that everyone who cares about Chess Variants should stick around, lest the
signal-to-noise ratio decline even further. If a user or a discussion
angers you, just try to ignore it ...
Mats,
My previous signal-to-noise ratio comment is not aimed at you (for the
most part). Your recent comments have, in my humble opinion, been a
little less delicate than they could have been, but my primary beef is
with a couple of other users who will remain nameless. Regarding
playtesting with Zillions, I would say it can be of some value,
especially if you carefully tweak it, but there is still no substitute
for human playtesting. Consider this: even if you tweak the
material values of the pieces, the computer is then playing with your
values! To a large extent, this is a self-fulfilling prophecy. In a game
between people, each person has a different idea about the value of the
pieces, and so the game helps to determine whose ideas are closer to
correct. Now you could do the same thing with Zillions by doing lots of
trials, testing one copy of Zillions against another, and giving each a
different evaluation of the pieces, then repeat this procedure,
refining each time until you zero in on the actual relative values. But
this takes lots of CPU time. Probably best is a combination: you play a
game with a person, you discuss the results, then you do some playtesting
with Zillions using the opinions of the different players, and try actual
board positions from the game between humans... then, as Zillions tries
moves different from what the people did, get the people to try those
positions... etc. Go back and forth. I think this sort of rigorous
study is best. But please don't get me wrong, I'm not criticizing you
for not doing this, as almost nobody does this. 99% of the games on here
are not really tested much, if at all. All I'm saying is don't jump to
conclusions about how thoroughly tested your games are. If you want a
thorough evaluation of the games and the pieces, why not pick a couple of
them, get a Game Courier preset created for them, get a couple of
games going, get some discussion going, etc.
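The iterate-and-refine loop described above (two engine copies with different piece valuations, winner's values kept, repeat until the values settle) can be sketched as a simple hill climb. This is purely illustrative: Zillions exposes no scripting interface for this that I know of, so `play_match` here is a toy stand-in in which the side whose values are closer to a hidden "true" vector wins. The piece names and numbers are invented for the example.

```python
import random

# Hidden "true" values -- unknown in practice; the toy match oracle uses
# them so the example is self-contained and deterministic.
TRUE_VALUES = {"knight": 3.0, "rook": 5.0}

def distance(values):
    """Squared distance from the hidden true valuation (toy yardstick)."""
    return sum((values[p] - TRUE_VALUES[p]) ** 2 for p in values)

def play_match(a, b):
    """Stand-in for an engine-vs-engine match (e.g. Zillions vs. Zillions).
    Returns +1 if side A's valuation wins, -1 if side B's does."""
    return 1 if distance(a) < distance(b) else -1

def tune(values, rounds=200, step=0.2, seed=0):
    """Repeatedly pit a perturbed copy of the valuation against the
    current one, keeping whichever wins -- the refinement loop from
    the comment above."""
    rng = random.Random(seed)
    values = dict(values)
    for _ in range(rounds):
        trial = {p: v + rng.uniform(-step, step) for p, v in values.items()}
        if play_match(trial, values) > 0:
            values = trial
    return values

start = {"knight": 1.0, "rook": 1.0}
tuned = tune(start)
```

As noted in the comment, the catch is that each "match" is expensive in CPU time, and with a real engine the outcome of a single game is noisy, so in practice you would need many games per comparison.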