
As far as I understand, roughly speaking, chess engines work by:

  1. calculating all possible variations (game tree) up to some depth
  2. evaluating the final position based on some criteria (material, piece activity...)
  3. based on this evaluation decide for the best move
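The three steps can be sketched in a few lines of Python. This is a toy illustration only: instead of real chess rules, it uses an abstract game tree (nested dicts with numeric leaves standing in for the static evaluation of step 2), and the move names are purely made up.

```python
# Step 1: expand all variations; step 2: evaluate leaves; step 3: pick
# the best-scoring root move. A leaf is a number (its evaluation); an
# inner node is a dict mapping move names to child nodes.

def minimax(node, maximizing):
    """Recursively score a node: max for White, min for Black."""
    if isinstance(node, (int, float)):
        return node                      # leaf: static evaluation
    scores = (minimax(child, not maximizing) for child in node.values())
    return max(scores) if maximizing else min(scores)

def best_move(node, maximizing):
    """Pick the root move whose subtree evaluates best for the side to move."""
    key = max if maximizing else min
    return key(node, key=lambda m: minimax(node[m], not maximizing))

# Two-ply example: White to move, then Black replies (move names illustrative).
tree = {
    "e4": {"e5": 0, "c5": -1},   # Black would answer c5, scoring -1
    "d4": {"d5": 1, "Nf6": 2},   # Black would answer d5, scoring +1
}
print(best_move(tree, maximizing=True))  # -> "d4"
```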

I fully understand that to have an efficient engine there are ways to prune certain lines, limit the depth, etc.; but this is not my question.

Question is: Are there any alternative attempts to program a (not necessarily strong, but not random either) chess engine, which does not follow this scheme?

user1583209
  • Botvinnik tried to prune at the root by having the computer search only the best candidate move. There hasn't been a successful attempt at this, and chess is better for it. – Fred Knight Aug 17 '17 at 08:11

3 Answers


In the early years of computer chess, people actually tried to teach computers chess the same way they teach humans, explaining strategic concepts like a healthy pawn structure or the initiative. These attempts were soon abandoned because the method you describe was much more successful.

Recently, there has been another attempt to let an engine teach itself chess via Deep Learning (probably encouraged by the success of Google's Go AI). According to the article I linked to, they were quite successful and managed to reach IM strength.

Glorfindel
  • According to my understanding, AlphaGo works exactly like the OP described, and only the evaluation function from step 2 is based on / created through deep learning. The other answer seems to kinda agree. – Hermann Döppes Jan 04 '17 at 18:17
  • "According to the article I linked to, they were quite successful and managed to reach IM strength" – but if you read the original paper, you will see that the article greatly exaggerated the success. – Salvador Dali Jan 04 '17 at 20:30
  • 2
    @HermannDöppes No, AlphaGo is based on Monte-Carlo tree search. – SmallChess Jan 05 '17 at 03:37

@Glorfindel is not wrong, but the deep-learning approach to chess is really a fancy term for parameter tuning in chess programming.

Deep learning allows a chess engine to learn an evaluation function, something usually hand-written by a programmer. During a game, it works like a normal chess engine.
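As a rough illustration of that point, here is a toy "learned" evaluation in Python: a linear model over material features stands in for a deep network, and the training sample and target value are made up for the example. The search around it would be unchanged; only the hand-written evaluation is replaced by the trained model.

```python
# Toy sketch: the engine's evaluation is a dot product of learned
# weights and position features, and "training" is plain gradient
# descent toward a target score. All numbers here are illustrative.
import random

FEATURES = ["pawn_diff", "knight_diff", "bishop_diff", "rook_diff", "queen_diff"]

def learned_eval(weights, features):
    """Evaluation as a dot product of learned weights and features."""
    return sum(w * f for w, f in zip(weights, features))

def sgd_step(weights, features, target, lr=0.01):
    """One supervised update nudging the prediction toward a target."""
    err = learned_eval(weights, features) - target
    return [w - lr * err * f for w, f in zip(weights, features)]

# "Training": start from random weights and fit one labelled sample
# (a pawn and a rook up, labelled roughly +6).
random.seed(0)
weights = [random.uniform(-1, 1) for _ in FEATURES]
sample, target = [1, 0, 0, 1, 0], 6.0
for _ in range(500):
    weights = sgd_step(weights, sample, target)

print(round(learned_eval(weights, sample), 2))  # ≈ 6.0 after training
```

During play, the engine would simply call `learned_eval` wherever a hand-written `evaluate()` used to be called.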

Other possibilities:

  • GPU chess programming
  • Monte-Carlo tree search
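To give a flavor of the Monte-Carlo idea (this is flat Monte-Carlo evaluation, not full MCTS, which additionally grows a search tree with a selection rule such as UCT): estimate each candidate move by playing many random games to the end and averaging the results, with no static evaluation at all. The game below is Nim (take 1-3 stones, whoever takes the last stone wins), chosen only because its rules fit in a few lines.

```python
# Flat Monte-Carlo move selection for a tiny game, as a stand-in for
# the same idea applied to chess or Go.
import random

def random_playout(stones, my_turn):
    """Play uniformly random moves to the end; True if 'my' side wins."""
    while stones > 0:
        stones -= random.randint(1, min(3, stones))
        if stones == 0:
            return my_turn          # whoever just moved took the last stone
        my_turn = not my_turn
    return not my_turn              # pile already empty: previous mover won

def monte_carlo_move(stones, playouts=5000):
    """Pick the move with the best estimated win rate from random playouts."""
    def win_rate(take):
        wins = sum(random_playout(stones - take, my_turn=False)
                   for _ in range(playouts))
        return wins / playouts
    return max(range(1, min(3, stones) + 1), key=win_rate)

random.seed(1)
print(monte_carlo_move(10))   # optimal: take 2, leaving the opponent a multiple of 4
```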
SmallChess
  • Most of your answer should be a comment under [Glorfindel's answer](http://chess.stackexchange.com/a/16293/2789). The part that actually addresses the question consists of just seven words, which isn't nearly enough for an answer. Also, "GPU chess programming" is just a way of parallelizing whatever algorithm you might otherwise be using, so I don't think it's really an "approach" in the sense that the question is looking for; rather, it's just an implementation method. – David Richerby Jan 04 '17 at 20:34
  • There is a misconception about machine learning here. Well, I guess I should look at the dates. By now, even Stockfish followers and developers, having trained NNUE to approximate the exhaustive-search part of the engine, would know that while this is also a global optimization over parameters, there is a crucial aspect of generalization to new inputs. In reinforcement learning it is not as clear as in supervised learning, but it is still there in the exploration-versus-exploitation dilemma: there remains room to learn, from less than all inputs, the best answers for other inputs. – dbdb Nov 22 '23 at 21:02

Certainly! If you truly just mean "in theory, are there other methods to code a chess engine?", then yes!

For instance, one could store a copy of every possible position in chess (a huge number, I know) and have an evaluation for each one. Then its answer to any given question (e.g., "best move for White in position X") would be known immediately just by looking up that board. Is the current state of computer hardware such that this would make any sense? Nah. But you didn't ask that.
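For a game small enough to enumerate, this lookup idea actually works (it is essentially how endgame tablebases are built). Here is a toy Python sketch that solves every position of a simple subtraction game once, then "plays" by pure table lookup with no search at move time; the game stands in for chess, whose position count rules out any real table.

```python
# Solve all positions of a subtraction game (take 1-3 stones, taking the
# last stone wins) smallest-first, then answer queries by dict lookup.

def build_table(max_stones):
    """Precompute (best_move, is_winning) for every position."""
    table = {}
    win = {0: False}   # 0 stones left: the player to move has already lost
    for n in range(1, max_stones + 1):
        moves = range(1, min(3, n) + 1)
        # A move wins if it leaves the opponent a lost position.
        winning = [t for t in moves if not win[n - t]]
        best = winning[0] if winning else 1   # lost position: move is arbitrary
        win[n] = bool(winning)
        table[n] = (best, win[n])
    return table

TABLE = build_table(100)

def lookup_move(stones):
    """'Best move in position X' is now an immediate dictionary lookup."""
    return TABLE[stones][0]

print(lookup_move(10))   # -> 2: leaves 8, a lost position for the opponent
```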