
Engine moves aren't really like that though. They aren't dramatic strokes of brilliance. They're slightly odd moves that pay off 20-30 moves later.

And it's a little bit of a metagame. Engines are ultimately playing against themselves when determining the best move. So they might discard a brilliant-looking move because of some line that no human is reasonably going to come up with. Similarly, they might make a brilliant move that humans don't recognize as such, because no human can analyze as deeply as an engine. So when a computer looks at a board and says there is/isn't a great move, that's not necessarily the same as what a person would think.



I don't entirely disagree but consider the following:

1) To be useless to a top player, the decision tree needs to offer only long-term gains and be barren of short-term, human-readable profits.

2) In the same vein, the metagame pruning process you describe has to unfold in the context of a game where most of the previous moves were played by humans, not machines, so there is a much greater chance of an imbalance compared to a purely computed position.

3) The above criteria of long-term, short-term, readability, etc. must be defined relative to players who are already about 95% accurate. The missing percentage means they always get smoked by machines over the course of a game, but they can still have a remarkable ability to see good lines.


The problem is that if we're sending 1 bit of information, it's something along the lines of "go for it" or "something's there." That's just not something an engine can tell you outside of specific scenarios where there is a really good move to be made.

You probably could rejigger the engine to take into account the likelihood of a human finding the right moves, and the relative downside. For example, an engine might reject a move that only leaves it a half pawn down if the opponent finds 25 difficult moves in a row. Against a flawed human player, that risk/reward might well be worth taking.
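That rejiggering could look something like the sketch below: weight each line's engine eval by the probability that a human defender finds every difficult reply. Everything here is illustrative — the probability model, the helper names, and the numbers are made up for the example, not real engine output.

```python
# Sketch: "practical" value of a line against a fallible human defender.
# All helpers and numbers are hypothetical, not a real engine API.

def human_reply_prob(difficulty):
    """Chance a human finds the correct defensive reply, where difficulty
    runs from 0.0 (obvious) to 1.0 (engine-only). Purely a stand-in model."""
    return max(0.0, 1.0 - difficulty)

def practical_value(eval_if_refuted, defensive_replies):
    """Expected eval (in pawns) of a line versus a human.

    eval_if_refuted: engine eval if the opponent finds every hard reply.
    defensive_replies: list of (difficulty, eval_if_missed) pairs, one per
        hard reply; eval_if_missed is our eval from the moment they slip.
    """
    p_survive = 1.0  # probability the human has found every reply so far
    value = 0.0
    for difficulty, eval_if_missed in defensive_replies:
        p_found = human_reply_prob(difficulty)
        # Opponent slips here with probability p_survive * (1 - p_found).
        value += p_survive * (1.0 - p_found) * eval_if_missed
        p_survive *= p_found
    return value + p_survive * eval_if_refuted

# The scenario above: a line that is -0.5 only if the opponent finds 25
# moderately hard moves in a row, and roughly winning (+3) if they slip.
speculative = practical_value(-0.5, [(0.2, 3.0)] * 25)
safe = practical_value(0.0, [])  # the "objectively best" quiet line
```

Under these made-up numbers, `speculative` comes out close to +3 even though the pure engine eval of the line is -0.5, which is the intuition for why the risk/reward can favor the "unsound" move against a human.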

Perhaps it could also work in conjunction with another GM that you train with regularly. They can use an engine and do the evaluation above themselves.


I see what you mean. This would indeed be most useful with an accomplice, and a lot dicier and ambiguous without.


Humans can find engine moves, especially when they have hours on the clock like in classical chess. If you can consistently spend the majority of your time on the most critical moments when game-breaking moves could be found with enough calculation, that's enough to tip the scales dramatically over many games.

Sure, the human won't find the move every time, but it's still a huge edge.


But the point is that there's not really ever this mythical one great move that's going to win you the game. Look at Stockfish's analysis of the game:

https://lichess.org/broadcast/sinquefield-cup--grand-chess-t...

There's not a single move that significantly improved black's position.


There were several missed opportunities that the engine gave a 0.3 to 0.5 point advantage to. One point in the engine rating is roughly equivalent to one pawn, and many endgames are won or lost based on having an extra pawn or two. Two missed moves of 0.5 points each add up to a significant improvement.
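Concretely (illustrative numbers only), engines typically report evals in centipawns, so the arithmetic is just:

```python
# Engines usually report evals in centipawns: 100 cp is roughly one pawn.
# Two missed chances worth 0.5 pawns each, as in the example above:
missed_chances_cp = [50, 50]
total_pawns = sum(missed_chances_cp) / 100  # about a full pawn over the game
```

A full pawn of accumulated advantage is often the margin between a drawn and a won endgame.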

From my experience watching tournaments, it happens regularly that missed opportunities would have given someone a 1 or 2 point engine rating advantage.


Chess.com already has "fine-tuned" AIs that play at a given Elo level, e.g. 1800, 2200, IM, GM. I'm sure you could constrain Stockfish to play just above your target opponent's level and thus avoid this issue.
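Stockfish itself exposes UCI options for this. A minimal config fragment, sent over the UCI protocol (the option names are real Stockfish options; the Elo value is just an example):

```
setoption name UCI_LimitStrength value true
setoption name UCI_Elo value 2500
```

With `UCI_LimitStrength` enabled, the engine deliberately weakens its play toward the `UCI_Elo` target rather than searching at full strength.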



