
I’ve seen a video of Magnus Carlsen playing on Lichess before. Does Irwin ever accidentally flag him or people like him? Do these sorts of folks have to be verified in some fashion?


Cheating looks like more than just playing relatively accurate moves. Average move time, centipawn loss over multiple games, blunder/mistake frequency across multiple games, strength of moves while in time trouble, etc. Cheaters tend to stand out when you look at a short history of games.
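
For a sense of what those features look like in code, here's a minimal sketch (my own illustration, not Irwin's actual implementation) that computes average centipawn loss and blunder counts for one game. It assumes python-chess and a local Stockfish binary on PATH; the function name and thresholds are made up:

    import chess
    import chess.engine
    import chess.pgn

    def game_features(pgn_path, engine_path="stockfish", depth=14):
        # Illustrative sketch: per-game stats of the kind a detector
        # might aggregate across a player's recent history.
        engine = chess.engine.SimpleEngine.popen_uci(engine_path)
        losses = {chess.WHITE: [], chess.BLACK: []}
        with open(pgn_path) as f:
            game = chess.pgn.read_game(f)
        board = game.board()
        limit = chess.engine.Limit(depth=depth)
        for move in game.mainline_moves():
            mover = board.turn
            before = engine.analyse(board, limit)["score"].pov(mover)
            board.push(move)
            after = engine.analyse(board, limit)["score"].pov(mover)
            # Centipawn loss: how much the mover's eval dropped this move.
            drop = before.score(mate_score=10000) - after.score(mate_score=10000)
            losses[mover].append(max(0, drop))
        engine.quit()
        return {
            "avg_cp_loss": {c: sum(v) / len(v) for c, v in losses.items() if v},
            "blunders": {c: sum(1 for x in v if x >= 200) for c, v in losses.items()},
        }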


That would clearly work for someone who is cheating by letting an engine do all the work, but how about someone who mostly plays the game themselves just using the engine rarely?

Give Nakamura a minute with Stockfish any two times of his choosing in each game, and he would have probably won the Candidates.

Heck, just give a good player a blunder alert that tells them when they have blundered, and it could make a big difference.

There were games in the Candidates where a player would make a blunder that would have completely turned the game around if the opponent had found the one move that exploited it, but the opponent didn't see it. The first player could then have saved themselves, but not yet realizing they had blundered, didn't. Only later did the other player see what was going on and exploit the blunder.
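
A blunder alert like that is only a few lines with any UCI engine. A rough sketch, assuming python-chess and a local Stockfish binary (the name and threshold are purely illustrative):

    import chess
    import chess.engine

    def blundered(board, move, engine, threshold_cp=150, depth=16):
        # One-bit signal: did this move drop the eval by more than
        # threshold_cp centipawns from the mover's point of view?
        # Illustrative sketch, not any site's actual tooling.
        mover = board.turn
        limit = chess.engine.Limit(depth=depth)
        before = engine.analyse(board, limit)["score"].pov(mover).score(mate_score=10000)
        board.push(move)
        after = engine.analyse(board, limit)["score"].pov(mover).score(mate_score=10000)
        board.pop()
        return before - after > threshold_cp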


I don't think it's that helpful if you're letting Stockfish make a move or two for you per game, or at least at the level I'm at.

The engine is so good that it often makes moves that are incomprehensible, setting itself up for an attack in n moves where n is often 10+.

If you did want to cheat (but what's the point?), a Chrome extension that prevented you from making moves that lost more than some threshold of centipawns would be the way to do it.
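
The engine-side check for such an extension (browser plumbing aside) is just a filter over legal moves. A sketch under the same assumptions as above, python-chess plus a local Stockfish, with made-up names and limits:

    import chess
    import chess.engine

    def moves_within(board, engine, max_loss_cp=100, depth=12):
        # Keep only moves that stay within max_loss_cp centipawns of
        # the engine's evaluation of the position's best move.
        mover = board.turn
        limit = chess.engine.Limit(depth=depth)
        best = engine.analyse(board, limit)["score"].pov(mover).score(mate_score=10000)
        keep = []
        for move in board.legal_moves:
            board.push(move)
            cp = engine.analyse(board, limit)["score"].pov(mover).score(mate_score=10000)
            board.pop()
            if best - cp <= max_loss_cp:
                keep.append(move)
        return keep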


Anand talked about that a while back. He said even 1 bit of information from Stockfish per game would be worth a significant number of rating points, and would be quite hard to detect. I.e., you are in a position where you could choose to play a solid move, or alternatively to launch a risky combination. Stockfish explores the combination and gives you a 1 or 0 saying whether it will work.
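
That oracle is trivial to build, which is what makes it scary. A sketch of the 1-bit query (illustrative names; same python-chess/Stockfish assumptions as the other snippets here):

    import chess
    import chess.engine

    def combination_works(board, first_move, engine, depth=24):
        # The 1-bit oracle: play the combination's first move and ask
        # whether the engine still thinks the mover is at least equal.
        mover = board.turn
        board.push(first_move)
        info = engine.analyse(board, chess.engine.Limit(depth=depth))
        board.pop()
        return 1 if info["score"].pov(mover).score(mate_score=10000) >= 0 else 0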

I believe cheat detection is done partly by humans. Computers flag something as suspicious and then they show the suspicious moves to some grandmasters who might immediately say "no human would do that", or else "yeah that move looks weird but I could imagine someone making it", that type of thing. There was a YouTube video of Nakamura looking at such a position a while back. The person had a chance to sac some material in order to simplify to a trivially winning endgame, but instead carried out a ridiculously complicated maneuver that kept the material. Just the sort of thing a computer would do, and Naka pointed it out.


> That would clearly work for someone who is cheating by letting an engine do all the work, but how about someone who mostly plays the game themselves just using the engine rarely?

That's exactly what those engines try to detect. If you are an average player, but every time you start losing you get significantly better, then there is a good chance that you are cheating. That's what looking through history gives you: it's about finding deviations from your usual behaviour.
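
That deviation check doesn't even need an engine at query time, just per-move records extracted earlier. A toy sketch (the record shape and both thresholds are made up for illustration):

    from statistics import mean

    def plays_better_when_losing(moves, losing_cp=-150, boost_cp=40):
        # moves: list of (eval_before_move_for_player, centipawn_loss).
        # Flag the pattern where accuracy jumps once the player is worse.
        overall = mean(loss for _, loss in moves)
        losing = [loss for ev, loss in moves if ev <= losing_cp]
        if not losing:
            return False
        # Positive boost = fewer centipawns lost (better play) when losing.
        return overall - mean(losing) > boost_cp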


Subtle cheating is always hard to detect automatically. You can't tell with high confidence that someone used an engine for one move every couple of games, for instance, because sometimes humans find those rare moves on their own. Thankfully, a lot of cheaters seem to have an ego that pushes them to cheat more and more, so they tend to get weeded out.


[flagged]


It is a comprehensive answer.

Top human players tend not to make the moves engines make. They also stay relatively strong when under time pressure, where many humans re-entering moves from a computer fail.

Playing someone who takes 10 seconds for each move, whether there is only one valid recapture or the situation is complicated, you get suspicious. And then when they forget how to play once they have 3 seconds per move, that stands out.


Chess.com has a measurement of how accurately a player plays compared to top engine moves, called Computer Aggregated Precision Score (or "CAPS"), and top humans do play incredibly accurately. Magnus has a CAPS of 98.36, for example.

https://www.chess.com/article/view/who-was-the-best-world-ch...
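
CAPS itself is proprietary, but a crude cousin, the fraction of moves matching the engine's first choice, is easy to compute. A sketch assuming python-chess and a local Stockfish binary (not the real CAPS formula, just the intuition behind it):

    import chess
    import chess.engine
    import chess.pgn

    def top_move_match_rate(pgn_path, player, engine_path="stockfish", depth=16):
        # Fraction of `player`'s moves that equal the engine's first choice.
        engine = chess.engine.SimpleEngine.popen_uci(engine_path)
        with open(pgn_path) as f:
            game = chess.pgn.read_game(f)
        color = chess.WHITE if game.headers["White"] == player else chess.BLACK
        board, hits, total = game.board(), 0, 0
        for move in game.mainline_moves():
            if board.turn == color:
                best = engine.play(board, chess.engine.Limit(depth=depth)).move
                hits += move == best
                total += 1
            board.push(move)
        engine.quit()
        return hits / total if total else 0.0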


I think that as a measure of skill vs. a computer that statistic doesn't tell you all that much. Note that even in that measure Magnus only picks the top move ~85% of the time, and that includes many moves where the top move is "obvious" which will inflate the average a bit. And it needs to be kept in mind that Magnus cannot win against a modern computer, so clearly that 85% only means so much, computer play is still on a whole different level from human play.

Beyond that, time is a very important factor, running out the clock is one of the most effective ways to win against cheating opponents. Magnus only finds the top move ~85% of the time and takes probably 10x to 100x the time to do it. And like the other commenter noted, variance in the move time is an extremely obvious tell. Magnus will blitz out certain moves (even in classical chess) and then think for ~40 minutes in other complicated positions. A computer can do both positions in basically the same amount of time, and cheating players typically don't know when they "should" be thinking vs. playing fast.


Note that even his "top engine match" is 85%. This is not a fantastically high number, given that maybe 40% of moves are obvious/must-moves. I bet I match the engine ~60%.

I also think this is his classical games, not rapid, blitz, or bullet.

Top human players tend to make safer, lower-variance moves rather than lines an engine only proves safe through deep evaluation. They may give up a few centipawns for ease of analysis.


Are you trolling? No it isn't?

If nothing else, the time issue is a huge one. Cheaters have very consistent seconds-per-move while actual masters make obvious moves instantly and pause on tougher moves.
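
The timing signal is the cheapest of all to compute. Given one side's [%clk] annotations from a PGN, a sketch (ignoring increments, which a real check would not):

    import re
    from statistics import pstdev

    CLK = re.compile(r"\[%clk (\d+):(\d+):(\d+)\]")

    def move_time_spread(clock_comments):
        # clock_comments: the raw PGN comments for one side, in order.
        # Returns the spread of seconds spent per move; a near-zero spread
        # over many moves is the metronome tell described above.
        secs = []
        for c in clock_comments:
            m = CLK.search(c)
            if m:
                h, mi, s = map(int, m.groups())
                secs.append(h * 3600 + mi * 60 + s)
        spent = [a - b for a, b in zip(secs, secs[1:])]  # clock counts down
        return pstdev(spent) if len(spent) > 1 else None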


I suppose that only completely newbie cheaters would think of taking the same amount of time to make each move. It’s like the first rule of pretending to be a human: add a random delay to all of your actions.


And yet time and time again, chess streamers and content producers run across people with newly formed accounts and perfect win records playing the best engine move at regular measured intervals.

If you are cunning enough to hold off on the best move for a few extra seconds to appear unaided by an engine, or if you blunder X% moves in your game (or simply play the 3rd or 4th rated move which still probably wins against most humans), chances are you'd do fine learning some chess strategy and playing the game unaided.

People keep asking why one would cheat at chess. I'm sure there are some bad actors who aim to disrupt the game, in a manner consistent with cheater motivations in other games. I'd imagine many cheaters are simply looking for some quick dopamine after being frustrated by a plateau in their skill.


A random delay is also a tell, though. An obvious move shouldn't take 15 seconds, but if that's what the random delay for that move is, that looks suspicious. A more difficult move shouldn't take only 3 seconds, but if that's what the random delay for that move is, that also looks suspicious.


Definitely: random time adds no correlation where it would count. Move time is an easily captured psychometric observation; the clever bit is that it's intermingled with automated chess analysis.

Feels like there could be a lot of surprising inferences to think about here. Just a few quick thoughts: how long does someone pause after a blunder? How do they react to unpredictable moves? Can definitely imagine AI being of significant utility here.


In general you’re right, but there are some tells because humans don’t have a random delay. It’s connected with the complexity (from the perspective of how humans think, which is different from engine calculation) of the position. One of the things that makes cheaters stand out is they will have a random weird delay on various obvious moves.
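
One way to quantify that: correlate time spent with a crude proxy for how critical the position is, such as the eval gap between the engine's top two moves. A sketch (Python 3.10+ for statistics.correlation; python-chess and a local Stockfish assumed, as elsewhere in this thread):

    import chess
    import chess.engine
    from statistics import correlation  # Python 3.10+

    def time_vs_criticality(positions_and_times, engine, depth=14):
        # positions_and_times: list of (board, seconds_spent) for one player.
        # Human thinking time correlates with position features like this;
        # a uniformly random delay correlates with nothing.
        gaps, times = [], []
        for board, seconds in positions_and_times:
            infos = engine.analyse(board, chess.engine.Limit(depth=depth), multipv=2)
            if len(infos) < 2:
                continue  # only one legal move
            pov = board.turn
            gap = (infos[0]["score"].pov(pov).score(mate_score=10000)
                   - infos[1]["score"].pov(pov).score(mate_score=10000))
            gaps.append(gap)
            times.append(seconds)
        return correlation(times, gaps)  # near zero looks synthetic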


I thought it was a reasonably good answer considering that the question doesn't explicitly spell out why it might flag someone like Magnus Carlsen (who I learned is a grandmaster and World Chess Champion). To rephrase it, Irwin doesn't flag people for playing well — it alerts moderators when it finds suspicious signals across several dimensions.


High level humans and chess engines play differently. You can see commentary on chess YouTube videos when they run across cheaters.

Also, the chess engines people use are accessible, so you can compare what a suspected cheater does with what the engine does, and if they're exactly the same consistently, you have a pretty strong signal.

One of the bigger tells is strange moves that set up a long series of moves resulting in a victory, things that humans just can't find quickly.


It's not that simple. I'm an amateur player (2200 at lichess) but there are plenty of situations where I simply _know_ the best variation. Plenty of chess players analyze their own games using an engine and then use their memory when they are confronted with the same position. The same with opening theory: when I'm using my opening preparation, I'm playing at GM level as these are just the moves played by GMs in that position, and I did not need to find/calculate them, I just know them.


This is a very strong claim that is almost certainly false.

Would you be willing to reveal your account so that it can be independently verified? For example I'm 2116 on lichess and looking over the last 10 opponents who are in the neighborhood of 2200, it is never the case that their moves are optimal compared to a chess engine. For the first 10 or so moves yeah sure, just play a book opening, but beyond that people at 2200 make plenty of mistakes every single game including blunders.

The idea that you can consistently make optimal moves over the course of a 30-40 move game beyond the book opening requires some kind of evidence because in examining the last 10 games of 10 accounts arbitrarily picked, there isn't a single one that isn't absolutely full of inaccuracies and mistakes.


Sure, I blunder and I never claimed it to be always the case, but sometimes I know a tree of variations really deep, just because I remembered it from looking at it together with a computer. Anyway, here's my account on lichess: toolslive


Your explanation doesn't really distinguish high-level human play (not obvious cheating) from engine play (obvious cheating). There are probably plenty of games in which your first 20 moves (or 5-10 for me) are in GM databases and evaluated favorably by the engine; as a 2200 you're probably booked up pretty well. But you know computer moves aren't so common in openings; they're much more common in complex middlegames and endgames, when the computer can calculate more combinations than we can and is able to produce lines that break intuition and principles but are strictly best.


My point is that high level human play online can be caused by computer analysis offline. One cannot observe a difference, as the moves are exactly the same.


But for the vast majority of players there will be a difference in computer lines and human evaluation; even if you're playing a strong game there is still a world of difference between 2500 lichess and ~3300 FIDE stockfish. This is more true for the median ~1500 lichess player. Even if you put two GMs up against each other and give one a computer, some portion of games would include an obvious computer line that a ~2800 FIDE human wouldn't evaluate the same way as an engine.


> The same with opening theory: when I'm using my opening preparation, I'm playing at GM level as these are just the moves played by GMs in that position, and I did not need to find/calculate them, I just know them.

I wouldn't count that as playing the opening at GM level unless you understand why GMs play those moves.

Around 1990 there was a chess teacher and coach named Richard Shorman who would come to a public chess club that met weekly in Sunnyvale and give free advice to people who had hit plateaus and just couldn't seem to get better no matter how much they played and studied and analyzed. People would show him their games from recent tournaments and he'd analyze them and give advice to get unstuck. This was all out in the open, so even those of us who had not brought games could watch and learn.

The people attending these sessions typically ranged from beginners who if they were rated were somewhere under 1000 USCF all the way to people in the 2000-2200 range who had been stuck in that range for years.

One of the big problems Shorman found with pretty much everyone there was that everyone wanted to play like a Karpov or a Kasparov. They studied the openings such players played, memorized all the variants of those openings from ECO, bought and read books on those openings, and studied the games with those openings from top tournaments.

So yeah...they might play 25 moves of a game just like Karpov or Kasparov would have because they are copying from a Karpov or Kasparov game that followed the same line. But what happens when their opponent plays a bad move? If it is so bad there is an immediate tactical refutation, maybe they find that (especially if they are around 2100). But at the Karpov/Kasparov level there are a lot of bad moves where the move isn't bad for some short term tactical reason. It's bad because it gives some small weakness that a GM, over the course of the next 20 or 30 moves, can exploit to eventually allow some winning tactic.

And if your opponent does stay in your opening book to the end...then you just find yourself in a position that is supposed to be good for you, but without knowing why. A GM would know why they are better and how to use that.

It always does eventually come down to tactics, but the deeper you understand tactics the more you start to understand positional concepts and how they make it so certain tactics will or will not work. You can't really understand the positional stuff until you understand the tactical stuff. When you try to play like a GM too soon, you don't yet have the tactical skill to understand the positional stuff, and you get stuck.

One way Shorman put it was something like "Before you can play good chess you have to be good at playing bad chess".

For the lower rated players Shorman would tell them to play gambits. They might not be sound against high rated players but that's not who lower rated players are playing. They should be aiming for unbalanced positions and playing the most aggressive moves that they can't see a tactical refutation for.

As for books, what he'd tell the lower rated players to get was a collection of Morphy's games and skip to the end where it has the games where he gave odds or was playing simuls against amateurs--the games where Morphy needed to crush people.

That's what he meant by "bad chess"...the kind of chess people played in the 19th century.

For higher rated people, like a couple friends of mine who were stuck around 2000-2100, he'd still tell them to play unbalanced openings and play aggressively, but not in the balls-out, channeling-Morphy way that worked for the lower rated people. Gambits were still recommended, but now ones that were not played at top level because the other side could equalize or get a slight advantage too easily, rather than because they might actually be unsound.

That got both my friends off their long-standing plateaus.

I was only around 1600, and wasn't playing tournament chess anymore (I was instead playing in Go events at the Palo Alto Go Club), so never got a chance to see if Shorman could get me unstuck, but I brought a list of my chess books to Shorman to see which if any I should actually read.

Younger chess players might not realize just how many chess books even casual chess players would accumulate back in the days before internet. Here was my list [1], and this was by no means a large collection for someone around 1600.

Shorman praised a few of them as good books that could teach a lot--and then told me to set them aside and read specific ones of them such as "My System" whenever I hit 2200, and in the meantime go get Morphy's games and skip to the odds games. (I never did get back into tournament chess, except for a couple of events).

I think Shorman's points and methods are still sound, but now with internet and online chess and computers that can automatically generate tactics training from real games we can probably go about applying them more efficiently.

We don't have to seek unbalanced positions and play aggressively in them in order to get tactical practice now--we've got tactics trainers. And now we can play more serious games against good opponents in a week than we might be able to get in a whole year of tournaments in 1990.

[1] https://pastebin.com/mw3q1784


> everyone wanted to play like a Karpov or a Kasparov.

I wonder if there's a modern analog to this with how super GM styles have mostly converged. Trying to play like Magnus is as silly as trying to play like an engine. Even the most aggressive players (Nepo, Shak?, Rapport) aren't so wildly different in style.

Maybe the 2010-2020s version of this is bandwagoning onto popular theory, like all the Najdorf lines I know I'll never understand.


> High level humans and chess engines play differently. You can see commentary on chess YouTube videos when they run across cheaters.

That's similar to people describing how to catch other frauds, such as fake Amazon comments or bots. It's medieval 'science': they usually have no evidence of their accuracy, neither false negatives (frauds they overlook) nor false positives (people falsely accused of fraud). So it's easy to say 'this is how to identify them'; nobody will ever test your claim.

Regarding false negatives, for example, there is reason to believe that people detect only the obvious frauds, and that our detection becomes tuned for the obvious. Regarding false positives, people will cite the 'obvious' positives - e.g., some humanly impossible property - but even if they are correct, the problem is the cases in the grey area. False accusations are no joke.

Ironically, now we want a bot to solve our problems. What data do we have to say that it's accurate, or any more accurate than we are?
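
Put concretely: judging any detector needs labeled ground truth, which almost never exists here. The arithmetic itself is trivial; a sketch with made-up names:

    def detector_report(flagged, cheated):
        # flagged, cheated: parallel lists of booleans for the same accounts.
        # Without the `cheated` labels (ground truth), neither number below
        # can be known, which is the grey-area problem described above.
        tp = sum(f and c for f, c in zip(flagged, cheated))
        fp = sum(f and not c for f, c in zip(flagged, cheated))
        fn = sum(c and not f for f, c in zip(flagged, cheated))
        precision = tp / (tp + fp) if tp + fp else None  # false accusations
        recall = tp / (tp + fn) if tp + fn else None     # overlooked frauds
        return {"precision": precision, "recall": recall}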


> It's medieval 'science': They usually have no evidence of their accuracy,

I mean, not really. The difference between human and computer chess play-styles is well-documented, to the extent that in the earlier days of chess engines, human chess disciplines were developed to counter the way computers play ("anti-computer chess").


If it makes you feel better (or worse), signal "fingerprinting" is used in laboratory science to verify things like purity and identity. Detecting a lack of divergence from a known chess bot seems like a good fingerprint to me.


People have been asking for a transparency report to verify accuracy for various countermeasures for a while. The platforms simply won't release that kind of information (chess.com & lichess.com).

They have an incentive to show their game isn't a den of cheaters, and yet they don't release the data, which means there must be a stronger incentive to hide that information.

Makes you wonder what kind of incentives are preventing them from releasing that information. Marketing says hundreds of millions of games. Are they games between two people, or potentially a lot of matches against computers (where you don't know they are computers up front)? Food for thought.


Thing is... if your methods work well enough to help you avoid the thing you wish to avoid, does it matter if it's right or wrong? It works, right?


> if your methods work well enough to help you avoid the thing you wish to avoid, does it matter if it's right or wrong?

You don't know if it's helping you at all; that's the issue. The latter question is a bit bizarre.


Not lichess, but Alireza Firouzja (World Rank #3) was banned from chess.com when he was younger.

It was some time ago, so presumably their cheat detection, and Lichess's as well, should give fewer false positives by now.


> Not lichess, but Alireza Firouzja (World Rank #3) was banned from chess.com when he was younger.

This has turned into a bit of an urban legend. The automated system flagged him based on both his rapid rise in rating and reports from several verified titled players.

On inspection by a human, he was cleared.

Danny exaggerates when he tells the story because it's a funny anecdote.

This was also a very old version of the anti-cheat like you mentioned. Personally, part of the reason I prefer chess.com is their much better cheat detection than Lichess.


My understanding is that that was not about his actual play, but instead about his rapid rise in rating.


I would assume they verify titled players to avoid any potential liability or defamation legal issues but they aren't the most professional bunch so who really knows. There are a lot of questionable practices they've done over the years as an organization.

As for Irwin and its detection rate and thresholds: it has fairly high false-positive rates inherent in the model.

From my experience, if you get banned don't expect any kind of due process. They aren't professional, they don't respond. Not even the legal contact on their charity.

Not everyone gets banned for cheating.

They do ban people from the lichess site for many other unprofessional reasons such as contributing to the project (when they don't like what you had to say on an issue and you weren't spamming).

I have to wonder how many similarly named accounts got axed alongside mine when they decided to go after me for what I disclosed. It's not like my GitHub username was connected in any way to my lichess account (or one similarly named).

Needless to say, I don't use their platform anymore because it's more bots than anything else, and I don't volunteer my time to people who don't deserve it.

As a side note, their lack of standard professional practices makes me wonder what kind of fraud is actually going on behind the scenes.

As a business, the only arguably beneficial reason for not following GAAP and other standard business practices is to commit fraud.


> Do these sorts of folks have to be verified in some fashion?

As far as I understand, yes. On Chess.com at least (not sure about Lichess), there is some kind of human verification for very high-level players, asking them to prove that they are IMs or GMs. They can stay anonymous, but their title will appear, IIRC. I'm not sure of all the details, I'm very (very very) far from that level :)



