Is Hans Niemann cheating? – Expert analyzes (chessbase.com)
54 points by doener on Sept 23, 2022 | 98 comments


Fabiano Caruana (former number 2 and World Championship contender) is skeptical of Regan's analysis:

> I would take Regan’s analysis with a large grain of salt, and the reason why is not because I have any insight into his algorithm or his methods, but because I know of a case, a very high profile case, where with absolute certainty I can say that someone was cheating in an important event. And the person was investigated and was also exonerated based on Regan’s analysis. And I am certain that there was cheating. There is no doubt in my mind that this person was cheating and they got away with it.

https://www.chessdom.com/fabiano-caruana-i-would-take-regans...


Regan's algorithm cannot prove that you did not cheat; it can only fail to prove that you cheated.

The question that Regan's algorithm answers is this: "Did you cheat?"

If the algorithm comes back with "Yes," then you cheated. But it cannot come back with "No." It can only come back with "I failed to prove that you were cheating."

You need another hypothesis: "Did you not cheat?" But this is an impossible test.


Agreed, this is also likely a high-precision, low-recall approach, to avoid false positives ruining a player's career.


However, allowing cheaters to get away with it could eventually ruin chess — that is, ruin all players’ careers.

A delicious conundrum.


Like a bloom filter!
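The analogy holds surprisingly well: a Bloom filter's "no" is definitive while its "yes" is only probabilistic, the mirror image of a cheating test whose flag is (nearly) definitive while its silence proves nothing. A minimal sketch with toy parameters, using SHA-256 as a stand-in hash:

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: a 'no' answer is definitive, a 'maybe' is not --
    the same asymmetry as a cheating test that can only flag, never clear."""

    def __init__(self, size=1024, hashes=3):
        self.size, self.hashes, self.bits = size, hashes, bytearray(size)

    def _positions(self, item):
        # Derive `hashes` independent bit positions from one SHA-256 hash.
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def might_contain(self, item):
        # True means "maybe present"; False means "definitely absent".
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add("e4")
print(bf.might_contain("e4"))  # True: added items never yield a false negative
print(bf.might_contain("d4"))  # almost surely False: a definitive "no"
```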


I believe Fabi, but the question must be asked: how does he know (with certainty!) that the player was cheating? Did the player confess? Was he blatantly looking at his phone?


Imagine I see someone flushing evidence of cheating after an event.

I am completely certain, because I saw the evidence myself.

But to anyone else? There's no hard evidence - just he-said-she-said. Maybe I was mistaken - or trying to get my rival into trouble.

Knowing how it would look, maybe I keep my mouth shut rather than throwing around wild, baseless allegations about cheating.


Maybe it was Fabi!


This seems like the most obvious answer but social dynamics make it hard to suggest.


It's suggested here now :) More spicy would be if Kramnik was cheating in his match against Topalov.


Making moves perceived to be outside his skill class I assume.


If it's that, then it's pretty arrogant - I mean, that's like saying: "he's too stupid to make such a good move, so he must be cheating"? Even (relatively) bad players are bound to make good moves from time to time, either through dumb luck or through occasional sparks of inspiration, so that's hardly a reason to be 100% sure they are cheating...


There are moves and lines that only Super GMs will calculate. Not IMs, not FMs, not GMs. It’s not arrogant, it’s the level of play.


When you are #2 in the world at something you are allowed to be a bit arrogant about it.

(down to #8 now, I suppose)


Not so much "outside his skill class" as "unlikely to be played by a human". Computers have a really good idea of what the optimal move is, and it's sometimes not a move any human being would play. The main way to detect cheating is to figure out that they're playing closer to the computer's line than anybody without computer assistance.

In this case, his off-the-board behavior, where he seems unable to explain some of the moves he made, is also suspicious. His skill set is such that there's no move he's not capable of playing, though he should at least be able to tell you what he was thinking when he played it.


Move in a critical moment, speed of decision for said move, cheating at this level is probably extremely hard to do it alone which already creates space for leaks...


Proving a positive result is much easier than a negative. A positive “this move right here is not reasonable without a chess computer suggestion ♟” vs “we didn’t find anything suspicious”.

And if you’re cheating, adding another layer of “try to find moves that aren’t suspicious” would be theoretically possible.


That isn't really proving a positive though - maybe the player just got lucky. There aren't that many moves in any given turn, sometimes even a novice will make an inspired move.


True - but identifying a particular move gives a discussion point at least.


The other factor is that Regan's process is designed to not produce false positives - if it says you are cheating, you are almost guaranteed to be cheating.


Exactly - absence of evidence isn't evidence of absence, a common mistake.

All we know now is that an "obvious" method of cheating wasn't used. That may be because there was no cheating at all, or a new method of cheating has been developed.


The fact that Caruana has not said why he has "absolute certainty" the person cheated is cause to be skeptical of Caruana's skepticism.


I think the discussion misses the important problem.

It is possible to design a system where the algorithm will try to imitate a human being making human errors.

There are, I think, a couple of ways to do it. One way would be to not be greedy and only get help from time to time. So if I were the player I would not rely on the algorithm for every move, just sometimes. Just enough so that it makes a difference.

If you remember, the Allies broke Enigma but decided to restrict the use of the intelligence thus obtained so as not to alert the Axis to the fact. Which is essentially the same idea here -- the question is "How much help can I use before it becomes provable that I am getting help?"


> There are, I think, a couple of ways to do it. One way would be to not be greedy and only get help from time to time. So if I were the player I would not rely on the algorithm for every move, just sometimes. Just enough so that it makes a difference.

This is something Magnus has said before as well (paraphrasing): "If I got engine help for just one or two positions a game I would win every time."


At their level of play, a single one-bit message can tip the scales in dramatic fashion. All you have to know is that there is a great move to be found in a particular position, without even knowing what the move is.


Engine moves aren't really like that though. They aren't dramatic strokes of brilliance. They're slightly odd moves that pay off 20-30 moves later.

And it's a little bit of a meta game. Engines are ultimately playing against themselves when determining the best move. So they might discard a brilliant-looking move because of some line that no human is reasonably going to come up with. Similarly, they might make a brilliant move that humans don't recognize as such, because no human can analyze as deeply as an engine. So when a computer looks at a board and says there is/isn't a great move, that's not necessarily the same as what a person would think.


I don't entirely disagree but consider the following:

1) To be useless to a top player, the decision tree needs to have only long term gains and be barren in terms of short term human-readable profits.

2) In the same vein, the metagaming pruning process you have described has to unfold a certain way, in the context of a game where most of the previous moves have been played by humans and not machines and where there is therefore much more of a chance of an unbalance compared to a purely computed position.

3) the above criteria of long-term, short-term, readability etc. must be defined according to players that are already 95% or so precise. The missing percentages mean that they always get smoked by machines over the course of a game, but can have a remarkable ability to see good lines.


The problem is if we're sending 1 bit of information that is something along the lines of "go for it" or "something's there" or whatever. That's just not something an engine can tell you outside of specific scenarios where there is a really good move to be made.

You probably could rejigger the engine to take into account the likelihood of a human finding the right moves, and the relative downside. An engine might reject a move that leaves it half a pawn down only if the opponent finds 25 difficult moves in a row - but that risk/reward could be worth it against a flawed human player.

Perhaps it could also work in conjunction with another GM that you train with regularly. They can use an engine and do the evaluation above themselves.
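The re-weighting idea above can be sketched as a tiny expected-value calculation. All numbers are invented for illustration, and the per-move find probabilities are assumed independent, which real play certainly isn't:

```python
def human_expected_eval(eval_if_refuted, eval_if_not_refuted,
                        find_probabilities):
    """Expected evaluation of a move against a fallible human.

    find_probabilities: per-move chance the opponent finds each move of
    the only refuting line (independence is a simplifying assumption).
    """
    p_refuted = 1.0
    for p in find_probabilities:
        p_refuted *= p
    return (p_refuted * eval_if_refuted
            + (1 - p_refuted) * eval_if_not_refuted)

# A move that stands -0.5 if the opponent finds 25 hard moves in a row
# (60% chance each), but +1.5 otherwise:
ev = human_expected_eval(-0.5, +1.5, [0.6] * 25)
print(round(ev, 3))  # close to +1.5: the refutation almost never lands
```

Against a perfect engine (every find probability 1.0) the same move scores its full -0.5, which is why a stock engine discards it.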


I see what you mean. This would indeed be most useful with an accomplice, and a lot dicier and ambiguous without.


Humans can find engine moves, especially when they have hours on the clock like in classical chess. If you can consistently spend the majority of your time on the most critical moments when game-breaking moves could be found with enough calculation, that's enough to tip the scales dramatically over many games.

Sure, the human won't find the move every time, but it's still a huge edge.


But the point is that there's not really ever this mythical one great move that's going to win you the game. Look at Stockfish's analysis of the game:

https://lichess.org/broadcast/sinquefield-cup--grand-chess-t...

There's not a single move that significantly improved black's position.


There were several missed opportunities where the engine showed a 0.3 to 0.5 point advantage. One point in the engine evaluation is roughly equivalent to one pawn. Many endgames are won or lost based on having an extra pawn or two. Two moves of 0.5 points adds up to a significant improvement.

From my experience watching tournaments, it happens regularly that missed opportunities would have given someone a 1 or 2 point advantage in the engine evaluation.


Chess.com has already "fine-tuned" AIs that play at a given Elo level - e.g. 1800 Elo, 2200 Elo, IM, GM. I'm sure you could constrain Stockfish to play at N+1 of your target opponent and thus avoid this issue.
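For what it's worth, recent Stockfish builds expose exactly this knob through standard UCI options (UCI_LimitStrength and UCI_Elo); a session fragment, not a runnable program:

```
uci
setoption name UCI_LimitStrength value true
setoption name UCI_Elo value 2500
isready
position startpos
go movetime 1000
```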


Correct. They're often already playing at ~95% engine matching moves and/or with negligible difference in centipawn loss. Getting the help to push that to 98% would be a dominant advantage.

Funnily (to me) post-game analysis often highlights the computers finding incredible defensive resources instead of attacking ones. But it's exactly these kinds of moves - the fearless counter-thrust to equality while your king's position looks hopeless - that are the most inhuman of all and likely to draw suspicion.


Good players could cheat with a bot. Great players know exactly when they’d use a bot on a single move.


There are already chessbots that can mimic medium-strength human players thanks to ML


I wonder if the explanation for Carlsen's move could be far simpler than people are making it out to be. This is the guy who decided not to compete for the world title, simply because he didn't feel like it. He loves chess, and wants to keep playing it for the joy that it gives him, and nothing else.

He could have resigned early simply because going through with the game would have left a bad taste in his mouth. I'll bet it's annoying to play someone with a history of cheating, regardless of whether they are still currently known to be breaking the rules or not. Imagine having to sit through a game that you love to play, only with a nagging thought in the back of your mind that sucks the joy out of the experience and turns it into a chore.


>Imagine having to sit through a game that you love to play, only with a nagging thought in the back of your mind that sucks the joy out of the experience and turns it into a chore.

That's a completely reasonable position for Magnus to take. It would be even more reasonable for him to simply state that that's why he's acted as such, and yet...


Perhaps it is also reasonable for Magnus to wait until the tournament is over before making such a statement.


I imagine he'd open himself up to a lawsuit if he accused a player of cheating without hard evidence.


Accusing him of cheating is not at all what we were talking about.


I think this is being approached incorrectly. The thing that stands out to me is that people find his analysis of games to be poor. If he can't explain his reasoning, it suggests he may not have reasoned it. This is interesting, as there are two sides to this problem. One is that he is good at chess and could be cheating. The other is that he is actually not great at chess and could be cheating.

If people think his analysis skills are weak, then it suggests the latter case, which seems easier to test.


Early on in this debacle, I remember someone pointing out that Niemann's performance could be explained relatively easily by him having forward knowledge that Carlsen intended to use the unusual opening he went for. This would amount to cheating "in spirit", by way of having an unfair advantage over his opponent, while still being tournament legal.

Whatever happened to this version of events? It struck me as being a lot more credible than engine assistance.


I also thought it sounded like a plausible explanation, but it's been dismissed as exceedingly less likely than engine assistance by those who know far more than I.

At the Sinquefield (the tournament in question), Magnus had just one team member with him, and it's someone he has worked with for many years. The breakdown in the logic of the 'leaked prep' theory is that no one can come up with a plausible motivation for it to have been leaked - and no serious chess GMs seem to think it is a plausible theory.


Sounds about right. Thank you.


It's still widely talked about. It's just that the total amount of drama is overpowering.


I honestly wish that people wouldn't put out rubbish analysis like this.

>his conclusion is there is no reason whatsoever to suspect him of cheating.

No. His conclusion is that based on this really weird brute force statistical analysis there is no statistical indicator that he cheated.

Firstly, stating there's no reason whatsoever to suspect him of cheating is absurd. You know he has a previous record of cheating, so as a starting point that statement is absurd. But secondly, and more importantly, no one is suggesting that Niemann is a moron. We're not saying he's a 1200 Elo lucking out against a grandmaster. Any account of Niemann cheating is going to include the fact that he's already a very good chess player - he's not going to be cheating his way through matches like some average player would. And as other GMs have pointed out, cheating could literally involve one advantageous decision at a key point in a match.

What we have here is very simple. Niemann was quasi-accused of cheating in one match, and this genius has gone off and analysed thousands of other matches and come to the conclusion that there's no statistical evidence of cheating - a methodology that is basically designed to ensure almost no one could ever be proven to be cheating.


Regan's analysis is a good deal more detailed than that. He claims that if you check the computer just three times a game, he is going to catch you within three games. Now, if you check just once, it becomes a lot harder.

For the game against Carlsen, he looked at the key moves in that game. The first 20 or so moves were theory. You can't ever prove cheating in theory. After they diverged from theory, Regan said there were only two really key moves. For both of those, Niemann did not use the computer's top choice. On one of them he played a downright inaccurate move that could have cost him a tempo.

I don't know if Niemann cheated or did not, or even what exactly Carlsen suspects, but Regan's analysis seems to me to be strong enough that it counterbalances the known character deficiency of Niemann.


If there are several good moves, it seems logical to cheat by not picking the computer’s top choice.


If there are three or four good moves of approximately equal strength in a position, then it just isn't a position where someone at grandmaster level needs to cheat or benefits from doing so. They were going to find one of those moves anyway.


Is it cheating if you make decisions to purposefully not gain an advantage (ignoring gambling-induced outcomes, like throwing a match)?

"He was cheating! An outside influence was telling him which move to make and he purposefully didn't do it!"


Like I said, it depends on the available set of moves. If #1 is +1 and #2 is +0.7, but the cheater wanted to play #5 at -0.5 then that’s an outcome-changing difference.

Chess engines like Hiarcs can show a series of good, ok, questionable and bad moves for each touched piece color-coded on the board for example.


I agree. The number one error made by people with statistical knowledge is to assume independence of events. Even very smart and very educated people tend to make this mistake rampantly. I hypothesize it is due to the education curriculum being very slanted in the direction of assuming independence everywhere because it makes the math easier, but I can't prove this. In the real world, true and full independence should be considered an exception rather than the norm.
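A toy simulation makes the independence trap concrete: a player who boosts their engine-match rate only against one specific opponent looks statistically clean when all of their games are pooled. All rates below are invented:

```python
import random

random.seed(0)

def simulate(n_games, frac_vs_target, base_match, cheat_boost):
    """Simulate per-game engine-match outcomes where cheating happens
    only against one target opponent. Returns average match rates
    (overall, vs the target, vs everyone else)."""
    overall, vs_target, vs_others = [], [], []
    for _ in range(n_games):
        vs = random.random() < frac_vs_target       # is this a target game?
        rate = base_match + (cheat_boost if vs else 0.0)
        match = random.random() < rate              # engine-match this move/game?
        overall.append(match)
        (vs_target if vs else vs_others).append(match)
    avg = lambda xs: sum(xs) / len(xs)
    return avg(overall), avg(vs_target), avg(vs_others)

# 2% of games are against the target; honest match rate 55%, boosted to 75%.
o, t, rest = simulate(100_000, 0.02, 0.55, 0.20)
print(f"overall {o:.3f}, vs target {t:.3f}, vs others {rest:.3f}")
# overall barely moves off 0.55 while the vs-target rate sits near 0.75
```

Pooling all games answers "does this player cheat constantly?", which is not the question at hand.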

Saying that Niemann is a very good chess player is basically a restatement of the fact that his previous record is likely clean. Nobody was accusing him of constant cheating before. (Cheating once is relatively easy. Cheating continuously, in multiple different environments, and leaving no trace, is hard.) The entire question at hand is essentially "Is Niemann's probability of cheating correlated with playing Magnus Carlsen?" (and a family of similar questions can be asked), and no amount of analysis of his games not played against Magnus Carlsen will answer that.

As for why the probability of cheating would be correlated with playing Magnus Carlsen, again, no amount of consulting the statistics will be able to answer that question, but for anyone who is a human being with human foibles the answer is fairly obvious.

Someone used to double-checking premises will observe that there's not much in this question specific to Niemann. The fact that he's cheated before does raise his particular probability, but you could ask this about anyone. And my reply to that would be, yes, that is indeed how things work, and is exactly why these high-stakes games are played with more formality and more security than a pick-up game in a city park. Niemann's prior cheating does raise the question with a bit more urgency, but we generally assume a priori that anyone might be interested in cheating in a game against Magnus Carlsen, and in high-level play in general, and act accordingly.

agarden in a sibling post suggests that the analysis itself goes deeper, which sounds more interesting. I am speaking only to what is in the linked article. It would be far from the first time a reporter simplified a complex and nuanced argument into something useless, and I thought the point about independence was still worth posting.


Awesome take. Reminds me of a book that teaches us how one can lie with statistics.

While reading the whole article, I was convinced by the "data-centric analysis". After all, how can data lie?

But it's revealing that cheating in a chess game with a GM is literally making the right guided move at the right time.


>this really weird brute force statistical analysis

Care to explain why you call the method "really weird"? Is there a flaw in the statistics that you can correct?

Here are his publications [1] - all well cited, his h-index is good, and I find no refutations of or complaints about his methods in the literature. He seems quite competent in this area.

>how you can state there's no reason whatsoever to suspect him of cheating is absurd

If cheating does not show up in performance then is it cheating? If there is any performance gain then it should show up at some statistical level.

[1] https://scholar.google.com/citations?user=8nk9k5oAAAAJ&hl=en...


It's not that the statistics are wrong, it's that you can't apply statistics to prove this at all. Specifically because the person you're analyzing knows all about chess. They know that a super strong weird move will just expose them, so instead they're going to pick lines that just slightly increase their strength. This is like the statistical analyses that show election rigging by highlighting a statistically improbable distribution of results - that analysis works if the person rigging the election doesn't consider the statistical analysis when they're doing their cheating, it is completely avoidable if you cheat competently.

Which brings us back to a sort of basic question - if this guy is cheating, do we think he's doing it fairly competently or not? If he's not cheating, the statistics will show normal play; if he is cheating and he's fairly competent, the statistics will show normal play. So what has this analysis done? It's proved that he's not totally incompetent, which we already knew because he's pretty well established as a good chess player even if he isn't truly a 2700 Elo.


>it's that you can't apply statistics to prove this at all

If there are not statistical differences, then there are no performance differences. Cheating by definition should imply performance differences.

If he is cheating, then at some point in the future, if that method becomes detectable and he has to stop, then his play will suddenly suffer, which would be more evidence.

Claiming that statistics cannot answer this question is not true. It may be hard, or the current sample too small, but claiming stats are not usable is a misunderstanding of statistics.

>This is like the statistical analyses that show election rigging by highlighting a statistically improbable distribution of results

This only works on the public, and is not what professional statisticians that analyze elections do.

And even here, if the event is rare enough, say 1 part in quadrillions, and the analysis is correct, then yes, we would certainly conclude there was rigging.

All human knowledge is statistical. Things we claim to be true are only statistically true to large odds, so even for election rigging, if the stats reach some level of certainty, then it is completely valid proof that would hold in court.

The pop idiocy of election rigging claims has never risen to that level.

>it is completely avoidable if you cheat competently

No, it is not. It may only lower the signal-to-noise ratio, but there are still detectable differences. If the statistics continually improve and you are forced to lower the signal, eventually the signal would be so low as to not affect the system - which in this case is chess games.

Physics, for example, can tease events out of on the order of 1 part in trillions and demonstrate signal. Plenty of other things do the same.


What is the difference between a 2500 Elo player and a cheater with a computer calibrated to 2500 Elo? Statistically, nothing. That's what I'm saying. What's the difference with a 2300 player taking the odd tip to boost their play to 2500? Statistically, again, probably nothing.


>Statistically, again, probably nothing.

Yes, there still is. Any program tuned to play at some level does so by making suboptimal moves often enough to get that lower rating, while making really profound moves more often than a player at, say, 2500 would. A 2500 human has a certain level of understanding, and all of their moves show this.

Ever play a top engine set to play weaker? They make amazing moves far too often for a weak player, and try to compensate by making poor moves once in a while. No weak human plays this way.

Play chess? I've played for decades, and you can see the difference (and certainly top GMs see it too). You hear it in commentary when analyzing top engines: they make moves that are not human. Any engine does that - tuning it down only makes it do them less often, but still often enough that someone who plays a lot at a high level will notice.

A human playing 2500 and a computer playing 2500 are not making the same moves, and it shows even there.

If you know any GMs, ask them the same question and see what answer you get. You will get the one I just gave.


Yeah, the "1 in a million" hand-waving is weird. If I had a mechanism that allowed me to cheat at poker, I would only use it on a push. I certainly wouldn't be using every hand or even at all in penny ante games.


What's even worse is that FIDE feel justified exonerating people based on this statistical analysis alone, despite the fact there seems to be a general sense within the GM community that something is wrong and has been for a while.


I've heard of two ways one might cheat in chess. This analysis covers receiving engine moves, but does not address the other accusation, stealing opponents' prep work.


I guess when doing prep work you analyze most of the top moves a computer would suggest, so simply stealing it won't help that much, unless you really missed something. You'd need to find some move/line that your opponent has not prepared, analyze it deeper than he did, and hope the actual game goes exactly as planned so you can use it. The deeper it goes into a game, the less chance you end up with that position on the board.

Also, accusing someone should be backed by some kind of proof. Even a subpar player with some luck might memorize something a computer would suggest and play it, despite it being way over his expected level, without getting actual suggestions during the game. Is that cheating?

There are lots of videos on YouTube like "This trap will help you win lots of games" that a 600 elo player can memorize and then get labeled a cheater, because it's way over his elo level.

Next step would be to ponder whether computer assisted prep in general is cheating or not, really.


Stealing prep might work in something like a world championship game where the two participants play dozen or more games. It would be much harder to steal the prep of half of your opponents in a round robin tournament. Also, if you assume that Niemann is cheating and his real rating is 2450, not high 2600s like it is officially now, then no amount of prep stealing will make him competitive with Magnus.


But that's literally the situation. He did beat Magnus, claims that he happened to study deep into a fairly esoteric line that Magnus played, then seemed unfamiliar with the variations of it when asked about it in a later interview. Magnus seems to think Niemann's coach - another cheater - got information about Magnus' team's preparation.


That explains one game, but he consistently seems to play at the high 2600 level.


There is even a third, which would be pretty much undetectable. The engine only alerts him when he has to specially focus on the next move.

That way he knows when to be careful, but he still needs to figure out what exactly to do on his own.


Exactly, if he has any form of binary signal to alert him when there simply *exists* a move that gives him advantage, that's all that is necessary to have an extreme edge.


You either have evidence or you don't. This quasi economic modeling of chess players is ridiculous. He says his models don't know anything about playing chess. OK, let's apply your models to all sorts of areas where we suspect fraudulent play.

This chess drama is corrosive. Speculation from day 1 starting with carlsen.


The model described here seems like bunk. It set off alarm bells for me when they said it wasn't based on an understanding of chess.

But regarding whether it's possible to make such an argument, I'd refer you to the more compelling case the moderators of a Minecraft speedrunning forum made against the streamer Dream. https://youtu.be/-MYw9LcLCb4

The allegation is that the drop rates of key items were manipulated. It's a statistical argument, but it's founded in the mechanics of the game.

After offering counterarguments and denying it for months, Dream eventually admitted to it, though they claim it was an accident (in a manner which is plausible, though I personally doubt it).
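The core of that style of argument is just a binomial tail probability: how likely is at least the observed number of lucky drops, given the game's documented rate? A sketch with placeholder numbers, not the investigation's actual figures:

```python
import math

def tail_probability(successes, trials, rate):
    """P(at least `successes` hits in `trials` attempts at `rate`),
    computed as an exact binomial upper tail."""
    return sum(math.comb(trials, k) * rate**k * (1 - rate)**(trials - k)
               for k in range(successes, trials + 1))

# Placeholder numbers: 42 lucky drops in 262 attempts at a 4.7% rate
# (expected count ~12). The tail probability is astronomically small.
p = tail_probability(42, 262, 0.047)
print(f"{p:.2e}")
```

The strength of the Dream argument was that the rate was grounded in the game's published mechanics; the criticism of the chess model above is precisely that it lacks an analogous mechanical grounding.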


This is a fair point, but I think there is confusion around the meaning of the fact that Regan's models do not "know anything about chess". I believe this refers to prior domain knowledge embedded in the algorithm itself - but the idea of exploiting data from a large number of GM games is precisely so the model can extract domain "knowledge" from statistical indicators. As for "applying the model to all sorts of areas", yes conceptually if your model can infer anything from any kind of data distribution this applies. In practice it just won't.

If you take a step back and look at the AlphaGo project, the story shows us a very similar model architecture could be applied to a different game, chess, with AlphaZero. And the "zero" in this case means the model is built with zero human knowledge - not even GM games, only self-play. These models do entirely different things - AlphaZero's model, given a position, tries to find the best lines, while Regan's tool, given a set of games from a given player, tries to estimate the probability of foul play - but in both cases, no programmer-introduced chess heuristics are part of the model. Yet AlphaZero performs the way it does and appears to "know" quite a bit about the game. So if a purely statistical approach can yield a very strong engine, one which has been described as displaying some sort of "creativity" (for lack of a better term) in its play, I don't think it is too far-fetched to imagine other models extracting knowledge about what human play should look like.

Totally agree that this is wrong from Magnus, and cheating from Niemann or not he is responsible for the toxic drama.


>You either have evidence or you don't. This quasi economic modeling of chess players is ridiculous

Modeling provides the best evidence in all aspects of life (design, science, invention, marketing, medicine, ...). All human knowledge is built on modeling and statistical evidence. Nothing is 100% certain except mathematics, and even that is often fuzzier than 100%.

So how is modeling yet another thing ridiculous when it provides empirically the best methods in so many other domains?


The problem is that cheating does not only mean that he used a chess computer to calculate the next best move. E.g. just knowing when you are at a critical point and should think for a few moments longer than usual can already tip the outcome of the game.


While I'm not interested in chess in general, I'm deeply curious how Carlsen (assuming good intent) could figure out Niemann was cheating.

The ideas I have right now would be:

1. Run each early game phase (up to, let's say, 3-5 moves) through multiple AI engines and find a selection that:
- is chosen consistently by most of the engines
- doesn't deviate across multiple lines (i.e. has stable consequences)
- isn't well known (that is, isn't a well-known play described in the literature)

From that I'd probably have some honeypot sequence I could use to try to lure the cheater out.

2. Find opponent's openings that are:
- successful (i.e. usually winning)
- not recommended by any AI engine whatsoever (so probably could be described as "lucky")
- used often by the opponent

...and try to get him into this exact opening to see how it goes.

Beyond discovery, I also find it immensely interesting how to deal with such knowledge. Exposing the method would reduce it to a statistical argument of outcome vs chance, and it would be super easy to work around in the future. Proving it to be true would be almost impossible as well. In the end the only route would be to push hard (exactly as Carlsen did) with a method defensible enough that, if there's push-back, it can be exposed to save oneself from disgrace.

I doubt we'll ever see resolution, but I find all of it amazing nonetheless.


The most convincing theory around this drama is that Magnus employed just such a honeypot by playing an obscure move Niemann responded perfectly to but was unable to later explain.


On the other hand, if Niemann did exactly the same and researched less-known moves, he'd know how to play against them too. That won't actually prove he is cheating in that game - just that he analyzed them earlier. You'd need a significant number of such honeypot "incidents" to properly accuse someone of cheating.


It'd be an interesting exercise to figure out how big the sample should be and how unlikely the moves would need to be. This is something for chess analysts, I suppose.
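A rough back-of-the-envelope version of that exercise: treat each honeypot incident as an independent trial where an honest player finds the obscure engine move with some small probability, and ask how surprising a given number of matches would be. The 5% match rate below is an assumption for illustration, not a measured value:

```python
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance an honest player
    matches the engine-only move at least k times in n honeypots."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p = 0.05  # assumed chance an honest GM independently finds the obscure move
for n, k in [(3, 3), (5, 4), (10, 7)]:
    print(f"{k}/{n} matches: p = {binom_tail(n, k, p):.2e}")
# 3/3 matches already has probability 0.05**3 = 1.25e-04 by chance alone.
```

Under this (assumed) match rate, even a handful of honeypot hits is very hard to explain by luck, which is why a small, well-chosen sample could suffice.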

I believe there will never be decisive proof of Niemann cheating. But then, chess is a game of strategy, and this time the strategy seems to have seeped off the board.


This has been on my mind in reading about this. It would explain a lot.


I think he just didn't believe he could play that well considering his level


Is it possible that Carlsen himself is cheating, and the reason he's angry is because he's cheating, and still gets beat, so the other guy MUST be cheating?


It's not impossible, technically. But it would contradict more than a decade of behavior and performance along with having no circumstantial or hard evidence to back it up.


That’s a good point, and I agree; I was just throwing out this weird devil's advocate argument that came to me. I think about cycling, and other sports where drugs are involved. People claim they are not on drugs, they go on a crusade against the cheaters, only to be found later to be cheaters themselves.


I think he is, but at this point we will never know for sure unless he admits it.

But I think now that the eyes of the chess world are upon him, he will suddenly start performing much worse.


Chess became a joke today. People decide whether you cheated based solely on your moves and results. It's nonsense. It's like deciding whether an athlete is doping without drug tests, based solely on the "distribution" of their success. And Magnus's behavior, hinting at accusations again and again, is far from ethical too.


I think people decided he’s cheating because he admitted to cheating the two times he got caught cheating online, denied ever cheating besides those times, has had a meteoric rise in terms of moving from low GM to his current state, was unable to explain his play in a reasonable way, played almost perfectly, beat a substantially better player, and that player feels he cheated despite having never made accusations like that against people before.

I’m not saying it’s a slam dunk, because it’s not. I can respect people who come down on either side based on the current evidence. But it seems like willful ignorance to suggest that none of the above has any bearing at all.


1. Cheating in online chess as a teen is a factor, but not that strong a one. Also, the details of the cheating are still private, so we can't rely on that. BTW, chess.com partially belongs to Magnus.

2. It's normal for teens to have jumps in their development.

3. I watched his explanations, they are not that great, but not unreasonable either.

4. They had played a couple of weeks before, with a 3-1 result for Magnus [1]. Magnus lost the first game and won the other three. Magnus losing to Niemann wasn't extraordinary in general. Also, a player can have much better morale while playing against the King; it simply gives more motivation.

5. Magnus hinting at cheating accusations is certainly a new thing. But refusing to defend his World Champion title this year is also a new thing. I guess he has started to feel way too entitled in chess. Also, all this affects his position in the chess business, since he owns a lot of it. He is not just a player anymore. It's a conflict of interest.

[1] https://en.wikipedia.org/wiki/Champions_Chess_Tour_2022#FTX_...


I think people are right to suspect him of cheating given everything you said.

Reading through your list, I can't help but be fascinated by the amount of hubris it would take to actually cheat as such, and then proceed to live with yourself. Hell, given his comments, he seems pleased. We could be dealing with a real sicko.


I'll give you an upvote purely because I too am curious about your premise and that's definitely going to be a 3rd rail here.

However, I'm not close enough to chess to understand his accusations. When you're the greatest thing in [insert topic] for the past 10 years, you get leeway on those sorts of things.


If you play at grandmaster level, then sometimes you only need to cheat on one or two critical moves in a game; the rest you can handle yourself. A smart and talented cheater wouldn't use Stockfish for 10 to 20 moves in a 90-move game; he'd use it only for the game-changing move or two. This is likely impossible to catch and prove.


Regan claims that if you consult the computer just three times during a game, he will catch you after three games. If you consult just once, it gets a lot harder.
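This isn't Regan's actual model, but a simplified sketch shows why a few consultations per game add up fast: a handful of guaranteed engine matches on hard moves pushes the match rate several standard deviations above an honest baseline within a few games. All the rates below are assumptions for illustration:

```python
from math import sqrt

def z_score(matches, trials, baseline):
    """Standard score of observed engine matches vs. an honest baseline rate."""
    expected = trials * baseline
    sd = sqrt(trials * baseline * (1 - baseline))
    return (matches - expected) / sd

# Assumptions: 15 "hard" moves per game, honest players hit the engine's
# top choice on 55% of them, and cheating adds 3 guaranteed hits per game.
games, per_game, baseline = 3, 15, 0.55
trials = per_game * games
honest_hits = round(trials * baseline)   # 25
cheater_hits = honest_hits + 3 * games   # 34
print(round(z_score(cheater_hits, trials, baseline), 2))  # -> 2.77
```

Under these assumptions the excess is already past the ~99% significance threshold after only three games; a single consultation per game would take far longer to separate from noise, matching Regan's claim.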


He seems to define "consult" as "receive a specific move from the engine". GMs have stated that they could throw a game simply by being alerted that a good move exists. I doubt that would ever show up in a statistical analysis of moves. Detecting it would at the very least require correlating think time, the move played, and the depth of analysis required to correctly evaluate the position. You'd have to say, "hey, wait: this player doesn't usually notice a depth-X opportunity and actually take it." And that doesn't seem to be the analysis that happened here, by a long shot.


A bell curve. That's it.

Is Hans Niemann cheating? Ken Regan is cheating people into calling him an expert.


Agreed. Analyses based solely on a comparison of average centipawn loss are deeply flawed. It only takes using an engine move once or twice to completely demolish a much better opponent in a game. These types of analyses don't find that type of cheating.

A much better analysis, IMO, would be trying to find the probability of someone at his Elo finding surprising moves. E.g. I recently played a 1900 online who completely turned a game around by setting up a forced mate in 6, with several branches a few moves down that all happened to end in mate because of incredibly lucky piece placement. I can't calculate the probability of someone at that relatively low level finding such a move, but I bet it's very low. This is the type of analysis I'm guessing Magnus used to assess Niemann as a cheater.
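A quick sketch of why a game-level average hides one or two engine moves, using invented per-move centipawn-loss numbers: replacing just two critical moves with engine-perfect ones barely moves the average, so ACPL screening sees nothing.

```python
def acpl(losses):
    """Average centipawn loss over a game."""
    return sum(losses) / len(losses)

# 40 moves of ordinary strong play (invented per-move centipawn losses).
honest_game = [12, 5, 0, 8, 3, 20, 7, 0, 15, 4] * 4

# Same game, but two critical moments are replaced with the engine's
# choice -- precisely the moves that could decide the game.
cheat_game = honest_game.copy()
cheat_game[5] = 0
cheat_game[8] = 0

print(acpl(honest_game), acpl(cheat_game))  # -> 7.4 6.525
```

The two games are nearly indistinguishable on average, even though the second one contains exactly the kind of decisive outside help the parent describes.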


>It only takes using an engine move once or twice to completely demolish a much better opponent in a game.

That's not true. Pick an engine, set it a few hundred points above your strength, then try to beat it using only one or two moves from another engine. You will lose nearly every game, because so many of your other moves will be so far below your opponent's level that the one or two good moves can't make up the difference.

This is demonstrated quite often by the games where GMs are "helped" by others in multi-player games, and it shows that help against a much better opponent takes far more than 1-2 moves.

Between closely matched players it helps. But not once you're a few hundred Elo points apart.


Regan's analysis works, in part, the way you suggest is better: he looks for tricky moves with non-obvious consequences and checks one's success rate on those. For the Niemann-Carlsen game, he identified two such moves, and Niemann chose suboptimally on both of them. On one of those, Niemann played a move that could have cost him a tempo. Definitely an inaccuracy.


Niemann's moves fit perfectly with someone using a slightly dated version of Stockfish.


As a guy who has tried writing his own chess engine: it's pretty trivial to modify a chess engine in such a way that looking for "chess-engine-like moves" won't work.

Take a high-end engine and start messing with its piece-square tables. Modify or turn off the opening book and endgame tablebases, and make slight changes to the evaluation function (e.g. slightly modify the piece values).

Even doing just two of the things listed above is enough to defeat fingerprinting techniques.
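As a toy illustration of the piece-value tweak (a real engine's evaluation is far more elaborate; this only shows how a small random perturbation shifts an engine's preferences away from stock settings):

```python
import random

# Classic centipawn piece values used by many engines as a starting point.
STANDARD = {"P": 100, "N": 320, "B": 330, "R": 500, "Q": 900}

def perturbed_values(values, jitter=15, seed=42):
    """Nudge each piece value by up to `jitter` centipawns, so the engine's
    preferred exchanges -- and hence its statistical fingerprint -- drift
    from the stock version while playing strength barely changes."""
    rng = random.Random(seed)
    return {piece: v + rng.randint(-jitter, jitter) for piece, v in values.items()}

print(perturbed_values(STANDARD))
```

A bishop valued slightly differently from a knight is enough to change which trades the engine prefers in many positions, which is exactly what defeats naive "does this match Stockfish's top move" fingerprinting.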


I do not believe Hans Niemann is cheating. I used to play competitive chess, and while I never got above an Elo of 1100, I think Carlsen has made a tremendous blunder.



