
When I was tutoring people in college for their computer science classes I was struck with how some people could do reasonably well on programming assignments and then when presented with anything even slightly novel would be completely unable to reason their way to a solution.

A classic tell of this is people handling out of bounds errors in loops by trying to randomly add or subtract 1 from their for-loop parameters.

I realized that they didn't have a mental model for what a loop did, they had simply memorized the syntax for a loop and were doing advanced pattern matching. Code repeats = write the for-loop syntax I've memorized. And then after seeing that fail with out of bounds exceptions, they learned a new rule: modify the loop parameters and see if that fixes the problem.
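A minimal sketch of the kind of off-by-one bug being described (hypothetical function names, Python used for illustration): the loop condition uses `<=` instead of `<`, so the final iteration reads past the end of the list.

```python
def sum_buggy(items):
    total = 0
    i = 0
    while i <= len(items):   # bug: runs one iteration too many
        total += items[i]    # raises IndexError on the final pass
        i += 1
    return total

def sum_fixed(items):
    total = 0
    for i in range(len(items)):  # iterates over valid indices 0..len-1
        total += items[i]
    return total
```

Someone with a mental model of the loop sees immediately that the valid indices are 0 through len-1; someone pattern matching just tweaks the boundary until the exception goes away.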

When I think about how I write code, or I compare their approach to the other cohort of students I saw, it's a different process. I see in my mind's eye a type of 'machine' that performs the actions that I want to take place. I simulate running that machine in my mind and tweak its design until it works the way I want it to. Only then do I think about syntax and try to translate what's already happening in my mind into source code.

I've seen people get shockingly far into software engineering careers using the pattern matching / guess-and-check approach. I've wondered if a lot of the handwringing you see on programming forums about the 'leetcode grind' is coming from people who take this pattern matching approach. To them it must seem like the only way to solve these problems is to train their internal pattern matching neural networks on huge numbers of examples.

The code that I see GPT generate looks eerily similar to what I saw from those programmers. And that makes sense because I think that functionally they're doing the same thing. Only GPT does it at a superhuman level.

That seems to me to indicate that there's something that at least some humans do with a mental model that our current LLMs lack. If someone figures out how to simulate those mental processes in a computer program I think we'll see a huge inflection point and that's what the original comment (as I read it) is referring to.



> the handwringing you see on programming forums about the 'leetcode grind'

In fairness and compassion to that crowd, a lot of it comes from the fact that a modern interview for a coveted FAANG job often requires 1-2 LC Medium (or Hard) problems cranked out in 45-60 minutes. Depending on the company and the org, the overall interview loop may well be multiple such one-hour sprints.

It's quite a pressure-cooker of an interview setting. Given that, it's understandable why many people converge on memorizing and brute-force pattern-matching as their interview strategy — if they can just memorize enough, the odds are actually pretty decent. (And the payoff is not bad, either.)


I'd argue that the time pressure in those situations encourages just subtracting or adding 1 to the loop's boundary, since it has a decent success rate and is much faster than mentally simulating the loop/algorithm. Learning is rarely rewarded in interview settings built on leetcode.


that's me! i call myself a fake network engineer because there is simply too much info to retain; my brain just won't retain things i'm not constantly doing. i have a complete understanding of maybe 20% of things, but outside of that, comparing to other configs and pattern matching are my main ways of solving problems. to be fair, for my job (fixing network faults), like 90% of faults i've seen before and can fix; for the 10% i can't, i'm lucky that i have escalation points

at the same time i do feel like pattern matching limits my growth. if i had a complete understanding of the majority of networking principles i'd be much higher up in my career


If you compare yourself to colleagues at your level, particularly how _they_ would describe themselves, do you think they would be as self-aware?

As long as it isn't making you feel like a complete fraud, this level of introspection is a good thing imo.

"I know that I know nothing"


You mean that's not how everyone else programs? Obviously you do fuzz testing and static analysis and possibly some sort of theorem-prover verification so you don't get too embarrassed.


I predict it’s a visual/spatial form of what it already does with language. It won’t do it until it can see.

There’s all kinds of other things it won’t do until it hears. And touches. Smell and taste might help too I guess!?

As a byproduct it can also be taught truth is what it can verify with sensors.


subtracting 1 from your loop variable and running again is common sense and the quickest way to narrow down the problem. also, some people can still think while typing, still think while compiling and running.

this is all assuming that someone is trying to be productive rather than stop and ponder the abstraction that is a loop and divine its nature in a rigorous way

if the students are having problems with loops, that's not surprising considering that computer science doesn't teach software development skills. like... at all.



