Can someone explain why, in an interview context, someone with the technical ability to understand, assess, write about, and communicate a deep set of domain-specific knowledge like this... might still be asked to do in-person leetcode tests? How does on-the-fly regurgitation of recursive algorithms sometimes count for more than being able to demonstrate that depth of knowledge?
If you can't differentiate buzzwords from signal, you shouldn't be interviewing.
Plus, how do you expect to get a good signal from a leetcode whiteboard interview for someone who spends most of their time designing systems and only writes code when it's faster and less frustrating to pair-program than to explain what needs to get implemented?
To clarify, I don't have a good answer: I still participate in leetcode-style interviews (though system design is another component). I sing the song because I can't come up with anything better, but I don't think it's the best way to go.
Or what about having a repository proving these very concepts with almost 2000 commits?
In my experience, things like publications, online code repositories, and facts are a bit more than irrelevant, but not much more, because people don't know how to independently evaluate them. Worse, attempting to evaluate them only exposes the insecurity of people who weren't qualified to be there in the first place.
Far more important are numbers of NPM downloads and GitHub stars. Popularity is an external validation any idiot can understand. But popularity is also something you can bullshit, so play it safe: treat everyone as a junior developer and leetcode them out of consideration.
It's easy to fake projects and commits. And when they aren't faked, you can't expect every candidate to have them, because it's so much work. And if you decide to bypass some of your usual interview process because a candidate has a project, now you aren't assessing all candidates equally.
I've interviewed people for roles as low-level C++/kernel programmers who did not know what hexadecimal was. A quick "What's 0x2A in decimal? Feel free to use paper & pen."[1] question can be a significant time-saver and can direct people to more appropriate roles, if any.
[1] Starting to do math with non-base-10 numbers was already a pass, regardless of the number you reached; you'd normally use a computer for that anyway. But it really isn't hard to do in your head, for anyone who's dealt with binary data.
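For concreteness: each hex digit is a power of 16, so 0x2A is 2×16 + 10 = 42. A rough sketch of the same digit-by-digit conversion in C++ (the hard-coded digits of 0x2A are just for illustration):

    #include <cstdio>
    #include <initializer_list>

    int main() {
        // Convert 0x2A "by hand": fold in one hex digit at a time.
        unsigned value = 0;
        for (char c : {'2', 'A'}) {                         // most significant digit first
            unsigned digit = (c >= 'A') ? (c - 'A' + 10)    // 'A'..'F' -> 10..15
                                        : (c - '0');        // '0'..'9' -> 0..9
            value = value * 16 + digit;                     // shift previous digits one hex place left
        }
        printf("0x2A = %u\n", value);                       // prints "0x2A = 42"
        return 0;
    }

Doing the same fold on paper is exactly the mental arithmetic the question is probing for.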
And intentionally excluding candidates is the whole point of designing an interview process; it's lazy to throw your hands up and declare that they're all biased.
On that point, can anyone recommend good reading or info sources about technical interviewing methods? I recently had an interview that was just being asked to recall Linux commands off the top of my head, like a certification exam, and it made me wonder what the point was and whether there are better ways.
Because puzzle solving is an IQ proxy and IQ is correlated with job performance? But really, just do an IQ test. Maybe because interviewers are bad at distinguishing GPT-style BSing from actual knowledge and need a baseline test.
The correlation between IQ and job performance is typically weak[0] (weaker in some studies than the correlation of conscientiousness and agreeableness with job performance[1]), with a more modest correlation for "high-complexity" jobs.
Interesting excerpt from [0]:
> Finally, it seems that even the weak IQ-job performance correlations usually reported in the United States and Europe are not universal. For example, Byington and Felps (2010) found that IQ correlations with job performance are “substantially weaker” in other parts of the world, including China and the Middle East, where performances in school and work are more attributed to motivation and effort than cognitive ability.
Sorry for not replying earlier, but I'm really grateful to you for providing those links. I hadn't known that the IQ-job performance link had been challenged, and it means I need to adjust my priors when looking at candidates.