
Can someone explain why, in an interview context, someone with the technical ability to understand, assess, write, and communicate a deep set of domain-specific knowledge like this... might still be asked to do some in-person leetcode tests? Why does on-the-fly recursive-algorithm regurgitation sometimes count for more than being able to demonstrate this depth of knowledge?


Because the world is full of bullshitters who know enough buzzwords and buzz-sentences, and interview time is too short.

At lower levels, leetcode-style questions give more quantifiable signal per minute/session.

Plus, do you really want to fully explain something like Paxos or Raft in an interview context?

My personal pet peeves are:

1. “Event sourcing” and derivatives. It really attracts people who love to talk but have never built anything large with it.

2. Adepts of Uncle Martin


If you can't differentiate buzzwords from signal, you shouldn't be interviewing.

Plus, how do you expect to get good signal from a leetcode whiteboard interview for someone who spends most of their time designing systems, and who only writes code once it's faster and less frustrating to pair program than to explain what needs to be implemented?

To clarify, I don't have a good answer: I still participate in leetcode-style interviews (though system design is another component). And although I sing the song and can't come up with anything better, I don't think it's the best way to go.


Raft is for event sourcing.


Or what about having a repository proving these very concepts with almost 2000 commits?

In my experience, things like publications, online code repositories, and demonstrated facts are barely more than irrelevant, because people don't know how to independently evaluate them. Worse, attempting to evaluate them only exposes the insecurity of people who aren't qualified to be there in the first place.

Far more important are NPM download counts and GitHub stars. Popularity is an external validation any idiot can understand. But popularity is also something you can bullshit, so just play it safe: treat everyone as a junior developer and leetcode them out of consideration.


It's easy to fake projects and commits. And when they aren't faked, you can't expect every candidate to have them, because they're so much work. And if you bypass part of your usual interview process because a candidate has a project, you're no longer assessing all candidates equally.


All I see in your comment is that candidate evaluation requires too much effort, so you exchange one bias for another.


I've interviewed people for roles as low-level C++/kernel programmers who did not know what hexadecimal was. A quick "What's 0x2A in decimal? Feel free to use paper and pen."[1] question can be a significant time-saver and can direct people to more appropriate roles, if any.

[1] Starting to do the math with non-base-10 numbers was already a pass, regardless of the number you reached; you'd normally use a computer for that anyway. But it really isn't too hard to do in your head for anyone who's dealt with binary data.
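
To spell out the expected arithmetic for anyone following along, it's just positional notation, with each hex digit worth a power of 16:

  0x2A = 2 * 16 + 0xA = 32 + 10 = 42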


I am not sure how that has anything to do with what I posted.


It's a choice to treat all candidates equally.

And intentionally excluding candidates is the whole point of designing an interview process; it's lazy to throw your hands up and declare that all processes are biased.


It's not one or the other; many interviews assess both, because both are meaningful signals about a candidate.

If you're coming from the position that one is

"understand and assess and write and communicate a deep set of domain-specific knowledge"

and the other is

"on the fly recursive algo regurgitation"

then it will be hard to change your mind about any of this.

---

You could have just as easily called system design interviews regurgitation and coding interviews "communicating deep domain-specific knowledge".


On that point, can anyone recommend good reading or info sources on technical interviewing methods? I recently had an interview that was just being asked to recall Linux commands off the top of my head, like in a certification exam, and it made me wonder what the point was, and whether there are better ways.


Because puzzle solving is an IQ proxy and IQ is correlated with job performance? But really, just do an IQ test. Maybe because interviewers are bad at distinguishing GPT-style BSing from actual knowledge and need a baseline test.


The correlation between IQ and job performance is typically weak[0] (weaker than the correlation between conscientiousness+agreeableness and job performance in some studies[1]), with a more modest correlation for "high complexity jobs".

Interesting excerpt from [0]:

> Finally, it seems that even the weak IQ-job performance correlations usually reported in the United States and Europe are not universal. For example, Byington and Felps (2010) found that IQ correlations with job performance are “substantially weaker” in other parts of the world, including China and the Middle East, where performances in school and work are more attributed to motivation and effort than cognitive ability.

[0]: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4557354/

[1]: https://www.academia.edu/download/50754745/The_Interactive_E...

Sorry about the PDF link for [1]; the APA link has a paywall, otherwise I'd link there.


Sorry for not replying earlier, but I'm really grateful to you for providing those links. I hadn't known that the correlation between IQ and job performance had been challenged, and it means I need to adjust my priors when looking at candidates.


What are you referencing? Did someone make Martin Fowler do a leetcode-style interview? (I wouldn't be against that, just curious.)


Not Martin per se, but there have been a few submitters in the past with thoroughly robust content who have also shared bad interview experiences.



It seems Unmesh Joshi wrote this article on MartinFowler.com.



