To me the issue isn't seeming inhuman, but cost. Employers often seem happy to impose ridiculous time costs on the people they're hiring: take-home tests, long series of interviews, etc. What held that back is that they also paid a price. Full automation leaves them free to impose infinite cost with no guarantee of anything.
Applicants are using AI too. I've heard from people who hire/post jobs that they get hundreds to low thousands of applications, and maybe 5% of them have any relevant experience. The breakdown of trust is costing all of us.
> Applicants are using AI too. I've heard from people who hire/post jobs that they get hundreds to low thousands of applications, and maybe 5% of them have any relevant experience.
This happened before "AI" too. When all it takes is clicking an "apply now" button on LinkedIn some desperate people will spam any job they see.
I recall seeing one where you had to send a specific payload to an https endpoint to apply (or it might have been an automated screen immediately after the application was submitted). Forcing potential candidates to briefly open the curl manpage seemed like a similarly elegant solution to me. I doubt it works as well in the era of LLMs though.
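For illustration, the candidate's side of such a screen might look roughly like this (the URL and payload shape here are invented, not from any real posting):

```python
import json
import urllib.request

# A sketch of an "apply via HTTP" screen from the candidate's side.
# The endpoint URL and payload fields are invented for illustration.
payload = json.dumps({
    "name": "Ada Lovelace",
    "resume_url": "https://example.com/cv.pdf",
}).encode()

req = urllib.request.Request(
    "https://jobs.example.com/api/apply",
    data=payload,
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req)  # uncommented, this would actually submit
```

Trivial to do if you've ever touched curl or an HTTP library, and a decent filter for those who haven't.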
similarly, i remember at least one organization (pre-Songtradr Bandcamp, i think) who didn't publish some of its open technical roles anywhere except in HTML comments on their website. they only wanted to attract folks who liked to poke around and look under the hood.
Snail mail has started to break down in the USA. I remember when I was a child letters always took 3 days to be delivered. Now I've sent letters to family members that took more than a week to arrive. I imagine that makes it hard for a candidate to plan or align interviews.
As far as I can tell it costs almost 10 times as much to send a letter certified mail (or any other option with tracking). And it means I can't just use a regular stamp, I have to go to the post office or use a third party service like stamps.com and print out a label.
And in some places they are incentivised to do so, as they may need to prove a certain number of applications per week or they'll lose unemployment benefits, so they end up applying to all sorts of unsuitable stuff.
Yeah, the playing field isn’t leveled as much as it’s simply on fire and turning into garbage. In a way it’s similar to the eternal September, but on a much broader scale.
At this point, we think using AI and being able to use AI effectively is a skill in and of itself. When you're hired, you'll have access to AI. You'd be expected to be able to use said AI effectively.
So, we still give you a FizzBuzz. You can use AI. Even if we told you not to use AI, we know almost everyone would use it anyway. But you have to understand the FizzBuzz, be able to explain it to us, and make changes to it "live". The number of people who get weeded out just by having to explain the code they "coded themselves" is staggering (even pre-AI, even on a take-home where you had no "OMG I suck at live coding" pressure).
It's been a year since I've actively given out take-homes for hiring, but I'm not sure I agree that everyone will use AI. I designed half the questions to be impossible for current-gen AI to answer without the candidate actually knowing what's going on [0], and only ~1% of candidates who responded did poorly on that half and not the other half (and, if we're worried about LLMs being better than I think, not all that many candidates passed most questions either).
[0] The most reliable strategy I've found for that is choosing questions where the wrong answer is the right answer for some much more common question. Actually spending a few seconds and solving the problem easily lets a human pass, but an LLM with insufficient weights or training data (all of them) doesn't stand a chance.
Thanks for clarifying - I kinda get the idea but would love to see an example for this.
I’ve mostly given up on all of the standard techniques for interviewing sadly, just because “using ai” makes a lot of them trivial, and have resorted to the good old fashioned interview, where I screen for drive, values and root cause seeking, and let people learn tech/frameworks/etc themselves.
But I was wondering, isn’t a take home question still good, if you give a more open ended and ambitious task, and let people vibe code the solution, review the result but ask for the prompt/session as well?
People will be doing that during normal work anyway, so why not test that directly?
One such question (obviously tailored to the role I'm hiring for) is asking whether SoA or AoS inputs will yield a faster dot-product implementation and whether the answer changes for small vs large inputs, also asking why that would be the case.
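In sketch form, the two layouts look like this (deliberately showing only the layouts, not the answer the question is after):

```python
# The same dot product over two memory layouts. Which is faster, and
# whether that changes with input size, is the interview question --
# this sketch shows only the layouts, not the answer.

def dot_soa(xs, ys):
    # SoA: two separate arrays, walked in lockstep
    return sum(x * y for x, y in zip(xs, ys))

def dot_aos(pairs):
    # AoS: one array of (x, y) pairs, each pair read together
    return sum(x * y for x, y in pairs)

xs, ys = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
assert dot_soa(xs, ys) == dot_aos(list(zip(xs, ys))) == 32.0
```

The point is that reciting the reflexive answer to the more common cache-layout question scores zero; you have to reason about this access pattern specifically.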
I typically offer a test with a small number of such questions since each one individually is noisy, but overall the take-home has good signal.
> why not test that directly?
The big thing is that you don't have enough time to probe everything about a candidate, especially if you're being respectful of their time and not burning too much of yours. Your goal is to maximize information gain with respect to the things you care about while minimizing any negative feelings the candidate has about your company.
I could be wrong, but vibe coding feels like another skill which is more efficient to probe indirectly. In your example, I would care about the prompt/session, mostly wouldn't care about the resulting code, and still don't think I would have enough information to judge whether they were any good. There are things I would want to test beyond the vibe coding itself.
In particular, one thing I think is important is being able to reason about code and deeply understand the tradeoffs being made. Even if vibe coding is your job and you're usually able to go straight from Claude to prod, it's detrimental (for the roles I'm looking at) to not be able to easily spot memory leaks, counter-productive OO abstractions, a lack of productive OO abstractions, a host of concurrency issues LLMs are kind of just bad at right now, and so on. My opinion is that the understanding needed to use LLMs effectively (for the code I work on) is much more expensive to develop than any prompt engineering, so I'd rather test those other things directly.
Yes, that's why I said, we have you explain what you "vibe coded" and then also do an actual live coding part where you have to make further changes. Via screen sharing.
The number of people who can't even navigate "their own" code is astonishing. Never mind explaining what it does or making changes.
For the 95% irrelevant and 5% relevant groups, I wonder what percentage of resumes come in through a third party recruiter.
I get tons of spam that could be generated by even a basic LLM based on public information about me, but for positions that are not a reasonable fit.
Apparently, it is common for such cold calls to come from “recruiters” that are not affiliated with the hiring firm, but are trying to collect some sort of referral bounty.
I have no idea why an HR department would be dumb enough to set up such a pipeline (by actually paying for the third party “service”), but I guess once they have the program in place, they also need an LLM to screen spam applications.
"We saw your profile on github and thought you might be a suitable candidate for our open position at $CRYPTO startup.
PS you must be a US-citizen, and the job is 100% on-site"
Those things seem to be blasted out with no regard for my location - I'm not looking for a developer job anyway - but certainly not one in another country.
Spamming github users seems to be the latest growth hack, and it drives me nuts. I made all my repositories archived when I started getting hit with AI-PRs to review, but I'm reaching a point where I think my life would be easier if I just closed the account.
Unfortunately this is becoming common in countries like India since there is no other option. We are looking for a mid-level DevOps engineer and get thousands of applications. The requirements were clear: we need k8s and IaC experience. But when we went to interview, none of them had production-level experience. They had told the recruiter that they did, and the recruiter had no way to verify it. After 2-3 interviews like that, I had to start giving them Coderbyte assessments: write k8s manifests, a Dockerfile, and log parsing. Otherwise, you won't be able to hire.
Why don't you just hire people who present as being of normal or better intelligence, and train them on what you need them to do? This is how companies used to do things.
Sometimes you can pay someone who has done this before, and you're both happy. The person is happy that their experience helps them get a job. The company is happy that you get someone with the needed experience.
If I want to hire a driver, I can train someone who does not know how to drive, or hire someone who has experience as a driver. I can do either, but I'd prefer to do the latter in most cases.
But now you're dealing with a hundred applicants who claim to know how to drive, but actually don't. Either because they never learned, or they aren't capable of it.
That's why a screening is needed. If people lie, it won't make me lower my standards.
I'm dealing with this all the time in recruitment. It can be done. People lie all the time or don't read the requirements. You need a way for the ones who really do know how to do the thing you need to demonstrate it to you.
Have you ever been in a situation where you had to hire someone to help you with something? Would you really follow your own advice? Your advice does not make sense for carpenters, cooks, or drivers. Why should it make sense for programmers?
Fortunately whenever I have been involved in hiring, it was someone we knew was qualified or had references from people we trusted.
Years ago, I hired people at closer to entry level. If they had experience that was a bonus but if they didn't we trained them. If they didn't respond to the training, they were let go after a probationary period.
I hate the take homes because companies seem happy to send them out to people who have literally no chance. Sent after they already have a candidate in mind, sent before the resume has been reviewed, sent before the company has invested even a minute talking to you.
So you waste the weekend on this project when you had no chance from the beginning. And the time restrictions they list mean nothing since if you actually stop after x hours, they will just pick the person who spent the whole weekend and did a more complete job.
I got dinged on my Netflix take home 10 years ago because I used the DOM to store state instead of implementing a shadow DOM. Sure, let me just whip that right up.
I've done quite a few interviews, and as long as the interviewee said something like "it would be better to use a shadow DOM" and could explain what a shadow DOM is, I would be pretty happy with that.
Expecting someone to build a full shadow DOM as part of their interview take-home is excessive.
Oftentimes people ding you for doing anything different from what they're used to, or what they see as "the standard".
The worst is when they basically ask how you'd build their product. Some people can't handle a different answer, even as they're busy hiring you to improve things.
I do think we have to distinguish two things though.
It's not really bad to ask someone to do a design session with them and "build their product with them from scratch" isn't inherently bad. That's actually pretty neat if you ask me.
What's bad is if there's only a single answer and that's whatever they actually built themselves, which might be a pile of thrown together startup poo that was never cleaned up. But you have the same problem with all sorts of "needless trivia" type questions.
And then, do you really want to work at a company where you can't have a proper "pros and cons of different approaches" type of discussion? If you got hired, you'd have those kinds of discussions with them on an ongoing basis. Bad on the company for letting that person do the hiring, but they got what they deserved, so to speak.
Just to make an analogy:
If they simply ding you for using 4 spaces coz they use 8, that's bad.
If they ask you why you use 4 spaces when they use 8, have you give the pros and cons, and ask whether there are other approaches and what their pros and cons are? That's a good interview, so to speak. As an interviewer I would give bonus points if the candidate says something like "I used 4 spaces because I thought that's what you guys were probably using coz everyone's moved away from 8 spaces but secretly I love using tabs and setting tabwidth to what I want but in reality it really really doesn't matter as long as it's consistent across the codebase as humans can get used to almost everything and this one isn't worth fighting over. Linters and formatters exist for a reason".
Linux kernel still uses 8 I believe. IIRC wide indentation+narrow pages were chosen partly to encourage using functions and avoiding deep nested logic.
Not because you use 2 spaces. You can argue 2 spaces and the pros and cons and how horizontal scrolling is an issue. One question back would be for example if that means you have huge run-on files where a single function does everything and that's why you need like 17 levels of indentation and that's why only using 2 spaces for each becomes important to you. And then you'd need to argue how that's better for visibility and what might actually be worse about it. If you can do all that, you're hired (if the rest of the interview goes well :P )
Who still uses 8? Isn't that like a COBOL thing?
That works as a flippant comment when we're joking about code indentation after working together for a while and we get along great. As the one and only answer in an interview, you're out. That's quite disrespectful and no it's not a COBOL thing, I've seen (and used) 8 spaces and argued for tabs or 4 much later than COBOL days. In fact I've never written a single line of COBOL.
Btw, at an old job, some joker developer added or copied a naughty character and broke the whole testbed. It was quite funny. I went over the source code hosted in GitLab and ran my regexes that look for naughty characters. Found it after it had eaten half a day of the devs' time.
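A minimal version of that kind of scan might look like this (the character list is an assumed set of the usual suspects, not the original regexes):

```python
import re

# Flag invisible characters that can silently break code: non-breaking
# space, zero-width spaces/joiners, word joiner, BOM. This character
# list is an assumption about what counts as "naughty".
NAUGHTY = re.compile(r"[\u00a0\u200b\u200c\u200d\u2060\ufeff]")

def find_naughty(text):
    # Return (offset, codepoint) for every suspicious character found
    return [(m.start(), hex(ord(m.group()))) for m in NAUGHTY.finditer(text)]

print(find_naughty("total =\u200b 1"))  # [(7, '0x200b')]
```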
More than a decade ago I suggested this as a compromise as a joke, but then decided to try it out - ended up liking it more than any other options and have used it for all my personal stuff ever since.
I just recently read about something that requires - hard requirement - 3 spaces for indentation. Most likely read it here on HN. Makes me sick to even think about.
This. No hire if, when asked an open-ended question, the candidate does not namedrop unprompted the components of the company's actual production tech stack. Clearly they're not knowledgeable about the engineering aspects of the job and are just bluffing their way through the interview process.
Often you don't even get to the interview step. One time I had a take home that said you could either do frontend only, backend only, or full stack. I decided to pick the backend only one and complete all of the optional backend tasks to make something pretty well made.
Then they email me back and said the other candidate did the whole thing and they aren't sure if I know how to style a page now because I only completed the backend part.
The inability to get feedback and course-correct is my biggest peeve with take homes.
Is this one of the tests where I just need to throw together a five-minute quickie to get over your "can you program" filter? Or do you need me to put together something flashy and memorable to show off my ceiling? If I put together my flashy thing, would I get dinged for over-engineering something where a five-minute hack was good enough?
The last time I was hiring I gave out a take-home test, and I thought it was the opposite of an imposition on candidates' time. I'd be curious to hear your thoughts:
- It was designed to be fast to complete (20min max -- not a huge imposition if being hired is likely, obviously very expensive if you're taking one for every job posting).
- I only gave them out after a resume screen. If you had a 0% chance then I didn't waste your time. If you had enough other proof of abilities then I skipped the take-home.
- Candidates were told that it was designed to be fast and that if they couldn't complete it quickly they were unlikely to be successful interviewing either. They still had the option to spend a lot of time if they thought my assessment of the situation was wrong, but part of the point was to allow candidates to gauge their own abilities and not waste their time interviewing without a chance of being hired.
- I did a lot of work behind the scenes calibrating and re-writing the questions, individually and as a whole, so that the test score correlated very well with interview performance (most interviews were administered by not-me, removing a form of bias that can easily creep in).
For every "20 min max" take home assignment, there will be people who are willing to spend 4+ hours doing it to outshine candidates who have jobs, families and lives.
If you want to make it more of a fair consideration of time, consider moving your take home to interviews, that way there isn't a time cost asymmetry. You can enforce your "20 min max" claim this way, you can judge a candidate's performance, thought process and filter out anyone who is LLMing or spending inordinate amounts of time on them.
You will also make a better impression on candidates by investing your time in them in the same way they are with you. Maybe you're hiring kids out of college without experience, but you only have to do so many take home tests before you realize that they're a waste of time, and pass on potential employers who throw them at you, or you learn to just send them your hourly rate for the test.
One other way to keep things true to the “20 min max” is to have a clear objective/scoring rubric. Nothing open ended (data science jobs LOVE handing out open ended data analyses). I need to know that it’s okay to stop and that anything I’m doing would just be overkill.
Live coding during an interview is one of the most oppressive things I’ve witnessed in the industry in general.
There is usually a huge disconnect between someone who knows that “this task should take 20mins” and doing it cold in a super high-pressure environment.
People sweat, panic, brain freeze, and are just plain out stressed.
I’ll only OK something like this if we give out a similar but not the same task before the interview so a person can train a bit beforehand.
I’ve heard it all justified as “we want to see how you perform under pressure”, but to me that has always sounded super flimsy - like, if this is representative of how work is done at this organisation, do I want to work there in the first place? And if it isn’t, why the hell are you putting people through this wringer? It just sounds inhumane.
Yea, there's really no way to do an "interview assignment" well.
If you give unlimited amount of time, you're giving an advantage to people with no life who can just focus on your assignment and polish it as if it were a full time job.
If you give a limited amount of time, then you're making the interview a pressure cooker with a countdown clock, giving a disadvantage to people who are just not great at working under minute-to-minute time pressure.
Depends on the purpose. If you treat it as a minimum bar to pass and are up front about and actually adhere to that then anyone spending more than the limit on it is presumably just wasting his own time (and to an extent the company's because the application process continues). It only becomes a problem if instead of an objective pass/fail metric you start gauging other details that would benefit from additional time spent.
> For every "20 min max" take home assignment, there will be people who are willing to spend 4+ hours doing it to outshine candidates who have jobs, families and lives.
I started refusing take-home tests a couple of decades ago, but when I did them, this is 100% what I would have done.
> For every "20 min max" take home assignment, there will be people who are willing to spend 4+ hours doing it to outshine candidates who have jobs, families and lives.
The ones we use have a clear scoring system and prepared inputs - all that matters is the generated output.
You can put a time limit on it, from when they start to when they submit. It's really the only way to handle a high volume of unqualified applicants. So much time was wasted talking to people who could barely code.
Any take home test trivial enough to complete in under 20 minutes could be completed by an AI. The only signal you get from a take home test is whether or not they can submit answers. It doesn't let you know if the candidate is capable of passing the test unassisted.
Take home tests were never a worthwhile signal. Pre-AI, people would search for solutions or have another person complete it.
Cheating is possible in the abstract, but I found a tight correlation between interview and take-home performance. For whatever reason, candidates didn't seem to cheat much.
The AI point is worth diving into a little. This was a year ago, so SOTA was worse, but I didn't find it terribly hard to write questions AI couldn't solve, whose answers you couldn't search for, and which good candidates could solve. The test was a few of those questions and a few which were easier to cheat, and almost nobody had good scores on just the cheatable section.
I don't think that moat will exist indefinitely, but today's AI just isn't very good at a lot of incredibly basic tasks unless the operator has enough outside knowledge to guide it in the right direction (and if a candidate did that, I mostly wouldn't care because, by definition, they had the knowledge I was looking for). I use AI a lot, and it's great at many things, some quite complicated, but it has weaknesses, and those are pretty easy to exploit.
Your description of the test and your replies to questions indicate you've come up with a pretty great assessment for the role(s) you hire for. Especially where you mentioned:
> The test was a few of those questions and a few which were easier to cheat, and almost nobody had good scores on just the cheatable section
I also like how you allow/encourage self-assessment, where if a candidate can't do the test in ~20 minutes under zero pressure, they probably won't be a good fit in the role itself.
> but part of the point was to allow candidates to gauge their own abilities and not waste their time interviewing without a chance of being hired.
In my experience this is the wrong game theory. Unemployed people can make job hunting their full time job, so a 20 minute take home doesn't select for "who delivers the highest quality solution in the least amount of time," it selects for "who is the richest applicant who can burn hours on a take home to deliver a higher quality result than people with less time they can afford to spend?"
Also, nobody should ever self-select themselves out of an interview process. Passing a resume review and getting a callback is about 10% likely: for every job hunt, in my experience, candidates get about 10 callbacks for every 100 resume sends. From there, it's about a 20% chance to reach the final stage, and from there, maybe 50% to get an offer (you're either their first choice or second; if second, your hiring hinges on whether the first choice accepts). The math is right there: once you pass a resume check, in terms of the volume of applications you've sent, it's optimal to put far more effort into this gig than into firing off ten or twenty more resumes.
Therefore, even if the candidate doesn't think they're a good fit, they should do everything they can to stay in the game, including lying by omission.
After all they might be engaging in imposter syndrome, right? Why assume for the interviewer that your python skills aren't good enough - maybe the interviewer understands perfectly well that you've only used it for scripts and one off tools, but doesn't care because they personally believe your startup experience is more valuable to them and they believe you can up skill! Maybe the take home was designed poorly by someone who was tasked randomly by a lead to shit out a take home, and it's not an accurate indication of what the job would be like. Maybe they sent you the wrong take home? Maybe it's a good take home but you need money so fuck it, if you manage to sneak in despite not being a good fit, you can just bust ass to upskill and make up the difference before anyone notices. Or fuck it twice, it's a shit market and who knows how much longer you'll be able to sell your labor as an engineer, even if you can only fool them for two weeks, that's two weeks of income while you still keep up your job hunt.
People who have really good jobs aren’t applying for jobs below them, so your applicant pool will always be people who are in an equal or worse position than your job.
No one at Anthropic, for example, is applying to a job at Geico.
I guess, I think there's too large of a pool of companies for anyone to really say what's a meaningfully "equal" or "worse" job. If Geico was opening a new machine learning division to do some really interesting work on the probably shitload of actuarial data they've built over the years, maybe someone from anthropic would be interested in that. Or maybe everyone gets laid off from anthropic because of an AI bubble burst, and now googlers are competing with anthropic people who are competing with walmart labs people, and each hiring team really has no way of knowing who's better than who based on resume alone because they've got fifty in the inbox with FAANG experience.
Also I know plenty of people in startup world who are phenomenal engineers that only have companies I've never heard of on their resumes - startups that for one reason or another simply didn't have a news-grabbing exit.
> If Geico was opening a new machine learning division to do some really interesting work on the probably shitload of actuarial data they've built over the years, maybe someone from anthropic would be interested in that
But they’re not. And they won’t. And that is my point. They’d make a ML Engineer post on LinkedIn and get a bunch of people for whom Geico would be a step up.
There will never be a job opening from Geico that someone at Anthropic would apply to.
That’s my point - your pool will always be people who are in a worse position than your job. Being laid off is a worse position than a job.
You’ll never see Anthropic candidates in a Geico hiring pool, unless they were laid off for being lousy and can’t find anything else.
The market is pretty efficient - people wouldn’t bid for jobs that are worse than their current situation.
> The market is pretty efficient - people wouldn’t bid for jobs that are worse than their current situation.
This still seems like an oversimplification. It's easy to label FAANG, "frontier AI companies," whatever else, but the vast majority of jobs and the vast majority of engineers are in a soup that's maybe able to be split between "startup world" and "enterprise world" but beyond that, difficult to say one is "worse" or "better." And I've worked alongside FAANG people in startup world so, either that isn't a "worse" job and therefore your theory doesn't work because that means it's not really possible to accurately evaluate every single company as objectively worse/better, or, your theory doesn't work because people do apply to "worse" jobs.
> After all they might be engaging in imposter syndrome, right?
GP specifically stated that this was the point of the takehome though. If the person handing it out specifically warns you that struggling with it means you aren't a good fit then if you struggle with it that's not imposter syndrome - you aren't a good fit! Not dropping out at that point is just refusing to acknowledge reality and insisting on wasting everyone's time.
I sent out screens to ~15-25% of resumes (a higher rate for new grads, lower for seniors; not wanting to unnecessarily rule out new grads just because they lacked positive evidence of potential success and didn't know how to write a resume, I only ruled them out if there was positive evidence they'd be unsuccessful). That amounted to ~100 per position filled. Around half of those completed the take-home. Some of the rest should have self-selected out and didn't, which is something I'd like to improve if I run a take-home again.
You're right the time commitment wasn't equal. Early on I spent much more time than the candidates designing and analyzing the test. Afterward, their 20 minutes would usually take me <5min (often <1min for obvious failures and obvious passes, the average brought up due to time analyzing edge cases).
I did read every submission though. It wasn't wasted time for candidates.
20 minutes max seems fair to me. For context I was once given a 1 week assignment just to be discarded without any feedback. From then on if it takes more than a day I won't do it.
Taking time to figure out if you’re the right fit for the company and the company is the right fit for you is a very good thing. For both parties! Rushed hiring processes increase the chances of you being fired for not being the right fit. Short hiring processes are a massive red flag for me.
True, but it becomes a problem when the entire thing is automated, because then it's entirely one-way. You spend infinite time and money; they spend nothing. So you can't even figure out for yourself if you're a good fit. It's entirely in their court.
> Full automation leaves them free to impose infinite cost with no guarantee of anything.
Wow, this is a great way of putting it. It's draining enough to go to third- and fourth-round interviews with other humans. Doing it with a series of AI chat bots would be devastating!
> Getting a lot of applications that don't meet your standard doesn't force you to raise you[r] bar. You still just need someone who meets your standard.
I'm not sure that first sentence is true. Let me play Devil's advocate:
What's the primary cause of not being able to find someone who meets your standard when you already get lots of applications? It's that your hiring process is bogged down by the masses of unwanted candidates you must evaluate to find the few wanted candidates in the crowd of applicants. And what's the fix? It's better screening. Which is raising your bar, isn't it? Even if it's only to add cargo-cult screens to your bar, it's making the bar more selective, isn't it? Fewer people clear it, right?
Arbitrary filtering of candidates doesn't reduce the effort that it takes. Let's say 1 out of 1000 of the candidates you see is what you need. The total amount of effort to find the right candidate is still the same. But throwing out half the resumes just doubles the amount of time until you find the candidate you need (you just spread lower effort over a longer time).
On the other hand if you "raise your bar" (let's say you do so by some method that makes it twice as expensive to judge a candidate; twice as likely to reject a candidate that would fit what you need, i.e. doubles your false negative rate; but cuts down on the number of applications by 10x, so that now 1 out of 100 candidates are what you need, which isn't that far off the mark for certain kinds of things), you cut down the effort (and time) you need to spend on finding a candidate by over double.
EDIT: On reflection I think we're mainly talking past each other. You are thinking of a scenario where all stages take roughly the same amount of effort/time, whereas tmorel and I are thinking of a scenario where different stages take different amounts of effort/time. If you "raise the bar" on the stages that take less amount of effort/time (assuming that those stages still have some amount of selection usefulness) then you will reduce the overall amount of time/energy spent on hiring someone that meets your final bar.
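The arithmetic in the raised-bar example above can be checked directly (the 10% baseline false-negative rate is an assumed figure; the 2x cost, doubled false negatives, and 10x density cut are from the example):

```python
def expected_cost(hit_density, cost_per_candidate, false_negative_rate):
    # Expected candidates screened until one fit passes is
    # 1 / (density * pass-rate-for-fits); multiply by per-candidate cost.
    return cost_per_candidate / (hit_density * (1 - false_negative_rate))

# Baseline: 1-in-1000 fits, unit cost, assumed 10% false-negative rate.
baseline = expected_cost(1 / 1000, 1.0, 0.10)  # ~1111 units
# Raised bar: 2x cost per candidate, doubled false negatives, 10x density.
raised = expected_cost(1 / 100, 2.0, 0.20)     # 250 units
assert raised < baseline / 2  # savings of "over double", as claimed
```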
I wasn't suggesting arbitrarily removing candidates was a good idea, but simply responding to their specific devil's-advocate example of applying "cargo cult screens", which would presumably be arbitrary.
I wasn't suggesting arbitrary filtering. That's a straw-man interpretation of what I wrote. Even if a firm cargo-cult copies the screening practices of the big-tech firms, they are going to be much better at selecting good hires than arbitrary filtering would.
And why would this be the case? Maybe the solution is to ban AI from the hiring process. This seems like companies being hoist by their own petard: they're the ones who drove the hiring market to be this way. They started using AI in the hiring process, and they made applying so much work that applicants turned to AI to survive.
Also, if you are having trouble hiring right now, that is 1000% a skill issue. It is easier to hire good talent right now than ever before. So I have absolutely 0 sympathy for this POV. Go down to your HR department if you want to see who is at fault.
PS You fix it by charging $1 to apply for jobs. Took me all of 30 seconds to figure that one out.
I wouldn't pay anything to a company I'm applying to, but I would gladly send a small amount of money to a charity and show them the relevant bank or cryptocurrency proof if they explain why they need the micropayment. They could present me with a list of 10 or 10000 charities, I'd pick 1 and put "micropayment for applying to company X" in the comment of the payment.
That way I know I'm not giving money to some huge corporation and they know I think applying to their job should at least cost me Y amounts of currency.
And if they waste more than an hour of my time with the hiring process, they could similarly pay a charity some money per hour.
That way neither I nor the company will feel cheated, and in the end, no matter how the hiring turns out, a charity will have benefited.
To avoid overhead for many small payments, start a platform where users can buy many credits at once by contributing larger amounts to charity. Then, you burn your credits to apply to companies (or cold message applicants) to show you're not just spraying and praying.
This could also be used for combating spam elsewhere, like posting in forums, comment sections and so on. To preserve privacy, something like zero-knowledge proofs could be utilized. I don't know how the cryptography would work exactly, but if you can't double spend a credit and you can choose whether to keep it anonymous or not, it could work, too. It would be best if for a given credit spent, you could only disclose your identity to the entity you want access to, not the credit issuing entity.
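One classical building block for this kind of "spend a credit without revealing who bought it" scheme is a Chaum-style blind signature: the issuer signs your credit without ever seeing its serial number, so a later spend can't be linked back to the purchase. A toy sketch over textbook RSA (tiny key, no padding or hashing, purely illustrative and NOT secure for real use):

```python
# Toy Chaum-style blind signature: the issuer signs a blinded credit
# serial, so the unblinded signature is unlinkable to the issuance.
import math
import secrets

# Issuer's RSA key (toy-sized primes for readability only).
p, q = 61, 53
n = p * q                 # public modulus
e = 17                    # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent

def blind(credit_serial: int):
    """User blinds the serial before sending it to the issuer."""
    while True:
        r = secrets.randbelow(n - 2) + 2
        if math.gcd(r, n) == 1:
            break
    return (credit_serial * pow(r, e, n)) % n, r

def issuer_sign(blinded: int) -> int:
    """Issuer signs blindly; it never sees the raw serial."""
    return pow(blinded, d, n)

def unblind(blind_sig: int, r: int) -> int:
    """User strips the blinding factor, leaving a normal signature."""
    return (blind_sig * pow(r, -1, n)) % n

def verify(credit_serial: int, sig: int) -> bool:
    """Anyone (e.g. a forum) can check a credit without the issuer."""
    return pow(sig, e, n) == credit_serial % n

serial = 1234                       # user's secret credit serial
blinded, r = blind(serial)
sig = unblind(issuer_sign(blinded), r)
assert verify(serial, sig)          # valid, yet unlinkable to issuance
```

Preventing double spending still needs the accepting side to record spent serials (or something fancier), and real deployments would use proper padding and hashing; this only illustrates the unlinkability idea.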
For spam, the infrastructure costs of running a forum (servers, power) seem much lower than the cost of the mods who deal with spam. So instead of paying the forum directly, we reduce the time human mods have to spend, which lowers the forum's costs indirectly. The credits could be per post or per account creation. I assume the HN mods' time is worth a lot more than the servers and power HN runs on.
Also, we won't have the issue that PoW and other proofs-of-X have of being easier to do on some devices and harder on others (like the power and time it takes to run PoW on a beefy desktop with AES-NI vs on an old phone).
But we'll still have the issue with different standards of living in different places making the credits more or less expensive for the user subjectively. Companies hiring worldwide could require different amounts of credits for applicants from different countries, but for forums this wouldn't work.
A solution to that could be issuers giving credits for local volunteering work. Clean up some garbage from the shore and get a credit regardless of whether you're in the USA or Bangladesh. But if you want to prevent credits from being traded (do we? idk) and, at the same time, have some amount of privacy, how would you do it?
But now you'd have to make sure that credit issuers all over the world only issue credits for real charity-like work. And who's to say how to value picking up garbage vs volunteering at an animal shelter vs donating $1 to a charity?
It's interesting to think about this, even though I don't have any resources to implement anything like that.
(X) Requires immediate total cooperation from everybody at once
- for the specific forums, jobs and other things that may use something like this
Specifically, your plan fails to account for
(X) Public reluctance to accept weird new forms of money
- if the credits are treated as money
(X) Armies of worm riddled broadband-connected Windows boxes
- that will always be an issue, but I doubt it's too relevant here
(X) Extreme profitability of spam
- if someone spends a credit for spam and they think it's worth it, it might be an issue. But most spam wouldn't be worth it, IMHO, especially if it will be deleted from a forum, anyway.
and the following philosophical objections may also apply:
(X) Ideas similar to yours are easy to come up with, yet none have ever been shown practical
- well, yeah :)
(X) Sending email should be free
- this isn't about email, but I don't necessarily like having to pay to post. However, lots of forums will remain free, as not everyone will use this idea if it's implemented. And some forums have paid accounts now, anyway.
(X) Why should we have to trust you and your servers?
- why should we trust the credit system - important question, as we haven't thought out how it could be gamed or abused.
> They are the ones who started using AI in the hiring process
Aren't you ignoring the reports of companies receiving thousands of ChatGPT-written resumes, bots sending applications, and interviews with applicants being live coached by AI?
I wouldn't be surprised if eventually hiring becomes heavily dependent on personal referrals. That way you know you're at least dealing with a real person and not a bot, a North Korean trying to infiltrate your company, or someone who isn't even authorized to work in your country.
The problem is that spambots don’t care how big the company is. I know folks that advertised local Office Manager positions for tiny companies, and got hundreds of totally unqualified and unrelated résumés, and that was before AI was common.
The “good” news was that it was pretty easy to bin the spam.
… needing to pay for postage hardly stops the spam I receive in my own mail. Even the most trivially absurd stuff, like "install rooftop solar" — I don't own a roof.
In the end companies don't need to hook up to the sewer pipe that floods them with applications. What worked in the past was (heaven forbid) a technical hiring manager looking at resumes and reaching out to clearly qualified candidates. Not HR 20-somethings with humanities degrees. Sorry.
All companies attempt to give the same interviews anyway, so just have one centralized organization give two programming questions and two system design questions, with some kind of proof once you pass.
You filter out everyone who can't pass the interview in the first place, candidates get a better interview experience, and companies can just focus on experience.
Lots of people get through engineering school but are terrible engineers. Interviews are important (and difficult... Not many people are good interviewers!)
Professional certifications have a terrible reputation for good reason. You are perhaps too young to know why this is a silly idea, but it's been tried and it failed spectacularly.