Hacker News | past | comments | ask | show | jobs | submit | thdc's comments

I only go through job listings directly (though some listings may say email this person with your resume and I'm including that), and the response rate has always been low for me. I'm pretty strict with requirements and the kind of work I'm looking for. To cover the past 5 years or so:

In 2019, I submitted 400+ applications and had only 4 or 5 responses which eventually converted into 1 job. I hear the market was hot then.

In 2021, I submitted around 40 applications with 3 responses where I had 2 interviews, and 1 job offer (through HN whoishiring!) that I accepted; stopping the process with the 3rd company at that point.

Now I've been looking for 2 months, and have so far sent around 15 or so applications with 1 interview that I did not pass.

I understand that networking and referrals are basically key nowadays, but I won't do that based on my values - I think it's unfair to be prioritized based on who you know over skills - this is a hill I will die on (or at least leave my profession over).

Furthermore, I do have a solid work profile (open source, personal site, blog with mostly technical posts, etc.) but am not willing to associate it with my real-life identity. Not because it's inappropriate, but because I value privacy.


It's probably alternating the comparisons.

Compare [0, 1] [2, 3] [4, 5] ... in parallel and swap if necessary, then compare/swap [1, 2] [3, 4] [5, 6] ... in parallel, and go back and forth until no more swaps are made - at that point the second element of every pair is greater/less than the first.

That does suggest that the theoretical ideal number of threads is n / 2, ignoring cores, though you'll also want to consider things like cache line size, cache coherency, and actual problem size, so in practice it's probably fewer.

At the end of the day, the important thing is to actually benchmark your attempts, see how they scale with processor count and problem size, and check the isoefficiency.
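The scheme described above is odd-even transposition sort; here is a minimal sequential Python sketch (a parallel version would hand each pass's independent pairs to separate threads, which is omitted here):

```python
def odd_even_sort(a):
    """Odd-even transposition sort, done sequentially. Each inner pass
    compares disjoint pairs ([0,1] [2,3] ... then [1,2] [3,4] ...),
    so in a parallel version every pair in a pass can go to its own
    thread: hence the theoretical n/2 threads mentioned above."""
    swapped = True
    while swapped:
        swapped = False
        for start in (0, 1):            # even pass, then odd pass
            for i in range(start, len(a) - 1, 2):
                if a[i] > a[i + 1]:     # pair out of order: swap
                    a[i], a[i + 1] = a[i + 1], a[i]
                    swapped = True
    return a
```

The sort terminates once a full even+odd sweep makes no swaps, matching the "until no more swaps are made" condition above.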

I think it was a bad question.


Knowing to DRY there depended on business knowledge that the original author did not have.

While they were wrong in this case, I would say it was a reasonable move to not DRY based on the code pattern itself at the time. And that's the big difference imo - DRYing based strictly on the structure of code vs business processes.
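A contrived Python sketch of that distinction (all names and rates are invented for illustration): two functions that are structurally identical today but encode unrelated business rules, so merging them on structure alone couples things that may diverge.

```python
# Hypothetical example: structurally identical code driven by
# different business rules.

def sales_tax(amount):
    # rate set by tax law
    return amount * 0.05

def shipping_fee(amount):
    # rate set by a carrier contract
    return amount * 0.05

# The structural temptation is to merge them:
def five_percent(amount):
    # "DRY", but now a tax-law change and a carrier renegotiation
    # fight over the same function (or it sprouts flags)
    return amount * 0.05
```

The duplication is in the code's shape, not in the business process, so keeping the two functions separate is arguably the safer default.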


But this implies that you have to guess when and where to DRY, which basically implies that there's no good way but via experience and domain knowledge!

That's not what people want to hear - they want a silver bullet; a set of criteria by which DRYing could be determined from the outset!


I feel like I've been in this situation for the past year - the place I'm working at had a large culture shift for the worse.

I would change jobs, but I hate interviewing and everything else in the process, so instead I work at my standards and stopped trying to impose them on others.

I'm not satisfied at work, but personal projects and activities help fill the void (as a software engineer). I'm never sure if a down period is temporary or not, so I'll always tough it out for a bit.

I started looking for a new position very recently, though, since it's been long enough.


> I'm not satisfied at work

I feel this somewhat, but I’ve realized it’s largely seasonal and more a matter of perspective.

I’ve had good times and bad times at work and they just come and go. During the good times I double down on my work. I put in more because I get more out. During the bad times I focus on personal projects. I do the job as a professional but don’t waste time trying to knock it out of the park when I know I won’t.

I just look for fulfillment where it comes naturally and don’t try to squeeze it out where it doesn’t.


Well said, well balanced.


> I would change jobs, but I hate interviewing and everything else in the process

Networking helps. Who did you work with in the past who meets your standards? Get in touch, ask them where they're working and whether they know of any openings.

Oftentimes you skip at least some of the screening BS, and you may also get more of the interview time spent on them convincing you to join rather than on vetting you. Depends on the place and the strength of the recommendation.


My satisfaction in my personal life is definitely what keeps me happy to grind away at my workplace (which isn't really too bad, but the will to take risks to actually improve seems "thin").

It can't last forever, though, so there's an inevitable reckoning on the horizon.


I like to think it went like this

1. Interviewer: If you're a good software engineer, you can answer basic algorithmic questions.

2. Interviewees: Practice algorithmic questions so you appear to be a good software engineer.

3. Interviewer: People are just studying leetcode to get jobs, what can we do? Ask harder leetcode questions.

4. Other companies: Let's copy them since they're successful.

In short, the questions used to be reasonable until people specifically prepared for them. No one knew what to do about it so they just raised the difficulty, which made it even more unfair for people who don't specifically prep.


FizzBuzz as an example of 2/3: people used to occasionally talk about how interviewees were just memorizing the answer, and when they tweaked it slightly (like adding a 3rd number), a bunch of them could no longer solve it.
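A sketch of such a tweaked version (the third divisor and its word are invented here for illustration; the source doesn't say what interviewers actually used):

```python
def fizz_buzz_bazz(n):
    """Classic FizzBuzz with a third divisor bolted on (7 -> "Bazz"
    is an invented example). A memorized two-divisor answer doesn't
    transfer; the check-each-divisor-and-concatenate idea does."""
    out = []
    for i in range(1, n + 1):
        word = ""
        if i % 3 == 0:
            word += "Fizz"
        if i % 5 == 0:
            word += "Buzz"
        if i % 7 == 0:
            word += "Bazz"
        out.append(word or str(i))  # fall back to the number itself
    return out
```

Anyone who understood the original pattern adds the third branch in seconds; anyone who memorized output strings is stuck.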


“if a flag of truth were raised we could watch every liar rise to wave it”

I heard this lyric at a formative time, and I’ve seen it proven true many times, including in tech interviews. People continually seek out signals that imply knowledge, experience, and even shared culture, but those signals inevitably become too small (smaller = quicker and easier to weed people out), and then they become the very things people practice in order to look like they have the knowledge, experience, or shared culture they need to get through the doors and secure the opportunities.

Then those signals get burned and the cycle starts again (in fact, in my experience the cycles concurrently overlap).


It's actually great on the hiring side: you can skip all that bullshit, and because the tryhards are all prepping obscure CS questions, just having a conversation about technical topics has become a signal again. Measure something people aren't trying to game and you get a better assessment, go figure.


Yes, I explicitly do the opposite and ask the most pragmatic exercises and questions.

Another is a system design exercise where hyperscaling is not required and the thing is actually quite simple. Many who have prepared specifically by leetcoding and reading "Cracking the Coding Interview" 10 times over will naturally overengineer everything, trying to fit the exercise to those book patterns and dropping all common sense, all the while never having actually built anything meaningful.

I think these people will mostly try to rest and vest anyway. Truly passionate people will pass since they have actually built something and will understand the exercise.


When I get a system design question I always tell the interviewer "I'd just run it on a single server with an SQLite backend, that will be plenty for the median software service and you haven't told me any numbers that suggests this needs more" and then it turns out they wanted it to run at the scale of WhatsApp.


For what it's worth if I ever had a candidate give me this answer on a systems design problem I'd probably immediately stop evaluating them and start selling them on the role.


Usually you're expected to ask what the expected traffic is, but I usually answer with a number that one box could technically handle.


"So what scale are we talking about? A few million monthly users? So like hackernews? I would use a single server... "


You really need two in physically different locations (ideally in different ASes) with some form of failover, assuming you want a reasonable guaranteed uptime.


Does 99.995% [1] of hackernews sound reasonable enough to you?

The reality is a lot of systems (especially simple ones) run perfectly fine on a single server with next to no downtime, and all the additional redundancies we introduce also add additional points of failure; without the scale that makes them necessary, you might actually end up reducing your availability.

[1] https://hn.hund.io/


> Does 99.995% [1] of hackernews sound reasonable enough to you?

How did you come up with that number? I looked at the link and just one of the outages listed on January 10 was 59 minutes. That alone makes the uptime worse than 99.99% for the entire year before it was halfway through January.

(99.995% means at most 26.3 minutes of downtime per year. See https://en.wikipedia.org/wiki/High_availability#Percentage_c...)
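The arithmetic behind those numbers, as a quick Python sketch:

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_budget_minutes(availability):
    """Maximum minutes of downtime per year at a given availability."""
    return (1 - availability) * MINUTES_PER_YEAR

def availability_after(outage_minutes):
    """Yearly availability if this were the only downtime all year."""
    return 1 - outage_minutes / MINUTES_PER_YEAR

# 99.995% allows roughly 26.3 minutes of downtime per year...
print(round(downtime_budget_minutes(0.99995), 1))  # 26.3
# ...so a single 59-minute outage alone drops the year below 99.99%.
print(availability_after(59) < 0.9999)  # True
```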


It's the 30-day uptime statistic displayed on the website I linked.

If you click through to the history, it says 98.452% over 365 days.


That’s a pretty low level - lower than my Pi-hole, and I wouldn’t consider my Pi-hole to be anything other than best endeavours (99.1%). Two would be fine, but there are common points of failure which would limit the solution.


I can't see how hackernews is 99.995% if I get at least 30-40 "Can't serve your request" error pages a year.


HN is on a pair of servers.


The second one is just a standby, though, and not in another region. And if I recall correctly, dang mentioned during an outage that failover to the standby is manual. But I'm not sure.

https://news.ycombinator.com/item?id=16076041


This so much this! Experience is such a wonderful thing.


We've run a 375,000-employee self-service ERP system (so a much heavier single transaction than HN) on a single (large :) DB2/AIX box with no downtime over the last 8 years. That's well within published specs for that hardware/software combo.

Yes we do have a DR box in another data centre now, in case of meteorite or fire.

This used to be the norm. A single hardware/software box CAN be engineered for high uptime. It's perfectly fine when we choose to go the other way and distribute software across a number of cheap boxes with meh software, but I get pet-peeved when people assume that's the ONLY way :).


I’m paranoid about environmental events, so I always have a failover, but then my DR plans include scenarios such as “Thames Barrier fails”.


Add a static caching layer and you’re ready for traffic spikes.


The highest correlation to success I see in my field is a background in PC gaming. They tend to do better in the technical part of interviews; all the flashy certificates go out the window if you can't tell me what you'd do if a computer won't POST.


Absolutely! One of the best jobs I ever worked asked me what CPU I had in my computer at home. I talked about how I'd built my own PC and the parts that went into it and why, and much later, once I was settled in the role, they said they could see I was a good candidate from that point onward. I think it's useful to show curiosity about your tools and about the boundaries of your world, which helps when things aren't going to plan.


Do you really think it made you a better candidate, or did this person just feel a connection due to a similarity in interests or in your approach to computing? Was it even relevant for the job?


I find basic PC troubleshooting skills to be highly relevant to working in a data center. I learned about isolation testing and minimum config when I was about 13, trying to play MMOs, so these things come relatively naturally to me due to exposure at a young age. That interest has persisted well into my adulthood; despite having no formal education, I run circles around the relatively disinterested CompTIA kids.

As to being a better candidate, there is little I doubt less than the simple observations I can make at work. We could split hairs over causality, but there is a clear distinction between the people who go home to delid their CPU and the people who have devoted their time to certs instead.

Of course there will always be those sages who don't really care much for video games and have transcended the street knowledge; those people 1. have better jobs to work and 2. are harder to find, with little added benefit.


> due to a similarity in interests or in your approach to computing?

It's hard to say for sure - both of these things are also part of being a good candidate and working well in a team.

But I do think that this experience both comes from and forms part of who I am: a certain troubleshooting and optimizing mindset, and a curiosity about machines. Is it strictly necessary, and will everyone with this shared hobby/past be this way? I don't know. I do think it's a useful (if fuzzy) signal, best used alongside other signals.

Was it relevant for the job? Not per the job description. There were times when I used related experience to solve problems or smooth things over, though, such as figuring out why a QA engineer's setup was bluescreening (faulty RAM), or being familiar with the tools built into Windows for performance profiling and for debugging memory and storage problems in programs.


Sadly, we can’t defer to Stack Overflow for interview success like we can with code. GPT may help break it up, but until we stop with the sociology questions and get back to technical delivery, we’ll continue to see people try to game a system, and then game the system they gamed. It’s real-life NPM.


Here's an older submission about using ChatGPT to de-obfuscate the more basic methods:

https://news.ycombinator.com/item?id=38150096

Some comments claim that it can break some of the more complex techniques presented in the article. I've tried it a few times myself with varying results that tend towards not working.


Feels like ChatGPT has to show up in every Hacker News thread now.


I use TikZJax https://tikzjax.com/ (wasm tikz).

It works well, but you have to figure out the markup, and dynamically styling the images is difficult; for example, to make dark mode work, I have to apply CSS filters over the generated SVGs.

It also doesn't show anything if JavaScript is disabled, so as part of my site's "build" process I duplicate the contents into a noscript tag, so users at least know a TikZ diagram is supposed to be there.

I have an entire custom build process though, so that might be why it was straightforward for me to incorporate it.


Ah interesting, I'll check it out. I figured this had to exist! It might be ideal --- I'm on a github-pages Jekyll site and I like how simple that is, but it means I can't do anything server-side at all, not even making custom Jekyll plugins.

How big does the resulting binary get?

edit: oh, looked at the demo on https://tikzjax-demo.glitch.me/ and it seems like it is just a couple MB. Not bad.


Yeah, around 1.5 MB transferred, though less of an issue with caching of course.

You also won't have something nice like $$ or \[ \] and will have to put the

    <script type="text/tikz">
        \begin{tikzpicture}
            ...
        \end{tikzpicture}
    </script>
tags directly in your markdown, if that even works.


Jekyll at least has a {% include %} tag that can introduce HTML into a markdown document, so I can probably use that. TBD. The $$ is awkward though.


I like to say that "users" includes the people who will work with (use) your code in the future. It stretches the usual definition of user, but I think it's a good point.


You can change the definition, but you can't change the fact that those "users" aren't paying you.


Every coworker I’ve had who thinks this way has left a minefield of gotchas and inscrutable interdependencies for the unfortunate developers who come after.

Yeah, they “got it done”, but we spend 80% of our time fighting fires, and the remaining 20% for new development takes ten times longer than it ought to because zero thought or care was put into anything other than “it works for me”.

This to me is the difference between engineers and programmers. Programmers can get something done and out the door, but engineers can build something that is easy to iterate on and easy to reason about.


> Every coworker I’ve had who thinks this way has left a minefield of gotchas and inscrutable interdependencies for the unfortunate developers who come after.

I mean, then they weren't good engineers? Nobody said that approach is good.

But I've also seen my share of engineers who knowingly write buggy code that eventually blows up in someone's face, because that code was simpler and turned out more elegant that way. Code simplicity and reality don't always go hand in hand. The startup graveyard is filled with businesses with otherwise great engineers who lost sight of the customer's actual experience.


It’s also littered with the bodies of companies that failed to keep up with their early development speed because their development team cranked out two years’ worth of “whatever works” and painted themselves into a corner.

In my experience, that happens way more often than teams failing to produce value because they’ve spent eons polishing something to perfection.

On the other hand, our industry’s culture of not taking the time for anything to be built a little better means we have an enormous number of seemingly-experienced engineers who lack the understanding of how to write well-built software even if they are given the time. Which leads to individuals concluding that time spent cleaning things up is a waste because they end up with something worse and more complicated afterward. So they don’t invest in learning this skill, and the cycle repeats.


You are correct that good code does not translate directly into revenue, but it affects it indirectly e.g. through ease of future development, maintenance, and fixes.

If the thing being written is not going to be updated at all, then, sure, quality is not important.


It is worse than them not paying you. "You" (the company) are paying them. That means you want to minimize the amount of time they spend on the software without getting further returns of some sort.


The point I was making was that if your product is good, even if the source code is terrible, you can still keep selling it as-is and keep making money. You won't be able to improve it easily, but at least the current state is producing value. By contrast, if your product is bad, then you're going to miss out on revenue until you fix that, no matter how awesome your codebase is.


Email obfuscation covers a lot more techniques than what would be encountered in a text response, so I find the title too broad.

I'd be interested to see attempts to extract emails from pages that utilize javascript/css. For example, I have at least two implementations of email obfuscation on my personal website:

1. for non-js users, I have a collection of elements in a noscript tag where a subset are hidden or shown by some CSS that uses a combination of pseudo-classes and selectors to only show the (full) email after some page interaction

2. for js users, I run some pretty trivial javascript (string manipulation to build b64 encoded string, then decode) to modify the dom after certain events fire
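For illustration, the split-and-reassemble base64 idea behind option 2 might look like this, sketched in Python rather than the actual JavaScript (the fragment ordering and function names are invented, not the real implementation):

```python
import base64

def obfuscate(email):
    """Build time: split the base64 of the address into out-of-order
    fragments, so neither the plain address nor its full base64 string
    ever appears verbatim in the served HTML."""
    b64 = base64.b64encode(email.encode()).decode()
    third = len(b64) // 3
    # stored out of order; the client knows the reassembly order
    return [b64[third:2 * third], b64[:third], b64[2 * third:]]

def deobfuscate(parts):
    """Client side (JavaScript on the real site, after an interaction
    event fires): reassemble in the known order, decode, and write the
    result into the DOM."""
    return base64.b64decode(parts[1] + parts[0] + parts[2]).decode()
```

A naive scraper grepping the HTML for addresses (or even for whole base64 blobs) finds nothing, while the page reconstructs the address on demand.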


It's very important to me because it's an instance of one of my values - keeping something borrowed in the same or a better condition. For example, we're "borrowing" the earth/environment from future generations.

Of course, we could have a lot of discussion over what "better" means and if making X better is worth it at the cost of Y and so on, and I understand that the criteria, definitions, and interpretations could vary between people. For me, the focus should be on sustainability.

I can't have much impact on the overall direction of things as an individual, but I still try to do my part.

