Hacker News | sakex's comments

Levels in big tech are just a way to keep you motivated. You'll work harder to get a promo.

In the end it doesn't matter, you'll make more money by either leaving or getting a retention offer.


I'd be surprised if they didn't scale it up.


There are new things being tested and yielding results monthly in modelling. We've deviated quite a bit from the original multi-head attention.


Maybe add the date to the title, because it's hardly new at this point


...or in 2020 (the year of the article).


What about MRI? Just had one. Sorry if it's a stupid question, I don't know much about this


MRIs themselves produce no cancer risk, as they don't use ionizing radiation. There are SOME questions about SOME of the dyes used for SOME MRI procedures, but those are usually used in situations where the alternative is worse - so do it.


MRI doesn't use ionising radiation so it's a stretch. Most likely cause would have to be some toxic effect of the contrast dye (as opposed to any sort of ionising radiation), but no compelling evidence exists for that.


MRI's magnetic field and radio waves don't carry enough energy to ionize anything. CT scans use X-rays (Röntgen radiation), and those are known to cause ionization (the waves can displace electrons), which --in DNA-- potentially causes cancer.


Depends. MRI itself is safe, but they often add "contrast", which is known to cause cancer (I'm not clear on whether there is more than one choice of contrast, or if they all cause cancer). Of course, contrast is mostly used when they're looking at something - likely a tumor that may or may not be cancer - to decide how to treat it. In that case, does it matter that your long-term cancer prospects go up, when without it your short-term prospects are not good?


There is no compelling evidence that MRI contrast agent causes cancer. Gadolinium (the stuff that's in the contrast agent) can deposit in the body, e.g. in the brain, but whether this even has any consequences is still unclear. Nonetheless, there is some nice research going on into how to drastically reduce the amount of contrast agent that needs to be administered, through image postprocessing.


Hmm. When I checked a few years ago, what looked like authoritative people said it was - I will admit to not being an expert though.


citation?


You're safe


The anomaly here is AI researchers


> They won’t notice those extra 10 milliseconds you saved

Depends what you're doing. In my case I'm saving microseconds on the step time of an LLM used by hundreds of millions of people.


Beauty of scale. Saving ten milliseconds a hundred times is just a second. But do it a billion times and you've shaved off ~4 months.
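The arithmetic holds up; a quick sketch (the 10 ms figure and the billion-invocation count are from the comment above, not measured anywhere):

```python
# Aggregate time saved by shaving 10 ms off each invocation.
saved_s = 10 / 1000  # 10 milliseconds, in seconds

hundred = saved_s * 100            # 100 invocations -> 1.0 second
billion = saved_s * 1_000_000_000  # 1e9 invocations -> 1e7 seconds

days = billion / (24 * 3600)       # ~115.7 days, i.e. roughly 4 months
print(hundred, days)
```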


If you work at Google or whatever else is popular or monopolistic this week.

In most real jobs those ten milliseconds will add up to what, 5 seconds to a minute?


There is probably a non-linear function from how slow your software is to how many users will put up with it.

Those 10 ms may quite well mean the difference between success and failure... or they may be completely irrelevant. I don't know if this is knowable.


There is. But what the OP is doing is not that, it's "scaling". Which probably makes sense for whatever they're working on*. For the other 99% of projects, it doesn't.

* ... if they're at ClosedAI or Facebook or something. If they're at some startup selling "AI" solutions that has 10 customers, it may be wishful thinking that they'll reach ClosedAI levels of usage.


It's not really clear to me that the OP is talking about hardware costs. If so, yeah, once you have enough scale and with a read-only service like an LLM, those are perfectly linear.

If it's about saving the users time, it's very non-linear. And if it's not a scalable read-only service, the costs will be very non-linear too.


The point you're missing is that people were making the same kind of comments about Amazon and Uber not too long ago


Don't rewrite history. Amazon had a million times more books than my shitty local library. ChatGPT is at best the equivalent of a junior that you have to supervise all the time and replaces all your thoughts. It's a very different scenario unless those LLMs can improve very fast which I doubt. And when they reach a senior level, the damage will already be done.


I was laid off from Google in January last year alongside 150 people in my extended team. I managed to find a different team in Gemini, so now I'm part of Deepmind. I have very conflicting feelings because on one hand I really enjoy the work, the team, and the absolute genius of people I get to talk to; but on the other hand, I have some resentment for being so inhumanely laid off, I am sad for the people in my team who were not as lucky as me, and I know it can happen again any time.


> I have some doubt about #2. Weren't Big Tech companies paying senior engineers $300K+ - in 2025-adjusted dollars - back in 2013?

Yes, but big tech got bigger. Google had a fourth of its current workforce, for instance; Meta a tenth. It got much easier to get into those companies.

