MRIs themselves produce no cancer risk, as they're not ionizing radiation. There are SOME questions about SOME of the dyes used for SOME MRI procedures, but those are usually used in situations where the alternative is worse - so do it.
MRI doesn't use ionising radiation, so it's a stretch. The most likely cause would be some toxic effect of the contrast dye (as opposed to any sort of ionising radiation), but no compelling evidence exists for that.
MRI's magnetic field is not strong enough for that. CT scans use Röntgen radiation (X-rays), which is known to cause ionization (the photons can displace electrons), and in DNA that can potentially cause cancer.
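To put rough numbers on the ionization point: photon energy is E = h * f, and knocking electrons loose takes on the order of electronvolts. A quick back-of-the-envelope sketch (the frequencies below are typical textbook values I'm assuming, not anything from this thread):

    # Photon energy E = h * f, compared to the ~eV scale needed to ionize.
    h = 6.626e-34    # Planck constant, J*s
    eV = 1.602e-19   # joules per electronvolt

    frequencies = {
        "MRI RF pulse (~128 MHz, 3 T scanner)": 128e6,
        "diagnostic X-ray (~3e18 Hz)": 3e18,
    }

    for name, f in frequencies.items():
        print(f"{name}: {h * f / eV:.1e} eV")

    # MRI RF: ~5.3e-07 eV -> millions of times too weak to ionize anything
    # X-ray:  ~1.2e+04 eV -> far above bond energies, can damage DNA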
Depends. MRI itself is safe, but they often add "contrast", which is known to cause cancer (I'm not clear on whether there is more than one choice for contrast, though, or if they all cause cancer). Of course, contrast is mostly used when they're looking at something - likely a tumor that might or might not be cancer - to decide how to treat it. In that case, does it matter that your long-term cancer prospects go up, when without it your short-term prospects are not good?
There is no compelling evidence that MRI contrast agent causes cancer. Gadolinium (the stuff that's in the contrast agent) can deposit in the body, e.g. in the brain, but whether this even has any consequences is still unclear. Nonetheless, there is some nice research going on into how to drastically reduce the amount of contrast agent that needs to be administered, using image postprocessing.
There is. But what the OP is doing is not that, it's "scaling", which probably makes sense for whatever they're working on*. For the other 99% of projects, it doesn't.
* ... if they're at ClosedAI or Facebook or something. If they're at some startup selling "AI" solutions that has 10 customers, it may be wishful thinking that they'll reach ClosedAI levels of usage.
It's not really clear to me that the OP is talking about hardware costs. If so, yeah, once you have enough scale, and for a read-only service like an LLM, those costs are perfectly linear.
If it's about saving users' time, it's very non-linear. And if it's not a scalable read-only service, the costs will be very non-linear too.
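To make the linear vs. non-linear distinction concrete, here's a toy cost model (the per-request price and the quadratic overhead term are purely illustrative assumptions, not anything the OP said):

    # Toy model: stateless inference scales linearly with requests;
    # a stateful service picks up cross-request overhead on top.

    def readonly_cost(requests, per_request=0.002):
        # Read-only serving: every request costs the same -> linear total.
        return per_request * requests

    def stateful_cost(requests, per_request=0.002, overhead=1e-9):
        # Hypothetical superlinear term for storage/coordination growth.
        return per_request * requests + overhead * requests ** 2

    for n in (1e6, 1e7, 1e8):
        print(f"{n:12,.0f} req  linear ${readonly_cost(n):12,.0f}"
              f"   stateful ${stateful_cost(n):12,.0f}")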
Don't rewrite history. Amazon had a million times more books than my shitty local library. ChatGPT is at best the equivalent of a junior that you have to supervise all the time, and it replaces your own thinking. It's a very different scenario, unless those LLMs can improve very fast, which I doubt. And by the time they reach a senior level, the damage will already be done.
I was laid off from Google in January last year, alongside 150 people in my extended team. I managed to find a different team in Gemini, so now I'm part of Deepmind. I have very conflicting feelings, because on one hand I really enjoy the work, the team, and the absolute genius of the people I get to talk to; but on the other hand, I have some resentment for being so inhumanely laid off, I am sad for the people on my team who were not as lucky as me, and I know it can happen again at any time.
In the end it doesn't matter: you'll make more money either by leaving or by getting a retention offer.