That honestly doesn't seem too bad. Zelda 1 is relatively large, but it reuses a lot of assets and probably doesn't have that much text. (More than a Mario but way less than a Dragon Warrior.)
> Sakurai, director of the game, is bullish on AI for game development [2]
Your source is based on machine translation, and professional translators pushed back on the interpretation that he is enthusiastic about genAI. Apparently it came across more as resignation that AAA developers may be forced to resort to genAI in order to sustain the endless scope creep and content bloat endemic in the AAA space, which has led to the current absurdity of it taking an entire decade to make a new GTA game.
It's a pity Brooker didn't have some residual IP control so it could have been republished elsewhere. I honestly think it was a little masterpiece that deserved to be saved.
I am not an expert, but as long as the video is playable by the browser (H.264 - Chrome apparently supports the most formats) and has the same duration (05:12:14), it should work.
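If you want to sanity-check the "playable by the browser" part up front, here is a quick sketch using the standard canPlayType API (the codec string below is just one common H.264/AAC profile, not necessarily what your file actually uses):

    // Ask the browser whether it thinks it can play H.264 video + AAC audio in MP4.
    // canPlayType returns "probably", "maybe", or "" (meaning it cannot play it).
    const probe = document.createElement('video');
    const verdict = probe.canPlayType('video/mp4; codecs="avc1.42E01E, mp4a.40.2"');
    console.log(verdict || 'not supported');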
Writing the license is the easy part, the challenge is in making it legally actionable. If AI companies are allowed to get away with "nuh uh we ran it through the copyright-b-gone machine so your license doesn't count" then licenses alone are futile, it'll take lobbying to actually achieve anything.
My point is that you could write the most theoretically bulletproof license in the world and it would count for nothing under the precedent that AI training is fair use, and can legally ignore your license terms. That's just not a problem that can be solved with better licenses.
You could also list plenty of horror stories where people went to medical professionals and got screwed over. There is this myth that people can go to doctors and get perfect attention and treatment. Reality is far from that.
There’s the concept of “personal advocacy” when receiving healthcare. Unfortunately, you’ll only get the best outcomes if you continually seek out treatment with diligence and patience.
But framing it as a “myth [of] perfect attention and treatment” sounds a bit like delegitimizing the entire healthcare industry in a way that makes me raise my eyebrow.
"But framing it as a “myth [of] perfect attention and treatment” sounds a bit like delegitimizing the entire healthcare industry in a way that makes me raise my eyebrow."
It doesn't delegitimize the whole industry. It points out real problems. A lot of patients are not given enough attention and don't get the correct treatment because the doctors didn't listen but rushed through things.
I was criticizing the rhetoric, not the sentiment. I’m skeptical of an argument when it flies too close to what I associate with irrationality and pseudoscience, especially considering what’s happened in medicine over the past 5 years.
The “myth [of] perfect attention and treatment” is an easy strawman for grifters and conmen to take advantage of: see RFK Jr.
How do you measure productivity? Profit per employee has never been higher, probably, as PE and other rent-seeking leeches (residency caps) have wrapped their fingers around the throat of the industry.
Positive outcomes per patient is probably also higher, due to research and technology advances. So many lives saved that would have been written off just a decade or two ago (e.g. spina bifida).
But I agree with you that there’s a hypothetical universe where seeking healthcare as an American doesn’t suck, I just don’t know if “productive” is the right word to describe it.
Yes, there's been a tension between personal advocacy and the system for a long time. Doctors roll their eyes when a patient mentions they self-diagnosed on WebMD. LLMs will accelerate self-diagnosis immensely. This has the potential to help patients, but it is just a starting point; of course, it should be verified by actual trained doctors.
A big part of the legal implications of LLMs and AI in general is about accountability.
If you are treated by a human being and it goes sideways, you could sue them and/or the hospital. Now, granted, you may not always win and it may take some time, but there is some chance.
If you are "treated" by an LLM and it goes sideways, good luck trying to sue OpenAI or whoever is running the model. It's not a coincidence that LLM providers are trying to put disclaimers and/or claims in their ToS that LLM advice is not necessarily good.
Same goes for privacy. Doctors and hospitals are regulated in a way that gives you a reasonable, often very strong, expectation of privacy. Consider doctor-patient confidentiality, for example. This doesn't mean leaks never happen, but you can hold someone accountable. If you send your medical data to ChatGPT and there is a leak, are you going to sue OpenAI?
The answer in both cases is, yes, you should probably be able to sue an LLM provider. But because LLM providers have a lot of money (way more than any hospital!), are usually global (jurisdiction could be challenging) and, often, they say themselves that LLM advice is not necessarily good (which doctors cannot say that easily), you may find that way more challenging than suing a doctor or a hospital.
Lawsuits against medical professionals are difficult, and in many cases impossible, for the average person to win. They are held less accountable compared to other professions.
> They are held less accountable compared to other professions.
I have no idea what other professions you’re talking about. Doctors are the only professionals where it’s common for multi million dollar judgements to be awarded against individuals. In many cases, judgements larger than their malpractice insurance limits.
Take a doctor working alone overnight in the ER. They are responsible for every single thing that happens. One of the 4 NPs that they are supposed to have time to supervise (while they are stuck sedating a kid for ortho to work on) makes a mistake: the doctor is the one that’s getting sued. A nurse misinterprets an order and gives too much of something: the doctor is getting sued. Doesn’t matter if it’s their fault or not. Literally every single one of the dozens of patients that comes in with a runny nose or a tummy ache or a headache is their responsibility and could cost them their house. And there are far too many patients for them to actually supervise fully. They have to trust and delegate, but in practice they are still 100% on the hook for mistakes. For accepting this responsibility they might get $10 per NP patient that they supervise.
Healthcare professionals also occasionally face criminal prosecution for mistakes at a level that wouldn’t even end a career in other professions.
> Lawsuits against medical professionals are difficult in many cases impossible for the average person to win
Malpractice attorneys operate on contingency, so they’re more accessible to the average person than most kinds of attorneys. It’s one of the many reasons healthcare is so expensive in the US.
It’s harder for a doctor to get fired for, say, showing up late to work than it is for a cook at McDonald’s, I guess, but compared to other professionals? I’ve seen software engineers regularly skip through companies leaving disasters in their wake for their entire careers. MBAs regularly destroy companies, lawyers and finance bros get away with murder, and police officers literally get away with murder.
The only profession that faces anywhere near the accountability that doctors do that I can think of might be civil engineers.
"…a 60-year-old man who had a “history of studying nutrition in college” decided to try a health experiment: He would eliminate all chlorine from his diet…"
You can see already that this can easily go sideways. This guy is already exploring the nether regions of self-medication.
It would be ideal if LLMs recognized this and would not happily offer up bromine as a substitute for chlorine, but I suspect this guy would have greedily looked for other shady advice if LLMs had never existed.
No, there's a difference between radically changing your diet and changing up your stretch/strength routine. You don't just "end up" like one of them; you can evaluate that the downside risk of the latter is much lower and try it safely, while recognizing that an extreme diet might not be so safe to try without any professional guidance.
The compression tricks used in standalone Javascript demos are significantly more cursed. The meta is to concatenate a compressed binary payload and an HTML/JS decompression stub in the same file, abusing the fact that HTML5 parsers are required to be absurdly tolerant of malformed documents. Nowadays it's done using raw DEFLATE and DecompressionStream, but before that was available they would pack the payload into the pixels of a PNG and use Canvas to extract the data.
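For a flavor of the modern approach, here is a minimal sketch of a DecompressionStream-based stub (not any particular entry's packer; the self-fetch and the 0x00 separator byte are assumptions about how a stub might locate its appended payload):

    <script>
    // Re-fetch this file's own bytes, find the separator, and inflate
    // everything after it with the built-in DecompressionStream.
    fetch(location.href)
      .then(r => r.arrayBuffer())
      .then(async buf => {
        const bytes = new Uint8Array(buf);
        const payload = bytes.slice(bytes.indexOf(0) + 1); // raw DEFLATE data
        const stream = new Blob([payload]).stream()
          .pipeThrough(new DecompressionStream('deflate-raw'));
        const code = await new Response(stream).text();
        (0, eval)(code); // run the unpacked demo
      });
    </script>
    <!-- compressed binary payload gets appended after this point -->

The PNG variant worked along the same lines: the payload bytes were stored as pixel values, drawn to a canvas, read back with getImageData, and then eval'd.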
https://developer.mozilla.org/en-US/docs/Web/Security/Defens...