
Funny how all the negative uses to which something like this might be put are regulated or criminalized already - if you try to scam someone, commit libel or defamation, attempt widespread fraud, or any of a million nefarious uses, you'll get fined, sued, or go to jail.

Would you want Microsoft to claim they're responsible for the "safety" of what you write with Word? For the legality of the numbers you're punching into an Excel spreadsheet? Would you want Verizon keeping tabs on every word you say, to make sure it's in line with their corporate ethos?

This idea that AI is somehow special, that its makers absolutely must monitor and censor and curtail usage, and must claim total responsibility for the behavior of their users - Anthropic and OpenAI don't seem to realize that they're the bad guys.

If you build tools of totalitarian dystopian tyranny, dystopian tyrants will take those tools from you and use them. Or worse yet, force your compliance and you'll become nothing more than the big stick used to keep people cowed.

We have laws and norms and culture about what's ok and what's not ok to write, produce, and publish. We don't need corporate morality police, thanks.

Censorship of tools is ethically wrong. If someone wants to publish things that are horrific or illegal, let that person be responsible for their own actions. There is absolutely no reason for AI companies to be involved.



> Would you want Microsoft to claim they're responsible for the "safety" of what you write with Word? For the legality of the numbers you're punching into an Excel spreadsheet? Would you want Verizon keeping tabs on every word you say, to make sure it's in line with their corporate ethos?

Would you want DuPont to check the toxicity of Teflon effluents they're releasing in your neighbourhood? That's insane. It's people's responsibility to make sure that they drink harmless water. New tech is always amazing.


Yes, because we know a.) that the toxicity exists and b.) how to test for it.

There is no definition of a "safe" model without significant controversy, nor is there any standardized test for it. There are other reasons why that is a terrible analogy, but this is probably the most important.


What's politically acceptable is called the Overton window. Unlike toxicity, it is fully subjective.

https://en.m.wikipedia.org/wiki/Overton_window


I don't see how that analogy works, especially since in your attempt to make a point you have DuPont as the explicit actor in the direct harm, and the people drinking the water aren't even involved... like, I do not think anyone disagrees that DuPont is responsible in that one.

I also, to draw a loose parallel, think that Microsoft should be responsible for the security and correctness of their products, with potentially even criminal liability for egregiously negligent bugs that lead to harm for their users: it isn't ever OK to "move fast and break things" with my personal data or bank account. But like, that isn't what all this constant talk of limiting the use cases of these AI products is about.

I mean, do I think OpenAI should be responsible if their AI causes me to poison myself by confidently giving me bad cooking instructions? Yes. Do I think OpenAI should be responsible if their website leaks my information to third parties? Of course. Depending on the magnitude of the issue, I could even see these as criminal offenses for not only the officers of the company but also the engineers who built it.

But, I do not at all believe that, if DuPont sells me something known to be toxic, that it is DuPont's responsibility to go out of their way to technologically prevent me from using it in a way which harms other people: down that road lies dystopian madness. If I buy a baseball bat and choose to go out clubbing for the night, that one's on me. And like, if I become DuPont and make a factory to produce Teflon, and poison the local water with the effluent, the responsibility is with me, not the people who sold me the equipment or the raw materials.

And, likewise, if OpenAI builds an AI which empowers me to knowingly choose to do something bad for the world, that is not their problem: that's mine. They have no responsibility to somehow prevent me from egregiously misusing their product in such a way; and, in fact, I will claim it would be immoral of them to try to do so, as the result requires (conveniently for their bottom line) a centralized dystopian surveillance state.


Well, humans do understand such a thing as scale.

C4 and nukes are both just explosives, and there are laws in place that prohibit exploding them in the middle of a city. But the laws that regulate storage of and access to nukes and to C4 are different, and there is a very strong reason for that.

Censorship is bad, everyone agrees on that. But regulating access to technology that has already proven that it can trick people into sending millions to fraudsters is a must, IMO. And it'd better be regulated before it overthrows some governments, not after.


Is this a "guns don't kill" argument ?

Microsoft Word and Excel aren't generative tools. If Excel added a new headline feature to scan your financial sheets and auto-adjust the numbers to match what's expected when audited, you bet there would be backlash.

And regarding scrutiny, morphine is an immensely useful tool and its use is surely extremely monitored.

On the general point, our society values intent. Tools can just be tools when their primary purpose is in line with our values and they only behave according to the user's intent. AI will have to prove a lot to meet both criteria.


> And regarding scrutiny, morphine is an immensely useful tool and its use is surely extremely monitored.

I went to high school in a fairly affluent area and I promise you this is not true. If you have money and know how to talk to your doctor, you can get whatever you want. No questions asked.

You can even get prescription methamphetamine - and Walgreens will stock generic for it!


Definitely not if you're a white male under 60 years old. They won't even give you opioids after surgery now because you are "high risk".

If you're really rich it may be a different story, but for anyone in the "middle class", good luck. And if you do find a doctor with some compassion, they are probably about to retire.


All I can say is that I am speaking from life experience. It sounds like our experiences have been different.


> If you have money and know how to talk to your doctor

That's a decently high bar, I think?

Imagine what you can do if you have money and know how to talk to your local police...


> If Excel added a new headline feature to scan your financial sheets and auto-adjust the numbers to match what's expected when audited

- Sounds like what my accountant already does.


Right, but accountants have qualifications and, more importantly, have to sign their name and accept liability for the accounts they're submitting. That's the part that's missing when "computer says ok".


Your accountant's cooking of the books is handmade and a work of art, passed down by generations of accountants before them, and they'll proudly stand in front of any auditor to claim their prowess at their craft.


I disagree. An analogy would be how we have very limited regulations on guns, but you can’t just have a tank, fighter jet, or ICBM.

Some tools are a lot more powerful than others and we have to take special care with them.


> An analogy would be how we have very limited regulations on guns

This is strictly limited to the US. In most advanced democracies you need a stack of papers to get even a small handgun.


Right, but a gun can be had and presumably a nuclear warhead can’t, so even in countries that call the wrong sport “football” the law takes into account that some tools need to be regulated more than others.


There are private citizens that own and operate all of those things.


Pray, what private citizens operate ICBMs?


> but you can’t just have a tank, fighter jet, or ICBM.

?? You 100% can in the USA; it just costs a lot of money.


Operational military ones? Tanks that are basically very dense semis don't really count for this point.


No, he’s just doing an aaaaaactually comment. Wouldn’t be HN if someone didn’t.

You cannot own tanks or jets capable of using military ordnance in the US (and I’d wager nearly any country that has anything resembling rule of law). You can own decommissioned ones that are rendered militarily useless.


I can write erotic fiction about your husband or wife or son or daughter in Microsoft Word, but it's a little different if I scrape their profiles and turn it into hardcore porn and distribute it to their classmates or coworkers, isn't it?


But you can do that without using AI, and we have laws (harassment etc.) that apply. So where does AI come into the equation?


You are posting this under a pseudonym. If you did publish something horrific or illegal, it would be the responsibility of this website to either censor your content or identify you when asked by authorities. Which do you prefer?


> when asked by authorities

Key point right here.

You let people post what they will, and if the authorities get involved, cooperate with them. HN should not be preemptively monitoring all comments and making corporate moralistic judgments on what you wrote and censoring people who mention Mickey Mouse or post song lyrics or talk about hotwiring a car.

Why shouldn't OpenAI do the same?


It seems reasonable to work with law enforcement if information provides details about a crime that took place in the real world. I am not sure what purpose censorship as a responsibility would serve. Who cares if someone writes a fictional horrific story? A site like this may choose to remove noise to keep the quality of the signal high, but preference and responsibility are not the same.


This website is not a tool - not really.

Your keyboard is.

Censoring AI generation itself is very much like censoring your keyboard or text editor or IDE.

Edit: Of course, "literally everything is a tool", yada yada. You get what I mean. There is a meaningful difference between tools that translate our thoughts to a digital medium (keyboards) and tools that share those thoughts with others.


A website is almost certainly a tool. It has servers and distributes information typed on thousands of keyboards to millions of screens.


HN is the one doing the distribution, not the user. The latter is free to type whatever they want, but they are not entitled to have HN distribute their words. Just like a publisher does not have to publish a book they don't want to.


When someone posts on FB, they don't consider that FB is publishing their content for them.


Maybe you should talk with image editor developers, copier/scanner manufacturers, and governments about the safeguards they should implement to prevent counterfeiting money.

Because, at the end of the day, counterfeiting money is already illegal.

...and we should not censor tools; we should judge people, not the tools they use.


Interestingly, you should know that any printing equipment good enough to output realistic banknotes is regulated to embed a protection preventing this use case.

Even more interestingly, and maybe this helps show that even the most principled argument needs a limit somewhere: molecular 3D printers able to reproduce proteins (yes, this is a thing) are regulated to recognise designs from a database of dangerous pathogens and refuse to print them.
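
For what it's worth, that kind of screening gate can be sketched in a few lines: check the requested design against a denylist before the job runs. A toy Python sketch, with entirely made-up signatures and names (real systems match against curated pathogen databases with fuzzy homology search, not exact substrings):

    # Hypothetical screening gate: refuse the job if the requested
    # sequence shares any length-k signature with a denylist.
    BANNED_SIGNATURES = {"MKVLAAGIV", "QQPLDNKFN"}  # made-up examples

    def kmers(seq, k=9):
        # All length-k windows of the sequence.
        return {seq[i:i + k] for i in range(len(seq) - k + 1)}

    def screen(design):
        hits = kmers(design) & BANNED_SIGNATURES
        if hits:
            raise PermissionError(f"design matches denylist: {hits}")
        return True  # cleared to print

The point is the same as with the banknote check: the refusal is baked into the equipment, not left to the user's conscience.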


GIMP doesn't have the secret binary blob to "prevent counterfeiting", and there is no flood of forged money.

https://www.reddit.com/r/GIMP/comments/3c7i55/does_gimp_have...


GIMP makes printers now?


So guns are ok? How about bombs?


That works for locally hosted models, but if it's offered as a service, OpenAI is publishing those verboten works to you, the person who requested it.

Even if it is a local model, if you trained a model to spew Nazi propaganda, you're still publishing Nazi propaganda to the people who then go use it to make propaganda. It's just very summarized propaganda.


Does this apply to the spell checker in Office 365 or Google Docs?


Are hunting knives regulated the same way as rocket launchers? Both can be used to kill but at much different intensity levels.


Censorship of tools...

Then let's let parents choose when teenagers can start driving.

Also let's legalize ALL drugs.

Weapons should all be available to public.

Etc. Etc.

----

It's very naive to think that we shouldn't regulate "tools"; or that we shouldn't regulate software.

I do agree that in many cases the bad actors who misuse tools should be the ones punished, but we should always check the risk of putting something out there that can be used for evil.



