
Not even deleted! Actively stolen. No one has yet grappled with the reality that, especially in the realm of generative art, these models do not work without a staggering amount of stolen artwork to train them. They simply do. Not. Exist. Without that baseline unethical contribution that tons of artists made without their knowledge and certainly without their consent. None of the major figures behind any of this tech will acknowledge this. They make some hand-wavy, non-committal gestures towards fair use and "how artists learn" and act like that's an answer, discarding tons of points not worth revisiting here because they've frankly been beaten to death already. If you're still fence-sitting or pro generative art at this point, it's because you want/need to be, to prevent cognitive dissonance in one way or another.

And to be clear: this stuff is SO COOL. The idea of entering prompts and getting at least vaguely unique and interesting output is incredibly cool. The notion of machines that learn, even in a restrictive way as the current models do, is so fascinating. And of course, like any cool and interesting new tech, it was immediately adopted/hijacked by the worst people imaginable, determined to fill Amazon self-publishing with generated hack kids' books filled with generated text and generated art, produced at the lowest cost point possible, sold on the cheap, with the idea that they could make them convincing enough to trick some poor overworked parents into buying them, and pocket the cash. Just grift. Every new technology we get now, few though the true innovations are, is always, always, immediately co-opted by the worst actors in the space and turned to utter garbage.



Oh please. First of all, how do you steal an idea? We’re talking about pictures. Supposing that you buy into the theory that you can, copyright was created to further the arts and sciences; it’s in the US constitution. The point isn’t to control your work — it’s to live in a richer society. And it’s not even clear that training a model counts as infringement. Being able to recite a quote from a book is different than reproducing the entire book. Artists won’t acknowledge that the same applies to their art.

If you believe that training models on art is stealing, then I’m a master ninja, since I’m the creator of books3. And even Stephen King today came out and said that he’s fine with it:

> Would I forbid the teaching (if that is the word) of my stories to computers? Not even if I could. I might as well be King Canute, forbidding the tide to come in. Or a Luddite trying to stop industrial progress by hammering a steam loom to pieces.

https://www.theatlantic.com/books/archive/2023/08/stephen-ki...

If he’s not worried, why are you?

I take a dim view of people trying to frame researchers as criminals. We’re not. We want to further science. That’s all.

You call me a grifter, but I’ve made roughly a hundred bucks from books3, and that’s because someone found my Patreon buried under a pile of links and subscribed to it many months ago. Most of my researcher colleagues seem to have a similar distaste for wanting to make money. The work is the goal.


> I take a dim view of people trying to frame researchers as criminals. We’re not. We want to further science. That’s all

Criminality and pursuing science not only aren't mutually exclusive, they aren't even related. Would you try that same argument if you were stealing physical goods for your experiments?

We have tons of examples of unethical and illegal things being done in the name of science. IRBs didn't come into existence because being a scientist wanting to further science automatically makes you moral or justified. I trust I don't need to list the various experiments.

Meanwhile, to return to what King said: that something doesn't worry someone worth more than half a billion dollars, with a five-decade career at the top of his game and iconic name recognition, is not an indication that the thing is irrelevant, especially to people in the same line of work.

You can say you don't like copyright, but that's not what you are focusing on.


It is immoral to create a device, from the labors of someone, which is capable of replacing their labor, without compensating them.

Unless they agreed to provide their work for that purpose for free.

Postulate #1: Image generation models would not exist without large amounts of training data from current artists.

Postulate #2: Every major AI company either trained directly on public web scraped datasets or is murky about what they train on.

Theft at scale does not somehow make it not theft. Stealing 1/100th of a penny 10 billion times is still stealing.
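For concreteness, the aggregate in that analogy is easy to work out (illustrative numbers only, using integer units to avoid float rounding):

```python
# Hypothetical figures from the analogy above: each "take" is
# 1/100th of a cent, repeated 10 billion times.
takes = 10_000_000_000
hundredth_cents = takes            # one hundredth-of-a-cent per take
cents = hundredth_cents // 100     # 100 hundredths in a cent
dollars = cents // 100             # 100 cents in a dollar
print(f"${dollars:,}")             # prints "$1,000,000"
```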

And when you repackage the results of that theft in a profit generating machine, and then label it not theft because "it's a whole new thing," you start to sound like a CDO apologist.

And look, I get it -- it's about money.

It's always about money.

You may not be making any off your work, but that's immaterial because lots of huge companies are making obscene amounts of money from doing this (or expect to be in the future).

At the same time, it is an excellent tool. Art without human time! It will eliminate a lot of artist jobs, but everyone as a whole will be better off (because we're swapping human labor for electricity).

However, the currently vogue "artists don't deserve anything" smacks more of "we don't want to share profits during the transition period" than a cohesive moral argument.

We can have an AI future, but we should be honest about what enabled that. And we should compensate those people during the transition.

Hell, AI tax. Paid to everyone who created a work in a scraped dataset. Sunset in 30 years. Done.


I disagree with you, simply for the fact that artists have been learning from one another for thousands of years.

We can see a clear timeline of art and its progression throughout human history, and it’s often clear how a later work took inspiration from an earlier period.

Art school teaches techniques and methods pioneered by earlier artists, for the express purpose of their students to know how to incorporate them into their own original work.

Yet no one is arguing that Van Gogh’s descendants should be paid a small royalty any time a variation on one of his paintings is produced, or even just when a painting in the style of one of his is produced.

Were all visual artwork to disappear from the world and collective human memory today, then the first new pieces produced by artists would look dramatically different - and likely much worse - than they do today.

What AI is doing is no different. Perhaps faster and on a larger scale than how humans learn from one another, but principally it’s the same.


> Perhaps faster and on a larger scale than how humans learn from one another, but principally it’s the same.

I like how you just tucked this in at the end there without any introspection on what kind of a paradigm shift that is. If you wanted a "Van Gogh style painting," you'd contract with a painter who specialized in it, and no, his descendants don't get royalties from that (which is an interesting discussion to have; I'm not sure they should, but I haven't thought about it much, but anyway). You are paying a human creative to exercise a vision you have, or, from another perspective, perhaps a person goes into creating this style of painting to sell as a business. Again, the idea of royalties isn't unreasonable here, but I digress.

Now, with these generative art algorithms, you don't need a person to spend time turning your/their idea into art: you say "I want a picture of a cat in Van Gogh's style" and the machine will make you dozens, HUNDREDS if you want, basically as many as you can stomach before you tell it to stop, and it will do it (mostly) perfectly, at least close enough you can probably find what you're looking for pretty quickly.

Like, if you can't tell why that's a PROBLEM for working artists, I'm sorry but that's clearly motivated reasoning on your part.


I can tell why it’s a problem for working artists. I never suggested otherwise. What I disagreed with was the premise that it’s immoral or inherently wrong. A problem posing a difficulty to a certain group of difficulty doesn’t have any bearing on its morality.


I'm guessing you mean to say "A problem posing difficulty to a certain group of people doesn't have any bearing on its morality." and that's just... so very gross in terms of ethical statements.

Like just, hard disagree. Undercutting the value of a whole profession's labor by entire factors is incredibly immoral, especially when you couldn't have done it without the help of their previous works. Like... a very non-exhaustive list of problems I would say meet that definition are:

- Generational/racial wealth inequality

- Police brutality

- The victims of the war on drugs

- Exploitation of overseas labor

I don't think we really have anything else to discuss.


> A problem posing a difficulty to a certain group of difficulty doesn’t have any bearing on its morality.

A good point, but I think an all-humans good argument can be made here, not just a specific group.

To sketch, I think we can all agree that the destruction of the human journalism profession negatively impacted public discourse for everyone?

Ergo, the destruction of the human artist profession seems like something we should consider carefully.


Alike in method is not alike in output, and it's output that matters.

A human takes ~4-20 years to become a good artist. They can then produce works at a single human rate.

A model takes ~30 days to become a good artist. It can then produce works at an effectively infinite rate, bounded only by how many GPUs and how much electricity can be acquired.

These are very different economic constraints and therefore require different solutions.


> These are very different economic constraints and therefore require different solutions.

This is often listed as the reason why it’s OK for a human to learn from prior art, but not for an LLM. The question is why? If the act of learning is stealing, then it is still stealing, no matter how small the scale, and every single human on earth has committed it.

The LLM vendor may benefit more than a mere mortal pupil because of the scale and reach. At the same time the LLM may make the prior art more visible and popular and may benefit the original creator more, even if only indirectly.

Also if content creators are entitled to some financial reward by LLM vendors, it is only appropriate that the creators should pay back to those that they learn from, and so on. I fail to see how such a scheme can be set up.


Law exists to benefit humans.

Either directly (outlawing murder) or indirectly (providing for roads and bridges). And well (libraries) or poorly (modern copyright law).

But fundamentally, law benefits people.

Most modern economic perversions are a consequence of taking laws which benefit people (e.g. free speech) and overzealously applying them to non-people entities (e.g. corporations).

So "why [is it] ok for [a] human to learn from a prior art, but not for a LLM"?

Because a human has fundamental output limitations (parallel capacity, time, lifespan) and a machine does not.

Existing laws aren't the way they are because they encode universal truths -- they're instead the consensus reached between multiple competing interests and intrinsically rooted in the possible bounds of current reality.

"This is a fair copyright system" isn't constant with respect to varying supply and demand. It's linked directly to bounds on those quantities.

E.g. music distribution rights, upended when home network bandwidth suddenly increased enough to transfer large quantities of music files.

Or, to put it another shorter way, the current system and source-blind model output fucks over artists.

And artists are humans. And LLMs are not.


> Because a human has fundamental output limitations (parallel capacity, time, lifespan) and a machine does not.

Industrialization as we know it would never have happened if we had artificially limited progress just so that people could still have jobs. I guess you could make the same kind of argument for the copyists when printing became widespread, for horses before the automobile, or for telephone operators before switches got automated. Guess what they have become now. Art made by humans can still exist, although its output will be marginal compared to AI-generated art.

LLMs are not humans but are used by humans. In the end the beneficiary is still a human.


I'm not making an argument for Ludditism.

I'm making an argument that we need new laws, different than the current ones, which are predicated on current supply limitations and scarcity.

And that those new laws should redirect some profits from models to those whose work they were trained on during the temporary dislocation period.

And separately... that lobotomizing our human artistic talent pool is going to have the same effect that replacing our human journalism talent pool did. But that's a different topic.


For the AI/robot tax, the pessimistic view is that the legal state of the world is such that such a tax can and will be evaded. Now not only do the LLMs put humans out of a job because an LLM or an SD model mimics their work, but the financial gains have been hidden away in tax havens through tax-evasion schemes designed by AIs. And even if through some counter-AIs we manage to funnel the financial gains back to the people, what is then the incentive for capital owners to invest and keep investing in cutting-edge AI, if the profits are now too meagre to justify the investment?


>> I disagree with you, simply for the fact that artists have been learning from one another for thousands of years.

They learn from each other and then give back to each other, and to everyone else, by creating new works of art and inventing new styles, new techniques, new art-forms.

What new styles, techniques or art-forms has Stable Diffusion created? How does generative AI contribute to the development and evolution of art? Can you explain?


> Being able to recite a quote from a book is different than reproducing the entire book.

Language models can often reproduce an entire book. And image models like Midjourney can reproduce lots of copyrighted art more or less flawlessly: https://ceoln.wordpress.com/2022/12/16/some-light-infringeme...


That doesn't make training the models wrong, just using those specific outputs wrong.


It's possible with Google Docs too. The wackiest thing the other day was that I was using the screenshot tool in macOS and it made a screenshot of copyrighted content. I was flabbergasted. The tool just did it with 4 parameters.


And if you use those reproductions commercially it is almost universally illegal already.


So let’s say in 10 years they have the processing power to generate full movies on demand. OpenAI trains its model on all movies from IMDb. Then I ask ChatGPT to produce a movie "like" Avengers but with the faces of all my friends. And I just pay the $20 fee (or $200 by then) to OpenAI and watch my movie. Then I can ask it to tweak some parts of the story and update the movie in real time. And this for any movie. Given the $220 million budget of the one movie I mentioned, OpenAI will never give any money back to them. Or to any movie producer.

Does that seem OK to you? Today it’s books and pictures, but later it’s going to be anything.

I mean, producing anything requires time, effort, money. They steal the end result, produce any variation possible, and make money out of it.

Your point would only be valid if OpenAI were 100% free. It’s not.


And then obviously no-one will spend $220 million on a movie anymore because, hey, we can just use generative AI. So I guess subsequent AIs will be based on the outputs of a previous AI? Or will all movies from a certain point onwards be wholly based on a corpus of existing movies used for training? Maybe AI companies will start shooting small film segments in meatspace just for the purpose of training or providing some base input to their models?

The future is gonna be weird, yo.


Isn't that what they are already doing but with human writers? Screenwriting is mostly just a formula. Aside from being cheaper, I don't see any difference in terms of quality.


> Does that seems ok to you ? Today it’s the books and pictures but later it’s going to be anything.

When they're that good, I might finally finish the novel I started writing in… ugh, 2016.

A quote comes to mind, though:

"I say 'your' civilization, because as soon as we started thinking for you it really became our civilization, which is of course what this is all about."


One tangentially related trial would be Pharrell Williams v. Bridgeport Music [0], where Marvin Gaye's family sued Pharrell Williams with the following claim:

> Gaye's family argued that the songs were not merely stylistically similar; instead, they claim that "many of the main vocal and instrumental themes of "Blurred Lines" are rooted in "Got to Give It Up"; namely, the signature phrase, vocal hook, backup vocal hook, their variations, and the keyboard and bass lines" and "the substantial similarities are the result of many of the same deliberate creative choices made by their respective composers."

And they won. I don't agree with the outcome, but I do think it's an interesting benchmark. Obviously, this trial would've never been a thing if "Blurred Lines" wasn't a big hit. Something similar could apply to a major brand using text-to-image generated material that was strikingly similar to a prominent photographer or artist, if they sued.

I wonder what's going to be the first big case having to deal with this.

[0]: https://en.wikipedia.org/wiki/Pharrell_Williams_v._Bridgepor...


>If he’s not worried, why are you?

Because he already got paid enough to keep him going during the forthcoming drought caused by AI absorbing all the water.


If OpenAI believes that copyright is wrong, why are they keeping their software proprietary?


I agree with you, for what it’s worth. I don’t think it would be a disadvantage for them to open source GPT-4. They’re the ones with the hardware.


Ostensibly because of the risk of misuse.

This may be a losing battle even assuming it's 100% sincere (I'm not interested in debating the assumption).


> First of all, how do you steal an idea? We’re talking about pictures.

We're talking about the fruit of someone else's skilled labor and the long labor that went into developing the skills.

If you're so certain that's not valuable, well, then it shouldn't be any hardship to forgo that entirely and simply figure out some other way to get trained models.

If it is valuable, then maybe it's worth treating it as if it is, both in terms of compensation and determination. Not only for moral accounting but also because of economic feedback: if it's not treated like it's valuable, then it will become less frequent that people can invest time and other resources into doing it.


> how do you steal an idea? We’re talking about pictures

Ideas and artwork are qualitatively different. Artwork, it’s right there in the name. It takes work to create pictures/art. It’s more serious than stealing just ideas, which I agree are economically worthless until executed.


Fun fact: the etymology goes back to "ars", as do "artisan" and "artificial".

Similar in German: Künstler, Kunst, Künstliche Intelligenz are all rooted in a single word that means skill/ability/knowledge/recognition.

I suspect that those in 1855 who said "photography can never assume a higher [artistic] ranking than engraving" were basically right: me snapping a photo of a sunset, especially now on a device that adjusts exposure etc. automatically, doesn't feel like it should deserve the same protection as a carefully composed portrait with artfully chosen wardrobe and makeup.


The etymology, while interesting, is trivial hair-splitting in the context of the real issue. I’d be interested to see any court case that has been argued successfully on such grounds.

As for the opinion people from 1855 might have had about AI and artwork, I take them about as seriously as I do their opinions on germ theory, space exploration, racial politics, warfare, psychology, biology, nuclear physics, and many other subjects we have learned more about in the intervening 168 years.


> Artwork, it’s right there in the name

:P

> As for the opinion people from 1855 might have had about AI and artwork

Sure, but it's their opinion of photography, not AI; and implicitly my opinion of AI, not theirs.


OK, your point is well received about the etymological argument, but my real point, right after "artwork", is that yes, even photographs do require work to create. Would you say that Ansel Adams' photographs were effortless to produce? I wouldn't.


>> We want to further science.

Can you point to some scientific result(s) that you have produced?


> Oh please. First of all, how do you steal an idea?

You toss this out as though "Intellectual Property" is a concept you, (probably) a software developer of some sort, don't understand. Beyond that, it's an entire division of law, one most companies have entire floors, if not entire buildings, of lawyers to deal with.

> We’re talking about pictures.

I'm talking about all art in all media. There is generative music, you know. And yes, the big ones for now are the text models, which have been trained on millions of blog posts, reddit posts, and written works creative and otherwise, all, and I will keep saying it, without the permission of the authors by and large; and similarly things like Midjourney, which in turn were trained on massive image sets with different specializations, but whose sources included photographers, illustrators, painters, furries, and likely millions of people drawing in the anime style, *also without their permission.*

> If you believe that training models on art is stealing, then I’m a master ninja, since I’m the creator of books3. And even Stephen King today came out and said that he’s fine with it

I mean, you tell me. You created a thing you stand to profit from (even if indirectly via name recognition) via the use of IP you didn't have the rights to and got permission from ONE affected individual, after the fact. If you want to be ethically in the clear, why not get permission from everyone else? Then you're done. I suspect because a) it will be a substantial amount of work and b) that you know very well most people are not going to be comfortable with their creative output being used to train a machine who aren't already rich, which means a much, much smaller dataset to use.

And if it makes you feel better you can frame us all as zealous Luddites who want to smash your machine, but again, for the record, I do find this interesting. The only part I take issue with is rent-seekers trying to monetize access to these models for monied entities, and what said monied entities are going to do with them: which is largely generate spam to sell at whatever price they can manage.

> If he’s not worried, why are you?

I'm not a creative; I'm not worried for me. I just don't like what's about to happen to creatives. I don't like technology being wielded by people who don't have skin in the game, who don't respect the creative process. I frankly find it incredibly off-putting how ready and frankly gleeful everyone in my field is to put millions of people out to pasture with no plan for how they're to make a living, especially given how hard it is to do so as an artist already.

I just cannot conceive of someone who's like "we should automate creative processes so humans have more time for spreadsheets and time cards" and just... UGH. What in god's name sort of world are we even trying to build anymore!?

> I take a dim view of people trying to frame researchers as criminals. We’re not. We want to further science. That’s all.

I'm sure Oppenheimer did too.

> You call me a grifter,

To be clear, I called users of your model grifters, not you personally.

> but I’ve made roughly a hundred bucks from books3, and that’s because someone found my patron buried under a pile of links and subscribed to it many months ago. Most of my researcher colleagues seem to have similar distaste for wanting to make money. The work is the goal.

That doesn't change the fact that what you guys are building will be used to inflict catastrophic societal harm by people who are not looking to "do the work" as you'd put it. They're looking to maximize profits because it's their contractual obligation. If they can make one designer do the work of a hundred by feeding them a drip of AI generated garbage to tweak to something usable, they will in a heartbeat *AND YOU KNOW THAT.*


> Most of my researcher colleagues seem to have similar distaste for wanting to make money. The work is the goal.

You sure give the profit motive a lot of due for someone who claims to be above it.

Hard work and perseverance is a human instinct that is undermined by antisocial institutions like copyright, which has no precedent absent the barbaric mode of relations we call capitalism.


I believe if you look further and deeper you'll reach the conclusion that the real issues are how awful copyright laws are and, more importantly, how absurd the current economic system is. Glorified Markov chains are not the culprit here.

So the current movement where artists "shame" random joes for using a cool technology has only one possible outcome which is to push said average joes into being politically active in defense of AI. No one seems to want to properly organize and file a class-action, just twitter bickering.


Reminds me a bit of how everybody was cool with pirating software and media back in the day. To help the bad conscience there was a handy narrative of how the record companies were making a killing off of everybody, so it was cool not to pay. But in reality it was just technically feasible. That's what enabled the behavior, not a few million Robin Hoods coming together to do the right thing.

Similarly I feel creators need to make it impossible to have their work stolen for AI if possible. It will be tough though. They have half the world against them it seems.


I wouldn't equate it with a person pirating a DVD to watch in his own home. It's more like someone pirating a DVD, burning a bunch of copies of it and selling those. Big difference.


It’s too late. The datasets that exist already are already more than enough to train future models.


How does a human generate their ideas, styles, etc.? We absorb everything we come into contact with too.


Exactly. Preventing the use of some information for training data is like saying we can't go to the gallery or library to see what prior art looks like. Creativity is just having a higher temperature setting and selecting items that trigger a positive response. We are mimetic.

The art world itself has already asked many of these questions 60 years ago, when Warhol made Campbell's soup can art, and then people used that template to turn other brands into art.
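The "temperature" metaphor is borrowed from how sampling actually works in generative models; a minimal sketch (the logits below are made-up illustrative values, not from any real model):

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0):
    """Softmax sampling: higher temperature flattens the distribution,
    making unlikely (more 'creative') choices more probable."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                               # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

# Hypothetical next-token scores: one dominant option, two long shots.
logits = [4.0, 1.0, 0.5]

random.seed(0)
# At low temperature the sampler almost always picks the top choice;
# at high temperature the long shots come up far more often.
low = sum(sample_with_temperature(logits, 0.2) == 0 for _ in range(1000))
high = sum(sample_with_temperature(logits, 5.0) == 0 for _ in range(1000))
print(low, high)  # low-temperature runs pick index 0 far more often
```

The "positive response" selection step the comment describes would then be a separate filter over many such samples.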


https://news.ycombinator.com/item?id=37201326

"I'll keep saying it every time this comes up. I LOVE being told by techbros that a human painstaking studying one thing at a time, and not memorizing verbatin but rather taking away the core concept, is exactly the same type of "learning" that a model does when it takes in millions of things at once and can spit out copyrighted code verbatim."


A human doesn't exist in servers, work for free (well, minus utilities) 24 hours per day and work as long as you want on a project you've given it with zero input. If you can't appreciate the difference between an AI model and a person I think that says more about you than AI.


None of those differences really says anything about the morality of having an AI do things, unless you want to argue that the AIs suffer from their (mis)treatment.

People sometimes state that The Simpsons is now drawn in sweatshops somewhere in Asia. Assuming that's true, or at least that it's happened at least once for at least some cartoon, does it make any difference at all to the question of where the people directing them get their ideas from or how those workers learned to do their thing?


> None of those differences really says anything about morality of having an AI do things, unless you want to argue that the AI suffer from their (mis)treatment.

I mean, if it was actually AI and not machine learning, that would certainly be a question wouldn't it? That being said, I'm not saying the machine is suffering. I'm saying the people it's replacing cannot possibly hold their own. There is simply no way in hell a person can compete with generative AI, assuming of course you don't need perfection just "good enough." And given how massive components of the modern economy get by on far less than "good enough" I'd say that's a solid reason to be concerned.

> People sometimes state that The Simpsons is now drawn in sweatshops somewhere in Asia. Assuming that's true, or at least that it's happened at least once for at least some cartoon, does it make any difference at all to the question of where the people directing them get their ideas from or how those workers learned to do their thing?

Collaborative creative work among a team of individuals regardless of their location is not the same thing as generative art trained on a dataset it's designed to mimic, come on now. You're grasping at some pretty flimsy straws here.


> if it was actually AI and not machine learning

ML is a subset of AI, not a disjoint set.

> I'm saying the people it's replacing cannot possibly hold their own.

Sure. And? This is the exact same problem with offshoring to sweatshops, and with all automation from the pottery wheel onwards; in the industrial revolution, this automation helped spur the invention of Communism.

I'm certainly curious what the modern equivalent to Karl Marx publishes, and how much it will differ from The Communist Manifesto.

> Collaborative creative work among a team of individuals regardless of their location is not the same thing as generative art trained on a dataset it's designed to mimic, come on now.

The argument you're making doesn't appear to care about the differences — it should bite on both, or neither.


> Sure. And? This is the exact same problem with offshoring to sweatshops, and also with all automation from the pottery wheel onwards; and in the industrial revolution this automation happened in a way to cause the invention of Communism.

Yes, but crucially: the industrial revolution, and automation in general, has historically been targeted at things of need, of various stripes. One can argue that the demand for goods, be they food, cellphones, televisions, cars, what have you, makes automation extremely helpful: yes, there was a period where laborers were replaced and priced out of the market, but many eventually returned. The ability to produce 65" televisions at scale and sell them cheaply enough for people to buy them at scale has led to the prices of televisions cratering, and in the context of products consumed by large portions of the market, this is a desirable and good thing (with caveats).

This breaks down with creative output though. No matter how cheap they are, one person can only consume so many, for example, movies and television shows. Even if generative movies were a thing (I don't think they are yet?) and you could produce them just block by block at scale... there's a ceiling there. People can only consume so many movies. You will saturate that market incredibly quickly, and even that is assuming the market of moviegoers will be interested in an AI movie.

Hell, Disney has already learned this without even needing AI to do it. A big part of their ongoing failure that is their streaming service is they completely and utterly saturated the market for Marvel content. They poured billions into all manner of series and films, the quality has steadily declined, and despite being cheaper and easier to access than most, people have still managed to get sick of it. And however you want to slice it one thing AI cannot overcome is that it cannot create new, novel concepts: it can only remix and recombine existing things into new combinations. This is probably good for hobby tier stuff, but for industry? This is going to crater VERY quickly I believe.

> I'm certainly curious what the modern equivalent to Karl Marx publishes, and how much it will differ from The Communist Manifesto.

I enjoy David Graeber personally.

> The argument you're making doesn't appear to care about the differences — it should bite on both, or neither.

I mean there are definitely things to be said about outsourcing in that conversation, the incentives at play that make these animators willing/interested in learning a style of animation that is not part of their culture, the benefits involved in their education, why they're paid a fraction of what westerners would be for the same work while the product is then sold for the same if not a higher price. I just don't think it's relevant, they're still people and still work within the limitations of people. You're not talking about a widget press that forges 10 widgets with the work of a single operator vs. a blacksmith making them by hand with regard to generative art, you're talking a widget press that uses less resources to produce effectively infinite widgets at incredible paces that are simply unfathomable to the blacksmith, and also somehow the machine stole the souls of millions of blacksmiths to enable it to work.

The metaphors just break down when talking about this, because of the sheer differentials involved.


> No one has yet grappled with the reality that especially in the realm of generative art, these models do not work without a basis of absolutely stressful amounts of stolen artwork to train them. They simply do. Not. Exist. Without that baseline unethical contribution that tons of artists made without their knowledge and certainly without their consent. None of the major figures behind any of this tech will acknowledge this.

Well that's not correct.

Don't get me wrong, I've seen the statements you're probably referring to; but there's also Adobe, with a model (Firefly) made from licensed and public domain images only.

(You may want to argue, like my literally Communist ex, that under capitalism there can be no such thing as a fair contract between a corporation and a worker; I'm unsure of the power dynamics so won't counter-argue that).


Which part?



