
> They have added a lot more to it than just changing the license

I see that, but that doesn't mean they can ignore the original license.

> The original wording in the license puts arbitrary and unenforceable limits on what the end user can do.

I'm totally with you here. The original license is absurd. Still doesn't mean we can fork the repo and replace the license without (probably) breaking the law. Which, by all means, do. In practice maybe nobody will care. I'm just pointing it out because it's sketchy.

> I would argue the original repo license applies to the weights only and not the code wrapping it.

Most of the clauses only apply to the weights, yes. The first clause in the license applies to the whole repo, though: "All rights reserved by the authors."

> They have also made a way to use real-esrgan/gfpgan and so a license change is practically required.

I don't see how this is relevant. Even if there is a license conflict, the authors retain control over their source. A license conflict might lead to a damages settlement, or an order to halt distribution. It doesn't magically switch the license by implication.



This repo doesn't distribute the model or weights directly. You are agreeing to the more limited upstream license of the original repo when you download the pre-trained model. If the repo came bundled with the pre-trained model, your concern would be valid. In this case it is not.


Look at the "assets" and "scripts" directories. Those contain images and code. Those things were released alongside a LICENSE file. They are now reproduced in the fork you linked, but with the LICENSE file rewritten completely. The original license (Aug. 10) did not grant permission to relicense those scripts. All it said about them was "All rights reserved".

The current license, the one added on Aug. 22, does seem to grant explicit permission to sublicense and redistribute what it calls the "Complementary Material" (which includes the source code surrounding the model). However, it has a lot of specific provisions, among them the requirement that the original copyright notice be reproduced.

Like I said, in practice this might not be a big deal, but forking a repo whose license starts with "All rights reserved" and does not explicitly grant permission to relicense its code, and then rewriting the license file in your fork, should be a huge red flag. As of the Aug. 22 license the fork might be compliant, but I think it would be a lot safer to include a copy of that license in addition to the new GNU license covering the forker's changes. And pre-Aug. 22, when this fork was made, it was just flat-out ignorant to fork and relicense. You can't just delete "All rights reserved" and paste in a GNU license. Look at the license the forker deleted[0]. Literally all it says is "All rights reserved", and then it lists a few things you can't do. There's not a single provision that would make it okay to redistribute at all, let alone modify and relicense it.

0. https://github.com/hlky/stable-diffusion/commit/b4c61769dfa1...


Most of the code you are concerned with was originally released under an MIT license. https://github.com/CompVis/stable-diffusion/commit/2ff270f4e... The August 10th update added unenforceable rules about how the pre-trained model can be used or distributed. A new license was applied to a codebase that had already been partially released under the MIT license, so only the new code would have its copyright reserved. That alone makes most of your argument moot.

The actual scripts being used were committed to the repo on August 21st. https://github.com/hlky/stable-diffusion/commit/1d0036cb6644... And those scripts don't appear to be modified from the ones that were relicensed with rights reserved.

The crux of the situation for me is that at no point during the release of the pre-trained model do the authors claim any sort of copyright on the images produced, if the end user is generating them on their own hardware. There's no meaningful way to legally enforce the current CreativeML Open RAIL-M license, or the previous MIT-derived one, insofar as they create rules about how the output of the software can be used.

That is something that has been confusing me, but I imagine it will get cleared up sooner rather than later.

The additional rules are effectively an acceptable use policy. There is no meaningful legal consequence to breaking an acceptable use policy. The most that can be done is that an end user will no longer be allowed to use the pre-trained models. Additionally, an acceptable use policy has to specify a jurisdiction. In the US, breaking an acceptable use policy does not amount to violating the CFAA.

The actual license seems mostly intended to make sure there's no way to hold the authors of the models accountable for any illegal activity done by end users. Which is completely fair and understandable.

This kind of misunderstanding and fear-based approach to reusing code is what holds back progress and seems to be what the authors actively tried to fight by releasing the current repo with a CreativeML Open RAIL-M license.

I believe the intent of the authors is as important as the exact text of the repository. Especially since the cited sources for the foundation this was built off of (x-transformers by lucidrains, OpenAI's ADM codebase, and Denoising Diffusion Probabilistic Model, in Pytorch by lucidrains) are all licensed under MIT. Most importantly, the restrictive license is based almost completely on the condition that the end user is using their pre-trained model. If the end user manages to create and use their own model from scratch, there is no reason for any part of the original repo license to apply.


> The original license that accounts for most of the code you are concerned with was released with an MIT license.

I'm not sure this is true. Yes, it's in the git history. However, the repo was only made public on Aug. 10, which is the day the license was changed to the proprietary one. That says to me that the intent of the authors was to release the code under the proprietary license. There may have been some internal discussion of releasing it under the MIT license, which is why that license file sat in the git history for so long, but the day the repo was actually released to the public, it was licensed under the Aug. 10 proprietary license.

> This kind of misunderstanding and fear-based approach to reusing code is what holds back progress...

Fully agree with this.

> ...and seems to be what the authors actively tried to fight by releasing the current repo with a CreativeML Open RAIL-M license.

But disagree with this. That license is a nightmare. It's full of permissive provisions followed by insane, idealistic, overreaching conditions that amount to "ensure nobody you give this to uses it to do anything bad".

> I believe the intent of the authors is as important as the exact text of the repository.

Yes, I agree, which is why I think it's important that the day the repo was released to the public was the same day they changed the license from MIT to the proprietary one. I think they panicked at the last second, decided they weren't ready to go full FOSS, and switched to a proprietary license while they worked out (what they thought was) a better solution. And on Aug. 22 they relicensed with the CreativeML license.

I'm sure there's disagreement and discussion going on internally, but I'm not getting the same impression from it all that you seem to be getting. I think if the FOSS people on the inside were winning, the thing would have been made public with an MIT license. Instead we've got this do-no-evil license that talks about remote monitoring and control and transitive responsibility for bad actors.



