inlined's comments | Hacker News

Google had achieved carbon neutrality and had committed to wiping out their carbon legacy entirely, until AI.


As a user, I suffer from not being able to freely use Microsoft's code or derive my own work from it.


This. People conflate "consumer" with "user". A user, in the GPL sense, is the programmer or technical person for whom the software (including source) is intended.

Not necessarily a “user of an app” but a user of this “suite of source code”.


Except that really the whole point is it explicitly and actively makes no such distinction. Every random user has 100% of the same rights as any developer or vendor.


A completely level playing field. There's probably never been a more perfect free market than that in free software.

It turns out that most people who say they value free-market capitalism never really did.


At this point they've contributed a reasonably fair share of open-source code themselves.

No one benefits from locking up 99.999% of all source code, including most of Microsoft's proprietary code and all GPL code.

No one.

When it comes to AI, the only foreseeable outcome of copyright maximalism is that humans will have to waste their time writing the same old shit, over and over, forever less one day [1], because muh copyright!!!1!

1: https://en.wikipedia.org/wiki/Copyright_Term_Extension_Act


> only foreseeable outcome of copyright maximalism

Nahh, AI companies had plenty of money to pay for access; they simply chose not to.


Clearing those rights, which don't actually exist yet, would have been utterly impossible for any amount of money. Thousands of lawyers would tie up the process in red tape until the end of time.


The basic premise of the economy is that people do stuff for money. Any rights holder debating with their publishing house or whatever just means they don't get paid. Some trivial number of people would opt out, but most authors or their estates would happily take an extra few hundred dollars per book.

YouTube, on the other hand, has permission from everyone uploading videos to make derivative works, barring some specific deal with a movie studio etc.

Now there are a few exceptions, like large GPL works, but again there are diminishing returns here; you don't need to train on literally everything.


Nice. I didn't know I could now replace my "assertExhaustive" function.

Previously you could define a function that accepts never and throws. It tells the compiler that you expect the code path to be exhaustive and fixes any missing-return-value errors. If the type changes so that the switch is no longer exhaustive, it will fail to compile, and (still better than satisfies) if an invalid value is passed at runtime, it will throw.
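
For reference, a minimal sketch of that pattern (the Shape type and names here are mine, purely illustrative):

    type Shape =
      | { kind: "circle"; radius: number }
      | { kind: "square"; side: number };

    // Accepts `never`, so a call only type-checks when every case
    // above it has been handled; at runtime it throws as a backstop.
    function assertExhaustive(value: never): never {
      throw new Error(`Unexpected value: ${JSON.stringify(value)}`);
    }

    function area(shape: Shape): number {
      switch (shape.kind) {
        case "circle":
          return Math.PI * shape.radius ** 2;
        case "square":
          return shape.side ** 2;
        default:
          // Compile error here if a new Shape variant is added;
          // throws if an invalid value arrives at runtime.
          return assertExhaustive(shape);
      }
    }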


I thought the same thing. I also have an assert function I pull in everywhere, and this trick seemed like it would be cleaner (especially for one-off scripts to reduce deps).

But unfortunately, using a default clause creates a branch that makes the compiler treat the entire switch block as non-exhaustive, even though it is technically exhaustive over the switch target. It still requires something like throwing an exception, at which point you might as well do 'const x: never = myFoo'.
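
i.e. something along these lines (names illustrative):

    type Direction = "north" | "south";

    function describe(d: Direction): string {
      switch (d) {
        case "north":
          return "up";
        case "south":
          return "down";
        default: {
          // `d` is narrowed to `never` here; this assignment stops
          // compiling if a new Direction member is added. The throw
          // satisfies the return-type check.
          const exhausted: never = d;
          throw new Error(`Unhandled direction: ${exhausted}`);
        }
      }
    }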


I still keep my assertNever function because it will handle non-exhaustiveness at runtime.


Is this meant to be a defense of the DNS protocol? I’ve never assumed the meme was that the DNS protocol is flawed, but that these changes are particularly sensitive/dangerous.

At Google we noticed the main cause of outages is config changes. Does that mean external config is dangerous? Of course not! But it does remind you to be vigilant.


Mongo also has a good query language, and a Mongo DB can be seen as an array of documents.
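
For example, with the official Node.js driver (connection details made up), querying a collection reads a lot like filtering an array:

    import { MongoClient } from "mongodb";

    const client = new MongoClient("mongodb://localhost:27017");
    const users = client.db("app").collection("users");

    // Roughly analogous to users.filter(u => u.age >= 18).map(u => u.name)
    const names = await users
      .find({ age: { $gte: 18 } }, { projection: { name: 1, _id: 0 } })
      .toArray();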


It sounds like you're not at the scale where cloud storage is obviously useful. By the time you definitely need S3/GCS, you have problems making sure files are accessible everywhere. "Grep" is a ludicrous proposition against large blob stores.


Maybe they're not using keepalives in their clients, causing thousands of handshakes per second?


Yes, they mention this as a 'fix' for connection-related memory usage:

> Disable keep-alive: close the connection immediately after each upload completes.

Very odd idea.


Possibly missing session resumption support, compounding the problem.
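
For what it's worth, in Go both mitigations would look roughly like this (a sketch of mine, not the article's code):

    package main

    import (
        "crypto/tls"
        "net/http"
        "time"
    )

    func main() {
        transport := &http.Transport{
            // Keep-alives are on by default; a generous idle pool lets
            // repeated uploads to the same host reuse one connection.
            MaxIdleConns:        100,
            MaxIdleConnsPerHost: 100,
            IdleConnTimeout:     90 * time.Second,
            TLSClientConfig: &tls.Config{
                // Session tickets let reconnects skip the full TLS handshake.
                ClientSessionCache: tls.NewLRUClientSessionCache(64),
            },
        }
        client := &http.Client{Transport: transport, Timeout: 30 * time.Second}
        _ = client // use client.Do(...) for the uploads
    }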


Doubt that's on the table unless Microsoft is also sued. Without a joint ruling this wouldn't be balanced.


Doesn't mean we

a) can't hope

b) shouldn't hope


You actually should never return a concrete error pointer type, because you can eventually break nil checks. I caused a production outage this way: interfaces are (type, value) tuples, so the literal nil becomes (nil, nil) when passed to a comparison, whereas your concrete struct return becomes (*Type, nil), which compares as non-nil.
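
A minimal reproduction of that pitfall (names are mine):

    package main

    import "fmt"

    type MyError struct{ msg string }

    func (e *MyError) Error() string { return e.msg }

    // The mistake: returning the concrete pointer type instead of `error`.
    func doWork(fail bool) *MyError {
        if fail {
            return &MyError{msg: "boom"}
        }
        return nil // a typed nil: (*MyError, nil)
    }

    func main() {
        var err error = doWork(false) // interface now holds (*MyError, nil)
        // Non-nil, even though the pointer inside is nil, because the
        // interface's type component is *MyError rather than nil.
        fmt.Println(err != nil) // prints: true
    }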


It's really hard to reconcile behavior like this with people's seemingly unshakeable love for golang's error handling.


People who rave about Go's error handling, in my experience, are people who haven't used rust or haskell, and instead have experience with javascript, python, and/or C.

https://paulgraham.com/avg.html


Go seems really sensitive to this subject. Maps iterated in a consistent order, but one day they said “this is incidental and we told you not to rely on it. You do, so we’re breaking it in a minor release”, and now maps iterate in order… from a random offset.


On the one hand, I never realized that map iteration order stays consistent and it's just the starting point that changes. On the other hand, I guess there's no other way to do it, since a proper shuffle would require O(n) bookkeeping. I suppose you could also flip a coin to decide whether to iterate backwards.
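
A quick way to watch the randomized start (illustrative):

    package main

    import "fmt"

    func main() {
        m := map[int]string{1: "a", 2: "b", 3: "c", 4: "d"}
        // Each range over the map may begin at a different element;
        // successive passes print different sequences.
        for i := 0; i < 3; i++ {
            for k := range m {
                fmt.Print(k, " ")
            }
            fmt.Println()
        }
    }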

