There's no need to rebut such a claim, as it's false on just about every axis. The stated level of danger is not comparable, the expectation of effort or time is completely different, and the broad negative outcomes of being drafted vs. the broadly positive outcomes of a vaccine aren't comparable either. It would be like saying "rebuke my claim that being drafted is comparable to being asked not to listen to loud music on the bus."
Your comment is basically "no", which isn't a great foothold from which to form rebuttals. I feel like I'd have to drink different flavors of Kool-Aid to respond adequately; e.g., if one prioritized national interests over some elderly people dying, then your point about broad negative vs. broad positive outcomes is invalid.
Foamcore is truly the world's best "around the house" material for quick projects. An amazing trick, not used for the simple boxes in the parent post, is to score the fold lines of a box and then fold the foamcore into the shape you desire instead of cutting all the edges. It makes for very fast and fun crafting.
My favorite foamcore project was a to-scale layout of an apartment my wife and I were moving into, so that we could experiment with simple to-scale rectangles as stand-ins for furniture and figure out our arrangement.
No, it's because their model puts dollar values on the labor contributed by non-working adults w/r/t raising children. So in that case, it could be that 1adult1child is slightly higher because of the need to pay for childcare, while the food/insurance/clothing, etc., of the additional adult in 2adult1child is offset by the fact that the non-working adult will handle childcare, and thus that expense goes away.
But then why is the number higher for 2adult1child (1 working) when compared to 2adult1child (both working)? Wouldn't child-raising costs get added back in once both are working?
> In households with two working adults, all hourly values reflect what one working adult requires to earn to meet their families’ basic needs, assuming the other adult also earns the same.
From the page itself, first paragraph. Double the value under 2 adult (both working) to get the estimated household income.
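A quick worked example of that arithmetic (the hourly figure is made up, not taken from the calculator):

  hourly = 25.00           # hypothetical value from the "2 adults (both working)" column
  hours_per_year = 2080    # 40 hours/week * 52 weeks
  household_income = 2 * hourly * hours_per_year
  print(household_income)  # 104000.0 - both adults earning that wage full time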
Put in an area and see for yourself. In general, yes, this calculator is closer to what you're describing. For example, Skamania County, a pretty rural county in Washington state with a population of only 12,000, still has a "required living wage" for 1 breadwinner + 1 homemaker + 3 children of $104,292 per year: https://livingwage.mit.edu/counties/53059
Yeah, Dallas County, Texas, where I live, comes out to around $105k/year for a family of 4 with 2 working adults. That seems close; there's nothing secure about that long term (no room for savings or retirement), but it's livable.
That's fascinating; the Lavet-type stepping motor acts as an escapement all on its own by being a very simple stepper motor, so you don't end up needing a miniature version of a classic mechanical escapement, which is what I'd always imagined when thinking about how cheap quartz wall clocks worked.
It seems like it depends on how the authors have configured Vouch. They might close the project entirely except to those on the vouch list (other than viewing the repo, which is presumably always allowed).
Alternatively, they might keep some things open (issues, discussions) while requiring a vouch for PRs. Then, if folks want to get vouched, they can ask for that in discussions. Or maybe you need to ask via email. Or contact maintainers via Discord. It could be anything. Linux isn't developed on GitHub, so how do you submit changes there? Well, you do so by following the norms and channels that the project makes visible. Same with Vouch.
The idea is that sustained, recurring communication would cost essentially nothing, while establishing a new line of communication would carry a small cost that quickly drops to zero.
A poorly thought out hypothetical, just to illustrate: make a connection at a dinner party? Sure, technically it costs 10¢ to make that initial text message/phone call, then the next 5 messages are 1¢ each, but thereafter all the messages are free. Existing relationships: free. New relationships: extremely cheap. Spamming at scale: more expensive.
I have no idea if that's a good idea or not, but I think that's an ok representation of the idea.
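A toy sketch of that pricing curve, reusing the made-up 10¢/1¢ tiers from the hypothetical above (the numbers and cutoffs are arbitrary, just to show the shape):

  def message_cost_cents(prior_messages):
      # Cost of the next message to a given contact, in cents.
      if prior_messages == 0:
          return 10   # first contact: 10 cents
      if prior_messages <= 5:
          return 1    # next five messages: 1 cent each
      return 0        # established relationship: free

  # A whole new conversation costs 15 cents total; messaging a friend costs nothing;
  # cold-spamming 1,000 strangers costs $100.
  print(sum(message_cost_cents(n) for n in range(20)))  # 15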
Haha yeah, I almost didn't post my comment, since the original submission is about contributors, where a one-time "introduction fee" would solve these problems.
I was thinking specifically about general communication: comparing the quality of communication in physical letters (from a time when that was the only affordable way to communicate) to the messages we send each other nowadays.
A deterministic prompt + seed used to generate an output is interesting as a way to record exactly how code came about, but it's also not a thing people are actually doing. Right now, everyone is slinging around LLM outputs without any attempt at reproducibility; no seed, nothing. What you've described and what the article describes are very different.
Yes, you are right. I was mostly speaking in theoretical terms - currently people don't work like that. And you would also have to use the same trained LLM, of course, so using a third-party provider probably doesn't give that guarantee.
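For what it's worth, the record you'd have to keep is tiny. A minimal sketch, assuming a hypothetical generate() that actually honors a seed (most hosted providers don't guarantee bit-exact determinism even when they accept one):

  import hashlib

  def provenance_record(model_id, prompt, seed, output):
      # Enough to attempt an exact re-generation later, given the same weights.
      return {
          "model": model_id,   # must be the same trained weights, not just the same model name
          "prompt": prompt,
          "seed": seed,
          "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
      }

  # Replay check (generate() is hypothetical):
  # new_output = generate(rec["model"], rec["prompt"], seed=rec["seed"])
  # assert hashlib.sha256(new_output.encode()).hexdigest() == rec["output_sha256"]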
And yet, it is a constantly used decentralized system which does not require content addressing, as you mentioned. You should elaborate on why we need content addressing for a decentralized system instead of saying "10MiB limit + spam lol email fell off". Contemporary usage of the technologies you've mentioned doesn't seem to do much to reduce spam (see IPFS, which has hard content addressing). Please, share more.
If you think email is still in widespread use because it’s doing a good job, rather than because of massive network effects and sheer system inertia, then we’re probably talking past each other - but let me spell it out anyway.
Email “works” in the same sense that fax machines worked for decades: it’s everywhere, it’s hard to dislodge, and everyone has already built workflows around it.
There is no intrinsic content identity, no native provenance, no cryptographic binding between “this message” and “this author”. All of that has to be bolted on - inconsistently, optionally, and usually not at all.
And even ignoring the cryptography angle: email predates “content as a first-class addressable object”. Attachments are in-band, so the sender pushes bytes and the receiver (plus intermediaries) must accept/store/scan/forward them up front. That’s why providers enforce tight size limits and aggressive filtering: the receiver is defending itself against other people’s pushes.
For any kind of information dissemination like email or scientific publishing, you want the opposite shape: push lightweight metadata (who/what/when/signature + content hashes), and let clients pull heavy blobs (datasets, binaries, notebooks) from storage the publishing author is willing to pay for and serve. Content addressing gives integrity + dedup for free. Paying ~$1 per DOI for what is essentially a UUID is ridiculous by comparison.
That decoupling (metadata vs blobs) is the missing primitive in email-era designs.
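Roughly what that looks like in practice: the thing you actually disseminate is a few hundred bytes of who/what/when plus content hashes, and the heavy artifacts live wherever the author is willing to host them. A Python sketch (the record layout, URLs, and the sign() step are placeholders, not any existing format):

  import hashlib, json, time

  def content_address(data):
      # Content address of a heavy artifact (PDF, dataset, notebook).
      return hashlib.sha256(data).hexdigest()

  paper = b"%PDF-1.7 ... pretend this is a 5 MB paper ..."
  dataset = b"pretend this is a 40 GB dataset"

  # The only thing that gets pushed to readers and indexes:
  record = {
      "author": "example-author-key-id",               # placeholder identity
      "created": int(time.time()),
      "artifacts": {
          "paper.pdf": content_address(paper),
          "dataset.tar": content_address(dataset),
      },
      "mirrors": ["https://example.org/artifacts/"],   # where clients pull the heavy bytes from
  }
  # record["signature"] = sign(author_key, json.dumps(record, sort_keys=True))  # sign() is a placeholder
  print(len(json.dumps(record)), "bytes of metadata")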
All of that makes email a bad template for a substrate of verifiable, long-lived, referenceable knowledge. Let's not forget that the context of this thread isn’t “is decentralized routing possible?”, it’s “decentralized scientific publishing” - which is not about decentralized routing, but decentralized truth.
Email absolutely is decentralized, but decentralization by itself isn’t enough. Scientific publishing needs decentralized verification.
What makes systems like content-addressed storage (e.g., IPFS/IPLD) powerful isn’t just that they don’t rely on a central server - it’s that you can uniquely and unambiguously reference the exact content you care about with cryptographic guarantees (see the sketch after this list). That means:
- You can validate that what you fetched is exactly what was published or referenced, with no ambiguity or need to trust a third party.
- You can build layered protocols on top (e.g., versioning, merkle trees, audit logs) where history and provenance are verifiable.
- You don’t have to rely on opaque identifiers that can be reissued, duplicated, or reinterpreted by intermediaries.
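Here's the sketch mentioned above, for the reader side of content addressing: the blob can come from any mirror, cache, or random peer, because the address itself is the integrity check (plain SHA-256 for illustration; real IPFS CIDs encode more, but the principle is the same):

  import hashlib

  def fetch_and_verify(address, fetch):
      # fetch() can be any untrusted source - a mirror, a cache, a peer.
      data = fetch(address)
      if hashlib.sha256(data).hexdigest() != address:
          raise ValueError("content does not match its address - reject it")
      return data

  blob = b"the exact bytes the author published"
  addr = hashlib.sha256(blob).hexdigest()
  assert fetch_and_verify(addr, lambda _a: blob) == blob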
For systems that don’t rely on cryptographic primitives, like email or the current infrastructure using DOIs and ORCIDs as identifiers:
- There is no strong content identity - messages can be altered in transit.
- There is no native provenance - you can’t universally prove who authored something without added layers.
- There’s no simple way to compose these into a tamper-evident graph of scientific artifacts with rigorous references.
A truly decentralized scholarly publishing stack needs content identity and provenance. DOIs and ORCIDs help with discovery and indexing, but they are institutional namespaces, not cryptographically bound representations of content. Without content addressing and signatures, you’re mostly just trading one central authority for another.
It’s also worth being explicit about what “institutional namespace” means in practice here.
A DOI does not identify content. It identifies a record in a registry (ultimately operated under the DOI Foundation via registration agencies). The mapping from a DOI to a URL and ultimately to the actual bytes is mutable, policy-driven, and revocable. If the publisher disappears, changes access rules, or updates what they consider the “version of record”, the DOI doesn’t tell you what an author originally published or referenced - it tells you what the institution currently points to.
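You can see that indirection directly: resolving a DOI is just following an HTTP redirect that the registry/publisher can re-point at any time. A quick sketch (10.1000/182 is the commonly cited example DOI for the DOI Handbook; note that nothing in the response binds the DOI to any particular bytes):

  import urllib.request

  doi = "10.1000/182"   # example DOI; substitute any other
  req = urllib.request.Request(f"https://doi.org/{doi}", method="HEAD")
  with urllib.request.urlopen(req) as resp:
      # Wherever the registry currently points - mutable, policy-driven, revocable.
      print(doi, "->", resp.url)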
ORCID works similarly: a centrally governed identifier system with a single root of authority. Accounts can be merged, corrected, suspended, or modified according to organisational policy. There is no cryptographic binding between an ORCID, a specific work, and the exact bytes of that work that an independent third party can verify without trusting the ORCID registry.
None of this is malicious - these systems were designed for coordination and attribution, not for cryptographic verifiability. But it does mean they are gatekeepers in the precise sense that matters for decentralization:
Even if lookup/resolution is distributed, the authority to decide what an identifier refers to, whether it remains valid, and how conflicts are resolved is concentrated in a small number of organizations. If those organizations change policy, disappear, or disagree with you, the identifier loses its meaning - regardless of how many mirrors or resolvers exist.
If the system you build can’t answer “Is this byte-for-byte the thing the author actually referenced or published?” without trusting a gatekeeper, then it’s centralized in every meaningful sense that matters to reproducibility and verifiability.
Decentralised lookup without decentralised authority is just centralisation with better caching.