Hacker News | jerf's comments

One thing that may help resolve your issue is that while I do agree with the Sam Vimes theory, it is also not guaranteed. There are also scenarios where the $50 boots will last forever, or you can buy $2 boots that will only last five years... but across your entire lifetime will still be cheaper. Or you can account for the fact that you take better care of your stuff than most people and the cheap thing may in fact be fine for a long time. Or you buy the cheap thing twice and maybe in 15 years when you have more disposable income buy the thing that lasts. Or buy the cheap thing and hit the occasional estate sale and eventually find a thing that lasts, but for dirt cheap prices, because you weren't in a hurry because your immediate needs were met and you had the time to wait for a deal.

The meta-lesson of the Vimes theory is really more that you need to think about these things, but it's not guaranteed that the expensive thing will be better in the long term on a bang-for-the-buck basis. For furniture, there is something to be said for the technique beloved by the just-starting-out set of buying "whatever I scrounged together from garage sales", and there's something to be said for "I outfitted my apartment from Ikea". Yeah, it's cheap and one way or another you're going to pay for that cheapness, but it's so much cheaper than the alternative that as long as you aren't practicing your wrestling moves on the Ikea end tables, you can get a long way with them even if you're replacing them every 10 years.

And, per your last point... at least when you buy cheap, you know you bought cheap. I found myself in need of a dining room table light a few years back. We went to a lighting store and I stood there staring at all the bespoke LEDs that I knew would die and couldn't be replaced, and the multi-thousand dollar lamps that looked nice but I simply couldn't know if they were quality... and ended up buying a $15 extension cord with 5 light sockets on it, bought some light bulbs to put in it, and wrapped the cord around the remains of the previous what-turned-out-to-be-proprietary track lighting. We decorate it for the season with various ribbon things to hide the cords. Because damn it, if it's all just going to fail anyhow, at least I knew I could replace the lights with whatever I wanted, and it cost me less than $100 all in. We've had that for, gosh, I think at least 10 years now, and I've probably cycled the lights at least twice now, but that's probably still under $100 total... all because I simply can't trust the expensive stuff.


The law has a concept of a "carrier" [1], and has the ability to judge whether or not the carrier in question is responsible for what it is carrying.

I'm not making a blanket statement that that means everything is a carrier, because a good chunk of the page I linked is devoted to endless legal nuances and I defer the details of the concept to those who know better. I'm just saying that the law has a well-established concept for this sort of situation, such that it is not the case that just because a third party is involved instantly all protections dissolve. If you really want to dig into the details, that's something an AI that hits the web and digests things would be pretty good at, as long as you're not planning on legal action based on that. Sometimes the hardest part of learning about something is just finding the term for it that lets you dig in.

[1]: https://en.wikipedia.org/wiki/Common_carrier


So, this is slightly off topic, but out of curiosity, what are NPUs good for right this very second? What software uses them? What would this NPU be able to run if it was in fact accessible?

This is an honest, neutral question, and it's specifically about what can concretely be done with them right now. Their theoretical use is clear to me. I'm explicitly asking only about their practical use, in the present time.

(One of the reasons I am asking is I am wondering if this is a classic case of the hardware running too far ahead of the actual needs and the result is hardware that badly mismatches the actual needs, e.g., an "NPU" that blazingly accelerates a 100 million parameter model because that was "large" when someone wrote the specs down, but is uselessly small in practice. Sometimes this sort of thing happens. However I'm still honestly interested just in what can be done with them right now.)


In a nutshell, nodes enable arbitrary programming. This is one of the big success stories for visual programming. Nothing would stop you from doing all that in a text programming language but there's definitely an appeal to the graphical layout when you have modules getting input from half-a-dozen different sources and then outputting to just as many.

In a roundabout way this article captures well why I don't really like thinking in terms of "normal forms", especially as a numbered list like that. The key insights are really 1. Avoid redundancy and 2. This may involve synthesizing relationships that don't immediately obviously exist from a human perspective. Both of those can be expanded on at quite some length, but I never found much value in the supposedly-blessed intermediate points represented by the nominally numbered "forms". I don't find them useful either for thinking about the problem or for communicating about it.

Someone, somewhere writing down a list and that list being blessed with the imprimatur of Academic Approval (TM) doesn't mean it is actually useful... sometimes it just means that it made it easy to write multiple choice test questions. (e.g., "What does Layer 2 of the OSI network model represent? A: ... B: ... C: ... D: ..." to which the most appropriate real-world answer is "Who cares?")


I still see value in the numbering.

Breaking 1NF is essentially always incorrect. You're fundamentally limiting your system, and making it so that you will struggle to perform certain queries. Only break 1NF when you're absolutely 100% certain that nobody anywhere will ever need to do anything even slightly complex with the data you're looking at. And then, probably still apply 1NF anyways. Everyone that ever has to use your system is going to hate you when they find this table because you didn't think of the situation that they're interested in. "Why does this query use 12 CTEs and random functions I've never heard of and take 5 minutes to return 20,000 rows?" "You broke 1NF."
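To make the pain concrete, here's a hypothetical sketch (table and column names invented) of a broken-1NF table and the kind of query it forces:

    -- Broken 1NF: multiple values crammed into one column.
    CREATE TABLE orders (
      id INT PRIMARY KEY,
      customer_id INT NOT NULL,
      item_skus VARCHAR(1000) NOT NULL  -- e.g. 'SKU1,SKU7,SKU42'
    );

    -- "Which orders contain SKU7?" becomes string matching that can't
    -- use an index and has substring false-positive edge cases:
    SELECT id FROM orders
    WHERE CONCAT(',', item_skus, ',') LIKE '%,SKU7,%';

The 1NF version (a separate table with one row per order/SKU pair) turns that into an indexed equality lookup.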

2NF is usually incorrect to break. Like it's going to be pretty obnoxious to renormalize your data using query logic, but it won't come up nearly as frequently. If it's really never going to come up that often in practical terms, then okay.
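(For concreteness, the textbook 2NF violation is a column that depends on only part of a composite key. Hypothetical names:)

    -- The key is (order_id, sku), but customer_name depends on
    -- order_id alone -- a partial dependency, so this isn't in 2NF.
    CREATE TABLE order_item (
      order_id INT NOT NULL,
      sku VARCHAR(32) NOT NULL,
      quantity INT NOT NULL,
      customer_name VARCHAR(255) NOT NULL,  -- belongs on the order table
      PRIMARY KEY (order_id, sku)
    );

Renormalizing that with query logic means a DISTINCT over (order_id, customer_name) and hoping the duplicated copies never disagree.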

3NF and BCNF are nice to maintain, but the number of circumstances where they're just not practical or necessary starts to feel pretty common. Further, the complexity of the query to undo the denormalization will not be as obnoxious as it is for 1NF or 2NF. But if you can do it, you probably should normalize to here.

4NF and higher continue along the same lines, but increasingly get into what feel like pretty arbitrary requirements, or situations where the cost you're paying in indexes is starting to become higher than the relational algebra benefits. Your database disk usage by table report is going to be dominated by junction tables, foreign key constraints, and indexes, and all you're really buying with that disk space is academic satisfaction.


> Your database disk usage by table report is going to be dominated by junction tables, foreign key constraints, and indexes, and all you're really buying with that disk space is academic satisfaction.

FK constraints add a negligible amount of space, if any. The indexes they require do, certainly, but presumably you're already doing joins on those FKs, so they should already be indexed.

Junction tables are how you represent M:N relationships. If you don't have them, you're either storing multiple values in an array (which, depending on your POV, may or may not violate 1NF), or you have a denormalized wide table with multiple attributes, some of which are almost certainly NULL.
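For anyone following along at home, a minimal junction-table sketch (illustrative names):

    CREATE TABLE student (id INT PRIMARY KEY, name VARCHAR(255) NOT NULL);
    CREATE TABLE course  (id INT PRIMARY KEY, title VARCHAR(255) NOT NULL);

    -- The junction table: one row per student/course pairing.
    CREATE TABLE enrollment (
      student_id INT NOT NULL,
      course_id  INT NOT NULL,
      PRIMARY KEY (student_id, course_id),
      FOREIGN KEY (student_id) REFERENCES student(id),
      FOREIGN KEY (course_id)  REFERENCES course(id)
    );

The composite primary key doubles as the index for one join direction; typically you only add one more index (on course_id) for the reverse direction.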

Also, these all serve to prevent various forms of data anomalies. Databases must be correct above all else; if they're fast but wrong, they're useless.


> Junction tables are how you represent M:N relationships.

Yeah, the problem is that when you get to 4NF+, you're often looking at creating a new table joining through a junction table for a single multi-valued data field that may be single-valued a plurality or majority of the time. So you need the base table, the junction table that has at least two columns, and the actual data table.

So, you've added two tables, two foreign key constraints, two primary key indexes, potentially more non-clustered indexes... and any query means you need two joins. And data validation is hard because you need to use an anti-join to find missing data.

Or, you can go with a 1:N relationship. Now you have only one more table at the cost of potentially duplicating values between entities. But if we're talking about, say, telephone numbers? Sure, different entities might share the same phone number. Do you need a junction table so you don't duplicate a phone number? You're certainly not saving disk space or improving performance by doing that unless there are regularly dozens of individual records associated to a single phone number.
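Spelled out, the 1:N alternative is just one dependent table (hypothetical names), accepting that the same number may appear under multiple entities:

    CREATE TABLE entity_phone (
      entity_id INT NOT NULL,
      phone VARCHAR(32) NOT NULL,
      PRIMARY KEY (entity_id, phone),
      FOREIGN KEY (entity_id) REFERENCES entity(id)
    );

One extra table and one join per lookup, versus the junction version's two extra tables and two joins.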

And if the field is 1:1... or even 90% or 95% 1:1... do you really need a separate table just so you don't store a NULL in a column? You're not going to be eliminating nulls from your queries. They'll be full of LEFT JOINs everywhere; three-valued logic isn't going anywhere.

> Databases must be correct above all else; if they're fast but wrong, they're useless.

Yeah, and if they're "correct" but you can't get them to return data in a timely manner, they're also useless. A database that's a black hole is not an improvement. If it takes 20 joins just to return basic information, you're going to run into performance problems as well as usability problems. If 18 of those joins are to describe fidelity that you don't even need?


Right. But faceting data is also part of what a good database designer does. That includes views over the data; materialisation, if it is justified; stored procedures and cursors.

I've never had to do 18 joins to extract information in my career. I'm sure these cases do legitimately exist but they are of course rare, even in large enterprises. Most companies are more than capable of distinguishing OLTP from OLAP and real-time from batch and design (or redesign) accordingly.

Databases and their designs shift with the use case.


> joining through a junction table for a single multi-valued data field

I may be misunderstanding you, but to me it sounds like you're conflating domain modeling with schema modeling. If your domain is like most SaaS apps, then Phone, Email, Address, etc. are probably all attributes of a User, and are 1:N. The fact that multiple Users may share an Address (either from multiple people living together, or people moving) doesn't inherently mean you have an M:N relationship that you must model with schema. If you were using one of those attributes as an identity (e.g. looking up a customer by their phone number), that still doesn't automatically mean you have to model everything as M:N - you could choose to accept the possibility of duplicates that you have to deal with in application code or by a human, or you could choose to create a UNIQUE constraint that makes sense for 99% of your users (e.g. `(phone_number, deactivated_at)` enforces that a phone number is only assigned to one active user at a time), and find another way to handle the rare exceptions. In both cases, you're modeling the schema after your business logic, which is IMO the correct way to do so.
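To sketch that constraint idea in Postgres terms (table and column names invented): a plain UNIQUE over `(phone_number, deactivated_at)` won't quite do it on its own, because NULLs don't compare equal under UNIQUE, so two active users could share a number. A partial unique index expresses the intent directly:

    -- Only one *active* user may hold a given number;
    -- deactivated rows don't participate in the constraint.
    CREATE UNIQUE INDEX one_active_owner_per_phone
      ON users (phone_number)
      WHERE deactivated_at IS NULL;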

I apologize if I came across as implying that any possible edge case means that you must change your schema to handle it. That is not my design philosophy. The schema model should rigidly enforce your domain model, and if your domain model says that a User has 0+ PhoneNumber, then you should design for 1:N.

> And if the field is 1:1... or even 90% or 95% 1:1... do you really need a separate table just so you don't store a NULL in a column? You're not going to be eliminating nulls from your queries. They'll be full of LEFT JOINs everywhere; three-valued logic isn't going anywhere.

If the attribute is mostly 1:1, then whether or not you should decompose it largely comes down to semantic clarity, performance, and the possibility of expansion.

This table is in 3NF (and BCNF, and 4NF):

    CREATE TABLE User (
      id INT AUTO_INCREMENT PRIMARY KEY,
      name VARCHAR(255) NOT NULL,
      email VARCHAR(254) NOT NULL,
      phone VARCHAR(32) NULL
    );
So is this:

    CREATE TABLE User (
      id INT AUTO_INCREMENT PRIMARY KEY,
      name VARCHAR(255) NOT NULL,
      email VARCHAR(254) NOT NULL,
      phone_1 VARCHAR(32) NULL,
      phone_2 VARCHAR(32) NULL
    );
Whereas this may violate 3NF depending on how you define a Phone in your domain:

    CREATE TABLE User (
      id INT AUTO_INCREMENT PRIMARY KEY,
      name VARCHAR(255) NOT NULL,
      email VARCHAR(254) NOT NULL,
      phone_1 VARCHAR(32) NULL,
      phone_1_type ENUM('HOME', 'CELL', 'WORK') NOT NULL,
      phone_2 VARCHAR(32) NULL,
      phone_2_type ENUM('HOME', 'CELL', 'WORK') NOT NULL
    );
If a Phone is still an attribute of a User, and you're not trying to model the Phone as its own entity, then arguably `phone_1_type` is describing how the User uses it (I personally think this is a bit of a stretch). Similarly, it can be argued that this design violates 1NF, because `(phone_n, phone_n_type)` is a repeating group, even if you've split it out into two columns. Either way, I think it's a bad design (adding two more columns that will be NULL for most users to support a tiny minority isn't great, and the problem compounds over time).
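For contrast, the decomposition that removes the repeating group while keeping Phone a dependent attribute of User rather than a standalone entity (sketch, MySQL-ish DDL):

    CREATE TABLE UserPhone (
      user_id INT NOT NULL,
      phone VARCHAR(32) NOT NULL,
      type ENUM('HOME', 'CELL', 'WORK') NOT NULL,
      PRIMARY KEY (user_id, phone),
      FOREIGN KEY (user_id) REFERENCES User(id)
    );

Users with no phone simply have no rows here, so the NULL columns disappear, and a third number costs a row instead of a schema migration.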

> If it takes 20 joins just to return basic information, you're going to run into performance problems as well as usability problems. If 18 of those joins are to describe fidelity that you don't even need?

The only times I've seen anything close to that many joins are:

1. Recreating a denormalized table from disparate sources (which are themselves often not well-constructed) to demonstrate that it's possible.

2. Doing some kinds of queries in MySQL <= 5.7 on tables modeling hierarchical data using an adjacency list, because it doesn't have CTEs.

3. When product says "what if we now supported <wildly different feature from anything currently offered>" and the schema was in no way designed to support that.

Even with the last one, I think the most I saw was 12, which was serendipitous because it's the default `geqo_threshold` for Postgres.


> Someone, somewhere writing down a list and that list being blessed with the imprimatur of Academic Approval (TM)

One problem is that normal forms are underspecified even by the academy.

E.g., Millist W. Vincent "A corrected 5NF definition for relational database design" (1997) (!) shows that the traditional definition of 5NF was deficient. 5NF was introduced in 1979 (I was one year old then).

2NF and 3NF should basically be merged into BCNF, if I understand correctly, and treated like a general case (as per Darwen).

Also, the numeric sequence is not very useful because there are at least four non-numeric forms (https://andreipall.github.io/sql/database-normalization/).

Also, personally I think that 6NF should be foundational, but that's a separate matter.


"1979 (I was one year old then)."

Well, we are roughly the same age then. Ours is a cynical generation.

"One problem is that normal forms are underspecified even by the academy."

The cynic in me would say they were doing their job by the example I gave, which is just to provide easy test answers, after which there wasn't much reason to iterate on them. I imagine waving around normalization forms was a good gig for consultants in the 1980s but I bet even then the real practitioners had a skeptical, arm's length relationship with them.


> I imagine waving around normalization forms was a good gig for consultants in the 1980s but I bet even then the real practitioners had a skeptical, arm's length relationship with them.

Real-talk: those consultants are absolutely essential - and are the unsung heroes of so many "organic" database projects that would have gotten started as an Excel spreadsheet on a nontechnical middle-manager's workgroup-networked desktop, which grew over time into a dBase file, then MSAccess/JET, then MSDE or MSSQL Express if they (think they) know what they're doing, and then if it's the mid-2000s then maybe it'll be moved onto a dedicated on-prem Oracle or MSSQL box - but still an RDBMS; I remember in 2014 all the talk was about moving data out of on-prem RDBMS siloes and onto Cloud(TM)-y OLAP clusters (trying to hide the fact they're running stock Postgres) which acted as a source for a Hadoop cluster - all to produce dashboards and visualizations made with the $100k Tableau license your company purchased after their sales guys showed your org's procurement people a good time in Cancun.

None of the evolution and progress described above could have happened if not for the awful DB designs in that initial Access DB - the anti-patterns would be carried through the DB whenever it ascended to the next tier of serious-business-ness, and each and every design-decision made out of innocent ignorance gets gradually massaged-out of the model by the regular and recurring visits by DBA consultants - because (and goddamnit it's true): a depressingly tiny proportion of software people (let alone computer-people) know anything about DB design+theory - nor all the vendor-specific gotchas.

What I still don't understand is how in 2026 - after 30 years of scolding beginners online - we've successfully gotten greenhorn software-dev people to move away from VBA/VB6's dead-end, PHP's unintentional fractal of bad design, and MySQL's meh-ness - and onto sane and capable platforms like TypeScript, Node, and Postgres - all good stuff; and yet on my home-turf on StackOverflow, I still see people writing old comma-style JOINs (the pre-SQL-92 syntax) and CREATE TABLE statements covered in more backticks than my late grandmother's labrador. I honestly have no idea where/when/how all those people somehow learned that obsolete pre-SQL-92 JOIN syntax today.
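For the uninitiated, the contrast in question, with made-up table names:

    -- Old implicit comma-join style, still endemic:
    SELECT o.id, c.name
    FROM orders o, customers c
    WHERE o.customer_id = c.id;

    -- Explicit-JOIN style: the join condition moves out of the filter,
    -- and accidentally producing a cross join is much harder.
    SELECT o.id, c.name
    FROM orders o
    JOIN customers c ON o.customer_id = c.id;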

So in conclusion: the evidence suggests that not enough people today truly understand databases well-enough to render expensive DBA consultants irrelevant.


yep. born 1960.

> Also, personally I think that 6NF should be foundational, but that's a separate matter.

I share your ideal, but there exists a slight problem: no RDBMS I'm aware of really facilitates 6NF or DKNF (or even Codd's full relational concept; or newfound essentials like relational-division, and so on...).

There are also genuine ergonomic issues to contend with: pretty-much every RDBMS design and/or administration tool I've used in the past 20 years (SSMS, SSDT, DBeaver, MSAccess (lol), phpMyAdmin, etc) will present the database as a long, flat list of tables - often only in alphabetical order (if you're lucky, the tooling might let you group the tables into logical subfolders based on some kind of 2-part name scheme baked into the RDBMS (e.g. "schemas" in MSSQL)).

...which starts being counterproductive when 6NF means you have a large number of tables that absolutely need to exist - but aren't really that significant alone by themselves; but they always need to remain accessible to the user of the tool (so they can't be completely hidden). So you'll turn to the Diagramming feature in your DB GUI, which gives you a broader 2D view of your DB where you can proximally group related objects together - instead of endlessly scrolling a long alphabetical list; and you can actually see FKs represented by physical connections which aids intuitive grokking when you're mentally onboarding onto a huge, legacy production DB design.

...but DB diagrams are just too slow to load (as the tooling needs to read the entire DB's schema and all its objects before it can give you a useful view of everything) - it's just so incredibly grating; whereas that alphabetical list loads instantly.

Sorry I'm just rambling now but anyway, my point is, 6NF is great, but our tooling sucks, and the RDBMS they connect to suck even more (e.g. SQL-92 defined the 4 main CONSTRAINT types seen in practically all RDBMS today (CHECK, FOREIGN KEY, UNIQUE, and DEFAULT); over 30 years later we still have the same anaemic set of primitive constraints; only Postgres went further, with its `EXCLUDE` constraint). As of 2026, and almost 40 years since it was defined, no RDBMS supports ASSERTION constraints; whither DOMAIN constraints and a unified type-system that elegantly mediates between named scalars, relations (unordered sets of tuples), queries, and DOMAINs and the rest.
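For reference, Postgres's `EXCLUDE` constraint can express things no UNIQUE can, e.g. "no overlapping ranges" (sketch; assumes the btree_gist extension is available):

    CREATE EXTENSION IF NOT EXISTS btree_gist;

    -- No two reservations for the same room may have
    -- overlapping time ranges.
    CREATE TABLE reservation (
      room_id INT NOT NULL,
      during TSRANGE NOT NULL,
      EXCLUDE USING gist (room_id WITH =, during WITH &&)
    );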

...this situation is maddening to me because so many data-modelling problems exist _because_ of how unevolved our RDBMS are.


DBeaver can show a relationship diagram between tables. It's the main reason I've used it at all.

https://dbeaver.com/docs/dbeaver/ER-Diagrams/


I could have worded my post a bit better - I didn't mean to imply DBeaver only showed a flat list of tables/objects; but DBeaver is hardly unique in having DB diagrams; my point was that every DB-diagram feature/tool/workspace in a DB admin/IDE (like DBeaver, SSMS, SSDT, etc) is necessarily performance-constrained because they need to load _so much_ metadata before they can show an accurate - and therefore useful - picture of the DB - even if it's just a subset of all tables/objects.

What always frustrates me is that when people on here discuss deeply technical and/or meta-aspects of programming (e.g. type theory), it's taken at face value, but the same is not true of databases. They are generally treated as a dumb data store that you can throw anything into, and when someone explains why that's a bad idea, or why an academic concept like normal form is still applicable, it's met with criticism.

Even when it's purely performance-related, it usually gets a shrug, and "it's good enough." Cool, you're wrecking the B+tree, maybe don't do that. It's as if I said, "I'm using an array to store these millions of items that I later need to de-duplicate," and when someone suggests maybe using a set, I dismiss it.


Agreed. In practice I just ask "am I storing the same fact in two places?" & fix it if yes. Never once sat down and thought "let me check if this is in 4NF specifically."

Why shouldn’t we care about layer 2? You can do really fun and interesting things at the MAC layer.

You can do what you do at the MAC layer without any regard for whether or not it is "OSI layer 2", or whether your MAC layer "cheats" and has features that extend into layers 1, or 3, or any other layer. Failing to implement something useful because "that's not what OSI layer 2 is, this is the data link layer, and the OSI model says not to do that" is silly.

To stay on the main topic, same for the "normalization forms". Do what your database needs.

The concepts are just attractive nuisances. They are more likely to hurt someone than to help them.


OSI is particularly obnoxious because layers 5 and 6 don't exist separately in practically any system. Application layer protocols handle them in their own bespoke ways, so we have a software stack consisting of layers 2, 3, 4, 7 like it's the pentatonic scale or something.

The levels do the most important thing in computer science: give discrete and meaningful levels to talk/argue about at the water cooler.

The interesting question to me at the moment is whether we are still at the bottom of an exponential takeoff or nearing the top of a sigmoid curve. You can find evidence for both. LLMs probably can't get another 10 times better. But then, almost literally at any minute, someone could come up with a new architecture that can be 10 times better with the same or fewer resources. LLMs strike me as still leaving a lot on the table.

If we're nearing the top of a sigmoid curve and are given 10-ish years at least to adapt, we probably can. Advancements in applying the AI will continue but we'll also grow a clearer understanding of what current AI can't do.

If we're still at the bottom of the curve and it doesn't slow down, then we're looking at the singularity. Which I would remind people in its original, and generally better, formulation is simply an observation that there comes a point where you can't predict past it at all. ("Rapture of the Nerds" is a very particular possible instance of the unpredictable future, it is not the concept of the "singularity" itself.) Who knows what will happen.


I model this as "stacked sigmoid curves". I have no reason to believe that any specific technological implementation will be exponential in impact vs sigmoidal.

However if we throw enough money and smart people at the problems and get enough value from the early sigmoid curves, the effective impact of a large number of stacked sigmoids could theoretically average to a linear impact, but if the sigmoids stay of a similar magnitude (on average) and appear at a higher velocity over time, you end up with an exponential made up of sigmoids*

* To be fair, it has been so long since I have done math that this may be completely incorrect mathematically - I'm not sure how to model it. However I think in practice more and more sigmoids coming faster and faster with a similar median amplitude is gonna feel very fast to humans very soon - whether or not it's a true exponential.

I'm honestly having a very hard time thinking through the likely implications of what's currently happening over the next 2-10 years. Anyone who has the answers, please do share. I'm assuming from Cynefin that it's a perturbed complex adaptive system, so I can just OODA or experiment, sense, and respond to what happens - not what I think might happen.


Why is everyone so damn obsessed with the singularity? You don't need superintelligence to disrupt humanity. We easily have enough advancement to change the economy dramatically as is. The adoption isn't there yet.

Even after I explained the exact usage I was invoking, the attractive nuisance of all the science fiction that has gotten attached to the term still prevented you and Quarrelsome from reading my post as written.

I really wish the term hadn't been mangled so much. Though the originator of the term bears a non-trivial amount of the responsibility for it, having written some rather good science fiction on the topic himself. The original meaning from the paper is quite useful and nothing has stepped up to replace it.

All the singularity means as I explicitly used it here is you entirely lose the ability to predict the future. It is relative to who is using it... we are all well past the Caveman Singularity, where no (metaphorical) caveman could possibly predict anything about our world. If we stabilize where we are now I feel like I have at least a grasp on the next ten years. If we continue at this pace I don't. That doesn't mean I believe AI will inevitably do this or that... it means I can't predict anymore, which is really the exact opposite. AI doesn't have to get to "superintelligence" to wreck up predictions.


>the originator of the term ... rather good science fiction

I guess you are thinking of Vernor Vinge, but the term goes back to John von Neumann in the 1950s:

>...on the accelerating progress of technology and changes in human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue


The most interesting factor of the dynamic around things like near singularity is the things that I feel are coupled to it

Basically the ability to reason about first and second order effects

I.e., before the cellphone was invented you could have predicted it; things like Star Trek envisaged a world of portable communication

What impact the cellphone had was predictable to some people, on the one hand increased convenience of communication as well as the end of making a call and wondering who was going to pick up, which was a definite consideration pre-mobile when you called a place and not a person, now we just assume that when we call someone we'll get them and not their family

The second order effects were less obvious, ease of access to someone meant being always accessible, so now everyone could be contacted whenever someone wanted them, it changed the dynamics of life for many, not to mention the effects of different technologies combining, the personal computer and the mobile phone becoming one in the form of the smartphone gave everyone a computer in their pocket, let alone adding the internet into the mix

Each of these changes was completely unpredictable to the people pre-cellphone; once again, compare modern-day Trek and the originals

I still vividly remember the moment one of the characters in Discovery asked the computer to give her a mirror, the same behaviour of countless people now using the fact that their selfie camera functionally gives them a portable mirror in the form of their phone, that was unpredictable

So that's one form of being unable to predict the future

But there's another interesting dynamic I think, which is each direction of technical development is accelerating, which means that we may soon hit the point that only a subject matter expert will be able to predict or perhaps even be aware of what happens in any particular field, so we may get a period where before we can't predict the future, we may have some strange middle ground where we're constantly surprised by the developments we see around ourselves and when we look into it find this new discovery has been around months or years

I certainly have experienced that once or twice, however I'm wondering if that may become the new normal


> The adoption isn't there yet.

It's worth noting that after ~50 years[edit: to preempt nitpicking, yes I know we've been using computers productively quite a bit longer than that, but that's roughly the time when the computerized office started to really gain traction across the whole economy in developed countries], we've only extracted a tiny proportion of the hypothetical value of computers, period, as far as benefits to the economy and potential for automation.

I actually think a lot of the real value of LLMs is "just" going to be making accessing a little (only a little!) more of that existing unrealized benefit feasible for the median worker.

My expectation is that we'll also harness only a tiny proportion of the hypothetical value of LLMs. We're just not good enough at organizing work to approach the level of benefit folks think of when they speculate about how transformational these things will be. A big deal? Yes. As big a deal as some suppose? Probably not.

[edit: in positive ways, I mean. I think we're going to see huge boosts in productivity to anti-social enterprises. I'd not want to bet on whether the development of LLMs are going to be net-positive or net-harmful to humanity, not due to the "singularity" or "alignment" or whatever, but because of the sorts of things they're most-useful for]


Moreover the singularity makes this crass assumption that a single player takes all. It seems to ignore a future of many, many AI players, or many, many human + AI players instead.

Furthermore, regardless of how smart one thing is, it cannot win towards infinite games of poker against 7 billion humans, who as a race are cognitively extremely diverse and adaptive.


> regardless of how smart one thing is, it cannot win towards infinite games of poker against 7 billion humans,

AI isn't one thing though. Really it's kind of a natural evolution of 'higher-order life'. I think that something like an 'organization' (corps, governments, etc.), once large enough, is at least as alive as a tardigrade. And for the people who are its cells, it is as comprehensible as the tardigrade is to any of its individual cells. So why wouldn't organizations, over all of human history, eventually 'evolve' a better information-processing system than humans making mouth sounds at each other? (Writing was really the first step in this.) Really, if you look at the last 12,000 years of human society as actually being the first 12,000 years of the evolutionary history of 'organizations', it kinda makes a lot of sense. And so much of it was exploring the environment, trying replication strategies, etc. And we have a lot of different organizations now, like an evolutionary explosion, where life finds various niches to exploit.

/schitzoposting


> AI isn't one thing though.

What's the single in "singularity" doing then?

My issue is I feel like some people treat intelligence as an integer value and make the crass assumption that "perfect intelligence" beats all other intelligences, and I just think that's quite a thick way to think about it. A fool can beat an expert over the course of near-infinite hands because they happen to do something unexpected. Everything is a trade-off and there's no such thing as perfect; every player has to take risk.


The singularity does no such thing.

well that's certainly cleared it all up.

That's kind of optimistic. For example, a misaligned super-AI might engineer a virus that wipes out most of the 7 billion humans. That would put a damper on the adaptability of the human race...

And then it might overfit to the lack of danger in that aftermath, leading to those fragmented humans doing something to overthrow it. For all we know this AI might get bored and decide to make a cure, or turn itself off, or anything really.

We've had enough advancement to change the economy for many decades, but the powers that be have insisted that, despite the lack of need, we continue to toil doing completely unnecessary work, because that's what's required to extend their fiefdoms.

Not that the singularity has any relevance here, either - except maybe that the robots take over, and the billionaires have missed the boat? I don't know.


>Why is everyone so damn obsessed with the singularity?

I don't think most are - it tends to regarded as rather cranky stuff, and a lot of people who use the term are a bit cranky.

Even so AI maybe overtaking human intelligence is an interesting thing in human history.


An interesting thing in AI history. For human history, it’s epochal.

Why is everyone so damn obsessed with the singularity? You don't need superintelligence to disrupt humanity.

And at the same time, we don't take advantage of the intelligence we already have.


>Why is everyone so damn obsessed with the singularity?

Because they are captives (to a system of incentives that is already "superintelligent" in comparison to any individual) who are hoping for salvation (something to make them free against their will; since it is their will which is captured).

Singularity, then, is the point at which the system itself "finally becomes able to imagine what it is like to be a person", and decides to stop torturing people. IMO, this is unlikely to work out like that.


Because it's happening no matter how much you'd rather ignore it or scoff at it.

I don't think the kind of exponential you are looking for (and especially not "the singularity") can manifest until the product (AI) is at a point where it can meaningfully take over the task of improving itself directly.

I would say we have certainly seen a bottleneck in the ability of LLMs to handle any kind of broad abstractions or master the architecture of coding. That is the hinge of why "vibe coding" is as trashy of an approach as it is: the LLM can't cut the mustard on any actual software design.

So they have nothing close to the deep understanding required to improve their own substrate.

They can be exceptionally good at understanding what humans mean when they say things, far better than poking keywords into a google search for example, especially when said keywords are noisy and overloaded. And they can be a very good encyclopedic store of concepts (the more general the idea the less likely they hallucinate it, while the details and citations are far more frequently made up on the spot). But they suck at volition, and at state representation (thanks to those limited context windows) which cuts them off at the knees if they ever have to tenaciously search for anything including performing creative problem solving.

We do have AI models which can get somewhere on theorem proving or protein folding or high level competitive game playing, but those only sometimes even glancingly involve LLMs, and are primarily custom-built amalgams of different kinds of neural networks each trained on specific tasks in their fields.

None of that can directly move the needle on actual AI research yet.


I've said it before, but it would be a mistake to just focus on the models, and ignore everything else that is changing in the ecosystem -- tools, harnesses, agents, skills, availability of compute, etc. -- things are changing very quickly overall.

The thing that is changing most rapidly, however, is the understanding of how to harness this insanely powerful, versatile, and unpredictable new technology.

Like, those who experimented deeply with LLMs could tell that even if all model development completely froze in 2024, humanity had decades' worth of unrealized applications and optimizations to explore, even with AI recursively accelerating the process of exploration. As a trivial example, way back in 2023, anyone who got broken code from ChatGPT, fed it the error message, and got back working code knew agents were going to wreck things up very quickly. It wasn't clear that this would look like MD files, Claude Code, skills, GasTown, and YOLO vibe-coding, but those were "mere implementation details."
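That 2023-era loop is simple enough to sketch. Everything below is hypothetical: `ask_model` is a stand-in for a real LLM call, hard-wired here so the sketch is self-contained, but the generate/run/feed-back-the-error shape is the whole idea:

```python
import subprocess
import sys
import tempfile

def ask_model(prompt: str) -> str:
    """Stand-in for a real LLM call (purely illustrative, no real API).
    Returns a buggy first draft, or a 'fix' once the prompt contains the error."""
    if "NameError" in prompt:
        return "def greet(name):\n    return 'hello ' + name\n"
    return "def greet(name):\n    return 'hello ' + nme\n"  # typo: nme

def run(code: str) -> str:
    """Execute the candidate code plus a tiny check; return stderr ('' on success)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code + "\nassert greet('world') == 'hello world'\n")
        path = f.name
    return subprocess.run([sys.executable, path],
                          capture_output=True, text=True).stderr

# The loop: generate, run, feed the error back, repeat until it passes.
code = ask_model("write a greet(name) function")
for _ in range(3):
    err = run(code)
    if not err:
        break
    code = ask_model(f"this code failed, fix it:\n{err}")
```

Real agent harnesses mostly add better state handling and tool access around this same loop.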

I'm half-convinced that an ulterior goal of these AI companies in giving away so many cheap tokens (other than the lack of a better business model) is to encourage experimentation and overcome this "capability overhang."

Given all this, it's very hard to judge where we are on the curve, because there isn't just one curve, there are actually multiple inter-playing curves.


Neither! A logistic curve is just an exponential with a carrying capacity - it is still an exponential! There is no reason to believe that AI capability, which grows logarithmically with the handwaved-resources used on it (roughly, this is compute and training data), grows, has grown, or is growing exponentially!

I know this sounds like "the moderate position" to people but you are accepting that something logarithmic is somehow in fact exponential (these are inverse functions of one another) based on no evidence or argument.

Here is Sam Altman, the one man in the world with the most incentive to overstate AI capability, accepting the extremely-well-known logarithmic growth: https://blog.samaltman.com/three-observations

What we see in reality is a basically-linear growth pattern due to pushing exponentially more resources into this logarithm.
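The arithmetic behind that last claim is easy to check: if capability is logarithmic in resources (a stylized assumption, not a measured law) and resources grow exponentially in time, the composition is exactly linear:

```python
import math

def capability(resources: float) -> float:
    # Stylized assumption: capability is logarithmic in resources poured in.
    return math.log(resources)

# Exponentially growing inputs: resources(t) = e**t for t = 1..5.
gains = [capability(math.e ** t) for t in range(1, 6)]

# log(e**t) = t, so the capability increments per time step are constant:
steps = [b - a for a, b in zip(gains, gains[1:])]
print(steps)  # each increment is ~1.0: linear growth in time, not exponential
```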


Anyone who believes in materialism should recognize that there is still a lot of room to improve.

Somewhere around 2005-2007, when people were wondering if the Internet was done, PG was fond of saying "It has decades to run. Social changes take longer than technical changes."

I think we're at a similar point with LLMs. The technical stuff is largely "done" - LLMs have closer to 10% than 10x headroom in how much they will technologically improve, we'll find ways to make them more efficient and burn fewer GPU cycles, the cost will come down as more entrants mature.

But the social changes are going to be vast. Expect huge amounts of AI slop and propaganda. Expect white-collar unemployment as execs realize that all their expensive employees can be replaced by an LLM, followed by white-collar business formation as customers realize that product quality went to shit when all the people were laid off. Expect the Internet as we loved it to disappear, if it hasn't already. Expect new products or networks to arise that are less open and so less vulnerable to the propagation of AI slop. Expect changes in the structure of governments. Mass media was a key element in the formation of the modern nation state, mass cheap fake media will likely lead to its fragmentation as any old Joe with a ChatGPT account can put out mass quantities of bullshit. Probably expect war as people compete to own the discourse.


> Somewhere around 2005-2007, when people were wondering if the Internet was done

Literally who wondered that? Drives me nuts when people start off an argument with an obvious strawman. I remember the time period of 2005-2007 very well, and I don't remember a single person, at least in tech, thinking the Internet was done. I don't know, maybe some ragebait articles were written about it, but being knee-deep in web tech at that time, I remember the general feeling is that it was pretty obvious there was tons to do. E.g. we didn't necessarily know what form mobile would take, but it was obvious to most folks that the tech was extremely immature and that it would have a huge impact on the Internet as it progressed. That's just one example - social media was still in its nascent stages then so it was obvious there would be a ton of work around that as well.


If you were in tech in 2005-2007 you were part of a small minority of the general population. It often didn't feel like a small minority because, well, you knew all those other people on the Internet, but that's a pretty strong selection bias.

There is, of course, the Paul Krugman quote from 1998 that by 2005 the Internet would be no more important than a fax machine. [1]

Here's Wired in 2007 saying, in reference to Facebook, "no company in its right mind would give it a $15 billion valuation". [2]

I remember, being at Google in ~2011, we used to laugh at the Wall Street analysts because they would focus on CPC numbers to forecast a valuation, which is important only if the number of clicks is remaining constant. We knew, of course that total Internet usage was still growing quite rapidly and that queries had increased by roughly 4x over the 2009-2013 timeframe.

And a lot of people will say "If you're so smart, why aren't you rich?", and I'll point out that many people who assumed the Internet had lots of room to grow in 2005-2007 did end up very rich. Google stock has increased roughly 20x since 2007 (and 40x from its 2009 lows). Meta is now worth $1.6T, a 100x increase over the $15B valuation that everyone thought was insane in 2007. Amazon is also up about 100x. It would not be possible to take the other side of the trade and make these kind of profits if the majority of people did not think the Internet was largely over.

[1] https://www.snopes.com/fact-check/paul-krugman-internets-eff...

[2] https://www.wired.com/2007/10/facebook-future/


> If you were in tech in 2005-2007 you were part of a small minority of the general population. It often didn't feel like a small minority because, well, you knew all those other people on the Internet, but that's a pretty strong selection bias.

Didn't we only pass 50% of households having a home PC in like... '00 or '01 or something? And I mean just in the US, which was way ahead of the curve.

> Here's Wired in 2007 saying, in reference to Facebook, "no company in its right mind would give it a $15 billion valuation". [2]

I actually think that's correct... if the smartphone hadn't taken off right after that. The "consumer" Internet and computing, the attention economy, etc., functionally is the smartphone. A desktop computer and even a laptop aren't in use when driving, at the store, at the park, every moment on vacation, etc. It'd still only be nerds lugging computers everywhere if nobody had managed to make a smartphone capable enough and pleasant enough to use to expand the market beyond the set of folks who might have had a beeper in earlier years (the part of the market BlackBerry was addressing). A gigantic proportion of the "GDP of the Internet", if you will, exists because smartphones exist.


I'm reminded of the quote that ATMs didn't unemploy bank tellers, smartphones did. While not owning a laptop may seem inconceivable to us here, smartphones exist as the only connection to the Internet for many.

The interesting question is, without Apple and the iPhone, would RIM/BlackBerry have "figured it out"? Would we be on 2-way "pagers" with keyboards and stupidly expensive data plans that you have to order separately? Because while the original iPhone was a marvel in terms of hardware, I think the biggest contribution was the integration with AT&T for the cellphone plan, which only Steve Jobs had the clout to pull off.


> I don't know, maybe some ragebait articles were written about it, but being knee-deep in web tech at that time, I remember the general feeling is that it was pretty obvious there was tons to do

Almost definitely professional ragebaiters in Wired or Time or whatever, yeah.


I was also in tech at that time, in fact I worked for Google during that period and people definitely thought that the Internet had reached its peak. So many criticisms back then not about just peak Internet but that all these companies were blowing money on unproven business models, they were unsustainable, unprofitable, it was all just hype.

You also had numerous telecommunications companies going bust in one of the largest sector collapses in modern financial history, the largest bankruptcy in history (at that time) was WorldCom, followed by the second largest bankruptcy in history with Global Crossing... Lucent Technologies went belly up and the largest telecom company at the time Nortel lost 90% of its value, eventually going bankrupt in 2009.

And then of course the great recession hit, tech companies took a massive blow, Microsoft, Google, Intel, Apple and other tech giants lost 50% of their stock value in a matter of months. You don't lose 50% of your value because people think you have a promising future.

It wouldn't be until the explosive rise of smart phones and close to zero percent interest rates that sentiment turned around and tech companies ballooned in value in what would end up being the longest bull run in U.S. history.


I agree with the gist of your points, but not much with these two:

>followed by white-collar business formation as customers realize that product quality went to shit when all the people were laid off.

These will be rare boutique affairs. Based on how mass production and cheap shipping played out, most people value price over quality. The economy will rearrange itself around those savings, making boutique products and services expensive.

>mass cheap fake media will likely lead to its fragmentation as any old Joe with a ChatGPT account can put out mass quantities of bullshit.

We have this today. And that's not a "same as it ever was" dismissal. Today, there are a lot of terminally online people posting the equivalent of propaganda (and actual propaganda). Social media pushes hot takes in audiences' faces, a portion of them reshare it, and it spreads exponentially. The only limitation to propaganda today is how much time the audience spends staring at the "correct" content provider.


You are very strong on the "slop" bias. Why?

In managing a large to enterprise-sized code base, I experience the opposite. I can guarantee a much more homogeneous quality across the code base.

It is the opposite of slop I am seeing. And that at a lower cost.

Today, I literally made a large and complex migration of all of our endpoints. Took AI 30 minutes, including all frontends using these endpoints. Works flawlessly, debt principal down.


Which company do you work at so we can avoid your migrated endpoints?

All big tech companies are mandating employees to use AI for tasks. Unless there's a similar movement to open source that is AI-free, you're going to need to be tech-free if you want to avoid companies that use AI.

Wtf. You don't even know what the migration was about?

I mean, I'm always down for learning something new. But I hope what I learn includes the name of the company I'd like to avoid.

Your tone is in conflict with the statement that you are curious.

It's because you're deflecting. :)

Deflecting from what? Telling the company name so you can avoid it due to your incredibly curious nature?

Sigh.

Look friend, I really hope you can realize how you sound in your post. You're extraordinarily confidently saying that you refactored some ambiguous endpoints in 30 minutes. Whenever I see someone act that confidently about refactoring, a thousand alarms go off in my head. I hope you see how it sounds to others. Like, at least spend longer than a lunch break on it with just a tad more diligence. Or hell, maybe even consider lying about how much time you spent on it. But my point is that your shortcuts will burn you. If you want to go down that path, I'm happy to be a witness to the eventual schadenfreude.

My issue isn't with the fact that you used AI. My issue is with how confident you are that it worked well and exactly to spec. I'm very well aware of what these systems can do. Hell, I've been able to get postgres to boot inside linux inside postgres inside linux inside postgres recently with these tools. But I'm also acutely aware of the aggressive modes that these systems can break in.

So again, which company should we all avoid so that we can avoid your, specifically your, refactoring?


I definitely did not say anything about ambiguous endpoints.

The migration was relatively straightforward and could likely have been implemented as automatic code transforms.

What I did say was that it was complex.


Yikes. Have a good one.

One point: yes, you're speaking from the power position. God-mode over a fleet of minions has always been an engineer's wet dream. That's not even bad per se. It's the collateral damage downstream that's at issue. Maybe you don't see any damage, but that's largely the point. Is it really up to you to say?

What is the collateral damage? In ensuring that a bunch of endpoints use the same structure using LLMs?

Let's not debate that it's possible to make very large very safe changes. It is possible that you did that.

This is about "slop bias". I'd wager that empowering everyone, especially power-positions to ship 50x more code will produce more code that is slop than not. You strongly oppose this because it's possible for you to update an API?

I'm stuck on the power-position thing because I'm living it. I'm pro-AI but there are AI-transformation waves coming in and mandating top-down. From their green-field position it's undeniable crush-mode killin' it. Maintenance of all kinds is separate and the leaders and implementors don't pay this cost. Maybe AI will address everything at every level. But those imposing this world assume that to be true, while it's the line-engineers and sales and customer service reps that will bear the reality.


> Maybe AI will address everything at every level.

I think this is the idea you need to entertain / ponder more on.

I largely agree with you, what I don't agree with is the weighting about the individual elements.

My point was that I could do a 30-minute cleanup in order to streamline hundreds of endpoints. Without AI I would not have been able to justify this migration for business reasons.

We get to move faster, also because we can shorten deprecation tails and generally keep code bases more fit more easily.

In particular, we have dropped the external backoffice tool, so we have a single mono repo.

AI does tasks all the way from the infrastructure (setting policies on resources) to the frontends.

Equally, if a resource is not addressed in our codebase, we know with 100% certainty it is not in use, and it can be cleaned up.

Unused code audits are being done on a weekly schedule. Like our sec audits, robustness audits, etc.


Yeah, the more I debate the AI-lovers, the more I can empathize with the possibility that it may very well turn out that everything is an Agent. Encodable.

I'm not a doomer either, but I do think this arc is a human arc: there's going to be a lot of collateral damage. To your point, Agents with good stewardship can also implement hygiene and security practices.

It's important we surface potential counter metrics and unintended side effects. And even in doing so the unknown unknowns will get us. With that said, I like this positive stewardship framing, I'll choose to see and contribute to that, thanks!


I definitely don't identify as an AI lover. For me, year 0 of AI was February 6th, 2026, and the release of Opus 4.6.

Until that day we had roughly zero AI code in the code base (additions or subtractions). So in all reasonable terms I am a late adopter.

For code bases, AI does not concern me. We have for quite some time worked with systems that are too complex for single people to comprehend, so this is a natural extension of abstraction.

On the other hand, I am super concerned about AI and society: the impact on human well-being from "easy" AI relations over difficult human connection, the continued human alienation and relational violation (I think the "woke" discourse will go on steroids).

I think society is going to be much less tolerant. And that frightens me.


>> Works flawlessly, debt principal down.

I don't doubt it completed the initial coding work in a short time, but the fact that you've equated that with flawless execution is on the concerning-to-scary spectrum. I can only assume you're talking "compiles, runs, ship it."

The danger is not generating obvious slop, it's accepting decent and convincing outputs as complete and absolving ourselves of responsibility.


You are right, and it happens that the output looks decent.

Code idioms, or patterns if you will, are largely our solution.

We have small pattern/[pattern].md files throughout the code base where we explain how certain things should be done.

In this case, the migration was a normalization to the specific pattern specified in the pattern file for the endpoints.

Semantics were not changed and the transform was straightforward. Just not a task I would be able to justify spending time on from a business perspective.

Now, the more patterns you have, and the more your code base adheres to these patterns, the easier you can verify the code (as you recognize the patterns) and the easier you can call out faulty code.

It is easier to hear an abnormality in music than in atmospheric noise. It is the same with code.
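For readers wondering what such a file might look like, here is a hypothetical `pattern/endpoint.md`; the commenter doesn't share theirs, so every rule below is invented purely for illustration:

```markdown
# Pattern: HTTP endpoint

Every endpoint handler follows this shape:

1. Validate input with the shared schema helpers; reject early with a 400.
2. Call exactly one service-layer function; no business logic in the handler.
3. Return the shared response envelope: `{ "data": ..., "error": null }`.
4. Log at INFO on success, WARN on client errors, ERROR on server errors.

Deviations require a comment explaining why, linking back to this file.
```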


Seeing plenty of this. The quality of agentic code is a function of the quantity and quality of adversarial quality gates. I have seen no proof that an agentic system is incapable of delivering code that is as functional, performant and maintainable as code from a great team of developers, and enough anecdotes in the other direction to suggest that AI "slop" is going to be a problem that teams with great harnesses will be solving fairly soon if they haven't already.

I take your point, but then it makes me think: is there no more value in diversity?

[Philosophy disclaimer] So in a code base, diversity is probably a bad idea; OK, that makes sense. But in an agentic world, if everything is run through the Perfect Harness, then humans are intentionally just triggers? Not even that; like, what are humans even needed for? Everything can be orchestrated. I'm not against this world; this is an ideal outcome for many, and it's not my place to say whether it's inevitable.

What I'm conflicted on is whether it even "works" in terms of outcomes. Like, have we lost the plot? Why have any humans at all? One-person billion-dollar company incoming. Software aside, is the premise even valid? One person's inputs multiplied by N thousand agents -> ??? -> profit.


> Why have any humans at all

Why have humans do work at all? We could have a radically better existence. It would mean that the few at the top of the pyramid lose their privileged position relative to the rest of us, but we could, actually, have that world of abundance for all.

Work in the current sense arguably isn't even desirable

Maybe I've just read too many Culture books.


These are the right questions to ask.

> Today, I literally made a large and complex migration of all of our endpoints. Took AI 30 minutes, including all frontends using these endpoints. Works flawlessly, debt principal down.

This is either a very remarkable or a very frightening statement. You're claiming flawless execution within the same day as the change.

If you're unable to tell us which product this is, can you at least commit to report back in a month as to how well this actually went?


It is a part of the smoke testing process right now.

But we run 90% test coverage, e2e tests, etc., none of which were altered, and all of which are passing.

Migrations are generally not that high risk if you have a code base in alright shape.


Ironically, the post saying it is not slop sounds exactly like AI slop.

Too many spelling errors for that to be slop...

How would you label the y axis?

> The interesting question to me at the moment is whether we are still at the bottom of an exponential takeoff or nearing the top of a sigmoid curve.

Even using the models we have today, we have revolutionized VFX, video production, and graphic design.

Similarly, many senior software engineers are reporting 2-10x productivity increases.

These tools are some of the most useful tools of my career. I don't even think the general consumer public needs "AI" in their products. If we just create control surfaces for experts to leverage and harness the speed up and shape and control the outcomes, we're going to be in a very good spot.

These alone will have ripple effects throughout the economy and innovation. We've barely begun to tap into the benefits we have already.

We don't even need new models.


> Similarly, many senior software engineers are reporting 2-10x productivity increases.

But are they making 2-10x compensation compared to before these tools? If not, these tools are not really useful to you, they are useful to your employer. The most shocking thing I find about LLM-assisted development is how gleefully we are just handing all this value over to our employers, simultaneously believing that they are great because we're producing more. Totally bonkers!


> handing all this value over to our employers, simultaneously believing that they are great because we're producing more.

You could turn the table and say that you can now launch your own business with far fewer resources.

Who needs financial capital if you can do it all with solo / small team labor capital?

Gossip Goblin ditched his studio and now a16z is trying to throw him money, which he's turned down. He's turning everyone down.

https://www.youtube.com/watch?v=-Rzl7nUdEs4

Dude is legit talented and doesn't need studio capital anymore.

This is the end of the Hollywood nepotism pyramid, where limited production capital was available to only a handful of directors.

We're kind of at the start of a revolution here. I'd be way more worried if I were Disney or Paramount.

Couldn't you take a sabbatical and end it with a brand new SaaS you own and control? That's entirely within reach now.

The people this is going to hurt are the ICs that don't have a go-getting type personality where they take full-stack ownership: marketing, branding, design, customer relationships, etc. If you can do those things, you're going to be a rock star with total autonomy.

You ought to see what the indie game devs are doing with AI (when they aren't getting yelled at on Steam by the haters). It's legitimately incredible. Game designers are taking on full-stack ownership over the entire experience, and they're making some incredible stuff.


> If you can do those things, you're going to be a rock star with total autonomy.

What percentage of developers can do these things? 1%? 0.1%? 0.01%? A very small percentage of developers have the desire to take on the full-stack, the temperament of good entrepreneurs, the product judgment of good Product Managers and ability of good Project Managers to juggle dependencies and timeframes. What about the rest of them? The remaining 99+% of us are just handing value over to our employers and getting a 5% raise in return--if we're lucky.

So, the fact that a small percentage of rockstar developers can capture the full value of AI-assisted development reinforces the point that a small number of people/businesses are capturing that value. The vast majority of workers are not capturing any value.


So... a tiny fraction of people get to capture the value again, and at even greater environmental (and thus societal) cost than before? Wow, what a world.

"given 10-ish years at least to adapt, we probably can"

Social media would like a word...


We can adapt by shutting down social media. We don't really need that. It's been pretty bad since before the AI wave took off.

We needed a better phone book; we ended up in a world where most of our fellow citizens carry a fucking casino in their pockets.

We aren’t anywhere near AGI. They’ve consumed the entirety of human knowledge and poisoned the well, and it still can’t help but tell you to walk to the car wash.

A peasant villager was sentient without a single book, film or song. You don’t need this much data to be sentient. They’re using a stupid method, and a better one will be discovered some day.


Sentience isn't intelligence.

We are at the bottom. It's just the start.

We are in the pre-Pentium 4 era, in AI terms.


And you have evidence as basis for this very confident statement... where?

Intuition. It comes from spiritual awakening and being aware of your consciousness. Only time will prove what turns out to be right.

You worship the AI?

I see AI as having great utility, and we'll figure out ways to better it. If I had any power, I would run nuclear power plants to power AI datacenters and find other near-infinite sources of energy to create deeper and deeper AIs. This level of AI tech is in its infancy; that's evidently clear. People are assuming it will stall soon and won't go beyond a certain point. I don't believe this at all; I believe it will go much, much farther than this.

An LLM is never, ever going to find "other near infinite sources of energy". All it can do is predict the next word in an effort to make the user stop prompting it. That's all it does. It does not have the ability to find solutions to the worlds problems.

Weird comparison. The P4 was a major flop out of the gate (Rambus, anyone?) and by any good metric took three revisions (P4C, with hyperthreading) to come out where it should have, ahead of its predecessor. The Pentium 3 before it, which you are perhaps referring to, was the peak of its era. So... it's going downhill, right? Or what are you even saying?

I’m seeing these extremely short but supremely confident hot takes with nothing to back them up on HN more and more these days. It’s like X is leaking.

I think we get a "S3 clone" about once every week or two on the Golang reddit.

It strikes me as a classic case of "we need all the interested people to pull in one project, not each start their own". AI may have made this worse than ever.


I'm pretty sure I set up most of what "Simple S3" does using Apache2 and WebDAV at least fifteen years ago.

Every month there's a post of "I just want a simple S3 server" and every single one of them has a different definition of "simple". The moment any project overlaps between the use cases of two "simple S3" projects, they're no longer "simple" enough.

That's probably why hosted S3-like services will exist even if writing "simple" S3 servers is so easy. Everyone has a different opinion of what basic S3 usage is like and only the large providers/startups with business licensing can afford to set up a system that supports all of them.


It's like "Word/Excel is too bloated, I just need a simple subset!" and each simple subset is subtly different.

`rclone serve webdav` is a superpower!
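
For anyone who hasn't tried it, a minimal sketch (the path and port here are placeholders; recent rclone versions can even speak the S3 protocol directly):

```
# Serve a local directory over WebDAV on port 8080, read-only.
rclone serve webdav /path/to/files --addr :8080 --read-only

# rclone v1.65+ can also serve the same data over the S3 protocol:
rclone serve s3 /path/to/files --addr :8080 --auth-key ACCESS_KEY,SECRET_KEY
```

The same `serve` subcommand works against any configured rclone remote, not just local directories.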

Or maybe the underlying philosophy is different enough to warrant its own implementation. For example, the Filestash implementation (which I made) listed in the article is stateless and acts as a gateway that proxies everything to an underlying storage. We don't own the storage, you do, via any of the available connectors (SFTP, FTP, WebDAV, Azure, SMB, IPFS, Sharepoint, Dropbox, ...). You generate S3 keys bound to a specific backend and path, and everything gets proxied through. That's fundamentally different enough not to fit the mold of the other alternatives, which mostly assume they own your storage and as a result cannot be made stateless by design. Each approach has its pros and cons.

I think it's like NES emulators. It's not that anyone needs one more. It's just that they're fun to make.

They're certainly a rabbit hole, too.

> It strikes me as a classic case of "we need all the interested people to pull in one project, not each start their own".

And every few weeks in the cooking subreddit we get a new person talking about a new soup they made. Just think if we put all 1000 of those cooks in one kitchen with one pot, we'd end up with the best soup in the world.

Anyway, we already have "the one" project everyone can coalesce on, we have CephFS. If all the redditors actually hopped into one project, it would end up as an even more complex difficult to manage mess I believe.


S3 with tree-shaking, i.e. specify the features you need, and out comes an executable for that subset of S3 features you desire.

Or like lodash custom builds.

https://lodash.com/custom-builds


Diverse competition is the best way to identify a winning formula, which can then be perfected by a fewer number of players.

Make sure you have a run of govulncheck [1] somewhere in your stack. It works OK as a commit hook, since it runs quickly enough, but it can be put anywhere else as well, of course.

Go isn't immune to supply chain attacks, but it has built in a variety of ways of resisting them, including just generally shorter dependency chains that incorporate fewer whacky packages unless you go searching for them. I still recommend a periodic skim over go.mod files just to make sure nothing snuck in that you don't know what it is. If you go up to "Kubernetes" size projects it might be hard to know what every dependency is but for many Go projects it's quite practical to know what most of them are and get a sense they're probably dependable.
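
A minimal pre-commit hook along these lines might look like the following (a sketch, assuming the hook lives at the standard `.git/hooks/pre-commit` path and the whole module should be scanned):

```
#!/bin/sh
# .git/hooks/pre-commit
# Fail the commit if govulncheck reports a reachable known vulnerability.
# Install the tool once with:
#   go install golang.org/x/vuln/cmd/govulncheck@latest
set -e
govulncheck ./...
```

Because govulncheck does reachability analysis rather than just matching go.mod versions against a database, it stays reasonably quiet about vulnerabilities in code paths you never actually call.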

[1]: https://pkg.go.dev/golang.org/x/vuln/cmd/govulncheck - note this is official from the Go project, not just a 3rd party dependency.


I've got a couple of sweetener-free recipes I use with my soda maker, though I should warn you nobody else I've given them to likes them. But I like them well enough.

One is a couple of squirts of vanilla, a couple of squirts of lemon juice, and a bit of salt. Salt is probably an underappreciated drink ingredient for this sort of thing. It turns out it isn't in your soft drinks just to make you want to drink more. This makes something that is related to cream soda, except for the aspects of cream soda that come from being crammed full of sugar, which I can't do much about.

I also have a mix I keep around made out of 3 tablespoons salt, 1 cup vanilla, 1/2 cup lemon juice, 1/2 cup lime juice, and about 1/3rd cup almond extract. I measure it all (except the salt, which I just put in directly) into a single 2 cup Pyrex dish, just sort of eyeballing the last 1/3rd cup of almond extract, then funnel it into a bottle. I use McCormick 32 oz vanilla and almond extract for this and order bulk ReaLemon and ReaLime juice from Amazon, and mix it into one of the leftover bottles and keep it around refrigerated. 3 squirts and "whatever dribbles in" as I'm removing the bottle is what I use for one DrinkMate bottle. To taste, as all of this is, of course. If nothing else this is pretty cheap per drink.

You can also mix unsweetened electrolytes in, but you have to wait until after you dilute the mixture with water or it'll react with the lemon & lime juice. Salt you can keep in the mix but not electrolytes in general. It adds a certain body to the mix even if you're not interested in the electrolytes per se, and a single packet of them lasts a long time.

You're not going to go into business selling this stuff, but if you're already drinking unsweetened apple cider vinegar & lemon/lime juice as a beverage flavoring we might just have some compatible tastes here. Carbonation is required, though, otherwise the vanilla and the almond extract don't come through at all.


Thanks, may try this.

I'll also add the observation that while the dynamically-typed languages are all growing in the direction of the statically-typed languages, no statically-typed language (that I know of) is adding many dynamically-typed features. If anything, the static languages trend toward more static typing. That doesn't mean the optimum is necessarily "100% in the direction of static typing"; the costs of more static typing do eventually overwhelm the benefits by almost any standard, but the trend is universal and fairly clear.

I kind of think there's room for a new dynamically-typed language that is designed around being fast to execute and doesn't cost such a huge performance multiple right off the top, and starts from day 1 to be multi-thread capable, but on the whole the trend is clearly in the direction of static typing.


> I kind of think there's room for a new dynamically-typed language that is designed around being fast to execute and doesn't cost such a huge performance multiple right off the top, and starts from day 1 to be multi-thread capable, but on the whole the trend is clearly in the direction of static typing.

Other than the "new" qualifier, Lisp supports all of that - SBCL compiles to native code, ecl/gcl compile to C (IIRC), etc.

