Hacker News | thmpp's comments

While 'this analysis would not have been possible without an LLM', I am not sure the LLM analysis was well reviewed after it was done. From the obscure/familiar word list, some of the n-grams, e.g. "is resource", "seq size", "db xref", surely occur in the wild (as we well know), but I doubt we can argue they are words missing from the dictionary. Knowing the domain, I would argue none of them are words, not even collocations. If "is resource" is, why not "has resource"? So while the approach is surely interesting, this analysis lacks the scrutiny you would expect of a high-level LLM analysis.

The very bottom of the slider is there to illustrate where LLM artifacts and Wiktionary noise live — it's not presented as legitimate vocabulary. The slider lets you see the full quality gradient, including where it breaks down.

That's not really mentioned in the article, though. As far as the article is concerned, the right side of that slider is valid-but-possibly-too-rare-to-be-interesting, when in fact it's just garbage. This does not sell the concept well.

You were right — it is now. Thanks

The challenges mentioned in the article are known to every larger distributed system. They are not challenges solved, or to be solved, by functional programming.

But the solutions and tools functional programming provides help achieve higher verifiability, fewer side effects, and more compile-time checks within a deployment unit. That other challenges exist is fully correct. But without a functional approach, you have those challenges plus all the others.

So while FP is not a one-size-fits-all solution to distributed systems challenges, it does help solve a subset of the challenges on a single system level.


> They are not challenges solved or to be solved by functional programming.

These challenges can be solved with the usual tools of FP, but this requires each version of the system to be explicitly aware of the data schema variants used by all earlier versions that are still in use. Then it's a matter of interpreting earlier-versioned data correctly, and of translating data to earlier schema variants whenever it may be read by earlier versions of the code.

(It may also be helpful to introduce minimally revised releases of the earlier code that simply add some forward compatibility for dealing with later-released schemas, while broadly keeping all other behavior the same to avoid unwanted breakage. These approaches are not too hard to implement.)
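To make that concrete, here is a minimal Haskell sketch of the approach, with every schema version a constructor of one sum type so the compiler forces consumers to handle every version still in the wild. The `UserV1`/`UserV2` records and the name-splitting rule are invented for illustration, not taken from the article:

```haskell
-- Hypothetical wire format with two revisions:
-- V1 stored one name field; V2 splits it into first/last.
data UserV1 = UserV1 { nameV1 :: String }
  deriving (Show, Eq)

data UserV2 = UserV2 { firstName :: String, lastName :: String }
  deriving (Show, Eq)

-- The sum of all schema versions still deployed.
data User = V1 UserV1 | V2 UserV2
  deriving (Show, Eq)

-- Interpret any earlier-versioned datum in terms of the latest schema.
upgrade :: User -> UserV2
upgrade u = case u of
  V1 (UserV1 n) -> case words n of
    (f:rest) -> UserV2 f (unwords rest)
    []       -> UserV2 "" ""
  V2 v -> v

-- Translate back for consumers still running V1 code.
downgrade :: UserV2 -> UserV1
downgrade (UserV2 f l) = UserV1 (unwords (filter (not . null) [f, l]))

main :: IO ()
main = do
  print (upgrade (V1 (UserV1 "Ada Lovelace")))
  print (downgrade (UserV2 "Ada" "Lovelace"))
```

The payoff is the exhaustiveness check: adding a `V3` constructor makes every non-total `case` a compile-time warning, which is exactly the "explicitly aware of all earlier versions" property described above.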


This. The thesis statement is so confused that I cannot tell whether it's deliberate clickbait or the author is legitimately that confused about the distinction between program-level language features and structure on the one hand, and systems design on the other: two closely related but really distinct things.


Literally the second section:

> Here is the central claim: the unit of correctness in production is not the program. It is the set of deployments.

The thesis essentially boils down to: the functional programming paradigm, type systems, strong interfaces, etc., are all fantastic tools for ensuring the correctness of a program, but the system is not a program, and so these tools are necessary but not sufficient to ensure the correctness of a distributed application.


This makes no sense at all really.


The distinction is the point and the subject matter of the article.


It’s convoluted because it’s slop. You can’t even be sure the text came out as intended. It is full of GPT-style markers that tell me the author wasn’t careful enough in review. Jotting bullets in markdown and asking for a full article is not good enough for publication. It’s today’s version of let me google that for you. No one publishes half-assed markdown notes, for a reason. Asking an LLM to finish them doesn’t cut it.


None of that is remotely true.

If as a developer you want to be seen as someone advancing and taking ownership and responsibility, testing must be part of the process. Shipping an untested product, or one that you as a software engineer do not monitor, essentially means you can never be sure you created a correct product. That is not engineering. If the org guidelines prevent it, something cultural is preventing it.

Adding external QA, which tests the software regularly using different approaches, finding intersections, etc., is a different topic. Both are necessary.


The problem in big companies is that, as a developer, you are usually several layers of people removed from those actually using the product. Yes, you can take ownership and add unit, integration, and e2e tests to your pipeline to ensure the product works exactly as you intended. But that doesn't mean it works as management, marketing, or the actual user intended.


AWS engineers are trained to use their internal services for each new system. They seem to like using DynamoDB. Dependencies like this should be made transparent.


Ex-employee here who built an AWS service. Dynamo is basically mandated. You need something like VP approval to use a relational database, because of scaling problems they ran into historically. That sucked for us: we really needed a relational database and had to bend over backwards to use Dynamo, with all the nonsense that comes with not having SQL. And it was a super-low-traffic service, too.


Not "like using": they are mandated from the top to use DynamoDB for any storage. At my org in the retail page, you needed director approval if you wanted to use a relational DB for a production service.


Not sure why this is downvoted - this is absolutely correct.

A lot of AWS services depend on others under the hood, and us-east-1 in particular is often used for things that require strong consistency, like AWS console logins (where you absolutely don't want a changed password or revoked session to remain valid in other regions because of eventual consistency).
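The hazard with eventual consistency here can be shown with a toy model: a session store revoked in a primary region, with an asynchronously replicated secondary that has not yet caught up. The session id and the two-region setup are purely illustrative, not how AWS actually implements this:

```haskell
import qualified Data.Map as Map

-- session id -> is the session still valid?
type Store = Map.Map String Bool

-- Revoke a session in whichever store we can reach.
revoke :: String -> Store -> Store
revoke sid = Map.insert sid False

main :: IO ()
main = do
  let primary0   = Map.fromList [("sess-42", True)]
      secondary0 = primary0              -- replica, in sync so far
      primary1   = revoke "sess-42" primary0
  -- Replication is asynchronous: until it catches up, the secondary
  -- still answers with the stale pre-revocation value.
  print (Map.lookup "sess-42" secondary0)  -- Just True  (stale!)
  print (Map.lookup "sess-42" primary1)    -- Just False
```

A strongly consistent read (e.g. always consulting the primary for auth decisions) closes exactly this window, which is the trade-off the comment above describes for us-east-1.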


> Dependencies like this should be made transparent

even internally, Amazon's dependency graph became visually+logically incomprehensible a long time ago


The best part of the story actually comes at the end: the back-and-forth messaging with Slack about the bug report.


Definitely. I'm disappointed with Slack's responses. We did a trial and had some correspondence with their support team, which has been excellent to date, so I assumed they were above this kind of Silicon Valley elitism. I'm glad to see this kind of public disclosure. We have been a customer since that initial trial; we stopped using HipChat.


To be fair, most of the bad correspondence was from 2014. Their new representative 'Leigh' appears to be doing excellent work. Also, we're still happy users of Slack; I would just never trust them with secrets :-).

