
Anyone else tired of other people policing how we speak? I think society as a whole is pretty burnt out on this, and it leads to some pretty bad second-order effects.

In this case it's not about offense or whatever, but about effective communication. It seems focused on reviewing PRs.

Policing how to speak? What are you talking about? You are free to completely ignore the advice. No one is going to audit you on it and fail you.

Perhaps some would prefer to “fail” rather than have everyone talk like a politician or a corporate PR robot. Perhaps talking authentically and genuinely curates an audience worth having.

Whenever you look closely at what these proof nerds have actually built, you typically find… nothing. No offense to them; it's simply reality.


I see no need to dunk like that. There are ample stories over the years on HN about how software orgs that don't use FM ship bugs, waste money, and treat people poorly, all leading to projects being canceled before delivering anything to customers. And that's only a few of the issues. Corporate software development faces many challenges to ROI and customer satisfaction.

I might also point out that FM has a nice history of value-add in HW. And we know HW is higher quality than software.


seL4, a number of mathematical theorems, a bunch of cryptography. You've likely trusted your life to CompCert. It's not nothing, but it's admittedly a bit limited.

Formal methods are the hardest thing in programming, second only to naming things and off-by-one errors.


Leslie Lamport built LaTeX, and most of distributed systems, such as AWS services, depend on formal verification. The job of science here is to help engineering manage complexity and scale. The researchers are doing their jobs.


What does LaTeX have to do with TLA+? Also, I think "most of distributed systems such as AWS" might be an exaggeration. At least, the publicly known examples of formal verification at AWS are scarce.


I think the implication is that Lamport is a proof nerd, not that LaTeX has a direct relationship to proof software.


AWS talks about it a fair amount, although rarely in much detail.


> is the consensus of many human experts as encoded in its embedding

That’s not true.


Yup, current LLMs are trained on the best and the worst we can offer. I think there's value in training smaller models with strictly curated datasets, to guarantee they've learned from trustworthy sources.


> to guarantee they've learned from trustworthy sources.

I don't see how this will ever work. Even in hard science there's debate over what content is trustworthy and what isn't. Imagine trying to declare your source of training material on religion, philosophy, or politics "trustworthy".


"Sir, I want an LLM to design architecture, not to debate philosophy."

But really, you leave the curation to real humans, to institutions with ethical procedures already in place. I don't want Google or Elon dictating what truth is, but I wouldn't mind if NASA or other aerospace institutions dictated what is true in that space.

Of course, the dataset should have a list of every document/source used, so others can audit it. I know, unthinkable in this corporate world, but one can dream.
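
For what it's worth, here is a rough TypeScript sketch of what one entry in such an audit manifest could look like; every field name and value is hypothetical, not from any existing dataset format:

```typescript
// Hypothetical record for one document in a curated training set.
// Field names and example values are made up for illustration.
interface SourceRecord {
  url: string;         // where the document was retrieved from
  sha256: string;      // content hash so auditors can verify the exact bytes used
  license: string;     // usage terms, e.g. "public-domain" or "CC-BY-4.0"
  retrievedAt: string; // ISO 8601 timestamp of the snapshot
  curator: string;     // institution that vetted the source
}

// The published dataset would ship the full list of these records,
// so anyone can re-fetch, re-hash, and audit what the model learned from.
const manifest: SourceRecord[] = [
  {
    url: "https://example.org/aerodynamics-report.pdf", // placeholder URL
    sha256: "<hash of the retrieved file>",
    license: "public-domain",
    retrievedAt: "2024-01-15T00:00:00Z",
    curator: "NASA", // per the comment above: an institution with procedures in place
  },
];
```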


The Bun team works hard, glad to see it pay off.


This reads like the dogmatic view of someone who hasn't worked on a project with a million-plus lines of code, where something is always going wrong and crashing the entire program whenever that happens is simply unacceptable.


> something is always going wrong

I hate this sentence with a passion, yet it is so, so true. Especially in distributed systems; gotta live with it.
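
A minimal TypeScript sketch of the kind of per-task isolation being argued for here; the task shape and names are hypothetical, not from any particular codebase:

```typescript
// Run independent units of work so that one failure is reported and contained
// instead of crashing the entire program. Names are hypothetical.
type Task = { id: string; run: () => Promise<void> };

async function runAll(tasks: Task[]): Promise<void> {
  const results = await Promise.allSettled(tasks.map((t) => t.run()));
  results.forEach((result, i) => {
    if (result.status === "rejected") {
      // Something is always going wrong somewhere; log it and keep serving
      // the work that is still healthy.
      console.error(`task ${tasks[i].id} failed:`, result.reason);
    }
  });
}

// Usage: the failing task is logged, the healthy one still completes.
runAll([
  { id: "healthy", run: async () => { /* succeeds */ } },
  { id: "flaky", run: async () => { throw new Error("dependency timed out"); } },
]);
```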


That doesn’t assign it to the shorthand local variable.


It could return the given value if it doesn't throw, though, which would make using it with assignment trivial.
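
As a rough TypeScript sketch of that idea, here's a hypothetical `ensure` helper (not whatever API the thread is discussing) that throws on failure and otherwise returns its argument, so the check and the assignment to the shorthand local collapse into one expression:

```typescript
// Hypothetical helper: throws if the value is missing, otherwise returns it.
// Returning the checked value is what makes assignment trivial.
function ensure<T>(value: T | null | undefined, message = "value required"): T {
  if (value === null || value === undefined) {
    throw new Error(message);
  }
  return value;
}

// Usage: the local variable is assigned directly from the check.
declare function lookupUser(id: string): { name: string } | undefined;
const user = ensure(lookupUser("42"), "user not found");
console.log(user.name); // `user` is non-undefined here
```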


C.S. Lewis is rolling in his grave rn.


How does a clearly mentally ill and suicidal person deciding to take their own life make the LLM responsible? That's silly. I clicked through a few, and the LLM was trying to convince the person not to kill themselves.


I have no doubt this project was established after the creator had already made up their mind about LLMs and artificial intelligence.


Also, the background suicide rate is not zero. Is this a higher or lower rate?


That's not craftsmanship, in the same way that being snobby because a restaurant didn't garnish a meal isn't craftsmanship.

And working at uptight places that focus more on the garnish than the actual meal sucks.


I hope you never have a bad string of luck and end up homeless, pal, because it can happen to anyone.


What's the link with the article or my comment? Does listening to adults' advice magically keep you from ending up broke? It's the opposite.

