Anyone else tired of people policing how others speak? I think society as a whole is pretty burnt out on this, and it leads to some pretty bad second-order effects.
Perhaps some would rather “fail” than have everyone talk like a politician or a corporate PR robot. Perhaps talking authentically and genuinely curates an audience worth having.
I see no need to dunk like that. There are ample stories on HN over the years of software orgs that don't use FM shipping bugs, wasting money, and treating people poorly, all leading to projects being canceled before delivering anything to customers. And those are just a few of the issues. Software development in corporations faces many challenges to ROI and customer satisfaction.
I might also point out that FM has a nice history of value-add in HW. And we know HW is generally higher quality than software.
seL4, a number of mathematical theorems, a bunch of cryptography. You've likely trusted your life to CompCert. It's not nothing, but it's admittedly a bit limited.
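For a concrete sense of what “machine-checked” means, here's a minimal Lean 4 sketch (it just restates the library lemma Nat.add_comm; the name add_comm' is my own):

    -- A machine-checked proof that addition on naturals commutes.
    -- The Lean kernel verifies every inference step; no tests,
    -- no trust in the author's reasoning.
    theorem add_comm' (m n : Nat) : m + n = n + m :=
      Nat.add_comm m n

seL4 and CompCert apply the same idea at the scale of a microkernel and a C compiler, which is why the result is stronger than any amount of testing.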
Formal methods are the hardest thing in programming, second only to naming things and off-by-one errors.
Leslie Lamport built LaTeX, and most distributed systems, such as AWS services, depend on formal verification. The job of science here is to help engineering manage complexity and scale. The researchers are doing their jobs.
What does LaTeX have to do with TLA+? Also, I think “most distributed systems, such as AWS” might be an exaggeration. The publicly known examples of formal verification at AWS are scarce; the best-known is the 2015 CACM paper on using TLA+ for S3 and DynamoDB.
Yup, current LLMs are trained on the best and the worst we can offer. I think there's value in training smaller models on strictly curated datasets, to guarantee they've learned from trustworthy sources.
> to guarantee they've learned from trustworthy sources.
I don't see how this will ever work. Even in hard science there's debate over what content is trustworthy and what is not. Imagine trying to declare your source of training material on religion, philosophy, or politics “trustworthy”.
"Sir, I want an LLM to design architecture, not to debate philosophy."
But really, you leave the curation to real humans: institutions with ethical procedures already in place. I don't want Google or Elon dictating what truth is, but I wouldn't mind if NASA or other aerospace institutions dictated what is true in that space.
Of course, the dataset should come with a list of every document/source used, so others can audit it. I know, unthinkable in this corporate world, but one can dream.
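A minimal sketch of what that audit list could look like, assuming the corpus is plain text files on disk (build_manifest and the corpus/ path are made-up names):

    # Training-data manifest: one entry per document with a content
    # hash, so a third party can check the published corpus against
    # what the model was supposedly trained on.
    import hashlib
    import json
    from pathlib import Path

    def build_manifest(data_dir: str) -> list[dict]:
        manifest = []
        for path in sorted(Path(data_dir).rglob("*.txt")):
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            manifest.append({"source": str(path), "sha256": digest})
        return manifest

    if __name__ == "__main__":
        print(json.dumps(build_manifest("corpus/"), indent=2))

Publishing the manifest (not necessarily the documents themselves) would at least let outsiders verify that two models claiming the same curated dataset actually saw the same bytes.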
This reads like the dogmatic view of someone who hasn't worked on a project with a million-plus lines of code, where something is always going wrong and crashing the entire program whenever that happens is simply unacceptable.
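To make the alternative concrete, a minimal sketch (the job loop and failure condition are invented): recover at a boundary and keep the process alive instead of letting one error take everything down.

    import logging

    def process_job(job_id: int) -> None:
        # Stand-in for real work that occasionally hits bad input.
        if job_id % 7 == 0:
            raise ValueError(f"job {job_id} hit a bad record")

    # Error boundary: one bad job is logged and skipped; the rest
    # of the long-running program keeps going.
    for job_id in range(1, 15):
        try:
            process_job(job_id)
        except ValueError:
            logging.exception("job %d failed; continuing", job_id)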
How does a clearly mentally ill and suicidal person deciding to take their own life make the LLM responsible? That's silly. I clicked through a few of the links, and the LLM was trying to convince the person not to kill themselves.