I posted Bernie's "Conversation with Claude" a while back, and it was just about immediately taken down.
Let's face it: Y Combinator is mostly AI startups for the next few years, and any anti-AI sentiment is going to hurt the bottom line.
That being said, I disagree with Sanders on a number of points. He wants to stop data center construction; I can't think of a more Luddite, un-nuanced solution to the "problem."
The real AI danger is not the threat to white collar jobs (which will simply have to evolve), but something we will see roughly 18 months from now, when Joe Schmo asks Claude Giga Max Supreme 8.0 to help him reduce his taxes and it hacks into the IRS and deletes everyone's records.
Have you seen how overwhelmingly anti-AI the non-software-engineering world is (despite the hypocrisy, as plenty use a chatbot these days)? The resistance in here is pitiful.
I'm lurking in the indie game dev scene, and any mention of using LLMs for anything is downvoted and laughed at.
I'm the one who posted this link, and I think Bernie Sanders is a terrible man. But an op-ed by a U.S. Senator in the WSJ about a tech issue seems like proper HN material. It's been un-flagged by the mods.
In Canada, demands are not actually demands. That way if the demand is avoided, there never was a demand to begin with; however, if it was fulfilled, then of course there was always a demand all along.
It's extremely prevalent in Canada as well; almost certainly even more so. It's really a North American thing.
I expect copious downvotes with no actual replies. Then the comment will be flagged by the bot armies so the administration here can preserve its dearly held national identity of being "diverse" while never having elected an ethnic minority PM.
I would expect more downvotes for your needless "I'm going to get downvoted because the sheep hate when wolves tell the truth!" persecution complex. Your comment was at least mildly interesting before you got to that.
I loved this game playing on an Arch Thinkpad in university with budget graphics capability.
The best part is being able to pin locations on the map for your teammates, so we were able to plot the adventures and battlegrounds of a goated unit by naming the pins "Ronant's Triumph," "Ronant's Revenge," "Ronant's Folly," and ultimately "Ronant's Last Stand." Great times with a few beers and the lads.
RIP Ronant, Wesnoth will never see another hero of your like again.
It's because people have discovered that (1) motte and bailey fallacies [0] and equivocation of language (between, e.g. identity and diagnosis) are highly effective rhetorical tools; (2) merely identifying as something ontologically changes the metaphysical structure of reality [1] which confers certain societal benefits.
There is a matrix of is-diagnosed, is-not-diagnosed, identifies-as, does-not-identify-as, which is open to exploitation by those who "identify as" something they are not diagnosed with. Who gets fucked? The people who are diagnosed but do not identify as their diagnosis.
God help me once we start adding another dimension of people who have a condition, but are also not diagnosed and do not identify with it either...
I have terrible news for you. Linguistics is descriptive, not prescriptive. We will torment you with word game playing until such time as you loosen up.
Only two or three weeks from incepting the idea of a token-efficient LLM English dialect to seeing it in practice. I just never imagined it would take... this particular form.
I've had the thought that English is an efficiency barrier for a while now. Surely there are more information-dense representations of semantic concepts.
Some languages, for example, have single characters that represent entire ideas or phrases.
Marx begs to differ. By the labor theory of value, Sisyphus should be the wealthiest man on Earth; unless, of course, you smuggle all the complexity and paradoxes of this theory into some ill-defined notion of "socially useful labour" (how does one measure or quantify utility?)...
Metamath is fascinating to me in that it is the most "math-like" of the proof systems, in being both readable and executable on pen and paper through simple substitution. I've spent a month or two formalizing basic results in it and found it quite fun; unfortunately, the proof assistant and surrounding tooling are archaic, to put it generously. Still, the fact that the system still works, and that the proof tree is grounded in results from 1994 that stand to this day without modification, is a testament to its design.
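For anyone who hasn't seen it, here's roughly what "substitution all the way down" looks like — a minimal Metamath sketch in the style of set.mm (simplified; the names here, `wph`, `wi`, `ax-mp`, follow set.mm's conventions, but this is a toy fragment, not the real database):

```
$( Declare constants and variables. $)
$c wff |- ( ) -> $.
$v ph ps $.

$( Variables range over wffs. $)
wph $f wff ph $.
wps $f wff ps $.

$( If ph and ps are wffs, so is ( ph -> ps ). $)
wi $a wff ( ph -> ps ) $.

$( Modus ponens: from ph and ( ph -> ps ), infer ps. $)
${
  min $e |- ph $.
  maj $e |- ( ph -> ps ) $.
  ax-mp $a |- ps $.
$}
```

Every proof step in the system is just a substitution instance of statements like these — which is exactly why you can verify one by hand with a pencil.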
Most people seem to be rallying around Lean these days, which is powerful and quite featureful, but with tactics, metaprogramming feels more like writing C++ templates than the "assembly language of proof" I liken Metamath to, for its down-to-the-metal atomization of proof into very explicit steps. Different (levels of) abstractions for different folks.
Once I return to a proper desktop I will probably woodshed myself into Lean for a week or two to get a better handle on it, but for now tactics feel like utter magic when not just chaining `calc` everywhere.
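For context, the `calc` chaining I mean is the one place in Lean that still feels like explicit, step-by-step proof — a trivial Lean 4 sketch:

```lean
-- Transitivity of equality spelled out step by step,
-- no tactic machinery involved: each := names the exact justification.
example (a b c : Nat) (h1 : a = b) (h2 : b = c) : a = c :=
  calc a = b := h1
       _ = c := h2
```

Compare that with a tactic proof like `simp` or `omega` closing the goal in one opaque line — great for productivity, but you lose the Metamath-style view of every atomic step.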
I feel the same. When I first heard about Metamath I was blown away at how I could drill down to the base axioms (I had only tried Lean before). Lean also feels too magical for my taste, and I dislike that I don't have a good mental model of its execution under the hood. I care a lot about execution speed as well, and Lean... isn't always fast. It's another reason Metamath's design really speaks to me.
You might find Metamath Zero interesting; its kernel design has a similar focus on simplicity while cleaning up a lot of Metamath's cruft: https://github.com/digama0/mm0
EDIT: and feel free to ask any questions about mm0; I don't know a ton about it yet, but I have researched it a good deal. I'm hoping to use it more this fall when I take a class on first order logic and set theory!
> Pandas is clearly quite ergonomic for various exploratory interactive analyses, but the API is, imo, awful.
Having previously inherited (and since offloaded) a hopelessly entangled pile of Python, pandas, and SQL hacks, reminiscent of a spreadsheet rammed with inscrutable Excel formulae, I have no idea how data scientists collaborate on anything with this technology. It's like when bioinformatics was full of write-only Perl code that was maybe executed successfully once for the purposes of a study or paper, then kept around for future archaeologists to hopefully one day resuscitate should the need arise again.
If programmers are expected to just throw garbage like this at the next asshole with the misfortune to have to maintain code that was never designed to be maintained, it's not a surprise that the industry is once again moving towards write-only code, this time produced at scale by LLMs.
It's like we're back to Visual Studio Ultimate slopping out 10k lines of XAML in response to your dragging and dropping in the WYSIWYG. There is a reason nobody does this any more.