adidoit's comments | Hacker News

The ones that stood out were:

- Nostalgia for the Absolute by George Steiner
- Digital Reversal by Andrew Mir
- Vita Contemplativa by Byung-Chul Han

Ironically, this article/blog itself gives off an AI-generated smell: its tone and cadence seem very similar to LinkedIn posts, or rather to the output of prompts asking for LinkedIn posts.

With studies like these, it's important to keep selection effects in mind.

Most of the high-volume enterprise use cases go through their cloud providers (e.g., Azure).

What we have here is mostly from smaller players. Good data, but obviously a subset of the inference universe.


This is fantastic. I'm reminded of the Samo Burja thesis that civilization is actually a lot older than we think, and that ancient civilizations, including those of the Bronze Age, were much more advanced than we think.

With better imaging, tooling, and archaeological funding, I'm sure we'll find much more evidence like this.

So many countries' Bronze Age and ancient periods are underexplored.


> ancient civilizations including the Bronze Age were much more advanced than we think.

I think part of the reason people tend to underestimate ancient civilizations is because there is only so much preserved, especially because so much of their culture and knowledge was passed on orally, rather than documented in writings. Even if we come up with more archaeological findings or new technology to analyze it, there’s a limit to how much we can know.

But another culprit in this underestimation is supremacist thinking. For example, there is a tendency to elevate the Abrahamic religions (Judaism, Christianity, and Islam) above others. Older cultures and religions are often described with pejoratives like "pagan". In many countries, the history that is "worth studying" is seen as only starting a couple thousand years ago. Another aspect is racial supremacist thinking; I think this is still widespread, even though progress has been made on the issue of race. For example, textbooks and classes tend not to spend much time acknowledging the mathematical and scientific discoveries of the ancient world.

I hope it improves but I also think there are serious social/tribal problems today that will prevent people from exploring all this with genuine curiosity.


> that will prevent people from exploring all this with genuine curiosity.

No one is reposting on HN findings that confirm exactly what archeologists already knew.

Every archeologist wants to be the one that has the dig that revolutionizes the whole field.

The idea that historians and archeologists aren't curious about the subjects they're dedicating most of their lives to simply doesn't match what we know about human beings.

The reason we think what we do (with adjustments for normal human error) is that that's the evidence we have.

None of the evidence is secret. If some evidence were being misinterpreted due to Abrahamic biases, there are as many if not more archeologists and historians from non-Abrahamic countries, like China, India, and most African nations, who have access to the same evidence and could write a paper today about how it is being misinterpreted.


I see the same thinking in philosophy. We know a lot about the great thinkers of the West, from Plato to Aristotle, to Jesus, to Thomas Aquinas, to Descartes, to Kant, to Hegel, to Nietzsche, to Heidegger, to Foucault, and so on... It's one Western-European lineage. And many of the Western philosophers were indeed supremacists: they saw Western philosophy as the pinnacle of human thought, the most advanced way of reasoning and understanding. This mindset obviously got them trapped.

But there is much to learn from other philosophies. China is the world's oldest continuous civilization; surely there were some great thinkers besides Confucius. The same goes for India. Last week I attended a lecture about the Upanishads, and so much of the wisdom in there can be mapped, more or less specifically, to wisdom from Western philosophy. There is an interesting field of study emerging, comparative philosophy, with the aim of bringing it all together. (See for instance https://studiegids.universiteitleiden.nl/courses/133662/comp...)


At-bats have always been the name of the game. That's why, if you're going to do something risky, make sure you build the safety net.


Yeah, I think all the concerns about ARPU and the ROI from AI are unjustified given the size of the opportunity, if executed well. LLMs capture high-intent usage and significant memory about their users, and that usage is exploding.

Getting $200 subscriptions from a small number of whales, $20 subscriptions from the average white-collar worker, and then supporting everyone else through advertising seems like a solid revenue strategy.


Fascinating that the state of the art in building agentic harnesses for long-running agent workflows is to ... "use strongly worded instructions".

Anthropomorphizing LLMs is obviously flawed but remains the best way to actually build good agents.

I do think this is one thing that will hold enterprise adoption back: can you really trust systems like these in production when the best control you can offer is pleading with them not to do something?

Of course, good engineering will build deterministic verification and scaffolds in to prevent issues, but it is a fundamental limitation of LLMs.
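To illustrate the distinction, here is a minimal sketch of such a deterministic scaffold. Everything here is hypothetical: the action-dict shape and the `guard` function are invented for illustration, standing in for whatever format a real agent harness uses for proposed tool calls. The point is that an allowlist check is enforced in code, not in the prompt.

```python
# Hypothetical deterministic guardrail around an agent's proposed tool calls.
# A strongly worded instruction can still be ignored by the model; this check cannot.

ALLOWED_ACTIONS = {"read_file", "list_dir"}  # explicit allowlist


def guard(action: dict) -> dict:
    """Reject any proposed action not on the allowlist, before execution."""
    name = action.get("name")
    if name not in ALLOWED_ACTIONS:
        raise PermissionError(f"blocked action: {name!r}")
    return action


# An allowed action passes through unchanged.
safe = guard({"name": "read_file", "path": "notes.txt"})
print(safe["name"])  # read_file

# A disallowed action is rejected deterministically, no matter what the model "intended".
try:
    guard({"name": "delete_all", "path": "/"})
except PermissionError as e:
    print(e)
```

Real harnesses layer more on top (schema validation, sandboxing, human approval for destructive actions), but the principle is the same: the boundary is enforced outside the model.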


This sounds like one of the "Ironies of Automation" that Lisanne Bainbridge pointed out decades ago.

The more prevalent automation is, the worse humans do when that automation is taken away. This will be true for learning now.

Ultimately the education system is stuck in a bind. Companies want AI-native workers, students want to work with AI, parents want their kids to be employable. Even if the system wants to ensure that students are taught how to learn and not just a specific curriculum, their stakeholders have to be on board.

I think we're shifting to a world where not only will elite status markers, like having worked at McKinsey or Google, be more valuable, but interview processes will also be significantly lengthened, because companies will be doing assessments themselves and not trusting credentials from an education system that's suffering from grade inflation and automation.


> companies will be doing assessments themselves and not trusting credentials from an education system that's suffering from grade inflation and automation

Speak for yourself, but that's how many companies have already been operating for decades at this point.


Perhaps the credentials will change as these academic institutions become dinosaurs, and other kinds of institutions will arise that give better markers of ability.


I don't know what AI-native folks will look like. To me, it looks like replacing skilled labor with unskilled labor, as opposed to giving humans new skills.

AI to me will be valuable when it's helping humans learn and think more strategically: when it's actually teaching humans, or helping them spot contradictions and vet information for reliability. Fact-checking is extremely labor-intensive work, after all.

Or otherwise if AI is so good, just replace humans.

Right now, the most legible use of AI is AI slop and misinformation.


The skilled AI users are the people that use it to help them learn and think problems through in more detail.

Unskilled AI users are people who use AI to do their thinking for them, rather than using it as a work partner. This is how people end up producing bad work: they fundamentally don't understand the work themselves.


This is the right attitude.

GenAI isn't a thinking machine, as much as it might pretend to be. It's a theatre kid who's really motivated to help you and has memorized the Internet.

Work with them. Let them fill in your ideas with extra information, sure, but they have to be your ideas. And you're going to have to put some work into it: the "hallucinations" are just your intent, incompletely specified.

They're going to give you the structure; that's high-probability fruit. It's the guts that have to be fully formed in the context before the generative phase can start. You can't just ask for a business plan and then get upset when the one it gives you is full of nonsense.

Ever heard the phrase "ask a silly question, get a silly answer"?


Why does this feel AI-generated? Apologies if it's not, but the style and layout overlap with typical AI output.


I wrote it in Obsidian in a note-taking style, so that's why.


The biggest challenge with LinkedIn is that the primacy of your link to your employer enforces a self-censorship that turns into corporate speak.

The Overton window on LinkedIn is actually quite small, and because everyone there is really an employee rather than an employer, you get essentially slop that has been easy to train on and is therefore easily generated by AI. It's all low-perplexity takes.

There's mostly no room for nuance because of the performative takes. Unlike a forum like Hacker News where your identity is almost totally abstracted away, every LinkedIn post is a move in the status game of career visibility.


If you take a stand on tabs vs. spaces (spaces, obviously) or emacs vs. vi (emacs, obviously), you're blacklisted from half the jobs out there, but they were terrible tabs-and-vi jobs anyway.

