Hacker News | rtrgrd's comments

Hyrum's law at its finest :D (or D: if you deeply care about correctness)


Very cool. Note that lowercase b, l, and h are the same.


I suspect that for self-hosted LLMs, quality >>> performance, so newer releases will always expand to fill the capacity of the available hardware even as efficiency improves.


All the hedge funds sniping orders right now lol


Low latency starlink orders on hold


Might be the hug of death, but the load times are horrifically slow.


For me, Nextcloud has always been worryingly slow, even on my own instance.


Is the markdown rendered once on the server and stored as HTML? If so, why is it slow? Or is it rendered per request, or in the client?


The blog mentions checking each agent action (say, a planned malicious HTTP request) against the user prompt for coherence. The attack vector still exists, but this should make the trivial versions of instruction injection harder.
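The gating idea described above might look something like this sketch. Everything here is hypothetical: `gate_action` and the `llm_judge` callback (in practice a call to a judge model) are assumed names, not anything from the blog post.

```python
from typing import Callable

def gate_action(user_prompt: str, action: dict,
                llm_judge: Callable[[str], bool]) -> bool:
    # Ask a judge whether the proposed tool call is coherent with
    # what the user actually requested; block it otherwise.
    question = (
        f"User request: {user_prompt!r}\n"
        f"Proposed agent action: {action!r}\n"
        "Is this action plausibly in service of the request? yes/no"
    )
    return llm_judge(question)

# Usage: an injected instruction makes the agent plan an HTTP request
# to an unexpected host; the judge (stubbed here) rejects it.
action = {"tool": "http_request", "url": "https://evil.example/exfil"}
allowed = gate_action("summarise this article", action,
                      llm_judge=lambda q: "evil.example" not in q)
```

A stub judge is obviously weak; the point is only that the check runs per action, before execution, with the original user prompt as the reference.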


We all love non-ASCII code (cough, emoji variable names).


I thought human preference was typically considered a noisy reward signal.


If it were just "noisy", you could compensate with scale. It's worse than that.

"Human preference" is incredibly fucking entangled, and we have no way to disentangle it and get rid of all the unwanted confounders. A lot of the recent "extreme LLM sycophancy" cases are downstream from that.


I've never used Kagi before and wanted to try: how does Kagi stack up against Brave Search?


Kagi draws on Brave Search among other sources before returning results, so it should be a superset in quality.
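Blending several backend rankings like this is a classic meta-search problem. Kagi's actual blending logic isn't public; one common, simple technique is Reciprocal Rank Fusion (RRF), sketched below with made-up example URLs.

```python
from collections import defaultdict

def rrf_merge(rankings: list[list[str]], k: int = 60) -> list[str]:
    # Reciprocal Rank Fusion: each result earns 1/(k + rank) from every
    # source list it appears in; sort by the summed score. k dampens the
    # influence of top ranks (60 is the conventional default).
    scores: dict[str, float] = defaultdict(float)
    for ranking in rankings:
        for rank, url in enumerate(ranking, start=1):
            scores[url] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Usage: fuse two hypothetical backend rankings.
brave = ["a.com", "b.com", "c.com"]
other = ["b.com", "d.com", "a.com"]
merged = rrf_merge([brave, other])
```

Results that rank well in multiple sources bubble up, which is the intuition behind a fused result set being at least as good as any single source.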


I assume the high volume of search traffic forces Google to use a low-quality model for AI Overviews. Frontier Google models (e.g. Gemini 2.5 Pro) are on par with, if not better than, leading models from other companies.


