cyanf's comments | Hacker News

Why would you want to ssh into a machine that's not yours? That's a violation of the Computer Fraud and Abuse Act, up to 10 years in prison!


I think you're joking, but to clarify -- not personally yours. A misbehaving worker box, an app server in the staging environment, etc. A resource owned by the organization for which you work, where it would not be appropriate for you to customize it to your own liking.


When you have permission to do so, it isn’t.


Tragedy of the aggregate.


There are existing solutions for queues in Postgres, notably pgmq.
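For anyone who hasn't tried it, this is roughly what the pgmq flow looks like from Python via psycopg, assuming the pgmq extension is installed on the server; the DSN, queue name, and payload below are made up for illustration, not anything from this thread:

    # Minimal sketch of a pgmq-backed queue driven from Python with psycopg.
    # The DSN, queue name ("jobs"), and payload are illustrative assumptions.
    import json

    import psycopg

    with psycopg.connect("postgresql://localhost/postgres") as conn:
        with conn.cursor() as cur:
            # pgmq is a Postgres extension; each queue is backed by ordinary tables.
            cur.execute("CREATE EXTENSION IF NOT EXISTS pgmq;")
            cur.execute("SELECT pgmq.create('jobs');")

            # Enqueue a JSON message.
            cur.execute(
                "SELECT pgmq.send('jobs', %s::jsonb);",
                (json.dumps({"task": "resize", "id": 42}),),
            )

            # Read up to one message with a 30-second visibility timeout,
            # then delete it once it has been processed.
            cur.execute("SELECT msg_id, message FROM pgmq.read('jobs', 30, 1);")
            for msg_id, message in cur.fetchall():
                print(msg_id, message)
                cur.execute("SELECT pgmq.delete('jobs', %s);", (msg_id,))

The nice part is that the visibility timeout gives you at-least-once delivery semantics without leaving Postgres.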


Despite sentiment around Mojo being negative on HN due to the stack not being OSS, this is the ultimate goal of Modular.

https://signalsandthreads.com/why-ml-needs-a-new-programming...


I listened to that episode, by chance, last week. It was well worth the time to listen.


The blog's title can be misleading here: "we" in this context refers to the Cognition team. I don't work at Cognition; I just thought this was interesting.


> On August 29, a routine load balancing change unintentionally increased the number of short-context requests routed to the 1M context servers. At the worst impacted hour on August 31, 16% of Sonnet 4 requests were affected.

Interesting, this implies that the 1M context servers perform worse at low context. Perhaps this is due to some KV cache compression, eviction, or sparse attention scheme being applied on these 1M context servers?


This is due to RoPE scaling.

> All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, potentially impacting performance on shorter texts. We advise adding the rope_scaling configuration only when processing long contexts is required. It is also recommended to modify the factor as needed. For example, if the typical context length for your application is 524,288 tokens, it would be better to set factor as 2.0.

https://huggingface.co/Qwen/Qwen3-Next-80B-A3B-Thinking
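Concretely, with Hugging Face transformers, toggling static YaRN looks roughly like the sketch below; the field names follow current transformers/Qwen conventions, and the 262,144-token native context is just the figure implied by the 524,288 / 2.0 example above, so treat the numbers as illustrative rather than a recommendation:

    # Sketch: enable static YaRN only when long context is actually needed.
    # factor = target_context / native_context (524288 / 262144 = 2.0 here).
    from transformers import AutoConfig, AutoModelForCausalLM

    model_id = "Qwen/Qwen3-Next-80B-A3B-Thinking"
    config = AutoConfig.from_pretrained(model_id)

    # Static YaRN applies the same scaling factor at every input length,
    # which is why it can hurt quality on short prompts.
    config.rope_scaling = {
        "rope_type": "yarn",
        "factor": 2.0,
        "original_max_position_embeddings": 262144,  # assumed native context
    }

    model = AutoModelForCausalLM.from_pretrained(model_id, config=config)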


The key issue is that their post-mortem never explained what went wrong on two out of three issues.

All I know is that my requests can now travel along three completely different code paths, each on its own stack and tuned differently. Those optimizations can flip overnight, independent of any model-version bump—so whatever worked yesterday may already be broken today.

I really don't get the praise that they are getting for this postmortem; it only made me more annoyed.


Snappiness is the primary reason for using Zed.


The other examples you listed are valid, but AI tab autocomplete is a model and inference issue unrelated to the editor.


It is a feature that they control. Whether it comes from the model, a bad prompt, a bad provider, or a bug in their implementation, it is their responsibility (especially considering you have to pay per request for AI features).


That’s true if we’re evaluating Zed as a product, but the GP is discussing Zed UI perf specifically.


Idk if 'Linux + GPU = problem' is surprising or very relevant either.


I have the same set of requirements you’re describing and Obsidian is perfect.

You can disable the graph feature and never link any notes.


This is both the largest OSS model release thus far and the largest Muon training run.

