okigan's comments | Hacker News

With all due respect to FreeBSD.

It seems weird that in 2025/2026 we are still discussing the baseline of getting storage working.

Feels like we're spending too much time discussing the trees and not enough time getting the forest going:

* reliable local storage
* integrated backup
* app installation / management
* remote access and account management
* app isolation, reliable updates


Would love to see a screenshot of the GUI as part of the README.md.

Also, the Docker link seems to be broken.


Fixed the package link. GitHub somehow made it private. I will add a screenshot right now.


Most are overpaying in taxes for what they are getting.

Not to mention singles/families without kids and seniors who still pay for school districts.


Fear not - the American school system was built on and holds fast to the supposition that the affluent should be able to avoid any unwanted exposure to the problems of those less fortunate than themselves.


It's unclear from your description whether this is a Z3 problem or simply the nature of such problems (and it's just a wish).

Are there other tools that do it better, or a proposal for how Z3 could do it?


Ugreen


+1, I replaced my aging DS1812+ with a DXP4800 Plus and I've been quite happy with it.


Ugreen is king, lately.


It took 20 years to acknowledge that pushing eventual consistency to the application layer is not worth it for most applications.

Seems the same is playing out in Postgres with this extension; maybe it will take another 20 years.


The idea of active-active is too seductive compared to how hard learning distributed systems is.


It is so seductive that people don’t read the footnotes that explain that active-active does not do what they think it does.


I'd agree. There are so many footguns involved in multi-master setups that most organisations should avoid them until they're big enough to hire distributed-systems engineers to design a proper solution for the company. I personally don't love any of the Postgres multi-master solutions.

You can scale surprisingly far on a single-master Postgres with read replicas.
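
For context, a minimal sketch of what read/write splitting over a single master plus replicas can look like at the application layer; the hostnames, DSNs, and round-robin policy are my own illustration, not anything from this thread:

    # Sketch only: writes go to the one master, reads fan out over replicas.
    import itertools
    import psycopg2

    PRIMARY_DSN = "host=pg-primary dbname=app user=app"        # all writes
    REPLICA_DSNS = [
        "host=pg-replica-1 dbname=app user=app",                # read-only standbys
        "host=pg-replica-2 dbname=app user=app",
    ]
    _replicas = itertools.cycle(REPLICA_DSNS)

    def run_write(sql, params=()):
        # Writes always hit the single master, so there is no conflict resolution to worry about.
        with psycopg2.connect(PRIMARY_DSN) as conn, conn.cursor() as cur:
            cur.execute(sql, params)

    def run_read(sql, params=()):
        # Reads rotate across replicas; they can lag the master slightly under
        # asynchronous replication, which is the trade-off you accept instead
        # of multi-master footguns.
        with psycopg2.connect(next(_replicas)) as conn, conn.cursor() as cur:
            cur.execute(sql, params)
            return cur.fetchall()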


I'm curious about what you mean here. It sounds like you're saying that applications shouldn't concern themselves with consistency. Can you elaborate?


What’s a better syntax then?


The real question—to which I have absolutely no answer—is not about syntax, it's about concepts: what is a better way to think about higher-dimensional arrays rather than loops and indices? I'm convinced that something better exists and, if it existed, encoding it in a sufficiently expressive (ie probably not-Python) language would give us the corresponding syntax, but trying to come up with a better syntax without a better conceptual model won't get us very far.

Then again, maybe even that is wrong! "Notation as a tool for thought" and all that. Maybe "dimension-munging" in APL really is the best way to do these things, once you really understand it.


Numpy seems somewhat constrained here… it grew out of the matrix ecosystem, and matrices map naturally to two-dimensional arrays (sidenote: it’s super annoying that we have n-dimensional matrices and n-dimensional arrays, but the matrix dimension maps to the width of the array).

Anyway, the general problem of having an n-dimensional array and wanting to dynamically… I dunno, it is a little tricky. But, sometimes when I see the examples people pop up with, I wonder how much pressure could be relieved if we just had a nice way of expressing operations on block or partitioned matrices. Like the canonical annoying example of wanting to apply solve using a series of small-ish matrices on a series of vectors, that’s just a block diagonal matrix…
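
To make the block-diagonal point concrete, here is a small sketch (shapes and data made up) showing that a batch of small solves and a single solve against the corresponding block-diagonal matrix compute the same thing:

    # Batched solve vs. one block-diagonal solve; the sizes are arbitrary.
    import numpy as np
    from scipy.linalg import block_diag

    rng = np.random.default_rng(0)
    n, k = 5, 3                                   # 5 independent k-by-k systems
    A = rng.standard_normal((n, k, k))
    b = rng.standard_normal((n, k))

    # 1) Batched: np.linalg.solve broadcasts over the leading axis.
    x_batched = np.linalg.solve(A, b[..., None])[..., 0]      # shape (n, k)

    # 2) Block-diagonal: stack the small systems into one big one.
    A_big = block_diag(*A)                        # shape (n*k, n*k)
    x_big = np.linalg.solve(A_big, b.ravel()).reshape(n, k)

    assert np.allclose(x_batched, x_big)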


English. "Write me a Python function or program that does X, Y, and Z on U and V using W." That will be the inevitable outcome of current trends, where relatively-primitive AI tools are used to write slightly more sophisticated code than would otherwise be written, which in turn is used to create slightly less-primitive AI tools.

For example, I just cut-and-pasted the author's own cri de coeur into Claude: https://claude.ai/share/1d750315-bffa-434b-a7e8-fb4d739ac89a Presumably at least one of the vectorized versions it replied with will work, although none is identical to the author's version.

When this cycle ends, high-level programs and functions will be as incomprehensible to most mainstream developers as assembly is today. Today's specs are tomorrow's programs.

Not a bad thing, really. And overdue, as the article makes all too clear. But the transition will be a dizzying one, with plenty of collateral disruption along the way.


Fantastic article.

I don't use numpy often enough, but this explains the many WTF moments: why it's so annoying to get numpy pieces to work together.


Don’t you need to apply filtering to the frame selection based on scene score?

Otherwise you'd select frames with scores of 0.3, 0.7, 1.0, 0.7, 0.3 - selecting 5 frames instead of 1?

A two-pass approach with a Sobel filter comes to mind.
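
One way to do that filtering (simple peak detection on the score series, rather than the Sobel idea; the scores, threshold, and minimum spacing are made up for illustration):

    # Keep only local maxima of the per-frame scene score, so a run like
    # 0.3, 0.7, 1.0, 0.7, 0.3 yields one frame instead of five.
    import numpy as np
    from scipy.signal import find_peaks

    scores = np.array([0.1, 0.3, 0.7, 1.0, 0.7, 0.3, 0.2, 0.6, 0.9, 0.5])

    # Peaks must clear a score threshold and be at least `distance` frames apart.
    peak_idx, _ = find_peaks(scores, height=0.8, distance=3)
    print(peak_idx)        # [3 8] -> the 1.0 and 0.9 frames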


Could we have a concise and specific explanation of how DSPy works?

All I've seen are vague definitions of new terms (e.g. signatures) and "trust me, this is very powerful and will optimize it all for you".

Also, what would be a good way to reason about choosing between DSPy and TextGrad?


My understanding is that it tries many variations of the set of few-shot examples and prompts, and picks the ones that work best as the optimized program.


TextGrad mainly optimizes the prompt but does not inject few-shot examples. DSPy mainly optimizes the few-shot examples.

At least that's my understanding from reading the TextGrad paper recently.
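
For what it's worth, a rough sketch of the few-shot optimization flow as I understand it from the DSPy docs; the class names (Predict, BootstrapFewShot), the model choice, and the toy metric/dataset are from memory and may not match the current API exactly:

    # Rough sketch only: DSPy bootstraps candidate few-shot demos from a train
    # set and keeps the ones that score best under a metric.
    import dspy
    from dspy.teleprompt import BootstrapFewShot

    dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))   # hypothetical model choice

    # A "signature" is essentially a typed prompt interface: inputs -> outputs.
    qa = dspy.Predict("question -> answer")

    trainset = [
        dspy.Example(question="2 + 2?", answer="4").with_inputs("question"),
        dspy.Example(question="Capital of France?", answer="Paris").with_inputs("question"),
    ]

    def exact_match(example, pred, trace=None):
        return example.answer.lower() in pred.answer.lower()

    # Tries variations of bootstrapped demos and keeps the best-scoring ones;
    # the compiled program carries those demos in its prompt.
    optimizer = BootstrapFewShot(metric=exact_match, max_bootstrapped_demos=2)
    compiled_qa = optimizer.compile(qa, trainset=trainset)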

