Hacker News | momojo's comments

This doesn't match my own experience. I dream of the day the stuff I don't find interesting can get automated but again and again I find myself having to do things by hand.

I wonder if this is similar to Chess and Go getting 'solved': hard problem spaces that only the biggest brains could tackle. Maybe it turns out creating highly performant, distributed systems with a plethora of unit tests is a cakewalk for LLMs, while trying to make a 'simple web app' for a niche microscopy application is like trying to drive around San Francisco.


Great work!

> In practice, that means more logic fits in context, and sessions stretch longer before hitting limits. The AI maintains a broader view of your codebase throughout.

This is one of those 'intuitions' that I've also had. However, I haven't found any convincing evidence for or against it so far.

In a similar vein, this is why `reflex`[0] intrigues me. IMO their value prop is "LLMs love Python, so let's write entire apps in Python". But again, I haven't seen any hard numbers.

Anyone seen any hard numbers to back this?

[0] https://github.com/reflex-dev/reflex


Reminds me of the Clearview AI controversy[0].

I'm not diminishing the ethics debate, but it's crazy to me how easy it was for two non-technical rich dudes in a garage to build Clearview AI (and before vibe-coding!):

  1. scrape billions of faces from the internet
  2. `git clone` any off-the-shelf facial-recognition repo
It was just a matter of when.
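The two steps above can be sketched in a few lines. This is a hypothetical illustration, not Clearview's actual stack: `embed` stands in for whatever cloned face-recognition model produces embeddings, and identification is just nearest-neighbor search over the scraped index.

```python
# Hypothetical sketch: embed scraped faces, then identify a query face
# by nearest-neighbor search over the embedding index.
import numpy as np

def embed(image_id: int) -> np.ndarray:
    """Stand-in for an off-the-shelf face-embedding model.
    Deterministic per image so the same face maps to the same vector."""
    local = np.random.default_rng(image_id)
    v = local.normal(size=128)
    return v / np.linalg.norm(v)

# 1. "Scrape": pretend we indexed faces 0..999 from the internet.
index_ids = list(range(1000))
index = np.stack([embed(i) for i in index_ids])

# 2. Identify: cosine similarity of the query against the whole index
#    (vectors are unit-length, so a dot product is cosine similarity).
def identify(query: np.ndarray) -> int:
    scores = index @ query
    return index_ids[int(np.argmax(scores))]

assert identify(embed(42)) == 42
```

The hard part was never the algorithm; it was being willing to build the index at all.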

[0] https://en.wikipedia.org/wiki/Clearview_AI#History


Concerned, but given the use, can it be stopped?

Yes. If one knows that someone has their identifiable data without consent, it is a problem.

While they are just pictures on the internet, it is one thing; when you gather them all and attach a label and a number, it becomes problematic.

Remember that FaceApp that made you look older, younger, etc.? Imagine how much data those guys collected.

I know someone who submitted the face of a member of my family without consent. You could not even complain without agreeing to the TOS first.


Unfortunately, we will see these kinds of cases more and more with the rise of AI. I don't believe it is the only app that could do labeled search over faces and the like.

Am I the only one who finds it amusing that companies like Google and Facebook sent Clearview legal letters complaining about scraping data from their sites?

I'm surprised at the lukewarm reception. Admittedly I don't follow the image-to-3D space as much, but last time I checked in, the gloopy fuzzy outputs did not impress me.

I want to highlight what I believe is the coolest innovation: their novel O-Voxel data structure. I'm still trying to wrap my head around how they figured out the conversion from voxel-space to mesh-space. Those two worlds don't work well together.

A 2D analogy is that they figured out an efficient, bidirectional, one-shot method of converting PNGs into SVGs, without iteration. Crazy.
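For contrast, here's the naive baseline that voxel-to-mesh methods improve on: emit a quad for every face of an occupied voxel that borders an empty cell. This is not the O-Voxel conversion, just a sketch of the simplest possible approach, to show why the two representations don't mesh (pun intended) easily: the output is blocky and has no relation to the underlying surface.

```python
# Naive voxel -> mesh baseline: one quad per exposed voxel face.
import numpy as np

def voxels_to_quads(grid: np.ndarray):
    """grid: boolean 3D occupancy array.
    Returns a list of (cell, axis, sign) tuples, one per exposed face."""
    quads = []
    # Pad with empty cells so boundary voxels expose their outer faces.
    padded = np.pad(grid, 1, constant_values=False)
    for axis in range(3):
        for sign in (-1, 1):
            # Shift occupancy toward this face's neighbor direction.
            neighbor = np.roll(padded, -sign, axis=axis)
            # A face is exposed if the voxel is full but its neighbor is empty.
            exposed = padded & ~neighbor
            for cell in np.argwhere(exposed):
                quads.append((tuple(cell - 1), axis, sign))
    return quads

# A single solid voxel exposes all 6 of its faces.
grid = np.zeros((3, 3, 3), dtype=bool)
grid[1, 1, 1] = True
assert len(voxels_to_quads(grid)) == 6
```

Going the other way, mesh to voxels, loses surface detail, which is why a fast, high-fidelity round trip is the impressive part.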


@simonw's successful port of JustHTML from Python to JavaScript proved that agent iteration plus an exhaustive test suite is a powerful combo [0].

I don't know if TLA+ is going to suddenly appear as 'the next language I want to learn' in Stack Overflow's 2026 Developer Survey, but I bet we're going to see a rise in testing frameworks/languages. Anything to make it easier for an agent to spit out tokens or write smaller tests for itself.

Not a perfect piece of evidence, but I'm really interested to see how successful Reflex[1] is in this upcoming space.

[0] https://simonwillison.net/2025/Dec/15/porting-justhtml/ [1] https://github.com/reflex-dev/reflex


I wonder if humans are any different. We don't have LIDAR in our eyes, but we approximate depth "enough" with only our 2D input.


We also constantly move our heads and refocus our eyes. We can get a rough idea of depth from only a static stereo pair, but in reality we ingest vastly more information than that and constantly update our internal representation in real time.
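The static-stereo-pair part has a clean textbook form: for a rectified pinhole stereo pair, depth falls out of disparity alone. A minimal sketch (the numbers below are made up for illustration, roughly human-eye baseline):

```python
# Textbook stereo triangulation for a rectified pinhole pair:
# depth Z = focal_length * baseline / disparity.
def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Depth in meters from pixel disparity between two horizontally
    offset cameras with the same focal length (in pixels)."""
    return focal_px * baseline_m / disparity_px

# ~6.5 cm eye separation; at 700 px focal length, a 10 px disparity
# puts the point about 4.55 m away.
z = depth_from_disparity(focal_px=700, baseline_m=0.065, disparity_px=10)
assert abs(z - 4.55) < 0.01
```

Note how quickly it degrades: halve the disparity and the depth estimate doubles, which is why stereo alone is weak at range and we lean on motion, focus, and priors instead.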


We don't have 2D input, we have 3D input.

We have two eyes, which give us depth by default.


> But that just describes basically everyone, none of us have no agency, but all of us are also caught up in larger systems we can't opt out of.

But isn't the drama between the billionaire heiress and her starving-artist lover more interesting than the lawyer girlfriend deciding whether she wants to marry her below-average-salary boyfriend?

Or maybe I don't understand your complaint.


Anyone have any thoughts? ARC-AGI (and 2) is pretty much the only benchmark of interest to me anymore, due to its abstract nature.


You mentioned "step change" twice. Maybe give it a once-over next time? My favorite Mark Twain quote is (very paraphrased): "My apologies, had I more time, I would have written a shorter letter."


I thought the repetition was intentional.


Does anyone else feel like they buried the lede?

> Omnilingual ASR was designed as a community-driven framework. People around the world can extend Omnilingual ASR to new languages by using just a few of their own samples.

The world just got smaller

