
Regarding the meta-experiment of using LLMs to transpile to a different language: how did you feel about the outcome and the process, and would you do the same thing again in the future?

I've had some moments recently on my own projects, working through bottlenecks, where I took a whole section of a project, said "rewrite in Rust" to Claude, and got massive speedups from a zero-shot rewrite, most recently some video recovery programs. But I then had an output product I wouldn't feel comfortable vouching for outside of my homelab setup.


It depends on the situation. In this case the agent worked only from the reference code provided by Flux's Black Forest Labs, which is basically just the pipeline implemented as a showcase. The fundamental requirement for this process to work is that the agent has feedback to understand whether it is really making progress, and can debug failures against a reference implementation. Beyond that, all the code was implemented with many implementation hints about what I wanted to obtain, and without any reference to other minimal inference libraries or kernels. So I believe this is just the effect of putting together known facts about how Transformer inference works plus a higher-level idea of how software should appear to the final user.

Btw, today somebody took my HNSW implementation for vector sets and translated it to Swift (https://github.com/jkrukowski/swift-hnsw). I'm OK with that, nor do I care whether this result was obtained with AI or not. However, it is nice that the target license is the same, given the implementation is so similar to the C one.

When I first saw the OP, panic started to set in that I am fucked and Chat-Completions/LLMs/AI/whatever-you-wanna-call-it will soon be able to create anything and eat away at my earning potential. And I will spend my elder years living with roommates, with no wife or children, because I will not be able to provide for them. But upon reading that you used a reference implementation, I've realized that you simply managed to leverage it as the universal translator apenwarr believes is the endgame for this new technology [1]. So now I feel better. I can sleep soundly tonight knowing my livelihood is safe, because the details still matter.

[1] https://apenwarr.ca/log/20251120


Nope, that will happen, but it also doesn't mean you're fucked. It just means it's time to move up the value stack.

The fear that led to the black-and-white thinking expressed in your comment is the real issue.


This is pretty great. I've gone and hacked your GTE C inference project into Go purely for kicks, but this one I will look at for possible compiler optimizations and for building a Mac CLI for scripting…

This repo has Swift wrappers, not a rewrite of hnsw.c, which apparently you weren't the only author of.

Thanks, I thought it was a complete rewrite of the same logic and algorithms.

I have a set of prompts that are essentially “audit the current code changes for logic errors” (plus linting and testing, including double checking test conditions) and I run them using GPT-5.x-Codex on Claude generated code.

It’s surprising how much even Opus 4.5 still trips itself up with things like off-by-one or logic boundaries, so another model (preferably with a fresh session) can be a very effective peer reviewer.

So my checks are typically lint -> test -> other model -> me, and relatively few things get to me in simple code. Contrived logic or maths, though, needs to be all me.
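A minimal sketch of what that gate can look like when scripted; the commands, file path, and prompt wording here are assumptions for illustration, not my exact setup:

    # Hypothetical review gate: lint and test first, then a fresh
    # second-model audit of the diff. Commands are illustrative.
    set -e
    npm run lint
    npm test
    git diff main > /tmp/changes.diff
    # Illustrative invocation; the real CLI and flags will differ.
    codex exec "Audit the changes in /tmp/changes.diff for logic errors, off-by-one bugs, boundary conditions, and incorrect test conditions."

The point of the fresh session is that the reviewing model has no investment in the code it is checking.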


Once we had a slowdown in our application that went unaddressed for a couple of months. Because every commit was a "good" historical commit, using git bisect to binary-search across a bunch of commits and run a perf test at each step was much easier, and I found the offending commit fast.
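For anyone who hasn't automated this, a minimal sketch of the scripted search; the tag name and perf script are placeholders, not our actual setup:

    # Mark known-good and known-bad endpoints, then let git drive
    # the binary search with an automated perf check.
    git bisect start
    git bisect bad HEAD
    git bisect good v1.4.0          # hypothetical last-known-fast tag
    git bisect run ./perf_test.sh   # exit 0 = fast enough, 1 = slow
    git bisect reset

where perf_test.sh is something like a benchmark run compared against a time budget. git bisect run takes care of checking out each midpoint commit and interpreting the script's exit status.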

Ok, I see. This is a use case I did not think about. Worthy of a blog post, I think.

Besides testing for a perf slowdown, any other use cases for git bisect + rebase?


I’ve been using some time off to explore the space, and the related projects StereoCrafter and GeometryCrafter are fascinating. Applying this to video adds a temporal-consistency angle that makes it way harder and more compute-intensive, but I’ve “spatialized” some old home videos from the Korean War and it works surprisingly well.

https://github.com/TencentARC/StereoCrafter https://github.com/TencentARC/GeometryCrafter


I would love to see your examples.


OP probably can’t tell if you're being upvoted on this.

I’d be keen too.


Looks like, after the AI automation rush last year, the leaderboard has been removed. Makes sense; a little sad that it was needed, though.


I never liked the global leaderboard since I was usually asleep when the puzzles were released. I likely never would have had a competitive time anyway.


I never had any hope of or interest in competing on the leaderboard, but I found it fun to check it out: see the times and time differences ("omg, 1 min for part 1 and 6 for part 2"), look up the names of the leaders to check if they have something public about their solutions, etc. One time I even ran into the name of an old friend, so it was a good excuse to say hi.


I believe that Everybody Codes has a leaderboard that starts counting from when you first open the puzzle. So if you're looking for coding puzzles with a leaderboard, that one would be fair for you.

https://everybody.codes/events


I've released a templatized local development setup using devcontainers that I've crafted over the last year and now use on all my projects. This post explains the why and links to the project.
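For context, the heart of a setup like this is a devcontainer.json; here's a minimal sketch (the image, feature, and commands are illustrative, not the actual template's contents):

    // .devcontainer/devcontainer.json (sketch)
    {
      "name": "project-dev",
      // Prebuilt base image maintained by the devcontainers project.
      "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
      // Features layer in extra tooling without a custom Dockerfile.
      "features": {
        "ghcr.io/devcontainers/features/node:1": {}
      },
      // Runs once after the container is created.
      "postCreateCommand": "npm install"
    }

Templatizing means keeping a copy of this (plus Dockerfiles, lifecycle scripts, and editor settings) that you stamp out per project.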


It's potentially the opposite. If you instrument a codebase with documentation and configuration for AI agents to work well in it, then in a year, that agent will be able to do that same work just as well (or better with model progress) at adding new features.

This assumes you're adding documentation, tests, instructions, and other scaffolding along the way, of course.
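As a concrete (and purely illustrative) example, that scaffolding can be as simple as an agent instructions file checked into the repo root; every path and rule below is hypothetical:

    # AGENTS.md (sketch)
    ## Build and test
    - `make test` runs the full suite; run it before every commit.
    ## Conventions
    - Services live in services/, one directory per service.
    - New endpoints need an integration test in tests/api/.
    ## Gotchas
    - The billing module is legacy; extend invoicing/ instead.

A file like this costs little to maintain and pays off for both future agents and future humans.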


I wonder how soon (if it's not already happening) AI coding tools will behave like early-career developers who claim all the existing code written by others is crap and go on to convince management that a ground-up rewrite is required.

(And now I'm wondering how soon the standard AI-first response to bug reports will be a complete rewrite by AI using the previous prompts plus the new bug report. Are people already working on CI/CD systems that replace the CI part with whole-project AI rewrites?)


As the cost of AI-generated code approaches zero (both in time and money), I see nothing wrong with letting the AI agent spin up a dev environment and take its best shot. If it can prove with rigorous testing that the new code works, is at least as reliable as the old code, and is written better, then it's a win/win. If not, delete that agent and move on.

On the other hand, if the agent is just as capable of fixing bugs in legacy code as rewriting it, and humans are no longer in the loop, who cares if it's legacy code?


I kinda hate the idea of all that.

But I can see it "working". At least for the values of "working" that would be "good enough" for a large portion of the production code I've written or overseen in my 30+ year career.

Some code pretty much outlasts all expectations because it just works. I had a Perl script I wrote in around 1995-1998 that ran from cron and sent email to my personal account. I quit that job, but the server running it got migrated to virtual machines and didn't stop sending me email until about 2017 - at least three sales or corporate takeovers later. (It was _probably_ running on CentOS 4 when I last touched it in around 2005; I'd love to know if it was just turned into a VM and kept running as part of critical infrastructure on CentOS 4 twelve years later.)

But most code only lasts as long as the idea or the money or the people behind the idea last - all the websites and differently skinned CRUD apps I built or managed rarely lasted 5 years without being either shut down or rewritten from the ground up by new developers or leadership in whatever the Resume Driven Development language or framework was at the time - toss out the Perl and rewrite it in Python, toss out the Python and rewrite it in Ruby on Rails, then decide we need Enterprise Java to post about on LinkedIn, then rewrite that in Node.js, now toss out the Node and use Go or Rust. I'm reasonably sure this year's or perhaps next year's LLM coding tools can do a better job of those rewrites than the people who actually did them...


Will the cost of AI-generated code approach zero? I thought the hardware and electricity needed to train the models and run inference was huge and only growing. Today the free and plus plans might be only $20/month; once moats are built, I assume prices will skyrocket an order of magnitude or a few higher.


> Will the cost of AI-generated code approach zero?

Absolutely not.

In the short term it will, while OpenAI/Anthropic/Anysphere destroy software development as a career. But they're just running the Uber playbook - right now they're giving away VC money by funding the datacenters that are training and running the LLMs. As soon as they've put enough developers out of jobs and ensured there's no new pipeline of developers capable of writing code and building platforms without AI assistance, they will stop burning VC cash and start charging at rates that not only break even but also return the 100x the investors demand.


Author here, you're right, but by definition when you do all of this the Bus Factor has already increased:

> This assumes your adding documentation, tests, instructions, and other scaffolding along the way, of course.

It's not just about knowledge in someone's brain; it's about knowledge persistence.


They're not directly solving the same problem. MCP is for exposing tools, such as reading files. a2a is for agents to talk to other agents to collaborate.

MCP servers can expose tools that are agents, but don't have to, and usually don't.

That being said, I can't say I've come across an actual implementation of a2a outside of press releases...
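To make the MCP side concrete, here's a minimal server exposing a single tool, sketched with the official Python SDK (the server and tool names are illustrative):

    # Sketch of an MCP server using the Python SDK's FastMCP.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("file-tools")

    @mcp.tool()
    def read_file(path: str) -> str:
        """Return the contents of a text file."""
        with open(path) as f:
            return f.read()

    if __name__ == "__main__":
        mcp.run()  # serves over stdio by default

Nothing about this is agentic: the server just advertises a tool schema and answers calls, which is why MCP and a2a sit at different layers.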


Perhaps it's naive to say, but I think there was the briefest moment, when your status updates started with "is", feeds were chronological, and photos and links weren't pushed over text, that it was not an adversarial actor to one's wellbeing.


There was an even briefer moment where there was no such thing as status updates. You didn't have a "wall." The point wasn't to post about your own life. You could go leave public messages on other people's profiles. And you could poke them. And that was about it.

I remember complaining like hell when the wall came out, that it was the beginning of the end. But this was before publicly recording your own thoughts somewhere everyone could see was commonplace, so I did it by messaging my friends on AIM.

And then when the Feed came out? It was received as creepy and stalkerish. And there are now (young) adults born in the time since who can't even fathom a world without ubiquitous feeds in your pocket.

Call me nostalgic, but we were saner then.


Unless I’m remembering wrong, posting a public message on someone else’s profile was posting on their wall. Or was it called something else before it was somebody’s wall?


It didn't have a name. It wasn't really a "feature." You just went and posted on their "page" I guess I would call it.

The change to being able to post things on your own page and expecting other people to come to your page and read them (because, again, no Feed) wasn't received well at first.

Keep in mind, smartphones didn't exist yet, and the first ones didn't have selfie cameras even once they did. And the cameras on flip phones were mostly garbage, so if you wanted to show a picture, you had to bring a camera with you, plug it in, and upload it. So at first the Wall basically replaced AIM away messages so you could tell your friends which library you were going to go study in and how long. And this didn't seem problematic, because you were probably only friends with people in your school (it was only open to university students, and not many schools at first), and nobody was mining your data, because there were no business or entity pages.

Simpler, simpler days.


I joined Thefacebook in 2005. The place on your page where posts from other people appeared was called the “wall” then.


Yeah, that's about when it changed. The lack of a wall was a very early situation. I joined in 2004, back when it was only open to Ivy League and Boston-area schools.


It was still acceptable to write on someone else's wall when they came to be called that. You can still do that now, I think, but it's quite uncommon, and how it works is now complicated by settings.


Sure, you could. That wasn't the problem. The problem was that now you could post on your own.

That's what turned it from a method of reaching out and sending messages to specific people when you had something to say to them to a means of shouting into the void and expecting (or at least hoping) that someone, somewhere, would see it and care what you had to say. It went from something actively pro-social to something self-focused.

Blogs and other self-focused things already existed, but almost nobody used them for small updates throughout the day. Why do you think the early joke about Twitter was that it was just a bunch of self-absorbed people posting pictures of their lunch? Nobody knew what to do with a tool like that yet, but the creation of that kind of tool has led to an intensity of self-focus and obsession the world had never seen before.


The wall was released maybe 6 months after Facebook launched. I think it was still called “The Facebook” at the time.


Oh wow, I’d even forgotten about pokes. Thanks for that trip down memory lane.


I made the mistake of sending a Gen Z (adult) friend a poking finger emoji to try to remind him about something.

It wasn't the first time I've had a generational digital (ha) communication failure, but it was the first time I've had one because I'm old and out of touch with what things mean these days!


Still a supported feature! You can find it if you dig around in the menus long enough.


The early, organic days of social networking are always fun. They never would have pulled in billions of users if they started off how they are now.


Couldn't have said it better.

Nothing is a social network anymore.

Everything is a content-consumption platform now.

People just want to scroll and scroll.



My hunch is that instant messaging is slowly taking over that space. If you actually want to connect with people you can without needing much of a platform.


I mean, let's be clear on the history and not romanticize anything: Zuck created Facebook pretty much so he could spy on college girls. He denies this, of course, but it all started with his Facemash site for ranking the girls, and then we get to the early Facebook era and there's his quote about the "4,000 dumbfucks" trusting him with their photos, etc.

There is no benevolent original version of FB. It was a toy made by a college nerd who wanted to siphon data about chicks. It was more user friendly back then because he didn't have a monopoly yet. Now it has expanded to siphoning data from the entire human race and because they're powerful they can be bigger bullies about it. Zuck has kind of indirectly apologized for being a creeper during his college years. But the behavior of his company hasn't changed.


Well they had to grow the userbase before they could abuse it :)


Very true! I was annoyed by the loss of the "is" pattern and basically stopped using Facebook when the chronological feed was removed.


They were stealing your contacts from wherever they could get them. There was never a time when they didn't abuse their users.


After converting many of my own projects, and helping a couple of startups tool their codebases and teams to use AI agents better, these are the 5 things I now do on every codebase I work on.


What do you think the "mistake" is here?

It seems like you're pointing out a consequence, not a counter argument.


There’s a really common cognitive fallacy of “the consequences of that are something I don’t like, therefore it’s wrong”.

It’s like reductio ad absurdum, but without the logical consequence of the argument being incorrect, just bad.

You see it all the time, especially when it comes to predictions. The whole point of this article is that coding agents are powerful and that the arguments against them are generally weak and ill-informed. Coding agents having a negative impact on the skill growth of new developers isn't a "fundamental mistake" at all.


Exactly.

What I’ve been saying to my friends for the last couple of months has been, that we’re not going to see coding jobs go away, but we’re going to run into a situation where it’s harder to grow junior engineers into senior engineers because the LLMs will be doing all the work of figuring out why it isn’t working.

This will IMO lead to a "COBOL problem" where there is a shortage of people with truly deep understanding of how it all fits together and who can figure out the line of code to tweak to fix that ops problem that's causing your production outage.

I’m not arguing for or against LLMs, just trying to look down the road to consequences. Agentic coding is going to become a daily part of every developer’s workflow; by next year it will be table stakes - as the article said, if you’re not already doing it, you’re standing still: if you’re a 10x developer now, you’ll be a 0.8x developer next year, and if you’re a 1x developer now, without agentic coding you’ll be a 0.1x developer.

It’s not hype; it’s just recognition of the dramatic increase in productivity that is happening right now.

