speakingmoistly's comments | Hacker News

This is just a manifestation of the same short-sightedness that led to the idea of "let's only hire seniors" without thinking too much about how juniors grow and become seniors themselves.

Being more efficient could have meant "same output, less effort", but the obsession with infinite growth warped it into demanding more work. Greed gets in the way of the future we were meant to have.


The way things are going, whether it is or not, I expect someone to push it on their org as a productivity experiment.

"Enterprise-grade compliance" gave me a good chuckle.


To be fair, that's less "cramming AI in for the sake of it" and more "people are going to do funky hand-rolled things, so let's build and maintain one that's native to the ecosystem".

I bet there's plenty of internal FastAPI apps duct-taped together serving that exact purpose.


It's a bit odd to see an MIT-licensed work that isn't source-available (unless I missed something, no repository is linked on the PyPI page, and the only relevant repository I could find attached to your account is the benchmark code). It certainly makes it harder to trust the benchmarking or quality, since it cannot be audited directly. I'd be curious to see whether this is a Python wrapper for Tokio or does something else.

Yo, sorry, I thought this post was deleted.

Can you check now? I've added the repository link on PyPI.

I appreciate any feedback or criticism. Thanks for trying/testing it.


Bold words coming from someone whose success and gravitas largely hinge on the free labour of others and a disregard for intellectual ownership.


Why so?

He's a public persona and was initially picked for a high-visibility position at Mozilla. While he's free to donate to whomever he sees fit (or, more generally, to do as he sees fit), the org is equally free not to want to be associated with that.

For positions that aren't in the public eye, there's more of an argument to be made (although even there, there are lines where you can argue that a contributor doesn't align with the organization's values, depending on what the disagreements are). But if he's meant to represent the group, the expectation of privacy is different.


To be fair, that happening feels more like poor management and mentorship than "juniors are scatterbrained".

Over time, you build up the right reflexes to avoid a one-week goose chase with them. Heck, since we're working with people, you don't just say "fix this"; you earmark time to make sure everyone is aligned on what needs to be done and what the plan is.


I think some folks are very quick to dismiss rigor and care as "traditional practices", as if we're talking about churning butter by hand. One thing worth keeping in mind is that LLM tooling might feel like an expert but generally has the decision-making skills of a junior. In that light, the rigor and best practices that were (hopefully) already part of software engineering are even more important.

> In traditional development, you review versions carefully. With AI-generated scaffolding, that step is easy to overlook.

If in "traditional development" everything is reviewed carefully, why wouldn't it be when some of the toil is automated? If anything, that's exactly where the time freed up by not having to scaffold things by hand should be invested: sifting through what the LLM added and the choices it made to make sure they are sound and follow best practices.


Reviewing generated code actually takes a higher skill level than writing it. A junior who prompted this Next.js app into existence is physically incapable of auditing the security of those imports. And for a senior, it's often cheaper to just write it from scratch than to sit there and audit abstract spaghetti generated by Claude.


That's very true, but there's room for some nuance. Because of author inexperience, I wouldn't expect audits and reviews to be comprehensive, but I would expect the questioning to take place. In the case of imports, it doesn't take years of experience to verify that the versions added are the latest stable releases and to generally check the release notes / issues. It's far from enough, but it's something.
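As a rough sketch of that kind of basic check (the function names and the `latest` mapping are invented for illustration; a real check would pull version data from PyPI or lean on an existing tool):

```python
# Sketch: flag pinned dependencies that lag behind the latest stable
# release. The "latest" mapping is a stand-in for data you would fetch
# from the package index in a real check.

def parse_version(v: str) -> tuple[int, ...]:
    """Parse a simple 'X.Y.Z' version string into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def outdated_pins(pins: dict[str, str], latest: dict[str, str]) -> list[str]:
    """Return names of dependencies pinned below the latest stable version."""
    stale = []
    for name, pinned in pins.items():
        current = latest.get(name)
        if current and parse_version(pinned) < parse_version(current):
            stale.append(name)
    return stale

pins = {"requests": "2.28.0", "flask": "3.0.0"}
latest = {"requests": "2.32.3", "flask": "3.0.0"}
print(outdated_pins(pins, latest))  # ['requests']
```

Even a crude comparison like this catches the "LLM pinned a two-year-old release" class of problem before it ships.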

Also agreed on the cheaper-to-write bit. Trying to redeem piles of slop into something workable is a fool's errand.


I think both points are true in practice. Reviewing AI-generated code can require more experience than generating it, but at the same time some basic checks (dependency versions, release notes, etc.) are still worth doing.

One thing this incident reminded us of is that review is only a snapshot in time. Even if everything looks fine when a PR is merged, new CVEs can appear later and suddenly make previously safe dependencies vulnerable.

That’s why we started treating monitoring and vulnerability checks as part of the platform itself, not just the review process.
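A minimal sketch of that kind of recurring check (the advisory records here are invented; real pipelines pull from a feed such as the OSV database or GitHub's advisories):

```python
# Sketch: re-check merged dependency pins against an advisory list.
# Run on a schedule, this catches CVEs published *after* the PR that
# introduced the dependency was reviewed and merged.

def parse_version(v: str) -> tuple[int, ...]:
    return tuple(int(p) for p in v.split("."))

def vulnerable(pins: dict[str, str], advisories: list[dict]) -> list[tuple[str, str]]:
    """Return (package, advisory_id) pairs for pins inside an affected range."""
    hits = []
    for adv in advisories:
        pinned = pins.get(adv["package"])
        if pinned is None:
            continue
        if parse_version(adv["introduced"]) <= parse_version(pinned) < parse_version(adv["fixed"]):
            hits.append((adv["package"], adv["id"]))
    return hits

pins = {"somelib": "1.4.2"}
advisories = [
    {"id": "EXAMPLE-2024-0001", "package": "somelib",
     "introduced": "1.0.0", "fixed": "1.5.0"},
]
print(vulnerable(pins, advisories))  # [('somelib', 'EXAMPLE-2024-0001')]
```

The point is exactly the snapshot problem above: the same pins that passed review cleanly can start matching advisories later, so the check has to re-run continuously, not once at merge time.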


Totally agree: AI scaffolding automates work, but best practices like CI/CD and pentesting are still essential. Continuous monitoring is necessary for all commits, and combining it with a centralized developer platform helps ensure every service and endpoint stays safe.


CI/CD and security audits are activities that help with confidence and kicking the tires after development takes place, but the practice that's really needed here is scrutiny and review from the author of the change and from non-author peers while code is being put together. I'd go further and say that if the intent is to produce a production-ready, secure and well-designed and implemented solution, it cannot be vibe-coded.

A prototype that de-risks the design and that gets trashed before implementation begins would be the right place for vibing.


> Under the new policy, Amazon engineers must get two people to review their work before making any coding changes.

I wonder if this is adding human review where there was none, or if this is adding more of it.


Did you identify the kind of performance problems you were solving for? Curious to hear whether the source of the lag is known.

The local / "runs entirely on my machine" claim should probably come with an asterisk: the TUI part is local, but this still relies on an LLM API existing somewhere outside the machine (unless you're running an Ollama instance on the same host).

Nonetheless, this is neat!


Thanks for the feedback. The main performance focus was rendering.

Claude Code and other TUIs (except Codex) use a layer of abstraction over the raw terminal escape sequences.

I directly used `crossterm`, which gave me more control and lower latency.

For example, if nothing is going on, I don't render anything in the terminal; I only render on keypress.
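The idea can be sketched generically (the project itself uses Rust's crossterm; this Python mock-up with a fake event queue just illustrates the render-only-on-event pattern, not the actual implementation):

```python
# Sketch of "render only when something changed": skip drawing entirely
# on ticks with no pending input, analogous to checking crossterm's
# event::poll before redrawing.

from collections import deque

def run_loop(events: deque, max_ticks: int) -> int:
    """Process up to max_ticks ticks; render only on ticks where an event arrived."""
    renders = 0
    for _ in range(max_ticks):
        if events:            # an input event (keypress, resize, ...) is pending
            events.popleft()  # handle it
            renders += 1      # redraw only now
        # otherwise: no redraw, the terminal stays untouched
    return renders

# 3 keypresses over 10 ticks -> only 3 redraws instead of 10
print(run_loop(deque(["a", "b", "c"]), max_ticks=10))  # 3
```

The win is that an idle TUI does zero terminal writes, which is where a lot of perceived lag in always-redrawing frameworks comes from.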

