
Understanding memory and using a debugger is hard but I'll take that over telling an AI my grandma will die if it does something wrong.

I use AI as a rubber duck to research my options, sanity-check my code before a PR, and give me a heads up on potential pain points going forward.

But I still write my own code. If I'm going to be responsible for it, I'm going to be the one who writes it.

It's my belief that velocity up front always comes at a cost down the line. That's been true for abstractions, for frameworks, for all kinds of time-saving tools. Sometimes that cost is felt quickly, as we've seen with vibe coding.

So I'm more interested in using AI in the research phase and to increase the breadth of what I can work on than to save time.

Over the course of a project, all approaches, even total hand-coding with no LLMs whatever, likely regress to the mean when it comes to hours worked. So I'd rather go with an approach that keeps me fully in control.


Yeah, my guess is that it takes roughly the same amount of time regardless of whether it's AI agents or hand coding; the time just gets spent in different ways (writing vs. reading, for example).

My question is why use AI to output javascript or python?

Why not output everything in C and ASM for 500x performance? Why use high level languages meant to be easier for humans? Why not go right to the metal?

If anyone's ever tried this, it's clear why: AI is terrible at C and ASM. But that cuts to what AI is at its core: it's not actual programming, it's mechanical reproduction.

Which means its shortcomings in C and ASM don't disappear when using it for higher-level languages. They're still there, just temporarily smoothed over due to larger datasets.


My small-program success story with genAI coding is pretty much the opposite of your claim. I used to use a bash script with a few sox instances piped into each other to beat-match and mix a few tracks. Couldn't use a GUI... Then came gpt-5, and I wanted to test it anyway. So I had it write a single-file C++ program that does the track database, offline mixing, limiter and a small REPL-based "UI" to control the thing. I basically had results before my partner was finished preparing breakfast. Then I had a lot of fun bikeshedding the resulting code until it felt like something I'd like to read. Some back and forth, pretending to have an intern and just reviewing/fixing their code. During the whole experience, it basically never generated code which wouldn't compile. Had a single segfault which was due to unclear interface to a C library. Got that fixed quickly.

And now, I have a tool to do a (shuffled if I want) beat-matched mix of all the tracks in my db which match a certain tag expression. "(dnb | jungle) & vocals", wait a few minutes, and play a 2 hour beat-matched mix, finally replacing mpd's "crossfade" feature. I have a lot of joy using that tool, and it was definitely fun having it made. clmix[1] is now something I almost use daily to generate club-style mixes to listen to at home.

[1] https://github.com/mlang/clmix


One thing I have been doing is breaking out of my long-held default mode of spinning up a react/nextjs project whenever I need frontend, and generating barebones HTML/CSS/JS for basic web apps. A lot of the reason we went with the former was the easy access to packages and easy-to-understand state management, but now that a lot of the functionality packages used to provide can be just as easily generated, I can get a lot more functionality while keeping dependencies minimal.

I haven't tried C or ASM yet, but it has been working very well with a C++ project I've been working on, and I'm sure it would do reasonably well with bare-bones C as well.

I'd be willing to bet it would struggle more with a lower-level language initially, but give it a solid set of guardrails with a testing/eval infrastructure and it'll work its way to what you want.


Pretty interesting take, this. I wonder if there is a minimal state management we could evolve that would be sufficient for LLMs to use while still making it possible for a human to reason about the abstraction. It probably wouldn't be as bloated as the existing ones we came up with organically, though.

I mean, you're basically LLM-washing other people's code, then. All those UI components that other people wrote and at least expected attribution may not be libraries anymore, sure. But you've basically just copied and maybe lightly modified that code into your project and then slapped a sticker on it saying "mine." If you did that manually with open source code, you'd be in violation of the attribution terms almost all the licenses have in common. But somehow it's okay if the computer does it for you?

It is a gray area. What if you took Qt, removed macros, replaced anchoring with css for alignment, took all widget properties out into an entity component system and called it ET, could Trolltech complain? It is an entirely new design and nothing like they built. A ship of Theseus if you will.

The Ship of Theseus has nothing to do with the identity of the parts. That is not in question at all; they are explicitly different parts. The thought experiment is the question of the identity of the whole.

Qt in your example is a part. Your application is the whole. If you replaced Qt with WxWidgets, is your application still the same application?

But to answer your question, replacing Qt with your own piecemeal code doesn't do anything more to Qt than replacing it with WxWidgets would: nothing. The Qt code is gone. The only way it would ship-of-Theseus itself into "still being Qt, despite not being the original Qt" would be if Qt required all modifications to be copyright-assigned and upstreamed. That is absurd. I don't think I've ever seen a license that did anything like that.

Even though licenses like the GPL require reciprocal FOSS release in-kind, you still retain the rights to your code. If you were ever to remove the GPL'd library dependency, then you would no longer be required to reciprocate. Of course, that would be a new version of your software and the previous versions would still be available and still be FOSS. But neither are you required to continue to offer the original version to anyone new. You are only required to provide the source to people who have received your software. And technically, you only have to do it when they ask, but that's a different story.


We used higher level programming languages because "Developer time is more expensive than compute time", but if the AI techbros are right, we are approaching the point where that is not going to be true.

It's going to take the same amount of time creating a program in C as it does in Python.


The premise of your question is wrong. I would still write Python for most of my tasks even if I were just as fast at writing C or ASM.

Because the conciseness and readability of the code that I use is way more important than execution speed 99% of the time.

I assume that people who use AI tools still want to be able to make manual changes. There are hardly any all or nothing paradigms in the tech world, why do you assume that AI is different?


The promise of the original definition of vibe coding was that you treat code as disposable, no more valuable than an LLVM build cache.

You aren't supposed to make corrections, review it, or whatever.


But the LLM does, and some amount of conciseness/readability will help.

They've got good at C now. I can't speak for ASM.

Here's a C session that I found quite eye-opening the other day: https://gisthost.github.io/?1bf98596a83ff29b15a2f4790d71c41d...


I had it write a new stdlib implementation (wasm<->JS) for a custom web based fantasy console I'm writing.

It did ok at that.

Well, Doom runs, so it's good enough for what I wanted anyway.

No, it's not a copy of other WASM stdlib implementations.


If you read on in the post you might be interested in the section titled

Drop Python: Use Rust and Typescript

https://matthewrocklin.com/ai-zealotry/#big-idea-drop-python...


It cuts to training data and effort. A lot of effort has been put in to optimize for python, even down to tokenization.

I pretty much do most of my AI coding in Rust. Although I do still use Python or Typescript where appropriate.

Isn't it also because LLMs are trained on existing software, and the programs we would write in Python or JS have few examples in C?

Yes. That is what the commenter means by "mechanical reproduction" and "temporarily smoothed over due to larger datasets".

I don't get this. AI coders keep saying they review all the code they push, and your suggestion is to use even harder languages that the average vibe coder is unable to understand, all in the name of "performance"? Faster code maybe, while exponentially increasing the tech debt and the number of bugs that slip through.

It wasn't even long ago that we thought developer experience and capacity for abstraction (which is easier to achieve in higher-level languages) were paramount.


> AI coders keep saying they review all the code they push

Those tides have shifted over the past 6 weeks. I'm increasingly seeing serious, experienced engineers who are using AI to write code and are not reviewing every line of code that they push, because they've developed a level of trust in the output of Opus 4.5 that line-by-line reviews no longer feel necessary.

(I'm hesitant to admit it but I'm starting to join their ranks.)


In the past week, I saw Opus 4.5 (being used by someone else) implement "JWT based authentication" by appending the key to a (fake) header and body. When asked to fix this, it switched to hashing the key (and nothing else) and appending the hash instead. The "signature" still did not depend on the body, meaning any attacker could trivially forge an arbitrary body, allowing them to e.g. impersonate any user they wanted to.

Do I think Opus 4.5 would always make that mistake? No. But it does indicate that the output of even SotA models needs careful review if the code actually matters.
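
To make that failure mode concrete, here is a rough sketch (my own illustration in Node-style JavaScript, not the code Opus produced; the function names are hypothetical) of the difference between a real JWT signature and what is described above:

    // Illustrative only; contrasts a correct HMAC-SHA256 JWT signature with
    // the broken behavior described in the comment above.
    const crypto = require('crypto');

    // Correct: the HMAC covers the encoded header AND payload, so any change
    // to the body invalidates the signature.
    function signToken(headerB64, payloadB64, secret) {
      return crypto.createHmac('sha256', secret)
        .update(`${headerB64}.${payloadB64}`)
        .digest('base64url');
    }

    // Broken (as described): a hash of the key alone. It never changes, so an
    // attacker who has seen one token can attach it to any forged payload.
    function brokenSignToken(secret) {
      return crypto.createHash('sha256').update(secret).digest('base64url');
    }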


Because you want to modify those later instead of having a read-only blob?

Wouldn't the AI that wrote the original code be in a better position to modify it too?

Let's start a space station to squeeze a glass of juice.

That's a very good example.

Which is why I'm more comfortable using AI as an editor/reviewer than as a writer.

I'll write the code, it can help me explore options, find potential problems and suggest tests, but I'll write the code.


Copyleft removes legal obligation but we're free to confer a social obligation.

Could be speed/efficiency was the wrong dimension to optimize for, and it's leading the industry down a bad path.

An LLM helps most with surface area. It expands the breadth of possibilities a developer can operate on.


When building out a new app or site, start with the simplest solution first, like the HTML-only autofilters, then add complex behavior later.

It's good to know these things exist so there are alternatives to reaching for a fat react component as the first step.


Until your client tells you that it doesn't work in Edge and you find out it's because every browser has its own styling and they are impossible to change enough to get the really long options to show up correctly.

Then you're stuck with a bugfix's allotment of time to implement an accessible, correctly themed combo box that you should have reached for in the first place, just like what you had to do last week with the native date pickers.


Right, don't add complexity until you have to.


I'd argue that adding complexity from the get-go to ensure that all users have a pleasant experience is better than simplicity at the expense of some percentage of users.

I think it's important for web devs to spend more than two seconds thinking about whether the complexity is necessary from the get-go, though.


When building out a new app or site, which means a percentage of zero users is zero.



Have you no sense of craftsmanship?


It’s great to see practical examples that push us to consider what the platform already offers before adding more layers of complexity.


| self hosting costs you between 30 and 120 minutes per month

Can we honestly say that cloud services taking a half hour to two hours a month of someone's time on average is completely unheard of?


I handle our company's RDS instances, and over the last 8 years I've probably spent closer to 2 hours a year than 2 hours a month.

It's definitely expensive, but it's not time-consuming.


Of course. But people also have high uptime servers with long-running processes they barely touch.


Very much depends on what you're doing in the cloud, how many services you are using, and how frequently those services and your app need updates.


I got the right answer, but it was so easy that I doubted I had done it right.

Which I understand is my issue to work on, but if I were interviewing, I'd ask candidates to verbalize or write out their thought process to get a sense of who is overthinking or doubting themselves.


> I went in with doubt I had done it right.

And if in your doubt you decided to run it through the interpreter to get the "real" answer, whoops, you're rejected.


That's cheating (even if it just assures you that your answer is correct)


Is it? The page implies it's allowed, but they want people who think running it is "more of a hassle".


Oh right, it seems to be allowed.

I don't know then. I can open up a terminal with Python and paste it in really fast, faster than running it in my head.


That doubt is valid. Anyone reading this blog post (or in an interview, given the prevalence of trick interview questions) would know there must be some kind of trick. So, after getting the answer without finding a trick, it would be totally reasonable to conclude you must have missed something. In this case, it turns out the trick was something that was INTENDED for you to miss if you solved the problem in your head. At the end of the day, the knowledge that "I may have missed something" is just part of day to day life as an engineer. You have to make your best effort and not get paralyzed by the possibility of failure. If you did it right, had a gut feeling that something was amiss, but submitted the right answer without too much hemming and hawing, I expect that's typical for a qualified engineer.


I'm a fan of anything that allows me to build with javascript that doesn't require a build step.

Modern HTML/CSS with Web Components and JSDoc is underrated. Not for everyone but should be more in the running for a modern frontend stack than it is.


On the one hand I can see the appeal of not having a build step. On the other, given how many different parts of the web dev pipeline require one, it seems very tricky to get all of your dependencies to be build-step-free. And with things like HMR the cost of a build step is much ameliorated.


I haven't run into any steps that require one; there are always alternatives.

Do you have anything specific in mind?


Anything that uses JSX syntax, for instance.

Any kind of downleveling, though that's less important these days; most users only need polyfills, and new syntax features like `using` are not widely used.

Minification and bundling for the web are still somewhat necessary. ESM is still tricky to use without assistance.

None of these are necessary. But if you use any of them you've already committed to having a build step, so adding in a typescript-erasure step isn't much extra work.


If there is one thing I don't miss when using Web Components, it's JSX. lit-html is much, much better.


It's such a lovely and simple stack.

No Lit Element or Lit or whatever it's branded now, no framework, just vanilla web components, lit-html in a render() method, class properties for reactivity, JSDoc for opt-in typing, using it where it makes sense but not junking up the code base where it's not needed...

No build step, no bundles, most things stay in light dom, so just normal CSS, no source maps, transpiling or wasted hours with framework version churn...

Such a wonderful and relaxing way to do modern web development.
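
For anyone curious what that looks like, here's a minimal sketch of the stack described above; the CDN URL and the component name are my own placeholders, not something from the comment:

    // Vanilla web component + lit-html in a render() method + JSDoc typing,
    // loaded as an ES module with no build step (URL is illustrative).
    import { html, render as litRender } from 'https://unpkg.com/lit-html?module';

    class UserCard extends HTMLElement {
      /** @type {string} */
      #name = 'world';

      /** @param {string} value */
      set name(value) {
        this.#name = value;
        this.render();
      }

      connectedCallback() {
        this.render();
      }

      render() {
        // Render into the light DOM so normal page CSS applies.
        litRender(html`<p>Hello, ${this.#name}!</p>`, this);
      }
    }

    customElements.define('user-card', UserCard);

Drop <user-card></user-card> into the page and it just works, no bundler involved.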


I love it. I've had a hard time convincing clients it's the best way to go but any side projects recently and going forward will always start with this frontend stack and no more until fully necessary.


This discussion made me happy to see more people enjoying the stack available in the browser. I think over time, what devs enjoy using is what becomes mainstream, React was the same fresh breeze in the past.


I recently used Preact and HTM for a small side project, for the JSX-like syntax without a build step.
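
A rough sketch of that setup, assuming htm's Preact standalone bundle served from a CDN (the URL is illustrative, not from the comment):

    // JSX-like syntax with no build step: htm tagged templates + Preact,
    // loaded as a single ES module.
    import { html, render } from 'https://esm.sh/htm/preact/standalone';

    function Greeting({ name }) {
      // Looks like JSX, but it's a plain tagged template literal,
      // so nothing needs to be compiled.
      return html`<h1>Hello, ${name}!</h1>`;
    }

    render(html`<${Greeting} name="world" />`, document.body);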


I have not written a line of JavaScript that got shipped as-is in probably a decade. It always goes through Vite or Webpack. So the benefit of JS without a build step is of no benefit to me.


Dare to dream and be bold.

Seriously, start a project and use only the standards. You'll be surprised how good the experience can be.


Webcomponents are a pain in the ass to make, though. That is, sufficiently complex ones. I wish there was an easier way.


I've built Solarite, a library that's made vanilla web components a lot more productive IMHO. It allows minimal DOM updates when the data changes. And other nice features like nested styles and passing constructor arguments to sub-components via attributes.

https://github.com/Vorticode/solarite


It's ok now, at least for me. There are still challenges around theming and styling because of styling boundaries (which makes Web Components powerful, but still). A part of it is about tooling, which can be easier to improve.

Try my tiny web components lib if you want to keep JSX but not the rest of React: https://github.com/webjsx/magic-loop


I find Web Components aren't as much of a pain to write if you ignore the Shadow DOM. You don't need the Shadow DOM, it is optional. I don't think we are doing ourselves too many favors in how many Web Component tutorials start with or barrel straight into the Shadow DOM as if it was required.


They could have better ergonomics, and I hope a successor that has them comes out, but they're really not that bad.


web components need 2 things to be great without external libraries (like lit-html):

- signals, which is currently Stage 1 https://github.com/tc39/proposal-signals

- And this proposal: https://github.com/WICG/webcomponents/issues/1069 which is basically lit-html in the browser
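
As a rough sketch of where that combination could go (my own illustration using the reference polyfill for the Stage 1 proposal; the CDN URL is a placeholder, and the template half has no shipped API yet):

    // Proposed signals API (Stage 1), via the reference polyfill.
    import { Signal } from 'https://esm.sh/signal-polyfill';

    const count = new Signal.State(0);
    const label = new Signal.Computed(() => `Clicked ${count.get()} times`);

    count.set(count.get() + 1);
    console.log(label.get()); // "Clicked 1 times"

    // WICG/webcomponents#1069 would add a built-in lit-html-style template
    // system; the idea is that a component's template would re-render
    // automatically when signals like `label` change.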


It's a shame Surplus (Adam Haile's, not my succession of it) isn't cited nor is he mentioned, given that at least two of the listed frameworks were heavily and directly inspired by his work. S.js is probably one of the most incredible JavaScript libraries I've used that should be the reference for a signal API, in my opinion.


Svelte has a pretty nice support for this via https://svelte.dev/docs/svelte/custom-elements

It's not a no-build option though.


Agreed on native HTML+CSS+JSDoc. An advantage in my use-cases is that built-in browser dev tools become fluid to use. View a network request, click to the initiator directly in your source code, add breakpoints and step without getting thrown into library internals, edit code and data in memory to verify assumptions & fixes, etc.

Especially helpful as applications become larger and a debugger becomes necessary to efficiently track down and fix problems.
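
For reference, the JSDoc part of that stack is just comments that the editor's TypeScript language service already understands; a tiny sketch (hypothetical function, my own illustration):

    // @ts-check

    /**
     * @param {number} amount  Price in cents.
     * @param {number} rate    Tax rate, e.g. 0.08 for 8%.
     * @returns {number}       Total in cents, rounded.
     */
    function withTax(amount, rate) {
      return Math.round(amount * (1 + rate));
    }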


This. Or use ts-blank-space if you prefer TypeScript over JSDoc. That's what we do in https://mastrojs.github.io


TS is worth the build step.


JSDoc is TypeScript.


It is TypeScript in the same way my rear end is the Grand Canyon: they are somewhat isomorphic but one is much less pleasant to look at.


I was already doing that in 2010, with the JSDoc tooling in Eclipse and Netbeans back then.

However I don't get to dictate fashion in developer stacks.


> Modern HTML/CSS with Web Components and JSDoc is underrated.

I've been a front end developer for 25 years. This is also my opinion.


You don't need a build step anymore with TypeScript since Node 24.


I'm referring to client-side javascript.


Why? The half a second for the HMR is taking up too much of your day?


No, because layers of abstraction come at a cost and we have created a temple to the clouds piled with abstractions. Any option to simplify processes and remove abstractions should be taken or at least strongly considered.

Code written for a web browser 30 years ago will still run in a web browser today. But what guarantee does a build step have that the toolchain will still even exist 30 years from now?

And because modern HTML/CSS is powerful and improving at a rapid clip. I don't want to be stuck on non-standard frameworks when the rest of the world moves on to better and better standards.


> Code written for a web browser 30 years ago will still run in a web browser today.

Will it? My browser doesn't have document.layers (Netscape). It seems to still have document.all (MSIE), but I'm not sure it's 100% compatible with all the shenanigans from the pre-DOM times, as it's now mapped to DOM elements.


The Space Jam website from 1996 still renders perfectly almost 30 years later.

https://www.spacejam.com/1996/

Those (document.layers and document.all) were both vendor-specific; neither was part of a W3C standard. I don't recommend ever writing vendor-specific code.

The W3C and web standards have generally won, so it's easier than ever to write to the standard.


Having all your code go through a multi-step process that spits out 30 different files makes it impossible to know what’s really happening, which I’m uncomfortable with.


Came here to write this exact sentiment. Not everything needs a massive build pipeline.

