I used to work with a guy who double-checked the machine code generated by the assembler (he was an old mainframe programmer, and allegedly used to work on a linker for the ES EVM).
Clearly, almost nobody does that anymore. So according to Karpathy's definition, we have all been vibe coding for quite some time now. (An aside: if AIs were any good, they would just skip human languages entirely and go straight to binary.)
So I think the "vibe" in vibe coding refers to inputting a fuzzy/uncertain/unclear/incomplete specification to a computer, where the computer will fill in details using an algorithm which in itself is incomprehensible for humans (so they can only "feel the vibes").
Personally, I don't find the fuzziness of the specification to be the problem; on some level it might be desirable, having a programming tool like that. But the unpredictability of the output is IMHO a real issue.
> Clearly, almost nobody does that anymore. So according to Karpathy's definition, we have all been vibe coding for quite some time now.
Because the compiler is deterministic, and the cost of getting something better (by hand-tuning for the processor's capabilities) is higher than just going with the compiled version, which has something like a 99.9% chance of being correct (compiler bugs are rare). It's not vibe coding. It's knowing your axioms are correct, when viewing the programming language as a proof system (which it is). You start from some low-level semantics, upon which you build higher-level semantics, which form your business rules.
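That determinism is something you can actually check. As a rough sketch in Python (standing in for any compiler): compiling the same source twice yields byte-identical output, which is exactly the property an LLM does not give you.

```python
# Sketch: compile the same source twice and hash the generated bytecode.
# Identical input yields byte-identical output every time -- this is why
# trusting a compiler is not a matter of "vibes".
import hashlib

source = "def area(w, h):\n    return w * h\n"

def bytecode_digest(src: str) -> str:
    code = compile(src, "<src>", "exec")
    return hashlib.sha256(code.co_code).hexdigest()

# Same input, same output, every time.
assert bytecode_digest(source) == bytecode_digest(source)
print("deterministic")
```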
So giving the LLM fuzzy specs is hoping that the stars will align and your ancestors' spirits will awaken to hear your prayers for a sensible output.
I am not sure what you're saying that I am not. Read the vibe coding definition given by the grandparent (attributed to Karpathy): compilers already satisfy that definition, since we don't read the final code being produced; we just trust it blindly. That's my problem with that definition, and it's why I claim the issue is really the predictability/understandability of the output given the input.
And I agree with the logic part too. I think we could have humans input a fuzzy specification in some formal logic that allows for different interpretations, like fuzzy logic, various modal logics, or a combination of them. But then you have a defined and understandable set of rules for resolving what would be contradictions in classical logic.
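A minimal sketch of what such a defined resolution rule set could look like, using the standard min/max fuzzy connectives (the particular predicates and degrees are made up for illustration):

```python
# Fuzzy logic toy: truth is a degree in [0, 1], and the rules for combining
# degrees are fixed and inspectable -- the opposite of an LLM's opaque
# resolution of a fuzzy spec.

def f_and(a, b):
    return min(a, b)   # Goedel t-norm: conjunction as minimum

def f_or(a, b):
    return max(a, b)   # dual t-conorm: disjunction as maximum

def f_not(a):
    return 1.0 - a     # standard fuzzy negation

# Fuzzy spec: "the solution should be fast AND (cheap OR simple)".
fast, cheap, simple = 0.8, 0.3, 0.9
score = f_and(fast, f_or(cheap, simple))
print(score)  # 0.8
```

Two classically contradictory requirements don't blow the system up here; they just yield an intermediate degree, by rules anyone can read.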
The problem with LLMs is that they use a completely unknown logic system, one that can shift wildly from version to version and is not even guaranteed to be consistent. They're the opposite of where software engineering, as an engineering discipline, should be going: toward formalizing the production process further, so it can be more rigorously studied and more easily reproduced.
I think what SW engineering needs is more metaprogramming. (What we typically call metaprogramming, i.e. macros, is just the tip of the iceberg.) What I mean is making more programs that study, modify and transform the resulting programs in a certain way. Most of our commonly used tools are woefully incapable of metaprogramming. But LLMs are decent at it; that's why they're so interesting.
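As a small illustration of what "programs that study programs" can look like (a toy sketch, not a real tool), Python's ast module lets one program enumerate another program's definitions and calls:

```python
# Toy static analysis: parse another program and report every function it
# defines and every name it calls.
import ast

source = """
def area(w, h):
    return w * h

def report(w, h):
    print(area(w, h))
"""

tree = ast.parse(source)
defs = [n.name for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]
calls = {n.func.id for n in ast.walk(tree)
         if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)}
print(defs)           # ['area', 'report']
print(sorted(calls))  # ['area', 'print']
```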
For example, we don't publish version modifications to language runtimes as programs. We could, for example, produce a program that automatically transforms your code to a new version of the programming language. But we don't do that, mostly because we have only just started to formalize mathematics, and it will take some time until we completely formalize all the legacy computer systems we have. Then we will be able to prove, for instance, that a certain program can transform a program from one runtime to another at a certain maximal extra execution cost.
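A hypothetical miniature of such a "version modification shipped as a program" (the division rule is chosen purely for illustration, echoing Python 2's integer division): an AST transform that mechanically rewrites code for a changed runtime semantic.

```python
# Hypothetical migration-as-a-program: rewrite every `a / b` into `a // b`,
# the kind of mechanical transform a runtime upgrade could ship as code
# instead of as a changelog entry.
import ast

class MigrateDiv(ast.NodeTransformer):
    def visit_BinOp(self, node):
        self.generic_visit(node)        # transform nested expressions first
        if isinstance(node.op, ast.Div):
            node.op = ast.FloorDiv()    # true division -> floor division
        return node

old = "result = 7 / 2"
new_tree = ast.fix_missing_locations(MigrateDiv().visit(ast.parse(old)))
print(ast.unparse(new_tree))  # result = 7 // 2
```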
> LLMs is they are using a completely unknown logic system, which can shift wildly from version to version, and is not even guaranteed to be consistent.
It's not unknown. And it's not a logic system. Roughly, it takes your prompt, adds it to the system prompt, and runs it through a generative program that pattern-matches it to an output. It's like saying your MP3 player (plus your mp3 files) is a logic system. It's just data and its translator. And having enough storage to hold all the sounds in the world just means you have all the sounds in the world, not that you're automatically a composer.
And consistency is the basic condition for formalism. You don't change your axioms or your rules, so that everyone can understand that whatever you said was what you intended to say.
> What I mean is making more programs that study, modify and transform the resulting programs in a certain way.
That certain way is usually fully defined and specced out (again, formalism). It's not programming roulette, even if the choices are mostly common patterns. Even casinos don't want their software to be unpredictable.
> Most of our commonly used tools are woefully incapable of metaprogramming.
Because no one wants it. Lisp has been around for ages and only macros have seen extensive use, mostly as a way to cut down on typing. Almost no one has the need to alter the basic foundations of the language to implement a new system (CLOS is kind of the exception there). It's a lot of work to stay consistent, and if the existing system is good enough, you just go with it.
> we don't publish version modifications to language runtimes as programs
Because patching binaries is hazardous, and loading programs at runtime (plugins) is nerfed on purpose. Not because we can't. It's a very big can of worms (we've just seen the CrowdStrike incident when you're not careful about it).
Compiler Explorer (godbolt.org) is quite popular. It's not uncommon for anyone working on performance-sensitive code to give the compiler output a quick sanity check.
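The same quick sanity check exists one level up, too; in Python, for instance, the dis module plays the godbolt role for bytecode:

```python
# Pythonic analogue of a Compiler Explorer sanity check: disassemble what
# the compiler actually emitted for a function.
import dis

def double(x):
    return x * 2

dis.dis(double)  # bytecode listing: load x, load 2, multiply, return
```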