They could run fine on the CPU too. But these are mobile devices, so battery usage is another significant metric. Dedicated hardware is more energy efficient than general-purpose hardware, and the GPU in particular is a power hog.
Exactly. It's the same as video or audio encoding and decoding. Sure, the CPU could do it, or potentially the GPU, but having dedicated hardware encoders and decoders for the most common codecs saves a lot of energy.
Not if GPU RAM is a limiter. Which it is for most models.
Unified memory is a serious architectural improvement.
How many GPUs does it take to match the RAM, and make up for the additional communication overhead, of a RAM-maxed Mac? Whatever the answer, it won’t fit in a MacBook Pro’s physical and energy envelopes. Or that of an all-in-one like the Studio.
If the reason the LLM retroactively invents for its previous mistakes is still useful for getting the LLM not to make that kind of mistake again, then the distinction you're driving at doesn't matter.
The difference is that what most languages compile to is much, much more stable than what is produced by running a spec through an LLM.
A language or a library might change the implementation of a sorting algorithm once in a few years. An LLM is likely to do it every time you regenerate the code.
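That stability is exactly the contract a library leans on when it swaps algorithms: the observable output is identical even though the implementation is completely different. A minimal Python sketch (the bubble sort here is a stand-in for "the old implementation"):

```python
def bubble_sort(xs):
    """A deliberately naive sort, standing in for an older implementation."""
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

data = [5, 3, 8, 1, 3]
# Different algorithm, same observable contract: a sorted list.
assert bubble_sort(data) == sorted(data)
```

A caller never notices when the library replaces one with the other, because the contract is checkable. Regenerated LLM code gives you no such guarantee.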
It’s not just a matter of non-determinism either, but about how chaotic LLMs are. Compilers can produce different machine code with slightly different inputs, but it’s nothing compared to how wildly different LLM output is with very small differences in input. Adding a single word to your spec file can cause the final code to be far more unrecognizably different than adding a new line to a C file.
If you are only checking in the spec, which is the logical conclusion of "this is the new high-level language", then every time you regenerate your code, all of the thousands upon thousands of unspecified implementation details will change.
Oops, I didn't think I needed to specify what is going to happen when a user tries to do C before A but after B. Yesterday it didn't seem to do anything, but today it resets their account balance to $0. But after the deployment 5 minutes ago it seems to be fixed.
Sometimes users dragging a box across the screen will see the box disappear behind other boxes. I can’t reproduce it though.
I changed one word in my spec and now there’s an extra 500k LOC to implement a hidden asteroids game on the home page that uses 100% of every visitor’s CPU.
This kind of stuff happens now, but the scale at which it will happen if you actually use LLMs as a high-level language is unimaginable. The chaos of all the little unspecified implementation details constantly shifting is just insane to contemplate as a user or a maintainer.
Deterministic compilation, aka reproducible builds, has been a basic software engineering concept and goal for 40+ years. Perhaps you could provide some examples of compilers that produce non-deterministic output along with your bad news.
He is a software engineer with a comp sci master's degree and about 15 years of industry experience, primarily in C++. Currently employed at a company whose name you most likely know.
Compilers aim to be fully deterministic. The biggest source of nondeterminism when building software isn't the compiler itself, but build systems invoking the compiler nondeterministically (because iterating the files in a directory isn't necessarily deterministic across different machines).
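That directory-iteration pitfall is easy to sketch. In this hypothetical Python toy, the "build" just hashes source files together; `os.listdir()` returns names in filesystem-dependent order, so sorting before use is the standard fix (the function name and toy build step are invented for illustration):

```python
import hashlib
import os
import tempfile

def build_digest(src_dir, deterministic=True):
    """Hash all sources together, standing in for a build step."""
    names = os.listdir(src_dir)  # order is filesystem-dependent!
    if deterministic:
        names = sorted(names)    # pin a stable order before "compiling"
    h = hashlib.sha256()
    for name in names:
        with open(os.path.join(src_dir, name), "rb") as f:
            h.update(f.read())
    return h.hexdigest()

with tempfile.TemporaryDirectory() as d:
    for name in ("b.c", "a.c"):
        with open(os.path.join(d, name), "w") as f:
            f.write(name)
    # With sorting, the digest is stable run after run and machine after machine.
    assert build_digest(d) == build_digest(d)
```

Without the `sorted()`, the compiler can be perfectly deterministic and the build still won't reproduce across machines.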
If you are referring to timestamps, buildids, comptime environments, hardwired heuristics for optimization, or even bugs in compilers -- those are not the same kind of non-determinism as in LLMs. The former ones can be mitigated by long-standing practices of reproducible builds, while the latter is intrinsic to LLMs if they are meant to be more useful than a voice recorder.
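The timestamp case in particular has a well-known mitigation: the `SOURCE_DATE_EPOCH` environment variable from the reproducible-builds convention. A toy sketch, where `build_artifact` is an invented stand-in for a real build step and only the variable name is the actual convention:

```python
import os
import time

def build_artifact(source: bytes) -> bytes:
    """Stamp an artifact, honoring SOURCE_DATE_EPOCH if set."""
    epoch = os.environ.get("SOURCE_DATE_EPOCH")
    # Without the variable, every run embeds a fresh wall-clock time
    # and the output differs bit-for-bit between builds.
    stamp = int(epoch) if epoch is not None else int(time.time())
    return b"timestamp=%d\n" % stamp + source

os.environ["SOURCE_DATE_EPOCH"] = "0"
a = build_artifact(b"int main(void){return 0;}\n")
b = build_artifact(b"int main(void){return 0;}\n")
assert a == b  # bit-for-bit identical once the timestamp is pinned
```

There is no analogous environment variable that pins down which of the countless valid implementations an LLM will emit for a given spec.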
Only mostly, and only relatively recently. The first compiler is generally attributed to Grace Hopper in 1952. 2013 is when Debian kicked off its program to do bit-for-bit reproducible builds. Thirteen years later, NixOS can maybe produce bit-for-bit identical builds if you treat it really well. We don't look into the details because it just works and we trust it to work, but because computers are all distributed systems these days, getting a bit-for-bit identical build out of a compiler is actually freaking hard. We just trust them to work well enough (and they do), but they've had three quarters of a century to get there.
Two runs of the same program can produce different machine code from a JIT compiler, unless everything in the universe that happened during the first run is replicated during the second.
That’s 100% correct, but importantly JIT compilers are built with the goal of outputting semantically equivalent instructions.
And the vast, vast majority of the time, adding a new line to the source code will not result in an unrecognizably different output.
With an LLM, changing one word can, and frequently does, cause the output to be 100% different: literally no lines are the same in a diff. That's such a vastly different scope of problem that comparing them is pointless.
No, but it will certainly result in a completely different sequence of machine code instructions, or not, depending on what that line actually does, what dynamic types it uses, how often it actually gets executed, the existence of vector units, and so forth.
Likewise, as long as the agent delivers the same outcome, e.g. an email is sent with a specific subject and body, the observed behaviour remains.
The reason this works for compilers is because machine code is so low level that it’s possible for compiler authors to easily prove semantic equivalence between different sets of instructions.
That is not true for an English-language prompt like "send an email with this specific subject and body". There are so many implicit decisions that have to be made in that statement, and they will be different every time you regenerate the code.
English language specs will always have this ambiguity.
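A toy illustration of that asymmetry: at the machine level, a compiler can justify swapping one lowering for another because equivalence is mechanically checkable, here by brute force over a bounded domain (the function names are invented for this sketch):

```python
# Two candidate lowerings of "double x": an add and a shift.
def double_add(x):
    return x + x

def double_shift(x):
    return x << 1

# The compiler's license to pick either one: they provably agree
# on the entire (bounded) input domain.
assert all(double_add(x) == double_shift(x) for x in range(1 << 16))
```

There is no equivalent procedure that checks whether two regenerations of "send an email with this subject and body" made the same implicit decisions about retries, encoding, error handling, and so on.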
Do these compilers sometimes give correct instructions and sometimes incorrect instructions for the same higher level code, and it's considered an intrinsic part of the compiler that you just have to deal with? Because otherwise this argument is bunk.
Also, the exact sequence of generated machine instructions depends on various factors; the same source can have various outputs depending on execution profile, the hardware present, and heuristics.
Sure, but surely you expect a `func add(a, b) { return a + b; }` to actually produce a + b, in whatever way it finds best? And if it doesn't, you can reproduce the error, and file a bug? And then someone can fix that bug?
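For what it's worth, that determinism is easy to observe directly. Using CPython's bytecode compiler as a stand-in for "a compiler", compiling the same `add` source twice yields byte-identical output, and the result actually computes a + b:

```python
src = "def add(a, b):\n    return a + b\n"

# Same source in, same bytecode out, every time.
c1 = compile(src, "<spec>", "exec")
c2 = compile(src, "<spec>", "exec")
assert c1.co_code == c2.co_code

# And the compiled function does what the source says.
ns = {}
exec(c1, ns)
assert ns["add"](2, 3) == 5
```

If either assertion ever failed, you'd have a reproducible test case to attach to a compiler bug report, which is exactly the workflow being described.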
Ok, but we treat them as bugs that we can reproduce and assume that they are solvable? We don't just assume that it's intrinsic to the compiler that it must have bugs, and that they will occur in random, non-deterministic ways?
And 10 orders is an optimistic value: LLMs today are random, with some probability of solving the real problem (and I mean real systems, not a PoC landing page or a two-or-three-model CRUD). Of course, they are visibly getting better every month.
The "old" world may output different assembly or bytecode every time, but running it will produce the same outputs, maybe slower, maybe faster. An LLM, given the same prompt, can generate a working solution, a non-working one, or one that merely fakes a solution.
I say this as someone who believes in a higher being: we have played this game before. The ethereal thing can just move to someplace science can't get to, so it is not really a valid argument for existence.
The burden of proof lies on whoever wants to convince someone else of something. In this case, that's the guy who wants to convince people it likely is not real.
> "The human brain is mutable, the human "soul" is a concept thats not proven yet and likely isn't real."
The soul is "a concept that's not proven yet." It's unproven because there's no convincing evidence for the proposition. By definition, in the absence of convincing evidence, the null hypothesis of any proposition is presumed to be more likely. The presumed likelihood of the null hypothesis is not a positive assertion which creates a burden of proof. It's the presumed default state of all possible propositions - even those yet to be imagined.
We all do witchcraft on a daily basis. I am manipulating light on a sub-microscopic scale to beam words into your retina from across the world. They are right to be distrustful of our ways.
They did, but they fucked it so hard it might actually lose users. They made it so dang obvious. They show you an error message if you send the word Epstein to someone in a private message. Even China's apps know they need to silently delete the censored message to avoid alerting the user.
I heard people are switching to an Australian clone app called Upscrolled? The same way people switched to rednote for a while until tiktok was unbanned the first time.