
Those things could likely just run fine on the GPU, though.

They could run fine on the CPU too. But these are mobile devices, so battery usage is another significant metric. Dedicated hardware is more energy efficient than general-purpose hardware, and the GPU in particular is a power hog.

Exactly. It's the same thing as video or audio encoding and decoding. Sure, the CPU could do it, or you could potentially use the GPU, but having actual hardware encoders and decoders for the most common codecs saves a lot of energy.

Not if GPU RAM is a limiter. Which it is for most models.

Unified memory is a serious architectural improvement.

How many GPUs does it take to match the RAM, and make up for the additional communication overhead, of a RAM-maxed Mac? Whatever the answer, it won’t fit in a MacBook Pro’s physical and energy envelopes. Or that of an all-in-one like the Studio.


One way to say it that is understandable in both modern English and Swedish: "She shines with beauty" / "Hon skiner av skönhet".

The LLM has no wants.


If the reason the LLM retroactively invents for its previous mistakes is still useful for getting the LLM not to make that kind of mistake again, then the distinction you're driving at doesn't matter.


Well, do you want something useful or something true?

The word "why" is used to get something true.


Same with people: no matter what info you give a person, you can't be sure they'll follow it the same way every time.


So is the game of telephone, as long as people stop whispering and try not to make stuff up.


I may have bad news for you on how compilers typically work.


The difference is that what most languages compile to is much, much more stable than what is produced by running a spec through an LLM.

A language or a library might change the implementation of a sorting algorithm once in a few years. An LLM is likely to do it every time you regenerate the code.

It's not just a matter of non-determinism either, but of how chaotic LLMs are. Compilers can produce different machine code from slightly different inputs, but it's nothing compared to how wildly different LLM output is for very small differences in input. Adding a single word to your spec file can leave the final code unrecognizable, far beyond what adding a new line to a C file would do.
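
To make this concrete, here is a minimal sketch that measures how similar two texts are (the "compiler output" and "LLM output" strings below are invented placeholders, not real data): a compiler given a one-line source change scores near 1.0, while two LLM generations from near-identical specs can score far lower.

    # Minimal sketch: quantify how different two outputs are.
    # All sample strings are invented placeholders, not real data.
    import difflib

    def similarity(a: str, b: str) -> float:
        """Fraction of matching content between two texts (1.0 = identical)."""
        return difflib.SequenceMatcher(None, a, b).ratio()

    # Compiler output before and after a one-line source change: barely moves.
    asm_before = "mov eax, 1\nadd eax, ebx\nret\n"
    asm_after = "mov eax, 1\nadd eax, ebx\nnop\nret\n"
    print(similarity(asm_before, asm_after))  # high, close to 1.0

    # Two LLM generations from near-identical specs: often barely overlap.
    gen_1 = "def sort_items(xs):\n    return sorted(xs)\n"
    gen_2 = "def sort_items(items):\n    items.sort()\n    return items\n"
    print(similarity(gen_1, gen_2))  # much lower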

If you are only checking in the spec, which is the logical conclusion of "this is the new high-level language", then every time you regenerate your code, all of the thousands upon thousands of unspecified implementation details will change.

Oops, I didn't think I needed to specify what's going to happen when a user tries to do C before A but after B. Yesterday it didn't seem to do anything, but today it resets their account balance to $0. But after the deployment 5 minutes ago it seems to be fixed.

Sometimes users dragging a box across the screen will see the box disappear behind other boxes. I can’t reproduce it though.

I changed one word in my spec and now there’s an extra 500k LOC to implement a hidden asteroids game on the home page that uses 100% of every visitor’s CPU.

This kind of stuff happens now, but the scale at which it will happen if you actually use LLMs as a high-level language is unimaginable. The chaos of all the little unspecified implementation details constantly shifting is just insane to contemplate as a user or a maintainer.


> A language or a library might change the implementation of a sorting algorithm once in a few years.

I think GP was referring to heuristics and PGO (profile-guided optimization).


That makes sense, but I was addressing more than just potential compiler non-determinism.


Deterministic compilation, aka reproducible builds, has been a basic software engineering concept and goal for 40+ years. Perhaps you could provide some examples of compilers that produce non-deterministic output along with your bad news.


JIT compilers.


The compiler artifact is still deterministic. They're clearly not referring to runtime behavior that is input-dependent.


The output of the compiler isn't deterministic. It depends a lot on timing and how that affects the profiles.


Account created 11 months ago. They're probably just some slop artist with too much confidence. They probably don't even know what a compiler is.


He is a software engineer with a comp-sci master's degree and about 15 years of industry experience, primarily in C++. He's currently employed at a company whose name you most likely know.


Compilers aim to be fully deterministic. The biggest source of nondeterminism when building software isn't the compiler itself, but build systems invoking the compiler nondeterministically (because iterating the files in a directory isn't necessarily deterministic across different machines).
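
A minimal sketch of that failure mode and the standard fix (the directory path and .c filter are hypothetical): feeding the raw os.listdir() output to the compiler bakes filesystem order into the build, while sorting makes the input order machine-independent.

    # Toy build-script fragment; the path and file filter are hypothetical.
    import os

    def source_files(src_dir: str) -> list[str]:
        # os.listdir() order is filesystem-dependent; sorted() makes the
        # list of inputs (and thus link order) deterministic across machines.
        return sorted(f for f in os.listdir(src_dir) if f.endswith(".c"))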


If you are referring to timestamps, build IDs, compile-time environments, hardwired optimization heuristics, or even compiler bugs: those are not the same kind of non-determinism as in LLMs. The former can be mitigated by the long-standing practices of reproducible builds, while the latter is intrinsic to LLMs if they are meant to be more useful than a voice recorder.


You'll need to share with the class because compilers are pretty damn deterministic.


Only mostly, and only relatively recently. The first compiler is generally attributed to Grace Hopper in 1952. 2013 is when Debian kicked off their program to do bit-for-bit reproducible builds. Thirteen years later, NixOS can maybe produce bit-for-bit identical builds if you treat her really well. We don't look into the details because it just works and we trust it to work, but because computers are all distributed systems these days, getting a bit-for-bit identical build out of the compiler is actually freaking hard. We just trust compilers to work well enough (and they do), but they've had three fourths of a century to get there.
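
For what "bit-for-bit" means in practice, here is a rough sketch of the check such projects run (main.c and the gcc invocation are stand-ins for a real build pipeline): build the same source twice and compare hashes of the artifacts.

    # Rough sketch of a bit-for-bit reproducibility check; the source file
    # and compiler invocation are illustrative stand-ins for a real build.
    import hashlib
    import subprocess

    def sha256(path: str) -> str:
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    # Compile the same source twice into separate artifacts.
    subprocess.run(["gcc", "-o", "app1", "main.c"], check=True)
    subprocess.run(["gcc", "-o", "app2", "main.c"], check=True)

    # Reproducible means the two binaries are byte-identical.
    print("bit-for-bit identical:", sha256("app1") == sha256("app2"))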


Not if they are dynamic compilers.

Two runs of the same programme can produce different machine code from the JIT compiler, unless everything in the universe that happened in the first execution run gets replicated during the second execution.


That’s 100% correct, but importantly JIT compilers are built with the goal of outputting semantically equivalent instructions.

And the vast, vast majority of the time, adding a new line to the source code will not result in an unrecognizably different output.

With an LLM, changing one word can, and frequently does, cause the output to be 100% different. Literally no lines are the same in a diff. That's such a vastly different scope of problem that comparing them is pointless.


No, but it will certainly result in a completely different sequence of machine code instructions, or not, depending on what that line actually does, what dynamic types it uses, how often it actually gets executed, the existence of vector units, and so forth.

Likewise, as long as the agent delivers the same outcome, e.g. an email is sent with a specific subject and body, the observed behaviour remains the same.


The reason this works for compilers is because machine code is so low level that it’s possible for compiler authors to easily prove semantic equivalence between different sets of instructions.

That is not true for an English-language prompt like "send an email with this specific subject and body". There are so many implicit decisions that have to be made in that statement, and they will be made differently every time you regenerate the code.

English language specs will always have this ambiguity.
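
To make a few of those hidden decisions visible, here is a hedged sketch (the function and every parameter are hypothetical, not any real API): each default below is a choice the English spec never pins down.

    # Hypothetical signature: each default is a decision that "send an email
    # with this specific subject and body" leaves open, so an LLM
    # regenerating the code is free to answer it differently each time.
    def send_email(
        to: str,
        subject: str,
        body: str,
        *,
        content_type: str = "text/plain",  # or "text/html"? the spec doesn't say
        charset: str = "utf-8",            # the encoding is an implicit choice
        retries: int = 0,                  # retry on a transient failure?
        timeout_s: float = 30.0,           # how long before giving up?
        log_failures: bool = True,         # or fail silently?
    ) -> None:
        ...  # the delivery mechanism (SMTP? a web API?) is another unstated choice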


Do these compilers sometimes give correct instructions and sometimes incorrect instructions for the same higher level code, and it's considered an intrinsic part of the compiler that you just have to deal with? Because otherwise this argument is bunk.


Possibly, hence the discussions regarding security in JavaScript runtimes and JIT, some of which go as far as completely disabling JIT execution.

https://microsoftedge.github.io/edgevr/posts/Super-Duper-Sec...

Also, the exact sequence of generated machine instructions depends on various factors; the same source can have various outputs, depending on code execution, the hardware present, and heuristics.


Sure, but you'd still expect `func add(a, b) { return a + b; }` to actually produce a + b, in whatever way it finds best? And if it doesn't, you can reproduce the error and file a bug? And then someone can fix that bug?


They in fact do have bugs, yes, inescapably so (no one provides formal proofs for production-level compilers).


Ok, but we treat them as bugs that we can reproduce and assume that they are solvable? We don't just assume that it's intrinsic to the compiler that it must have bugs, and that they will occur in random, non-deterministic ways?


Compilers are about 10 orders of magnitude more deterministic than LLMs, if not more.


Currently it's about closing that gap.

And 10 orders is an optimistic value: right now, LLMs are random, with some probability of solving the real problem (and I mean real systems, not a PoC landing page or a CRUD app with 2-3 models). Every month they are getting visibly better, of course.

The "old" world may output different assembly or bytecode every time, but running it will produce the same outputs, maybe slower, maybe faster. LLMs, for the same prompt, can generate a working solution, a non-working one, or one that fakes it.

As always: what a time to be alive!


Reproducible builds are a thing (and are used in many, many places).


I love the 'I may have' :)


Made some people angry at me :)


Elaborate please


Just show that the concept either is not where it is claimed to be, or that it is incoherent.


I say this as someone who believes in a higher being: we have played this game before. The ethereal thing can just move to someplace science can't get to, so it is not really a valid argument for existence.


What argument?


The burden of proof lies on whoever wants to convince someone else of something. In this case, that's the guy who wants to convince people it likely is not real.


The original poster stated

> "The human brain is mutable, the human "soul" is a concept thats not proven yet and likely isn't real."

The soul is "a concept that's not proven yet." It's unproven because there's no convincing evidence for the proposition. By definition, in the absence of convincing evidence, the null hypothesis of any proposition is presumed to be more likely. The presumed likelihood of the null hypothesis is not a positive assertion which creates a burden of proof. It's the presumed default state of all possible propositions - even those yet to be imagined.

In other words, pointing out 'absence of evidence' is not asserting 'evidence of absence'. See: Russell's Teapot and Sagan's Dragon (https://en.wikipedia.org/wiki/Russell%27s_teapot)


It is not the case that x is proven due to having y evidence pointing in its direction. That's not how any of this works.


Would they normally do witchcraft if they did not have those rules?


We all do witchcraft on a daily basis. I am manipulating light on a sub-microscopic scale to beam words into your retina from across the world. They are right to be distrustful of our ways.


TikTok, sadly, is the best hypnotic spell ever made.


The US fucked it a couple of days ago, maybe it isn't any more.


I suspect they'll just replace the old recommendations with new ones.


They did, but they fucked it so hard it might actually lose users. They made it so dang obvious. They show you an error message if you send the word Epstein to someone in a private message. Even China's apps know they need to silently delete the censored message to avoid alerting the user.

I heard people are switching to an Australian clone app called Upscrolled? The same way people switched to RedNote for a while until TikTok was unbanned the first time.


Wait, is it witchcraft to use a machine created by witchcraft?

Forever?


At the very least, it's acceptance and support of witchcraft, which has at times been plenty to justify execution.

