Hacker News | estimator7292's comments

Say that again real slow and listen to your words.

The fact that social conventions are arbitrary is wholly irrelevant. Everyone knows, and you are not smart or insightful for pointing it out. Social norms and conventions are arbitrary, evolved constructs. Individually, we follow social norms because that is how you get accepted by, and participate in, society.

Breaking social norms is generally punished, either directly or indirectly, because human society evolved to favor group cohesion; acting counter to the rules signals that you no longer wish to be part of the group.

Please read about social contract theory.


Wholly agree.

I'd also add that it's not about the relevance of the conventions: one's (in)ability to "read the room" and follow them signals to others how well-adapted (or "aligned") a human they are.

Not following conventions may be a result of inability to recognize them, or of outright disrespect and/or unwillingness to cooperate, play by the rules, honor the social contract, etc.


Ask yourself the same question but replace "social media" with "tobacco"

That seems like a bizarre comparison. Is TikTok high in nicotine?

Have you ever tried quitting smoking?

Easy. I've done it five times in the last three years alone.

I don't think this is what quitting means, or was that part of the joke?

That was the joke: it's not easy to quit smoking.

The more I interact with consteval and the whole template metaprogramming and codegen paradigm, the more I think it's completely inappropriate to shovel into stdlib. I don't think this should even be part of the language itself, but something more like a linter on top of the C++ language.

For most of us, it seems, you can get good at C++ or at metaprogramming. But unless you want to make it your entire career, you can't really do both with the same degree of effectiveness.

I really like C++, and I will probably continue using it forever. But really only the very small subset of the language that applies to my chosen field. I'm a "C with classes" kind of guy and templates and constexpr are pretty rare. Hell, half the time I don't even have stdlib on embedded platforms. It's kind of nice, actually.


I am actually glad that more and more of the metaprogramming techniques are built into the language itself, because people are going to try metaprogramming anyways, and their attempts at it are generally less readable without proper compiler support.

Anecdotally, I remember having to review a library written in C++98. It actually worked as promised, but it also did a lot of extremely clever things that were sort of unnecessary if we had just waited for C++11 with type_traits. We got rid of that library later after rewriting all the downstream dependencies.


We find constexpr (and associated templates) essential for when we need to avoid branch penalties. It makes the code so much simpler and cleaner than the alternative. I'm glad the language caters to the needs of everyone, even if any individual person (self included) only uses a little bit of it.

Yeah, I had an opportunity to use it for more or less precisely this case (avoiding a branch) a couple of years ago, and it was a delight to find.

  template<bool condition>
  void method () { .... if constexpr (condition) { extra_code; } .... }
Which then allows method<true> and method<false> without a runtime branch.

The cardinal question: is the benefit of removing that branch worth the increase in i-cache footprint? I think it depends quite a bit... but also, the speed increases IME from doing this kind of thing can result not merely from the branch removal, but from the code duplication itself. Even if the contents of the branch don't directly mention the condition, duplication allows the surrounding code to be separated in the branch predictor, and it's quite common that these conditions will correlate with different branch frequencies in the surrounding code.

Yep, that was my thinking as well. Two things:

1. this is being used in a method that is widely used in both the <true> and <false> contexts, which I believe means that branch prediction would not be great if it was simply the same instruction sequence in both contexts. I could be wrong about that.

2. the major benefit that I saw was not duplicating the much more substantial code in the "..." sections (before and after the constexpr) and thus making maintenance and continued evolution of the code more reliable.


> is the benefit of removing that branch worth the increase in i-cache footprint?

Like everything else in the world, in general, it depends.

That said, among the limited number of times I've tried this, I don't recall a single case where I felt it would be worth it and it turned out to be detrimental.


I work on a codebase where I am slowly detangling all of the tens of thousands of lines of `if constexpr` templates that were written by a guy who doesn't know how a modern CPU works. It's a bad meme with a very narrow field of beneficial applications. People who think a mispredicted branch is costly are never gonna believe the cost of a page table walk caused by an iTLB miss.

Narrow indeed. If the function is small enough for the i-cache pressure to not matter, it's probably going to get inlined and the condition gets optimized out anyway. If it's big enough, then it's unclear, but microbenchmarks will give you misleading results.

It's only reasonable if you ignore literally everything that capitalism has done in the last 50 years.

Prices can only go up because to do otherwise harms shareholder value. None of the COVID inflation prices have come back down even though supply recovered. Prices continue going up.


People spend less of their income on basic needs than they did 50 years ago.

Housing breaks that somewhat, but on average people consume more of it (larger, higher quality spaces), so it isn't really apples to apples.


They're the same strategy with different branch names.

Reballing isn't too difficult. It's mostly about buying all the special tools and supplies. You need a specific template for each BGA pad layout and jars of the right size solder balls.

Some people also just put solder paste on the PCB with a stencil and reflow that into solder bumps that you then solder the chip to. I find that method suspect, but if it works, it works.


Sure, easy answer: it's simply not a generational thing. What you're observing is your own bias. Prior generations (including yours) get scammed just as often.

It seems like some of us treat tokens as the level of fuel in the code machine. When it runs out, you simply go do something else.

What's wild to me is that there's a whole other segment of people that treat tokens as, I dunno, some kind of malicious gatekeeping to the magical program generator. Some kind of endorphin rush of extracting functional code from a naive and poorly formed idea.

To the former group, the gambling metaphor is flatly ridiculous. The AI is a tool and tokens are your allocation for tool time. To the latter, someone is trying to stifle you and strangle your creativity behind arbitrary limits.

I don't know how to feel about this other than uneasy and worried.


> I don't know how to feel about this other than uneasy and worried.

Stop using "AI" and get better at writing software than the people (read: dummies) who are.


It's already unaffordable, enrollment is dropping, and tuition has only continued to go up to offset lower enrollment.

The incentives are not so naively simple


Population trends are a pretty big factor in enrollment. Most enrollment is by fresh high school grads, and there are significantly fewer of those than there were 10 years ago. You can make up some of that by expanding eligibility and encouraging more young people to go to college (even those who would be better served doing something else), or by expanding international admissions (but maybe not in this administration).

Cost is certainly also a factor, but I suspect population is a bigger factor.


Sure, I think it's pretty interesting that given the same(ish) unthinkably vast amount of input data and (more or less) random starting weights, you converge on similar results with different models.

The result is not interesting, of course. But I do find it a little fascinating when multiple chaotic paths converge to the same result.

These models clearly "think" and behave in different ways, and have different mechanisms under the hood. That they converge tells us something, though I'm not qualified (or interested) to speculate on what that might be.


Two things that narrow the “unthinkably vast input data”: 1) You’re already in the latent space for “AI representing itself to humans”, which has a far smaller and more self-similar dataset than the entire training corpus.

2) We’re then filtering and guiding the responses through stuff like the system prompt and RLHF to get a desirable output.

An LLM wouldn’t be useful (but might be funny) if it portrayed itself as a high school dropout or snippy Portal AI.

Instead, we say “You’re GPT/Gemini/Claude, a helpful, friendly AI assistant”, and so we end up nudging it near to these concepts of comprehensive knowledge, non-aggressiveness, etc.

It’s like an amplified, AI version of that bouba/kiki effect in psychology.

