Eh. It’s written in go. I just built the binary, stuffed it in my repo and continued to use the same version for years. Then when I wanted to update I rewrote my whole theme from scratch, recompiled and now I’m set for years again.
and a convenience symlink ~/.local/bin/hugo, pointing to my "production" version. I can easily call whichever version I like with hugo<tab><tab>. What am I missing?
It depends on your workflow I guess, but the advantages of having a Hugo version tagged in an image:
- sharing it easily between computers/users (docker pull registry/image:tag)
- having the appropriate binary version embedded in your code through a docker-compose in your repo
- having custom aliases dedicated to hugo included (build/serve/run...)
- using the exact same image in your CI/CD
- not "polluting" your local computer with some more stuff
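A minimal sketch of the second point, pinning a Hugo version in a `docker-compose.yml` committed alongside the site (the image name `hugomods/hugo`, the tag, and the port are illustrative assumptions, not a recommendation):

```yaml
services:
  hugo:
    # Pin the exact Hugo version your theme is known to build with.
    image: hugomods/hugo:0.135.0
    command: server --bind 0.0.0.0
    volumes:
      - .:/src        # mount the site source into the container
    ports:
      - "1313:1313"   # Hugo's default dev-server port
```

Anyone cloning the repo then gets the matching binary with `docker compose up`, and the same tag can be referenced from CI.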
If it was liquid it would probably blow straight into the bag. In my experience there is quite a bit of propulsion there. Enough to overcome gravity here on earth at least and spray dead horizontal.
Definitely not the same. Luddites were fighting for humane working conditions; breaking machines was just a means to an end. They weren’t doing it because machines were the problem.
The anti-AI crowd, on the other hand, just doesn’t like AI. A modern equivalent of a Luddite would be someone going on strike to protest firings.
You are being overly dismissive of a mindset you obviously don't understand. Of course being anti-AI is about decent living conditions for humans. Most of us don't believe in singularity or Matrix-style threats.
But current AI is actively destroying our breathable/livable planet by drawing unmatched quantities of resources (see also DRAM shortage, etc), all the while exploiting millions of non-union workers across the world (for classification/transcription/review), and all this for two goals:
1) try to replace human labor: problem is we know any extracted value (if at all) will benefit the bourgeoisie and will never be redistributed to the masses, because that's exactly what happened with the previous industrial revolutions (Asimov-style socialism is not exactly around the corner)
2) try to surveil everyone with cameras and microphones everywhere, and build armed (semi-)autonomous robots to guard our bourgeois masters and their data centers
There is nothing in this entire project that can be interpreted to benefit the workers. People opposing AI are just lucid about who that's benefiting, and in that sense the luddite comparison is very appropriate.
You have misinterpreted my comment. But I concede that I should have written it more clearly.
I divide anti-AI people into two groups. Those who don’t like AI because of what it is, and those who don’t like it because of its impact on society. Naturally there is an overlap.
Luddites were not opposed to the technology. So the comparison to them is only correct for the latter group.
Not talking about LLMs on a forum is not going to change anything in the grand scheme of things. It could be a protest, but I see it more (the feeling I get from the announcement) as a means to protect the forum from being overrun, regardless of whether AI is ultimately good or bad.
Also note that nowhere in my comment have I stated my position in this argument.
I'm not really convinced there are people who don't like AI "because of what it is". I mean, because of what it is, beyond any social/political considerations.
The only case I know of is the open letter from Sam Altman and other AI investors calling out the existential danger of AI, which in my view was a way to divert the debate from political questions to hypothetical Matrix/Terminator questions about consciousness and singularity.
Really? Is it so hard to believe that people dislike AI because it is unreliable, can't be trusted, changes how we work with code, and takes the fun out of coding?
I am not worried about social consequences. Society can adapt.
I am also not worried about energy use. We have endless clean energy if we can figure out how to use it.
Yes, I am worried about society choosing the wrong adaptation. That is, I believe we should train everyone to be teachers, doctors, scientists, and artists: the things AI should not be doing. But I am not worried about using AI for automation, putting people out of jobs, if we give them the opportunity to learn new jobs and,
IF, AND ONLY IF, we get AI to do its work with 100% reliability and accuracy.
Only then will AI be useful. I have tons of software projects that I'd like to get done, but I can't trust AI to do them for me, because I would spend more time verifying the results than I would coding it myself.
So yeah, I absolutely don't like AI for what it is: a tool with limited uses that requires me to work in a way I don't want to, if I want to benefit from it.
Oh, thank you for clarifying! That is entirely believable, and I'm also one of those people, then. I just didn't understand what you meant. I thought you meant people hated AI for being creepy alien tech from sci-fi movies, not for being unreliable, untrustworthy, etc...
AI programming is fundamentally different from programming, and as such the discussions merit separate forums.
If r/programming wants to be the one solely focusing on programming then power to them. Discussing both in combination also makes sense, but the value of reddit is having a subreddit for anything and “just programming” should be on the list.
> AI programming is fundamentally different from programming
It's really not. Maybe vibecoding, in its original definition (not looking at generated code) is fundamentally different. But most people are not vibe coding outside of pet projects, at least yet.
Hopefully this does not devolve into ‘nuh-uh’-‘it is too’ but I disagree.
Even putting aside the AI engineering part where you use a model as a brick in your program.
Classic programming is based on the assumption that there is a formal, strict input language. When programming I think in that language; I hold the data structures and connections in my head. When debugging I have intuition about what is going on because I know how the code works.
When working on somebody else’s code base I bisect, I try to find the abstractions.
When coding with AI this does not happen. I can check the code it outputs but the speed and quantity does not permit the same level of understanding unless I eschew all benefits of using AI.
When coding with AI I think about the context, the spec, the general shape of the code. When the code doesn’t build or crashes the first reflex is not to look at the code. It’s prompting AI to figure it out.
It is not. One version of a compiler on one platform transforms a specific input into an exact and predictable artefact.
A compiler will tell you what is wrong. On top of that the intent is 100% preserved even when it is wrong.
An LLM will transform an arbitrarily vague input into an output. Adding more specification may or may not change the output.
There is a fundamental difference between asking for “make me a server in go that answers with the current time on port 80” and actually writing out the code where you _have to_ make all decisions such as “wait in what format” beforehand. (And using the defaults is also making a decision - because there are defaults)
Compilers have undefined behaviour too, but their UB exists in well-defined places.
Even a 100% perfect LLM that never makes mistakes has, by definition, UB everywhere the spec is silent.
Right, they allow for the idea of gradual specification - you can write in broad strokes where you don't care about the details, and in fine detail when you do. Whether the LLM followed the spec or not is mostly down to having the right tooling.
The value is in the imperative: the computer does what you tell it to do. That control is very powerful, and is arguably a major reason computer technology is as powerful and popular as it is today. Bits don't, generally speaking, argue with you the way analog programming, whether by electronic or mechanical means, did before the transistor.
You can certainly write in an imperative or functional style, but you are still telling the computer what you want. LLMs use imprecise language, which only loosely binds to what people actually mean. They have their use cases too, but they have a radically different locus of control. Compilers don't ask you to give up precision either: they do what you tell them to do. AI can do whatever it thinks is the most likely next token, which is foundationally different from what we do when we engage in programming, or writing in general.
If you use an LLM to generate source code you are vibecoding.
You specify the problem in natural language (the vibes) and the LLM spits out source (the code).
Whether you review it or not, that is vibecoding. You did not go through the rigor of translating the requirements to a programming language, you had a nondeterministic black box generate something in the rough general vicinity of the prompt.
Are people seriously trying to redefine what vibecoding is?
No, that is literally vibecoding. Reviewing vibecoded source is just an extra step. It's like saying "I'm not power-tool gardening, I use a pair of gardening scissors afterwards." You still did power-tool gardening.
As additional proof, the dictionary definition of vibe coding is "the use of artificial intelligence prompted by natural language to assist with the writing of computer code" [1]
It seems like vibecoders don't like the label and are retconning the term.
Both you and the Collins dictionary (merely one dictionary, not an absolute authority) are retconning. “Vibe coding”, as originally coined in this tweet, means something more specific: to generate code with LLMs and not really look at the output. The term itself suggests this too: reviewing code is not exactly a vibes-based activity, is it?
That tweet coins the term, we agree there. The activity it describes is using natural language to generate software. Whether you add a review process or not doesn't substantially change that. Sure, Karpathy says he doesn't "read the diffs anymore". Why does he say "anymore"? Clearly he was reading them at some point. If not reading any diffs was a core part of the activity, that wouldn't be the case, the tweet itself clearly outlines that as optional. He's clearly not talking about a core part of the activity.
I think the tweet is pretty clear on its intention for the definition and I’m not interested in arguing about it.
I do think the dictionary definitions, such as they are, are coming from a real place: some people do use the more general definition. And you seem to already know about both definitions. So why argue so belligerently and definitively in the first place? Parent comments you were replying to were obviously using the original definition. Talking about “retconning” is obviously silly given this timeline. Meaning in language is not a race to be the first to make it into a dictionary. It’s a very new phenomenon that new terms make it so quickly into a dictionary at all, and they’re always under review. So maybe factor that into your commentary?
I mean, no, why would it be? There is so, so much to talk about in programming other than AI. Meanwhile, the current HN front page feels like 90% LLM spam: the complete antithesis of what I used to come here for.
I personally can’t wait for no-ai communities to proliferate.
Even taking your estimate at face value, it would be asinine for the community here to censor AI-targeted discussions in the way I think you'd like to. The same goes for a programming community that censors discussions about LLM programming.
You are basically asking for a brain drain in a field that—like it or not—is going to be crucial in the future in spite of its obvious warts and poor implementation in the present. If that's what you want, be my guest and encourage it; but who's authorized to unilaterally make that decision in a given forum?
In the present case, the moderators for r/programming are. But they're making a mistake: they're marginalizing the technology that's redefining the practice because people talk about it too much, instead of thinking about how to talk about it effectively and then steering the community in that direction.
But that's a full-time job. Which is why I think HN may turn out alright in the long run or a similar community will replace it if it fails to temper the change in the industry.
What this decision signals to me is that r/programming has been inert for some time. I'm sure plenty of programmers, irrespective of their position on AI, will find the community rejoicing in its resignation to the technology's influence as their cue to finally exit.
For release notes in particular, I think AI can have value, because release notes are more than a summary: they are a translation, from code (and more or less accurate summaries) into prose.
AI is good at translation, and in this case it can have all the required context.
Plus it can be costly (time and tokens) to both “prompt it yourself” or read the code and all commit logs.
If your process always goes PR -> main and the PR descriptions + commit messages are properly formatted, generating release notes shouldn't be much more than condensing those into a structured list with subheadings.
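A sketch of that "condensing" step, assuming every change lands on main as a merge commit whose title is the PR title. The demo repo, branch name, tag, and commit messages below are all made up for illustration:

```shell
set -e

# Build a throwaway repo with one tagged release and one merged "PR".
repo=$(mktemp -d) && cd "$repo" && git init -q -b main
git config user.name demo && git config user.email demo@example.com
git commit -q --allow-empty -m "initial"
git tag v1.0
git checkout -q -b feature
git commit -q --allow-empty -m "wip: handle empty config"
git checkout -q main
git merge -q --no-ff -m "Fix: handle empty config files" feature

# The actual release-notes step: one bullet per merge since the last tag.
git log --merges --pretty='- %s' "$(git describe --tags --abbrev=0)..HEAD"
```

From there, grouping bullets under subheadings (Features, Fixes, etc.) is a matter of matching on the PR title prefixes your process already enforces.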
The problem is that, given how easily these can be made, there is also really no reason to make this social. “Why would I look at somebody else’s creations when I can make my own?”
I can see some usage for this use case - "look Morty, I turned myself into a pickle!" - but just like image / meme generators, this is like 10-30 seconds of engagement within a friend circle at best (although some might go viral, but that won't bring in much money for in this case OpenAI).
There will be (or is, I'm behind the times / not on the main social networks) an undercurrent or long tail of AI generated videos, the question is whether those get enough engagement for the creators to pay for the creation tool.
I'm not an artist or creative person in any sense. My persona is closer to a settings menu than a colorful canvas.
The AI art I have seen creatives produce is far beyond anything I have been able to come up with. We're not at the point yet where you can just prompt "Make me a video that is visually stunning and captivating" and get something cool.
Unfortunately I don't have a solid reference point or checklist for the defining qualities of "good art". And frankly I don't take those who do very seriously. To me art is all about the personal vibes you get from it. So I enjoy Zach London (gossip goblin), Bennet Weisbren, and voidstomper/gloomstomper if you want something to measure with your "real true art" checklist.
They're different impulses. Some want to consume. Others want to create.
TikTok and social media is a strange mix of both, people posting response videos to everything.
Personally, I've stopped subscribing to Spotify, YT music, etc because the slop from Suno is good enough to replace mainstream music or whatever lofi playlist. It's free, it's good enough, and it doesn't become grating the way a favorite song does after a few days.
The video slop can well replace TikTok and Reels. Make educational content about your hometown. Explain how to throw an uppercut.
But I guess the desire to create something that others would consume is also different from the desire to simply create.
The first isn't bad by any means. There's a million break up songs and that's one of the best sad ones. Most are just... angry? Blaming? Empowering? They work fine. They sell records. Many have a billion views.
But the second one, even with the clunky translation, strikes somewhere deeper. It's written by someone who had enough time ruminating on a break up. The ending hits a little harder, because break up songs are about endings.
Both are sincere, but the first feels more formulaic. I'm inclined to think the first one is the soda.
I feel Suno leans towards this group of songwriters and poets who have something to say. Sora doesn't.
No, the whole horseshit belongs together of course. Just that the AI slop is the logical culmination of the dumbed down pop-culture of the last 15ish years or so.
Some want to consume... content that they don't think they could make in one minute themselves. They want to consume content made by other humans, even if it's still brain-eating algorithmic fodder.
Sora proved it quite clearly. These clips had ZERO value.
> Personally, I've stopped subscribing to Spotify, YT music, etc because the slop from Suno is good enough to replace mainstream music or whatever lofi playlist.
I occasionally use Suno to re-imagine songs in different keys, tempos, and genres, and sample them. Most of the output from Suno is slop, but occasionally has a few good bits you can sample, chop up, re-pitch, and create something totally new from, which also has the added benefit of being unrecognizable to rights algorithms and lawyers from major labels.
It's a neat tool for genuine creators, and a crutch for people interested in slop.
Modern music has done this to itself. When the human product is already pure corporate slop, it's not hard for AI to compete.
Hopefully AI outcompeting humans at slop sparks a renaissance of humans creating truly beautiful human artwork. And if it doesn't, then was anything of value truly lost?
How much of your super-awesome bandcamp music is topping charts, selling millions, packing mega stadiums, and is penetrating the zeitgeist so deeply that people around the world are addicted to it?
Maybe, just maybe, I'm not talking about "my" music tastes, but offering commentary on the state of music at a global scale. Weird that this point was so hard to follow!
So true. AI music gens like Suno can't do Paul Shapera works even remotely, but can recreate a lot of pop or EDM music very faithfully. There's just no distance to close, it's already mainstreamly bad.
> Modern music has done this to itself. When the human product is already pure corporate slop, it's not hard for AI to compete.
What are you talking about? There’s lots of modern music that’s not corporate slop and that’s absolutely great. Never in history was access to great music as easy as it is now.
I'm talking about modern music. Just because a couple of dweebs on hackernews have "totally amazing underground music" doesn't mean the overall zeitgeist agrees. Regardless of your esoteric music tastes, music by sales and music by charting tells a very different story. And that story is one of replaceable slop.
So find music you like that isn't modern corporate slop. My music right now consists mainly of indie stuff I've found on youtube and daft punk. No plagiarism machine needed, just human-made music
"No plagiarism machine needed, just human-made music"
From wikipedia: Many Daft Punk songs feature vocals processed with effects and vocoders including Auto-Tune, a Roland SVC-350 and the Digitech Vocalist. Bangalter said: "A lot of people complain about musicians using Auto-Tune. It reminds me of the late '70s when musicians in France tried to ban the synthesiser. They said it was taking jobs away from musicians. What they didn't see was that you could use those tools in a new way instead of just for replacing the instruments that came before. People are often afraid of things that sound new."
Did Daft Punk put in a lot of effort to remix existing sounds to make their own music? Yes. Did they type "pls make french house electronic music number 1 chart" into a text box? No. Did they also credit original authors? Yes. I've not gone through their whole library, but for example, Edwin Birdsong has a songwriting credit on Harder, Better, Faster, Stronger.
There's this fallacy with AI generation that people think that all you have to do is type "i lik musik pls remake favrite song but better" and you get amazing results.
This is patently untrue.
It's like how if a junior engineer and a principal engineer use claude opus 4.6 they get radically different results. The junior doesn't have the taste or knowledge to know good from bad, so the AI oversteers and slop is made. The principal has a finely tuned sense of taste and deep knowledge, so they aggressively steer the AI at every step. This is also true in other AI domains.
To be absolutely clear: you can't make good AI music. Try all you want. Try the prompt you just wrote. Show and tell. It's not something you're going to be able to do.
> The video slop can well replace TikTok and Reels. Make educational content about your hometown. Explain how to throw an uppercut.
There is a fundamental issue of trust here. Facebook has me tagged as history nerd so I get to see those slop videos. They are fun, but always superficial and often plainly wrong. So unless the slop comes from a known, trustworthy source, the educational element is simply not there.
For throwing an uppercut it's even more important, if you follow wrong slop instructions you can end up breaking your wrist or fingers.
Many of the things on a top #100 list for the last few decades. That includes plenty of "indies" as well as pop.
There are exceptions though. FUKOUNA GIRL by STOMACH BOOK, for example. AI can't come close to replicating something like this. Not the cover art, not the off-key voices, not the relatable part of the lyrics. I don't believe this is a top #100 song, though it certainly is popular.
I get that, but you have to pay to create your own.
And on the second part, I somewhat disagree. I mean, yes everyone has a personal preference, but if you bucket all those personal preferences they all fit nicely together (In many buckets).
I think the point of Suno is to make you not search for your specific thing though, and instead produce your own. Searching for niche music has always been a thing. If our goal is to listen for free, we don't care about Suno (or any other way to make music) one bit, it's just another DAW for those making music.
And AI music in general sure has its fans, check out Only Fire for example.
I'm with you here, this resonates so much. I'm so fed up with endless subway tunnels; they all look and sound utterly the same and boring.
So I quit riding the overpriced subway altogether and now consume AI-generated subway imagery and soundscapes for free; they are just good enough to feed my passion for boring tunnels.
Some ego-bloated edgelords had the nerve to tell me that there are, like, other modes of transportation, but I honestly find their high-horse elitism despicable. Damn morons.