Huh? Their example could just be reading code on GitHub or reading diffs. You shouldn't need to pull code into a development environment just so you can GoToDefinition to understand what's going on.
There are all sorts of workflows where vim would mog the IDE workflow you're really excited about, like pressing e in lazygit to make a quick tweak to a diff. Or ctrl-G in Claude Code.
I wouldn't be so sure you've cracked the code on the best workflow that has no negative trade-offs. Everyone thinks that about their workflow until they use it long enough to see where it snags.
... but you do more often than the quick & dirty approach really allows.
I was just watching the Veritasium episode on the XZ Utils hack, which was in part enabled by poor tooling.
The attacker deliberately obfuscated his change, burying it in a bunch of "non-changes" such as rearranged whitespace and comments, to hide the fact that he hadn't actually changed the C code to "fix" the bug in the binary blob that contained the malware payload.
You will miss things like this without the proper tooling.
I use IDEs in a large part because they have dramatically better diff tools than CLI tools or even GitHub.
> you’ve cracked the code on the best workflow
I would argue that the ideal tooling doesn't even exist yet, which is why I don't believe that I've got the best possible setup nailed. Not yet.
My main argument is this:
Between each keypress in a "fancy text editor" of any flavour, an ordinary CPU could have processed something like 10 billion instructions. If you spend even a minute staring at the screen, you're "wasting" trillions of possible things the computer could be doing to help you.
Throw a GPU into the mix and the waste becomes absurd.
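The back-of-envelope arithmetic behind that claim is easy to check. A minimal sketch, where the core count, clock rate, IPC, and typing speed are all rough assumptions rather than measurements:

```python
# Rough estimate of how many instructions a desktop CPU could retire
# while you type. Every figure here is an assumed ballpark, not a benchmark.
cores = 16            # assumed: a mid-range desktop CPU
clock_hz = 4e9        # assumed: 4 GHz sustained
ipc = 4               # assumed: instructions retired per cycle, per core

instructions_per_second = cores * clock_hz * ipc

keypress_gap_s = 0.2  # assumed: a fast typist, ~5 keystrokes per second
per_keypress = instructions_per_second * keypress_gap_s
per_minute = instructions_per_second * 60

print(f"between keypresses: ~{per_keypress:.1e} instructions")
print(f"per minute of staring: ~{per_minute:.1e} instructions")
```

With these numbers you get tens of billions of instructions between keystrokes and over ten trillion per idle minute, so the "10 billion" figure above is, if anything, conservative.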
There's an awful lot the computer could be doing to help developers avoid mistakes, make their code more secure, analyse the consequences of each tiny change, etc...
It's very hard to explain without writing something the length of War & Peace, so let me leave you with a real world example of what I mean from a related field:
There's two kinds of firewall GUIs.
One kind shows you the real-time "hit rate" of each rule, showing packets and bytes matched, or whatever.
The other kind doesn't.
One kind dramatically reduces "oops" errors.
The other kind doesn't. It's the most common type however, because it's much easier to develop as a product. It's the lazy thing. It's the product broken down into independent teams doing their own thing: the "config team" doing their thing and the "metrics" team doing theirs, no overlap. It's Conway's law.
IDEs shouldn't be fancy text editors. They should be constantly analysing the code to death, with AIs, proof assistants, virtual machines, instrumentation, whatever. Bits and pieces of this exist now, scattered, incomplete, and requiring manual setup.
One day we'll have these seamlessly integrated into a cohesive whole, and you'd be nuts to use anything else.
There are so many more iOS apps being published now that it takes a week to get a dev account, review times are longer, and app volume is way up. It's not something you'll reliably notice (or not) if you're just going by vibes.
The US is generally happy to make ambulances wait in traffic with all other vehicles instead of giving them a dedicated lane that’s shared with buses and/or bikes.
I was using Syncthing, and it worked, but any time you have an Obsidian vault open on two devices, or on one shortly after the other, you're always wondering whether you'll have to clean up a bunch of sync-conflict files later. That mental overhead is not worth saving $4/mo.
The conflicts are never hard: it's like a git merge conflict where you just take the latest of every conflict block.
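That "take the latest" cleanup can even be mechanised. A minimal sketch, assuming Syncthing's default conflict-file naming ("name.sync-conflict-DATE-TIME-DEVICE.ext") and the crude policy of keeping whichever whole file is newest (block-level merging is left to you and your editor):

```python
# Sketch: resolve Syncthing conflict copies by keeping whichever version
# was modified last. Assumes Syncthing's default conflict naming scheme;
# adjust CONFLICT_RE if your setup differs.
import os
import re
import shutil

CONFLICT_RE = re.compile(
    r"^(?P<stem>.*)\.sync-conflict-\d{8}-\d{6}-[A-Z0-9]+(?P<ext>\.[^.]+)?$"
)

def resolve_conflicts(vault_dir: str) -> None:
    """For each conflicted file, keep the newest copy and delete the rest."""
    groups: dict[str, list[str]] = {}
    for root, _dirs, files in os.walk(vault_dir):
        for name in files:
            m = CONFLICT_RE.match(name)
            if not m:
                continue
            original = os.path.join(root, m.group("stem") + (m.group("ext") or ""))
            groups.setdefault(original, []).append(os.path.join(root, name))

    for original, conflicts in groups.items():
        candidates = conflicts + ([original] if os.path.exists(original) else [])
        newest = max(candidates, key=os.path.getmtime)
        if newest != original:
            shutil.copy2(newest, original)  # promote the newest copy
        for path in conflicts:
            os.remove(path)                 # drop the leftover conflict files
```

This is a last-writer-wins policy, which matches the "just take the latest" resolution described above; if two devices edited different sections of the same note, the older edits are lost, so it's worth eyeballing the diff first.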
I used multiple sync "solutions" (terrible idea, in retrospect): Dropsync, Syncthing, and Drivesync, in addition to paying for Obsidian Sync, because I was delusional about "backing up my data". Huge mistake on my part; I've spent many, many, many hours deduplicating worthless "backups". Agree with "just pay for Obsidian Sync".
On the other hand, a worse implementation in the stdlib can make it harder for the community to crystallize around the best third-party option, since the stdlib solution doesn't have to "compete in the arena".
Go has some of these.
Maybe a good middle-ground is something like Rust's regex crate where the best third-party solution gets blessed into a first-party package, but it is still versioned separately from the language.
> Without naming names, some of those convey less information in 2-3 hours of video than some short form creators do in 2-3 minutes.
Even if you have a 3-minute video that yammers off back-to-back facts to maximize info density, it's still low value because you've only spent 3 minutes with those facts.
There's nothing a 3-minute video can do to compete with a good 2-3 hour video, due to the limitations of the medium.
I would wager a 2-3 hour video of something you find worthless, like celebrity gossip, is preferable to the same thing in TikTok form because at least the longer video challenged you with following a narrative of some length.
When you fill your time with TikTok videos, you're basically regressing to the level of mental activation normally associated with babies and toddlers. I think it's fair to demand a little more aspiration of ourselves.
This is one of the most naive things I see people repeat.
The reality is that we're lucky to have mostly-good things at all that align with most of our interests.
Yet people get so comfortable that they start to think mostly-good things are some sort of guarantee or natural order of the world.
Such that if only they could just kill off the thing that's mostly-good, they'll finally get something that's even better (or rather, more aligned with their interests rather than anyone else's).
In reality, mostly-good things that align with most of our interests is mostly a fluke of history, not something that was guaranteed to unfold.
Other common examples: capitalism, the internet, html/css, their favorite part of society (but they have ideas of how it could be a little better), some open-source project they actually use daily, etc.
If only there weren't Android, surely your set of ideals would win and nobody else's.
Agreed that there is a ton of baby in this bathwater.
Also, the open nature of AOSP gave Google its advantage during the early days. Since then, Google has morphed into a company that would likely not make the same decision to create an open-source OS free for others to use and contribute to.
So in the end, what we as consumers actually get, in 2026:
- Google encourages application developers to use hardware attestation to prevent their apps from running on non-blessed, third-party AOSP distributions.
- Google builds basic functionality people care about (including passkeys!) into Play Services, a closed mega-application that happens to require a Google account for most features, and is a moving target for open distributions to mimic.
- Google has closed AOSP development to everyone but itself and its OEM partners. AOSP releases are now quarterly source dumps.
- OEMs that traditionally allowed bootloader unlocking (and thus actual ownership of the hardware) have removed it as a matter of policy.
So what exactly is open about Android anymore? Does "source-available OS you can see and not touch" align with your interests? Because it's increasingly not aligned with mine.