The MBA is an absolutely solid product that is actually sufficient for the large majority of full stack devs. I use it (MBA 15" M3) with a large, complex TypeScript code base, and it is fast and amazing with 24GB of RAM or more.
PS. The biggest speedup I got this past year (10x) was switching to native TypeScript (tsgo) and native linting (biome or oxlint).
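If anyone wants to try the same swap, it's roughly this shape in package.json. Note that the package names below are my assumption of what these tools currently publish under on npm, so double-check them before depending on this:

```json
{
  "scripts": {
    "typecheck": "tsgo --noEmit",
    "lint": "oxlint ."
  },
  "devDependencies": {
    "@typescript/native-preview": "latest",
    "oxlint": "latest"
  }
}
```

The native tools are mostly drop-in for type checking and linting, but type-aware lint rules and some tsc flags may not be covered yet, so keep the old toolchain around until you've verified your CI passes.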
> absolutely solid product that is actually sufficient for the large majority of full stack devs
Worth pointing out that the same thing is true for a $350 windows box. The news here isn't "The M5 Air is a disappointment", it's "Laptops are commoditized and boring".
There are only a couple of relatively niche spaces where things like cpu performance are really the bottleneck right now.
Hell - an RPi 5 is perfectly fine for a huge range of development tasks. The 8GB version is a very reasonable $125.
Can you find things that these boxes can't do? Absolutely. Do most developers do those things? ehhhh probably not. Especially not in the webdev space.
Would I still pick a nice machine if given the chance? Sure, I have cash to burn and I like having nice laptops (although not Apple...).
But part of the "AI craze" is that hardware genuinely is commoditized, and manufacturers really, REALLY wanted a new differentiating factor to sell people more laptops. There's not much reason to upgrade, especially if the old machine was a decent machine at time of purchase.
I have 8 year old dell XPS laptops that do just fine for modern dev.
> Depends. Are you doing dev on Microsoft's stack, or are you doing dev on all of the other stacks?
You can run docker in WSL better than you can on a Mac. You can run Linux natively on that box, too. "Stacks" is sort of ambiguous (my world is embedded junk, and the answer for using a mac with these oddball USB flashers and whatnot is pretty much "Just No, LOL"), but to claim that the mac is more broadly capable in these spaces when it is clearly less is.... odd.
Macs are popular among the SV set, so macs are strong in whatever the SV set thinks is important (thus "I bought a Mac Mini for OpenClaw!"). And everything else runs on $350 windows garbage.
Makes sense; according to Geekbench, 9955XX has about a 25% lead in multi-core over the base M4, and about a 5% lead in multi-core over the base M5. And more cores, so better for parallel Rust compilation.
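Back-of-the-envelope: assuming both quoted leads are against the same 9955XX multi-core score (my assumption, not something stated by Geekbench), those two numbers imply roughly a 19% multi-core gain from the base M4 to the base M5:

```python
# If 9955XX ~= 1.25 * M4 and 9955XX ~= 1.05 * M5 (multi-core),
# then M5 / M4 ~= 1.25 / 1.05, i.e. roughly a 19% uplift.
m5_over_m4 = 1.25 / 1.05
print(round((m5_over_m4 - 1) * 100))  # → 19
```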
I don't get it either. I've rolled out well over a hundred of these in a higher education setting and I have never had one have a hardware issue or needed to retire it other than wanton damage. I still have a ton of M1s in circulation and they are great still. I had to just replace a Dell with only 2.5 years of service, they tend to fall apart.
Snarky but I agree. I dislike how much MacOS changes with each version. My kids have a Linux box (NUC). I wish we could have Linux on a late model Mac Mini
Why is the finder the way it is? Is it actually easier to use than (whatever the normal file browser on windows and linux is called) if all you ever use is macs?
Most of the other quirks I can work around (though the default alt tab behavior not picking up windows of the same app is an insane default) but the finder is just unusable.
As much as this saddens me, I think it's because most computer users these days never think about files. Everything we do on a day to day basis exists as database records, either in sqlite databases hidden away in application data directories, or in the databases behind a million SaaS products. Music is done in Apple Music, photos are managed in iPhoto, and so on and so forth.
In which way are other GUI “finder-equivalents” better? I’m not invested either way, but I’m quite curious. It would be a great biz opportunity to make an aftermarket replacement if there is a huge gap.
The number of people who know how to and also want to replace their operating system is effectively a rounding error in the consumer electronics market in general.
I like Linux and had Linux laptops before, but can’t comprehend why anyone would go as far as replacing MacOS on an Apple laptop. The OS is just fine, there is nothing superior about Linux Desktop environments. And you can easily run Docker containers for work that needs Linux.
The MB Air M line is a personal contender for best product of all time: Fantastic performance without fans, amazing battery life, high res display and build quality at that price point.
When the M1 came out it was quite frankly unbelievable. And, even after all these years, I still don't see who would beat it across those dimensions.
My M1 Air is going strong as my travel & about-town laptop. It can do everything I do on my vastly more powerful M4 mbp, aside from compile multiple mobile apps simultaneously in less than a minute. Absolutely insane value and anyone who says otherwise has no idea what they are talking about.
> The MBA is an amazing value, and appears to have only gotten slightly cheaper.
Looks to me like the base model went up by $100, no?
The whining is just whining. It's a fine laptop, but it's not significantly improved from the one they shipped a year ago. Add to that the fact that laptops as a whole are well on their way down the commoditization slope, plus the general HN desire to cheer about Great New Apple Devices, and this is for sure a backwards step.
Could be worse. OpenAI is asking for ID verification to use Codex 5.3, through Persona, which was just exposed as doing extremely dodgy surveillance stuff.
They could be considering the new high end display a different product rather than a refresh (for marketing purposes at least).
I recall the XDR being announced alongside the last Mac Pro redesign. No new Mac Pro yet, so maybe they’ll announce the new large display whenever that is announced?
This seems odd to me. I have never seen obfuscation techniques in first party Apple software - certainly not in Espresso or ANECompiler and overall nowhere at all except in media DRM components (FairPlay).
Apple are really the only major OS company _without_ widespread use of a first party obfuscator; Microsoft have WarBird and Google have PairIP.
> Apple are really the only major OS company _without_ widespread use of a first party obfuscator
You might want to look into techniques like control-flow flattening, mixed boolean–arithmetic transformations, opaque predicates, and dead code injection — Apple uses all of these. The absence of a publicly named obfuscator doesn’t mean Apple doesn’t apply these methods (at least during my time there).
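For readers who haven't seen these before, here is a toy sketch (my own illustration, in Python for readability, and nothing to do with Apple's actual code) of two of the techniques named above: a mixed boolean-arithmetic rewrite of addition, and an opaque predicate guarding what is really an unconditional branch:

```python
def add_plain(x: int, y: int) -> int:
    return x + y

def add_mba(x: int, y: int) -> int:
    # Mixed boolean-arithmetic identity: x + y == (x ^ y) + 2 * (x & y).
    # Same result, but the data flow is much harder to eyeball in a decompiler.
    return (x ^ y) + 2 * (x & y)

def guarded(x: int) -> str:
    # Opaque predicate: x * (x + 1) is a product of consecutive integers,
    # so it is always even; the fall-through branch is unreachable dead code.
    if x * (x + 1) % 2 == 0:
        return "real path"
    return "decoy path"

assert all(add_plain(a, b) == add_mba(a, b)
           for a in range(-16, 16) for b in range(-16, 16))
assert all(guarded(n) == "real path" for n in range(-100, 100))
```

Real obfuscators stack many such rewrites on top of each other and apply them at the compiler IR level, which is why the output is painful to read even though each individual transform is simple.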
Ever wonder why Apple stopped shipping system frameworks as individual .dylib files? Here’s a hint: early extraction tools couldn’t preserve selector information when pulling libraries from the shared cache, which made the resulting decompiled pseudocode unreadable.
I'm very familiar with CFG flattening and other obfuscation techniques, thanks.
That's interesting; I suppose I must not have touched the parts of the platform that use them, and I've touched a fair amount of the platform.
Again, I _have_ seen plenty of obfuscation techniques in DRM/FairPlay, but otherwise I have not, and again, I am entirely sure the ANE toolchain from CoreML down through Espresso and into AppleNeuralEngine.framework definitely does not employ anything I would call an obfuscation technique.
> Ever wonder why Apple stopped shipping system frameworks as individual .dylib files?
If the dyld cache was supposed to be an obfuscation tool, shipping the tools for it as open source was certainly... a choice. Also, the reason early tools couldn't preserve selector information was selector uniqueing, which was an obvious and dramatic performance improvement and explained fairly openly, for example - http://www.sealiesoftware.com/blog/archive/2009/09/01/objc_e... . If it was intended to be an obfuscation tool, again it was sort of a baffling one, and I just don't think this is true - everything about the dyld cache looks like a performance optimization and nothing about it looks like an obfuscator.
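For anyone following along, selector uniqueing is essentially string interning: collapse every equal selector name to one shared copy, so a selector comparison becomes a cheap pointer comparison and the duplicate strings scattered across hundreds of libraries go away. A rough Python analogy using `sys.intern` (illustrative only; dyld's implementation is its own thing):

```python
import sys

# Build two equal selector names at runtime so CPython doesn't
# automatically intern them at compile time.
s1 = "".join(["applicationDid", "FinishLaunching:"])
s2 = "".join(["applicationDid", "FinishLaunching:"])
assert s1 == s2 and s1 is not s2   # equal contents, two separate objects

# Uniquing: both collapse to one shared object, so identity (pointer)
# comparison now works and the duplicate copy's memory is reclaimed.
u1, u2 = sys.intern(s1), sys.intern(s2)
assert u1 is u2
```

Which is exactly why it reads as a performance optimization: the win is memory and dispatch speed, and the unreadable decompiler output is a side effect of the strings no longer living next to the code that uses them.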
I’m still relatively new to HN, but I continue to find it fascinating when people share their perspectives on how things work internally. Before joining Apple, I was a senior engineer on the Visual Studio team at Microsoft, and it's amazing how often I bump into people who hold very strong yet incorrect assumptions about how systems are built and maintained.
> I suppose I must not have touched the parts of the platform that use them
It’s understandable not to have direct exposure to every component, given that a complete macOS build and its associated applications encompass tens of millions of lines of code. /s
That said, there’s an important distinction between making systems challenging for casual hackers to analyze and the much harder (if not impossible) goal of preventing skilled researchers from discovering how something works.
> Also, the reason early tools couldn't preserve selector information was selector uniqueing
That isn't even remotely how we were making things difficult back then.
I led the SGX team at Intel for a while, working on in-memory, homomorphic encryption. In that case, the encryption couldn’t be broken through software because the keys were physically fused into the CPU. Yet, a company in China ultimately managed to extract the keys by using lasers to remove layers of the CPU die until they could read the fuses directly.
I’ll wrap up by noting that Apple invests extraordinary effort into making the critical components exceptionally difficult to reverse-engineer. With good obfuscation, as with good design or craftsmanship, the best work often goes unnoticed precisely because it’s done so well.
I'm done here - you go on believing whatever it is you believe...
No it doesn't. Because literally anybody who knows anything about NASA and follows the space industry in detail has known about most of the issues since 2015, or even since 2011 when this whole Post-Constellation shit-show started. And many of the problems have been talked about since the day NASA created Artemis. Destin is just more famous than many of the people in nerd forums.
Destin's analysis is OK and he makes a number of good points, but it is very pro-Alabama (Mafia) when it comes to NASA and its contractors, since he is very clearly influenced by the strong Alabama presence, and those are the parts of the industry he interacts with.
So Destin misses a huge amount of the relevant puzzle pieces, or he simply doesn't talk about them.
He also simply makes a few assumptions that are fundamentally wrong, namely about the goals of the program. The goal was never to repeat Apollo, and landing a few people a few times is totally different from the original goals of Artemis.
PHP is kind of like C. It can be very fast if you do things right, and it gives you more than enough rope to tie yourself in knots.
Making your application fast is less about tuning your runtime and more about carefully selecting what you do at runtime.
Runtime choice does still matter: an environment where you can reasonably separate sending database queries from receiving the results (async communication), or that otherwise lets you pipeline requests, will tend to have higher throughput if used appropriately, though batching queries can narrow the gap. Languages with easy parallelism can make individual requests faster, at least while you have available resources. Etc.
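As a toy illustration of the pipelining point (not tied to any particular database driver; the "queries" here are simulated with sleeps), here's a Python sketch comparing waiting on each query in turn against sending them all before collecting any results:

```python
import asyncio
import time

async def fake_query(q: str) -> str:
    # Stand-in for a database round trip (~50 ms of network latency).
    await asyncio.sleep(0.05)
    return f"result of {q}"

async def sequential(queries):
    # Wait for each result before sending the next query.
    return [await fake_query(q) for q in queries]

async def pipelined(queries):
    # Send every query up front, then collect all the results.
    return await asyncio.gather(*(fake_query(q) for q in queries))

queries = [f"SELECT {i}" for i in range(10)]

t0 = time.monotonic()
asyncio.run(sequential(queries))
seq_time = time.monotonic() - t0

t0 = time.monotonic()
asyncio.run(pipelined(queries))
pipe_time = time.monotonic() - t0

print(f"sequential: {seq_time:.2f}s, pipelined: {pipe_time:.2f}s")
```

With ten 50 ms round trips, the sequential version pays the latency ten times while the pipelined version pays it roughly once, which is the throughput gap being described.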
A lot of popular PHP programs and frameworks start by spending lots of time assembling a beautiful sculpture of objects that will be thrown away at the end of the request. Almost everything is going to be thrown away at the end of the request; making your garbage beautiful doesn't usually help performance.
Would love to read more stories by you toast0 on things you've optimized in the past (given the huge scale you've worked on). Lessons learned, etc. I always find your comments super interesting :)
<3 I always love seeing your comments and questions, too!
Well on the subject of PHP, I think I've got a nice story.
The more recent one is about Wordpress. One day, I had this conversation:
Boss: "will the blog stay up?"
toast0: "yeah, nobody goes to the blog, it's no big deal"
Boss: "they will"
toast0: "oh, ummmm we can serve a static index.html and that should work"
Later that day, he posted https://blog.whatsapp.com/facebook. I took a snapshot to serve as index.html and the blog stayed up.

A few months later, I had a good reason to tear out WordPress (which I had been wanting to do for a long time), so I spent a week and made FakePress, which only did exactly what we needed and could serve our very exciting blog posts in something like 10-20 ms per page view instead of whatever WordPress took (which was especially not very fast if you hit a www server that wasn't in the same colo as our database servers).

That worked pretty well, until the blog was rewritten to run on the FB stack --- page weight doubled, but since it was served by the FB CDN, load time stayed about the same. The process to create and translate blog entries was completely different, and the RSS was non-compliant: I didn't want to include a time with the date, and there is/was no available timeless date field in any of the RSS specs, so I just left the time out ... but it was sooo much nicer to run.
Sadly, I haven't been doing any large scale optimization stuff lately. My work stuff doesn't scale much at the moment, and personal small scale fun things include polishing up my crazierl [1] demo (will update the published demo in the next few days or email me for the release candidate url), adding IPv6 to my Path MTU Discovery Test [2] since I have somewhere to run IPv6 at MTU 1500, and writing memdisk_uefi [3], which is like Syslinux's MEMDISK but in UEFI.

My goal with memdisk_uefi is to get FreeBSD's installer images to be usable with PXE in UEFI ... as of FreeBSD 15.0, in BIOS mode you can use PXE and MEMDISK to boot an installer image, but UEFI is elusive --- I got some feedback from FreeBSD suggesting a different approach than what I have, but I haven't had time to work on that; hopefully soonish.

Oh, and my Vanagon doesn't want to run anymore ... but it's cold out and I don't seem to want to follow the steps in the fuel system diagnosis, so that's not progressing much... I did get a back seat in good shape though, so now it can carry 5 people nowhere instead of only two (caveat: I don't have seat belts for the rear passengers, which would be unsafe if the van was running).
Re: PHP vs a rendered index.html … your story brings back fond memories of my college days (around 2001–2002).
I was a full-time student but also worked for the university’s “internet group.” We ran a homegrown PHP CMS (this was before WordPress/Movable Type), and PHP still felt pretty new. Perl was everywhere, but I was pushing PHP because I’d heard Yahoo had started using it.
Around then, the university launched its first online class registration system. Before that it was all phone/IVR. I warned our team lead the web server would melt down on registration day because every student would be hammering refresh at 9am to get the best class times and professors. He brushed it off, so I pre-rendered the login page as a static index.html and dropped it in the web root.
He noticed, got mad (he had built the CMS and was convinced it could handle the load), and deleted my pre-rendered index.html. So young and dumb me wrote a cron job that pinged the site every few minutes, and if it looked down, it copied my static index.html back into the web directory. Since Apache would serve index.html ahead of PHP, it became an instant fallback page.
Sure enough, at 9am the entire university website went down. Obviously orders of magnitude less scale than your FB story (and a way less exciting event), but for my small university it was a brief moment of panic. But my little cron job kicked in and at least kept the front door standing.
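For fun, the watchdog idea translates to just a few lines today. Here's a hypothetical Python sketch (names and layout invented; the original was just a cron job doing a ping and a cp):

```python
import shutil
from pathlib import Path

def restore_if_down(site_is_up, fallback_page: Path, webroot: Path) -> bool:
    """Run from cron every few minutes: if the PHP site looks down and no
    static page is already in place, copy the pre-rendered index.html into
    the web root so Apache serves it ahead of the PHP front controller."""
    index = webroot / "index.html"
    if not site_is_up() and not index.exists():
        shutil.copy(fallback_page, index)
        return True   # fallback deployed
    return False      # site healthy, or fallback already in place
```

The health check is passed in as a callable, so it could be a ping, an HTTP probe, or anything else; the interesting part is the same trick as before, Apache preferring index.html over the dynamic handler.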
While I’m not in active day to day development anymore, I do still work in tech and think a lot about ways to avoid computation. And something I’ve learned a lot from reading your posts over the years and from my own personal experiences is just how big you can scale when you can architect in a way that “just pushes bits” (eg “index.html”) as opposed to computing/transforming/rendering something … and I’m not sure you can ever really learn that except through real world experience.
Regarding your links, I’ve seen you post about 1 before and have read about it - it looks very cool. I don’t recall seeing 2 or 3 before and look forward to reading more about those. Thanks as always for your insights!
> Regarding your links, I’ve seen you post about 1 before and have read about it - it looks very cool. I don’t recall seeing 2 or 3 before and look forward to reading more about those. Thanks as always for your insights!
So #1 now has dist connection stuff as of a few hours ago. Not super obvious, but you can load two (or more) nodes and call nodes() and see they're connected. Dist connection opens up lots of neat possibilities... but I do need to add an obvious application so it's like actually neat instead of just potentially neat.
#2 is a pretty neat way to diagnose path mtu problems. And I've been seeing people use it and link to it on networking forums all over, even forums in other languages. Which is pretty awesome. Maybe a few links in forums over the past year, but it's always cool to see people using stuff I built mostly for me. :)
#3 is like I dunno, probably not that useful, I think you could do a lot of similar stuff already, but it felt like a tool that was missing... but I also got some feedback that maybe there's other ways to do it already too, so shrug. But pxe booting is always fun.
in all my years doing database tuning/admin/reliability/etc, performance problems have overwhelmingly been in the bad query/bad data pattern categories. the data platform is rarely the issue
hey don’t forget, that shitty ORM also empowers you to write beautiful, fluent code that, under the hood, generates a 12-way join that brings down your entire database.
The MBA is an amazing value, and appears to have only gotten slightly cheaper.
This is a solid product that continually receives incremental improvements and is delivered at a lower price point (when spec'd out).