Sure, but as soon as they released their first iteration, they immediately went back to the drawing board and just slapped @MainActor on everything they could because most people really do not care.
Well yes, but that’s because the iOS UI is single threaded, just like every other UI framework under the sun.
That doesn’t mean there isn’t good support for true parallelism in Swift concurrency. It’s super useful to model interactions with isolated actors (e.g. the UI thread and the data it owns) as “asynchronous” from the perspective of other tasks, allowing you to spawn off CPU-heavy operations that can still “talk back” to the UI; they simply have to “await” the calls to the UI actor in case it’s currently executing.
The model works well for both asynchronous tasks (you await the long I/O operation and your executor can go back to doing other things) and concurrent processing (you await any synchronization primitives that require mutual exclusion, etc.)
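A minimal sketch of that pattern (the type and names here are illustrative, not from any real app): a main-actor-isolated model, with CPU-heavy work spawned off the main actor that has to await its way back:

```swift
import Foundation

// Hypothetical view-model isolated to the main actor; names are illustrative.
@MainActor
final class CounterModel {
    var progress: Double = 0
    func update(_ value: Double) { progress = value }
}

func crunch() async {
    let model = await CounterModel()

    // CPU-heavy work runs off the main actor on the cooperative pool...
    let result = await Task.detached(priority: .userInitiated) { () -> Int in
        (1...1_000_000).reduce(0, +)
    }.value

    // ...but "talking back" to the UI means awaiting the main actor,
    // which may currently be executing something else.
    await model.update(Double(result))
}
```

From the detached task's perspective the main actor is just another asynchronous peer, which is the point being made above.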
There are a lot of gripes I have with Swift concurrency, but my memory is about two years old at this point and I know Swift 6 has changed a lot. Mainly around the complete breakage you get if you ever call Obj-C code that uses GCD, and how ridiculously easy it is to shoot yourself in the foot with unsafe concurrency primitives (semaphores, etc.) that you don’t even know the code you’re calling is using. But I digress…
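For anyone who hasn't hit this: the semaphore foot-gun looks roughly like the sketch below. It's an anti-pattern on purpose — blocking a cooperative-pool thread on a semaphore that can only be signalled by another Swift-concurrency task can starve the pool and deadlock, and it's invisible if the `wait` is buried in code you're calling:

```swift
import Dispatch

// ANTI-PATTERN sketch — do not do this in real code.
func risky() async {
    let sem = DispatchSemaphore(value: 0)
    Task {
        // This child may never get a thread if the pool is exhausted...
        sem.signal()
    }
    // ...while we block one of the pool's limited threads waiting for it.
    sem.wait()
}
```

The cooperative pool assumes tasks make forward progress instead of blocking, which is why GCD-era primitives mix so badly with it.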
Not really true; @MainActor was already part of the initial version of Swift Concurrency. That Apple has yet to complete the needed updates to their frameworks to properly mark up everything is a separate issue.
async let and TaskGroups are not parallelism, they're concurrency. They're usually parallel because the Swift concurrency runtime allows them to be, but there's no guarantee. If the runtime thread pool is heavily loaded and only one core is available, they will only be concurrent, not parallel.
> If the runtime thread pool is heavily loaded and only one core is available, they will only be concurrent, not parallel
Isn't that always true for thread pool-backed parallelism? If only one core is available for whatever reason, then you may have concurrency, but not parallelism.
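Right — for concreteness, both spellings of the pattern look like this. This is just a sketch; whether the children actually run in parallel depends on the cooperative thread pool and available cores, not on the syntax:

```swift
// A deliberately CPU-bound function to give the children real work.
func fib(_ n: Int) -> Int { n < 2 ? n : fib(n - 1) + fib(n - 2) }

func demo() async {
    // Structured child tasks via `async let` — concurrent, possibly parallel.
    async let a = fib(30)
    async let b = fib(30)
    let sum = await a + b

    // The same idea with a task group.
    let total = await withTaskGroup(of: Int.self) { group in
        for _ in 0..<4 { group.addTask { fib(30) } }
        return await group.reduce(0, +)
    }
    print(sum, total)
}
```

On a loaded or single-core pool these children simply interleave, which is the concurrency-without-parallelism case described above.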
> Charlie Brown may have been as popular as any character in all of literature
Was he? Maybe this is true inside the US, but from outside the US I've always viewed the character as a peculiarly American artefact – something I was aware of but never really read or watched. This seemed to be reinforced by the fact that most major Charlie Brown titles were tied to other American customs like Halloween and baseball.
Snoopy as a character is popular in Japan, but only as a character design - kind of like Hello Kitty. There is zero awareness of any of the shows or really Charlie Brown himself.
I'm Brazilian, in my mid-40s. When I was a little kid my best friend used to carry a blanket around. Neighbors called him "Linus" for years. But I'm confident it was because of the TV show, not the comic strips.
Mac sales are up 12%, year over year. It's Apple's fastest growing hardware category. They're just going to be lower next month (year over year), due to the release cycles being different.
> Even many games that support native linux run better under wine.
The same is often true on macOS, too – running games through CrossOver is often better than the native port. The reality is that there simply aren't enough professional game devs on Linux and macOS platforms to polish that last 20% and make all the difference.
I'm not sure what you're talking about. Any app compiled using LLVM 17 (2023) can use SME directly and any app that uses Apple's Accelerate framework automatically takes advantage of SME since iOS 18/macOS 15 last year.
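To illustrate the Accelerate point: you never call SME directly from Swift — you call the framework, and it picks the best available hardware path (NEON, AMX, SME where present) at runtime. A hedged sketch using real `vDSP` APIs; whether SME is actually used underneath is an OS/framework detail, not visible in the code:

```swift
import Accelerate

// Routing the math through Accelerate instead of hand-rolled loops
// lets the framework dispatch to the best available SIMD/matrix unit.
let a: [Float] = (0..<1024).map(Float.init)
let b: [Float] = (0..<1024).map { Float($0) * 2 }

let sum = vDSP.add(a, b)   // elementwise a + b
let dot = vDSP.dot(a, b)   // dot product
print(sum[1], dot)
```

That's why "no apps use SME" is misleading — any app leaning on Accelerate can benefit without a single line changing.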
Benchmarking a processor for "app written by someone who disregards performance" is something you can do, but it's a bit of a pointless exercise; no processor will ever keep up with developers ability to write slow code.
Of course. And these are CPU vector instructions, so the saying "The wider the SIMD, the narrower the audience" applies.
But ultimately with a benchmark like Geekbench, you're trusting them to pick a weighting. Geekbench 6 is not any different in that regard to Geekbench 5 – it's not going to directly reflect every app you run.
I was really just pointing out that the idea that "no" apps use SME is wrong and therefore including it does not invalidate anything – it very well could speed up your apps, depending on what you use.
Yeah, I could also blame Steam's Deck Verified system, which just rates compatibility without giving much thought to performance. Because I'm aware BG3 "works" on the Steam Deck, but walk into an area crowded with NPCs and it becomes an impressionist painting at 10fps.
ProtonDB is better for gauging the performance penalty, giving out different "medals" according to how well and how easily a game runs on Linux: https://www.protondb.com/
My guess is that it’s not so much an effort to improve performance (there are other, easier ways to do that, and it runs OK as it is) but to experiment with supporting SteamOS as a platform in the future.
I really struggle to imagine an organisation that shepherds a large and venerable C++ codebase migrating over to Swift.
For all of C++'s faults, it is an extremely stable and vendor-independent language. The kind of organisation that's running on some C++ monolith from 1995 is not going to voluntarily let Apple become a massive business risk in return for marginally nicer DX.
(Yes, Swift is OSS now, but Apple pays the bills and sets the direction, and no one is seriously going to maintain a fork.)
Do you have projects with huge code bases of Obj-C++ in mind?
I guess, some Mac apps? In that case I think most platform independent "guts" would be in C or C++, and the Obj-C++ part is tied to the frameworks, so the devs would have to rewrite it anyway.
Swift's "async let" is parallelism. As are Task groups.