> We gave up the headphone jack. We gave up the microSD card.
Some people might have given those up. I personally own a Sony Xperia phone, and intend to buy another Xperia next year, which will almost certainly still have both. In fact, Sony is the one manufacturer that brought back the headphone jack after removing it for a while. It might be more expensive than the competition, but that's me voting with my wallet.
Wireless means worse quality, added latency, the potential to lose one (or both) earbuds, and having to faff with batteries and charging and cases (and charging the charging case), when I can just... plug it in, bam, music in my ears. The occasional knotted cable is a small price to pay for the improved quality and convenience in every other way.
I have a hot take. Modern computer graphics is very complicated, and it's best to build up fundamentals rather than diving off the deep end into Vulkan, which is really geared towards engine professionals who want to shave every last microsecond off their frame-times. Vulkan and D3D12 are great: they provide very fine-grained host-device synchronisation mechanisms that seasoned engine programmers can exploit to the full. At the same time, a newbie can easily get bogged down by the sheer verbosity, and the initial setup boilerplate alone can be extremely daunting for someone just starting out.
GPUs expose a completely different programming and memory model, and the real issue, I would say, is conflating computer graphics with GPU programming. The two are obviously related, don't get me wrong, but they can and do diverge quite significantly. This is even more true recently with the push towards GPGPU: GPUs now combine several different coprocessors beyond just the shader cores, and can be programmed with something like a dozen different APIs.
I would instead suggest:
1) Implement a CPU rasteriser, with just two stages: a primitive assembler, and a rasteriser.
2) Implement a CPU ray tracer.
These can be extended in many, many ways that will keep you sufficiently occupied maximising performance and features. In fact, even achieving basic correctness requires quite a degree of complexity: the primitive assembler will of course need frustum- and back-face culling (and frustum clipping means re-triangulating some primitives). The rasteriser will need z-buffering. The ray tracer will need intersection routines for camera, shadow, and lighting rays against different primitives, accounting for floating-point divergence; spheres, planes, and triangles can all be individually optimised.
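To make two of those pieces concrete, here is a minimal sketch of a back-face cull and a z-buffer test; all types and names are mine, not from any particular renderer:

    #include <cstdint>
    #include <vector>

    struct Vec2 { float x, y; };

    // Twice the signed area of the projected triangle; a non-positive value
    // means clockwise winding on screen, i.e. the triangle faces away.
    bool isBackFacing(Vec2 a, Vec2 b, Vec2 c) {
        float area2 = (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
        return area2 <= 0.0f;
    }

    // Classic z-buffer test: write the fragment only if it is closer than
    // whatever has already been drawn at this pixel.
    void writeFragment(std::vector<float>& zbuf, std::vector<std::uint32_t>& color,
                       int x, int y, int width, float depth, std::uint32_t rgba) {
        const int i = y * width + x;
        if (depth < zbuf[i]) {          // smaller depth == closer to the camera
            zbuf[i] = depth;
            color[i] = rgba;
        }
    }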
Try adding various anti-aliasing algorithms to the rasteriser. Add shading; begin with flat, then extend to per-vertex, then per-fragment. Try adding a tessellator where the level of detail is controlled by camera distance. Add early discard instead of the usual late z-buffering.
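Per-vertex (Gouraud) shading, for instance, falls out of the same edge functions used for coverage. A small sketch, with hypothetical types:

    #include <array>

    struct Vertex { float x, y; float r, g, b; };

    // Standard edge function: positive when (px, py) is to the left of a->b.
    float edge(const Vertex& a, const Vertex& b, float px, float py) {
        return (b.x - a.x) * (py - a.y) - (b.y - a.y) * (px - a.x);
    }

    // Interpolated colour at pixel centre (px, py), assuming the point is
    // already known to lie inside triangle (v0, v1, v2).
    std::array<float, 3> gouraud(const Vertex& v0, const Vertex& v1,
                                 const Vertex& v2, float px, float py) {
        const float area = edge(v0, v1, v2.x, v2.y);  // twice the signed area
        const float w0 = edge(v1, v2, px, py) / area; // barycentric weights
        const float w1 = edge(v2, v0, px, py) / area;
        const float w2 = 1.0f - w0 - w1;
        return { w0 * v0.r + w1 * v1.r + w2 * v2.r,
                 w0 * v0.g + w1 * v1.g + w2 * v2.g,
                 w0 * v0.b + w1 * v1.b + w2 * v2.b };
    }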
To the basic Whitted CPU ray tracer, add BRDFs and microfacet theory; add subsurface scattering, caustics, and photon mapping/light transport; and work towards a general global illumination implementation. Add denoising algorithms. And of course, implement and use acceleration data structures for faster intersection lookups; there are many.
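The workhorse underneath all of that is the intersection test. A sketch of the usual ray/sphere routine, with an epsilon guard for the floating-point self-intersection ('shadow acne') problem mentioned above:

    #include <cmath>
    #include <optional>

    struct Vec3 { float x, y, z; };

    float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // Nearest positive hit distance along the ray, if any. Assumes dir is
    // normalised, which reduces the quadratic to the 'half-b' form.
    std::optional<float> hitSphere(Vec3 orig, Vec3 dir, Vec3 center, float radius) {
        const Vec3 oc{orig.x - center.x, orig.y - center.y, orig.z - center.z};
        const float b = dot(oc, dir);
        const float c = dot(oc, oc) - radius * radius;
        const float disc = b * b - c;
        if (disc < 0.0f) return std::nullopt;    // ray misses entirely
        const float eps = 1e-4f;                 // guards self-intersection
        float t = -b - std::sqrt(disc);          // nearer root first
        if (t < eps) t = -b + std::sqrt(disc);   // maybe we start inside
        if (t < eps) return std::nullopt;        // both hits behind the ray
        return t;
    }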
Working on all of these will frankly give you a more detailed and intimate understanding of how GPUs work, and why they have developed the way they have, than programming against something like Vulkan and filling in struct after struct ever will.
After this, feel free to explore either of the two more 'basic' graphics APIs: OpenGL 4.6 or D3D11. shadertoy.com and shaderacademy.com are great resources for understanding fragment shaders. There are, again, several widespread shader languages, though most of the industry uses HLSL. GLSL can be simpler, but HLSL is definitely more flexible.
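The fragment-shader mental model also maps straight back onto the CPU renderers above; a shadertoy-style 'shader' is just a pure per-pixel function. A trivial sketch in C++:

    struct Color { float r, g, b; };

    // Called once per pixel with no shared state, exactly like a fragment
    // shader; this one produces the classic UV-gradient test image.
    Color shadePixel(float fragX, float fragY, float width, float height) {
        const float u = fragX / width;   // normalised coordinates in [0, 1]
        const float v = fragY / height;
        return { u, v, 0.5f };
    }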
At this point, explore more complicated scenarios: deferred rendering, pre- and post-processing for things like ambient occlusion, mirrors, temporal anti-aliasing, render-to-texture for lighting and shadows, etc. This is video-game focused; you could go another direction by exploring 2D UIs, text rendering, compositing, and more.
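Deferred rendering in particular is mostly a data-layout idea: rasterise the scene once into a 'G-buffer' of per-pixel attributes, then run lighting as a second pass over that buffer. A sketch of the kind of per-pixel record involved (the exact layout here is illustrative, not canonical):

    struct GBufferTexel {
        float position[3];  // world-space position (often reconstructed
                            // from depth instead, to save bandwidth)
        float normal[3];    // surface normal
        float albedo[3];    // base colour
        float roughness;    // material parameter for the lighting pass
    };
    // Pass 1 rasterises geometry into an array of GBufferTexel;
    // pass 2 walks every pixel exactly once and evaluates lighting from it,
    // decoupling lighting cost from scene (overdraw) complexity.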
As for why I recommend starting on the CPU only to end up back on the GPU, one may ask: 'hey, who uses CPUs for graphics any more?' Let me answer: WARP[1] and LLVMpipe[2] are both production-quality software rasterisers, frequently loaded during remote desktop sessions. In fact, 'rasteriser' is an understatement: they expose full-fledged software implementations of D3D10/11 and OpenGL/Vulkan devices respectively. And naturally, most film renderers still run on the CPU, due to the improved floating-point precision; films can't really get away with the ephemeral smudging of video games. Also, CPU cores are quite cheap nowadays, so it's not unusual to see a render farm of a million-plus cores chewing away at a complex Pixar or DreamWorks frame.
Last time I tried, C++ modules were a sporadically supported mess. I'll give them another try once they have ironclad support in CMake, GCC, Clang, and Visual Studio.
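For reference, the shape of the feature itself is tiny; it's the toolchain plumbing around it that has been the mess. A minimal sketch (the .ixx file name follows MSVC's convention):

    // math.ixx
    export module math;
    export int add(int a, int b) { return a + b; }

    // main.cpp
    import math;
    int main() { return add(2, 3); }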
I would instead suggest two things for power users: first, installing Windows using autounattend.xml[1]; and second, visiting the mass graves to turn your Windows install into Enterprise (or, if you can wrangle it, getting an Education licence from your academic institution/alma mater), which gets rid of all the consumer-oriented stuff.
To be honest, I don't mind the Windows games. In fact, I believe the ones shipped with XP, Vista, and 7 were top-notch. What I mind is games with annoying advertisements in them. I mind when my Weather program is not native but a glorified web app, also riddled with advertisements.
I'm typing this on my company's Azure AD-integrated Windows 11. The system info says it's Windows 11 Enterprise 25H2.
My Start menu still has multiple random bits of Xbox crap in there: Game Bar (what even is that?!), "Game Mode", "Solitaire and Casual Games". It shows random ads in the Weather app. It invites me to do more with a Microsoft account, even though the computer is fully Azure AD-joined and my Windows session is an Azure AD account with an expensive Office 365 licence attached.
Before reinstalling the other day for unrelated reasons, I had actually tried to add that account. Turns out it doesn't work with a "work or school" account; it requires a personal one. But it doesn't say so clearly, only that "something went wrong".
I honestly don't see any difference compared to my personal Windows install, which I use for the occasional game and Lightroom/Photoshop.
Side project(s): Grokking Windows development from the top of the stack to the kernel; everything from Win32, WinUI, WPF, and COM, to user- and kernel-mode driver development. It's fun to write drivers in modern C++. Also, a massively procrastinated, work-in-progress Vulkan/D3D12 cross-platform game engine written in C++23/26.
Full time work: GPU driver development and integration for a smartphone series. It's fun to see how the sauce is made.
I use NVIDIA hardware, which objectively has higher peak performance than AMD graphics cards. I use HDR, high-pixel-density monitors as well. I like laptops with decent battery life and decent touchpads.
Windows simply offers a cleaner, more put-together experience when it comes to these edge cases. I have many tiny nitpicks about how Linux behaves, and every time I go back to my Windows Enterprise install it is a breath of fresh air that my 170% scaling and HDR just work. No finagling with a million different environment variables or CLI options. If a program hasn't opted into resolution-independent scaling, then I just disable scaling for that program, and somehow the vector elements are still scaled correctly, leaving only the raster elements blurry. Nowadays Windows laptop touchpads feel like a Mac's, which is high praise and a sea change from where they were about a decade ago.
If you strip away all the AI nonsense, Windows is a genuinely decent platform for getting things done. Seriously, MS Office blows everything else out of the water; I still go back to Word, Excel, and PowerPoint when I want to be productive. The Adobe suite, pro audio tools, DaVinci Resolve, etc., they just... work. If you haven't programmed in Visual Studio or used WinDbg, then you have not used a serious, high-end debugger. GDB and perf are not even in the same league.
As a Windows power user, I want to go back to the Windows 2000 GUI shell, but with all the modernity of Windows 11's kernel and user-space libraries and drivers. I wish Enterprise was the default release, not the annoying Home versions. And I really, really wish Windows was open-sourced. Not just the kernel, but the user mode as well, because the user mode is where a lot of the juice is, and is what makes Windows Windows.
You've misrepresented the situation. Turn up the optimiser to `/O2` and MSVC returns 5 directly, too.
> This function returns 1 if s is "hello". 0 otherwise. I've added a pointless strlen(). It seems like no compiler is clever enough to remove it.
It's funny how sometimes operating at a higher level of abstraction allows the compiler to optimise the code better: https://godbolt.org/z/EYP5764Mv
Here, the string literal "hello" is lowered not merely into a static string, but into a handful of integer immediates inlined directly in the assembly, no label dereference required, and the 'is it equal to "hello"' test compiles down to a few sign extensions and a bitwise XOR.
Of course, one could argue that std::string_view::size() is statically available, but then my counter-argument is that C's zero-terminated strings are a massive pessimisation (which is why the compiler couldn't 'see' what we humans can), and should always be avoided.
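To make the comparison concrete, here's a hypothetical reconstruction of the two shapes being compared (the exact functions aren't shown in the thread):

    #include <cstring>
    #include <string_view>

    // C version: the length check is subsumed by the strcmp, yet compilers
    // typically keep the strlen scan, which walks the whole string.
    int is_hello_c(const char* s) {
        if (std::strlen(s) != 5) return 0;
        return std::strcmp(s, "hello") == 0 ? 1 : 0;
    }

    // string_view version: size() travels with the data, so the comparison
    // can constant-fold into a few integer compares, as on the godbolt link.
    bool is_hello_cpp(std::string_view s) {
        return s == "hello";
    }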