The thing that struck me was Carmack's relentless pursuit of perfection. I can't think of many people who'd describe a single frame of input latency as a cold sweat moment!
When you are making videogames, a (video) frame of latency is a big deal.
When I worked on Guitar Hero and Rock Band, we worried about sub-frame latency (timing is more important when you're hitting a drum than when you're firing a gun).
I still use CRT screens, not because of latency, but because of the better contrast and colour reproduction, and the ability to run whatever resolution I want.
I've noticed that in new games, and with newer video cards, there is some kind of weird lag, as if they were tuned on purpose for slow LCDs. (There seem to be some variables related to screen input lag that you can control on AMD cards, via the Windows Registry or by tweaking the Linux driver; for some reason they live in the "PowerPlay" part of the drivers, and I haven't figured out yet what they do exactly.)
EDIT: Also, I stopped playing music games almost entirely; I found many of them completely unplayable on my setup, and I just can't find the right settings to make the timing work. The least aggravating one is "Necrodancer", which seemingly is really good at calibration.
There are so many AV setups that we had to leave calibration to the user (we tried auto-calibration, but it couldn't always be accurate).
The fundamental problem is that there are two independent delays that both depend on your individual system: the delay from the time that the console produces a video frame to the time that the user sees it, and the delay from the time that the console produces a sound to the time that the user hears it. In a beatmatching game, you really need the user's perceptions to be in sync, which means delaying either the video or the audio. Of course, the more you delay one or the other, the more repercussions you run into.
In a regular video game, it's not a big deal if you fire a gun and hear the shot 50ms later, but in a beatmatching game, that delay is really noticeable.
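A minimal sketch of that compensation idea (the struct and function names are mine, not from any actual engine): measure each delay for the user's setup, then artificially hold back whichever stream would otherwise reach the user first.

```c
#include <stdio.h>

/* Hypothetical per-setup measurements, e.g. from a user calibration step. */
typedef struct {
    int video_delay_ms; /* console frame out -> photons on the screen */
    int audio_delay_ms; /* console sample out -> sound at the ear     */
} AVDelays;

/* To make sight and sound coincide, delay the faster path by the
 * difference between the two measured delays. */
static void compute_compensation(AVDelays d, int *extra_video_ms, int *extra_audio_ms)
{
    *extra_video_ms = 0;
    *extra_audio_ms = 0;
    if (d.video_delay_ms > d.audio_delay_ms)
        *extra_audio_ms = d.video_delay_ms - d.audio_delay_ms; /* hold audio back */
    else
        *extra_video_ms = d.audio_delay_ms - d.video_delay_ms; /* hold video back */
}

int main(void)
{
    /* e.g. a TV with heavy processing (80 ms) and a fast receiver (20 ms) */
    AVDelays d = { 80, 20 };
    int ev, ea;
    compute_compensation(d, &ev, &ea);
    printf("delay video by %d ms, audio by %d ms\n", ev, ea);
    return 0;
}
```

Note the catch the parent comment alludes to: the compensating delay is itself extra latency, which is exactly what you were trying to avoid.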
Many modern games buffer frames to render, so the rendering is up to 3 frames behind the simulation (in the UE4 case, anyway). In a multiplayer game, you've got those 3 frames of rendering, plus network latency both ways, plus a frame of simulation time on the server, and then (depending on the engine) a possible extra frame of input latency between reading the controller and processing the input.
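Back-of-the-envelope, assuming 60 Hz and an illustrative 60 ms round trip (both figures are mine, just to make the pipeline concrete), that adds up fast:

```c
#include <stdio.h>

int main(void)
{
    /* Rough latency budget; all figures are illustrative. */
    const double frame_ms   = 1000.0 / 60.0;  /* one 60 Hz frame (~16.7 ms) */
    const double input_lat  = 1 * frame_ms;   /* controller -> simulation   */
    const double net_rtt_ms = 60.0;           /* example round-trip time    */
    const double server_sim = 1 * frame_ms;   /* one sim tick on the server */
    const double render_buf = 3 * frame_ms;   /* up-to-3-frame render queue */

    double total = input_lat + net_rtt_ms + server_sim + render_buf;
    printf("input-to-photon, excluding display lag: ~%.0f ms\n", total); /* ~143 ms */
    return 0;
}
```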
Ah, the days of getting a perfect setup for music games: the combination of a simple TV with low lag (lag being introduced by picture scalers and other processing) and the simplest stereo connected with RCA cables.
Both Rock Band and Guitar Hero allow you to calibrate your display latency to compensate for that - since you're playing songs with static timing, that's possible.
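A minimal sketch of why static timing makes this work (function and parameter names are hypothetical, not the actual game code): since every note's position on the song timeline is known in advance, a single measured offset can be subtracted from the player's input time before judging the hit.

```c
#include <math.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical hit test: the chart places a note at note_time_ms on the
 * song timeline; the strum arrives at input_time_ms on the game clock.
 * One measured calibration offset corrects for the whole AV chain. */
static bool note_hit(double input_time_ms, double note_time_ms,
                     double calibration_ms, double window_ms)
{
    return fabs((input_time_ms - calibration_ms) - note_time_ms) <= window_ms;
}

int main(void)
{
    /* An 80 ms display delay turns an on-time strum into a miss
     * unless the calibration offset is applied. */
    printf("uncalibrated: %d, calibrated: %d\n",
           note_hit(10080.0, 10000.0, 0.0, 50.0),
           note_hit(10080.0, 10000.0, 80.0, 50.0));
    return 0;
}
```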
I particularly liked the analogy with hardware, where you perform an operation unconditionally and then inhibit the result. Thinking about it, I'd compare software written in that fashion to a well-oiled, smooth-running engine.
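For anyone who hasn't seen the technique, here's a minimal sketch of that hardware-style pattern in C (my own toy example, not from the article): compute both outcomes unconditionally, then use a mask to inhibit the one you don't want, so there's no branch at all.

```c
#include <stdint.h>
#include <stdio.h>

/* Branchy version: the result depends on a conditional jump. */
static int32_t max_branchy(int32_t a, int32_t b)
{
    if (a > b) return a;
    return b;
}

/* Hardware-style version: compute unconditionally, then inhibit the
 * unwanted result with a mask. */
static int32_t max_branchless(int32_t a, int32_t b)
{
    int32_t mask = -(int32_t)(a > b); /* all ones if a > b, else zero */
    return (a & mask) | (b & ~mask);
}

int main(void)
{
    printf("%d %d\n", max_branchy(3, 7), max_branchless(3, 7));
    return 0;
}
```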
From what I've read, dropping frames is enough to fail console certification. Console gaming may be one of the few (non-mission critical) software areas with actual high quality standards. Meanwhile, after an iOS update, I have to swipe up to 5 times to pick up a call on my iPhone 5s...
I wonder how strict they are about it? Particularly near the end of the 360/PS3 generation, Digital Foundry comparisons were full of inconsistent frame rates and the like.
The answer is: it depends. They will silently bend the "TRC" a little if there is a business case for releasing something now and not later.
But most of the requirements center around nitpicks of software polish: specific words and phrases used to discuss the device, loading screens must not just be a black screen, the game should not crash if the user mashes the optical eject button, etc. These things add a level of consistency but aren't the same as "solid 60hz" or "no input lag". The latter sort of issues can be shipped most of the time; they just degrade the experience everywhere.
I'm sure Carmack would have favorable views of such a future. I want to say that he made most (all?) of the id tech engines on Linux, but I can't find a source.
RMS's hell is entirely within closed source software and everyone there calls it "Linux".
No, Carmack isn't a fan of Linux as a gaming system. Look it up. Which is to say, on the consumer end. id Tech was coded on NeXTSTEP workstations up until Quake 3 at most, so he liked Unix for development. IIRC, the Linux ports of id Tech were done by Dave Taylor, not Carmack. Carmack basically just gave a shrug and signed off on them, because they worked, so why not?
And RMS's hell is one where everyone uses open source to some extent, calls it open source, has no philosophical reason for using it, only practical ones, and has no qualms about mixing it with closed-source software.