What about assertions that are meant to detect bad hardware? I'd think that's not too uncommon, particularly in shops building their own hardware. Noise on the bus, improper termination, ESD, dirty clock signal, etc. -- there are a million reasons why a bit might flip. I wouldn't want the compiler to optimize "obviously wrong" code out any more than empty loops.
I think if you're in a language that's doing constant-propagation optimizations, you work around that in one of two ways:
1. you drop down to assembly.
2. you use functions that are purpose-built to be sequence points the optimizer won't optimize through. E.g., in Rust, for the case you mention, `read_volatile`.
In either case, this gives the human the same benefit the code gives the optimizer: an explicit indication that code which might appear to be doing nothing actually isn't.
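A minimal Rust sketch of that idea: in real use the pointer would target a memory-mapped hardware register; here a local variable stands in for it so the example is runnable. The point is that `read_volatile` forces the compiler to actually perform the load, so an "obviously always true" hardware sanity check can't be constant-folded away.

```rust
use std::ptr::read_volatile;

fn main() {
    // Stand-in for a memory-mapped status register (hypothetical; a real
    // driver would use a fixed hardware address, not a local variable).
    let fake_register: u32 = 0xDEAD_BEEF;

    // A volatile read tells the optimizer it must perform the load every
    // time: it may not assume the value it "knows" was stored there, so
    // the assertion below survives optimization and can catch a bit flip.
    let value = unsafe { read_volatile(&fake_register as *const u32) };
    assert_eq!(value, 0xDEAD_BEEF, "unexpected value: possible hardware fault");
}
```

With a plain (non-volatile) read, the compiler could constant-propagate `0xDEAD_BEEF` into the assertion and delete it entirely, which is exactly the behavior the parent comment worries about.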
I was green with envy when I saw how fast and smoothly a C64 scrolled some text (iirc it was some machine code monitor). My Amstrad CPC464 had no text mode, and the Z80A CPU was clearly overwhelmed by shifting the whopping 16KiB of graphics buffer RAM, or even just rendering a line of text.
> It's up to the compiler to decide how many registers it needs to preserve at a call.
But the compiler is bound by the ABI, isn't it? (at least for externally visible entry points / calls to subroutines external to the current compilation unit)
Er, what? The article describes a compiler for a not-quite-C programming language which fits entirely in 512B. Your project, if I see this correctly, can optionally produce code meant to execute as a boot sector.
Both interesting projects, but other than the words 'boot sector', 'C' and 'compiler', I don't see a similarity.
Iirc (it's been a while), a (full?) Interactive Unix install required some 40 (forty!) 5 1/4" floppies (I believe 1.2MiB each) around 1992 or so. A Linux (SLS) install was (a little later) so much smaller, even with X11 and TeX, as it had shared libraries (somewhat new in the *nix world then).
Sensitivity peak for humans is in cyan (~510nm) only for low-light conditions (night vision / rod cells). In daylight (cone cells) it's green-yellow (555nm).
https://www.giangrandi.ch/optics/eye/eye.shtml
>The eye behaves differently in high or low light conditions: in daylight, for brightness levels above 3 cd/m2 the vision is mainly done by the centre of the retina, we can see colors and the maximum sensitivity is at 555 nm (in the green region). This type of vision is called photopic vision.
That's completely impossible; you would have severe tunnel vision in daylight if it were true.
There has never been any real evidence that rods stop working in daylight.
I didn't notice overheating, but there are quite a few different products on the market.
The modest speed (~50MBps at my place) was OK-ish at the time, but the (variable!) latency of a couple of ms was annoying (it tended to break pacemaker/corosync cluster communication). And every once in a blue moon they stopped working altogether and needed to be unplugged.
Worse, for someone interested in analogue electronics, they (of course) emit a huge amount of electrical noise into the power lines.
That would already have been something of an anachronism by then. 8MiB RAM was workable (but only barely so with X11) in the early nineties. By the late nineties, 64MiB or more was common.
Not all functionality is perfectly replicated, so the user experience depends on the application being used.
I've used it semi-regularly for quite a number of years, and (consequently) various versions of Wine, to run (a near-current version of) LTspice. That works perfectly as far as I can tell, but it is my understanding that the maintainer of LTspice puts in some effort to ensure compatibility with the then-current version of Wine.