Hacker News

Windows can still run software from the '80s; backwards compatibility has always been a selling point for Windows, so I'd call that a win.


Didn't Microsoft drop 16-bit application support in Windows 10? I remember being saddened when the Jezzball exe I'd carried from machine to machine no longer worked.


Microsoft dropped built-in 16-bit application support (the NTVDM emulator) from 64-bit builds of Windows, so whether it happened on Windows 10 or an earlier version depends on when you moved to a 64-bit edition (in my case, it was Windows Vista). However, you can still run 16-bit apps on 64-bit builds of Windows via third-party emulators, such as DOSBox and NTVDMx64.
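For context on why those old exes stop working: 16-bit Windows programs use the NE ("New Executable") format, and plain DOS programs use the bare MZ format; 64-bit Windows loads neither. A minimal sketch (the `exe_kind` helper is my own, not any standard API) that classifies an .exe by its header:

```python
import struct

def exe_kind(data: bytes) -> str:
    """Classify a DOS/Windows executable by its header.

    Returns 'dos', 'win16', 'win32+', or 'unknown'.
    """
    if len(data) < 2 or data[:2] != b"MZ":
        return "unknown"
    if len(data) < 0x40:
        return "dos"            # bare MZ header: plain DOS program
    # Offset 0x3C holds e_lfanew, the offset of the "new" header.
    (e_lfanew,) = struct.unpack_from("<I", data, 0x3C)
    if e_lfanew == 0 or e_lfanew + 4 > len(data):
        return "dos"
    sig = data[e_lfanew:e_lfanew + 4]
    if sig[:2] == b"NE":
        return "win16"          # New Executable: 16-bit Windows 3.x era
    if sig == b"PE\x00\x00":
        return "win32+"         # Portable Executable: 32- or 64-bit
    return "dos"
```

Anything this labels `dos` or `win16` is what NTVDM used to handle and what DOSBox/NTVDMx64 cover now.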


> you can still run 16-bit apps on 64-bit builds of Windows via third party emulators, such as DOSBox and NTVDMx64.

Or Wine, which is less reliable but funnier.


Do you mean winevdm? https://github.com/otya128/winevdm

Wine itself doesn't run on Windows AFAIK.


> Wine itself doesn't run on Windows AFAIK.

It does, if you use an old enough version of Windows that SUA is available :). I never managed to get fontconfig working, so text overlapped its dialogue boxes and the like, but it was good enough to run what I needed.


Wine ran sort-of-fineish in WSL v1 and I'm pretty sure it'll run perfectly in WSL v2 (which is just a VM).


True, but at this point you're basically doing Windows-on-Linux-on-Windows. But why not... applications will run way faster than on the hardware they were originally designed for anyway.


The real prize is running Win16 apps on 64-bit Windows.

Mind you, Wine might lose that too ...


And Linux stopped supporting 32-bit x86 around the same time, I think? (Or was it just the original i386?)


Are you talking about CPU support? I installed a 32-bit program on stock Linux Mint just the other day. If I really need to fire up a Pentium 4, I can deal with it running an older kernel.
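For anyone wanting to reproduce that: on Debian/Ubuntu-derived distros (which Mint is), running a 32-bit binary on a 64-bit install usually just needs multiarch enabled. A rough sketch; the exact `:i386` library packages your program needs will vary:

```shell
# Enable the i386 architecture alongside amd64, then install
# the 32-bit loader/libc so 32-bit ELF binaries can run.
sudo dpkg --add-architecture i386
sudo apt update
sudo apt install libc6:i386 libstdc++6:i386
```

After that, the kernel's 32-bit compat layer handles the rest; no emulator involved.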


That's exactly what I mean. I wish Linux were more like NetBSD in its architecture support. It kind of sucks that it's open source but acts like a corporate entity calculating the profitability of things. There is one very important reason to support things in open source: because you committed to it, and you can. If there are practical reasons such as a lack of willing maintainers, that's totally understandable (though I refuse to believe that, out of all the devs who beg for a serious role in kernel maintenance, none are willing to support i386 - if NetBSD has people, so can Linux).

You'd expect Microsoft to drop support for things once they stop making money, or for some other calculated cost reason, but Microsoft keeps supporting old things few people use even when it costs them performance or security edges.


Well, for now the kernel still supports it, and the main barrier going forward is some memory-mapping stuff that anyone could fix.

Though personally, while I care a lot about using old software on new hardware, my desire to use new software on old hardware only goes so far back, and 32-bit mainstream CPUs are out of that range.


I think eventually 32-bit hardware and software shouldn't be supported, but there are still plenty of both. We shouldn't get rid of good hardware because it's too old; that's wasteful. 16-bit had serious limits, but 32-bit is still valid for many applications and environments that don't need more than ~3 GB of RAM. For example, routers shouldn't use 64-bit processors unless they're handling that much load; die size matters there. That's why they mostly use ARM, and why ARM has Thumb mode (narrower instruction encodings mean denser code, so less memory and cache). I'm sure the tiny amounts of money and energy saved by not having the extra register/instruction width add up when we're talking about billions of devices.

Open source isn't where I'd expect abandonware to happen.


> We shouldn't get rid of good hardware because it's too old; that's wasteful.

Depends on how much power it's wasting when we're looking at 20-year-old desktops/laptops.

> 32-bit is still valid for many applications and environments that don't need more than ~3 GB of RAM.

Well, my understanding is that if you have 1 GB of RAM or less you have nothing to worry about. The major unresolved issue with 32-bit is that it needs complicated memory mapping and can't have one big mapping of all of physical memory in the kernel address space. I'm not aware of a plan to remove the entire architecture.

It's annoying for that set of systems that fit into 32 bits but not 30 bits, but any new design over a gigabyte should be fine getting a slightly different core.
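To put numbers on those limits: a 32-bit process can address 2^32 bytes = 4 GiB total, and the classic 3G/1G user/kernel split on 32-bit Linux is where the ~3 GB user-space figure comes from. A quick sanity check in Python:

```python
import struct

# Pointer size of the running interpreter: 4 bytes on a
# 32-bit build, 8 bytes on a 64-bit build.
ptr_bytes = struct.calcsize("P")

# Total address space for that pointer width, in GiB.
addr_space_gib = 2 ** (8 * ptr_bytes) // 2 ** 30

# On 32-bit this prints 4 GiB; the default 3G/1G split on
# 32-bit Linux leaves ~3 GiB of that for user space.
print(f"{8 * ptr_bytes}-bit pointers -> {addr_space_gib} GiB of address space")
```

The "fits in 32 bits but not 30 bits" band above is exactly the systems with more than 1 GiB but less than 4 GiB of physical RAM, where the kernel needs highmem tricks.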

> For example, routers shouldn't use 64bit processors unless they're handling that much load, die size matter there

I don't think that's right, but correct me if I missed something. A basic 64 bit core is extremely tiny and almost the same size as a 32 bit core. If you're heavy enough to run Linux, 64 bit shouldn't be a burden.


It's very impressive indeed.

Linux's goal is only source-level compatibility, which makes complete sense given its libre/open-source origins. If the culture is one where you expect to have access to the source code for the software you depend on, why should the OS developers make the compromises needed to ensure you can still run a binary compiled decades ago?


My original VB6 apps (mostly) still run on win11


Hmm. IME VB6 is actually a particular pain point, because MDAC (a hodgepodge of Microsoft database-access thingies) does not install even on Windows 10, and a line-of-business VB6 app is very likely to need that. And of course you can’t run apps from the 1980s on Windows 11 natively, because it can no longer run 16-bit apps, whether DOS or Windows ones. (All 32-bit Windows apps are definitionally not from the 1980s, seeing as Tom Miller’s sailboat trip that gave us Win32 only happened in 1990. And it’s not the absence of V86 mode that’s the problem—Windows NT for Alpha could run DOS apps, using a fatter NTVDM with an included emulator. It’s purely Microsoft’s lack of desire to continue supporting that use case.)


> It’s purely Microsoft’s lack of desire to continue supporting that use case.

NTVDM leverages virtual 8086 mode which is unavailable while in long mode.

NTVDM would need to be rewritten. With alternatives like DOSBox, I can see why MSFT may not have wanted to dive into that level of backwards compat.


As I’ve already said in my initial comment, this is not the whole story. (I acknowledge it is the official story, but I want to say the official story, at best, creatively omits some of the facts.)

NTVDM as it existed in Windows NT (3.1 through 10) for i386 leveraged V86 mode. NTVDM on Windows NT (e.g. 4.0) for MIPS, PowerPC, and Alpha, on the other hand, already had[1] a 16-bit x86 emulator, which was merely ifdefed out of the i386 version (making the latter much leaner).

Is it fair of Microsoft to not care to resurrect that nearly decade-old code (as of Windows XP x64 when it first became relevant)? Yes. Is it also fair to say that they would not, in fact, need to write a complete emulator from scratch to preserve their commitment to backwards compatibility, because they had already done that? Also yes.

[1] https://devblogs.microsoft.com/oldnewthing/20060525-04/?p=31...


ReactOS's NTVDM DLL will work under XP through 10, and it will run some DOS games too.


Wait, what's the story of the sailboat trip? My searches are coming up empty, but it sounds like a great story.


Yeah, I was surprised by the lack of search results when I was double-checking my post too, but apparently I wasn’t surprised enough, because I was wrong. I mixed up two pieces of Showstopper!: chapter 5 mentions the Win32 spec being initially written in two weeks by Lucovsky and Wood

> Lucovsky was more fastidious than Wood, but otherwise they had much in common: tremendous concentration, the ability to produce a lot of code fast, a distaste for excessive documentation and self-confidence bordering on megalomania. Within two weeks, they wrote an eighty-page paper describing proposed NT versions of hundreds of Windows APIs.

and chapter 6 mentions the NTFS spec being initially written in two weeks by Miller and one other person on Miller’s sailboat.

> Maritz decided that Miller could write a spec for NTFS, but he reserved the right to kill the file system before the actual coding of it began.

> Miller gathered some pens and pads, two weeks’ worth of provisions and prepared for a lengthy trip on his twenty-eight-foot sailboat. Miller felt that spec writing benefited from solitude, and the ocean offered plenty of it. [...] Rather than sail alone, Miller arranged with Perazzoli, who officially took care of the file team, to fly in a programmer Miller knew well. He lived in Switzerland.

> In August, Miller and his sidekick set sail for two weeks. The routine was easy: Work in the morning, talking and scratching out notes on a pad, then sail somewhere, then talk and scratch out more notes, then anchor by evening and relax.

(I’m still relatively confident that the Win32 spec was written in 1990; at the very least, Showstopper! mentions it being shown to a group of app writers on December 17 of that year.)



